qemu-devel.nongnu.org archive mirror
* Re: [RFC 13/26] rust: Add RCU bindings
  2025-08-07 12:30 ` [RFC 13/26] rust: Add RCU bindings Zhao Liu
@ 2025-08-07 12:29   ` Manos Pitsidianakis
  2025-08-07 13:38     ` Paolo Bonzini
  0 siblings, 1 reply; 58+ messages in thread
From: Manos Pitsidianakis @ 2025-08-07 12:29 UTC (permalink / raw)
  To: Zhao Liu
  Cc: Paolo Bonzini, Peter Xu, David Hildenbrand,
	Philippe Mathieu-Daudé, Alex Bennée, Thomas Huth,
	Junjie Mao, qemu-devel, qemu-rust, Dapeng Mi, Chuanxiao Dong

On Thu, Aug 7, 2025 at 3:09 PM Zhao Liu <zhao1.liu@intel.com> wrote:
>
> Add rcu_read_lock() & rcu_read_unlock() bindings, then they can be used
> in memory access.
>
> Suggested-by: Paolo Bonzini <pbonzini@redhat.com>
> Signed-off-by: Zhao Liu <zhao1.liu@intel.com>
> ---
>  rust/qemu-api/meson.build |  1 +
>  rust/qemu-api/src/lib.rs  |  1 +
>  rust/qemu-api/src/rcu.rs  | 26 ++++++++++++++++++++++++++
>  rust/qemu-api/wrapper.h   |  1 +
>  4 files changed, 29 insertions(+)
>  create mode 100644 rust/qemu-api/src/rcu.rs
>
> diff --git a/rust/qemu-api/meson.build b/rust/qemu-api/meson.build
> index a362d44ed396..d40472092248 100644
> --- a/rust/qemu-api/meson.build
> +++ b/rust/qemu-api/meson.build
> @@ -68,6 +68,7 @@ _qemu_api_rs = static_library(
>        'src/prelude.rs',
>        'src/qdev.rs',
>        'src/qom.rs',
> +      'src/rcu.rs',
>        'src/sysbus.rs',
>        'src/timer.rs',
>        'src/uninit.rs',
> diff --git a/rust/qemu-api/src/lib.rs b/rust/qemu-api/src/lib.rs
> index 86dcd8ef17a9..4705cf9ccbc5 100644
> --- a/rust/qemu-api/src/lib.rs
> +++ b/rust/qemu-api/src/lib.rs
> @@ -26,6 +26,7 @@
>  pub mod module;
>  pub mod qdev;
>  pub mod qom;
> +pub mod rcu;
>  pub mod sysbus;
>  pub mod timer;
>  pub mod uninit;
> diff --git a/rust/qemu-api/src/rcu.rs b/rust/qemu-api/src/rcu.rs
> new file mode 100644
> index 000000000000..30d8b9e43967
> --- /dev/null
> +++ b/rust/qemu-api/src/rcu.rs
> @@ -0,0 +1,26 @@
> +// Copyright (C) 2025 Intel Corporation.
> +// Author(s): Zhao Liu <zhao1.liu@intel.com>
> +// SPDX-License-Identifier: GPL-2.0-or-later
> +
> +//! Bindings for `rcu_read_lock` and `rcu_read_unlock`.
> +//! More details about RCU in QEMU, please refer docs/devel/rcu.rst.
> +

How about a RAII guard type? E.g. an RCUGuard that runs
`rcu_read_unlock` on Drop.

Destructors are not guaranteed to run, nor guaranteed to run only once;
the former should only happen when things go badly wrong, e.g.
crashes/aborts. You can add a flag to the RCUGuard to make sure Drop
runs the unlock only once (since Drop takes &mut self rather than
ownership).
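
A minimal sketch of such a guard, reusing the bindings added by this
patch (the RcuReadGuard name and the once-only flag are just for
illustration, not a proposal for the final API):

```rust
use crate::bindings;

/// RAII guard for an RCU read-side critical section.
pub struct RcuReadGuard {
    unlocked: bool,
}

impl RcuReadGuard {
    pub fn new() -> Self {
        // SAFETY: no arguments and no return value; nesting is tracked on the C side.
        unsafe { bindings::rcu_read_lock() };
        RcuReadGuard { unlocked: false }
    }
}

impl Drop for RcuReadGuard {
    fn drop(&mut self) {
        // The flag guards against the unlock ever running twice.
        if !self.unlocked {
            // SAFETY: paired with the rcu_read_lock() call in new().
            unsafe { bindings::rcu_read_unlock() };
            self.unlocked = true;
        }
    }
}
```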

> +use crate::bindings;
> +
> +/// Used by a reader to inform the reclaimer that the reader is
> +/// entering an RCU read-side critical section.
> +pub fn rcu_read_lock() {
> +    // SAFETY: no return and no argument, everything is done at C side.
> +    unsafe { bindings::rcu_read_lock() }
> +}
> +
> +/// Used by a reader to inform the reclaimer that the reader is
> +/// exiting an RCU read-side critical section.  Note that RCU
> +/// read-side critical sections may be nested and/or overlapping.
> +pub fn rcu_read_unlock() {
> +    // SAFETY: no return and no argument, everything is done at C side.
> +    unsafe { bindings::rcu_read_unlock() }
> +}
> +
> +// FIXME: maybe we need rcu_read_lock_held() to check the rcu context,
> +// then make it possible to add assertion at any RCU critical section.

This would be less necessary with Drop, maybe.

> diff --git a/rust/qemu-api/wrapper.h b/rust/qemu-api/wrapper.h
> index 15a1b19847f2..ce0ac8d3f550 100644
> --- a/rust/qemu-api/wrapper.h
> +++ b/rust/qemu-api/wrapper.h
> @@ -69,3 +69,4 @@ typedef enum memory_order {
>  #include "qemu/timer.h"
>  #include "system/address-spaces.h"
>  #include "hw/char/pl011.h"
> +#include "qemu/rcu.h"
> --
> 2.34.1
>



* [RFC 00/26] rust/memory: Integrate the vm-memory API from rust-vmm
@ 2025-08-07 12:30 Zhao Liu
  2025-08-07 12:30 ` [RFC 01/26] rust/hpet: Fix the error caused by vm-memory Zhao Liu
                   ` (27 more replies)
  0 siblings, 28 replies; 58+ messages in thread
From: Zhao Liu @ 2025-08-07 12:30 UTC (permalink / raw)
  To: Paolo Bonzini, Peter Xu, David Hildenbrand,
	Philippe Mathieu-Daudé, Manos Pitsidianakis,
	Alex Bennée, Thomas Huth, Junjie Mao
  Cc: qemu-devel, qemu-rust, Dapeng Mi, Chuanxiao Dong, Zhao Liu

Hi,

This RFC series explores integrating the vm-memory API into QEMU's
rust/memory bindings.

Thanks to Paolo and Manos's many suggestions and feedback, I have
resolved many issues over the past few months, but there are still
some open issues that I would like to discuss.

This series finally provides the following safe interfaces in Rust:
 * AddressSpace::write in Rust <=> address_space_write in C
   - **but only** supports MEMTXATTRS_UNSPECIFIED.

 * AddressSpace::read in Rust <=> address_space_read_full in C
   - **but only** supports MEMTXATTRS_UNSPECIFIED.

 * AddressSpace::store in Rust <=> address_space_st{size} in C
   - **but only** supports MEMTXATTRS_UNSPECIFIED and native endian.

 * AddressSpace::load in Rust <=> address_space_ld{size} in C
   - **but only** supports MEMTXATTRS_UNSPECIFIED and native endian.
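
For example, usage would look roughly like this (just a sketch: the
ADDRESS_SPACE_MEMORY binding and the argument order follow the store()
example later in this letter, and the exact signatures may differ in
the final patches):

```rust
use qemu_api::memory::ADDRESS_SPACE_MEMORY;
use vm_memory::GuestAddress;

let addr = GuestAddress(0x1000);

// Typed store: native endianness, MEMTXATTRS_UNSPECIFIED implied.
assert!(ADDRESS_SPACE_MEMORY.store(addr, 0x1234_5678u32).is_ok());

// Bulk read of those bytes back into a buffer.
let mut buf = [0u8; 4];
assert!(ADDRESS_SPACE_MEMORY.read(addr, &mut buf).is_ok());
```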

And this series involves changes mainly to these three parts:
 * NEW QEMU memory API wrappers at C side.
 * Extra changes for vm-memory (downstream for now).
 * NEW QEMU memory bindings/APIs based on vm-memory at Rust side.

Although the number of line changes appears to be significant, more
than half of them are documentation and comments.

(Note: the latest vm-memory release, v0.16.2, doesn't contain Paolo's
 commit 5f59e29c3d30 ("guest_memory: let multiple regions slice one
 global bitmap"), so I have to pull vm-memory from GitHub directly.)

Thanks for your feedback!


Background
==========

The vm-memory crate's design documentation says:

"The vm-memory crate focuses on defining consumer side interfaces to
 access the physical memory of the VM. It does not define how the
 underlying VM memory provider is implemented. Lightweight VMMs like
 CrosVM and Firecracker can make assumptions about the structure of VM's
 physical memory and implement a lightweight backend to access it. For
 VMMs like Qemu, a high performance and full functionality backend may
 be implemented with less assumptions."

At present, in addition to the memory model abstractions it provides
(GuestMemoryRegion, GuestMemory, and GuestAddressSpace), vm-memory also
implements a simple mmap-based memory management backend for RAM
access.

However, for QEMU, a backend implementation based on vm-memory is more
complex: QEMU needs to consider not only MMIO and the IOMMU, but also
situations such as different endianness and memory attributes.

This series tries to stay as simple as possible, and leaves support for
different endianness/memory attributes as open issues.

But... wait, why is vm-memory necessary?

QEMU needs safe Rust bindings for memory access. Whether vm-memory is
used or not, there will be similar wrappers over AddressSpace/FlatView/
MemoryRegionSection, and safe bindings for translation and for memory
store/load/read/write.

Even if we don't use vm-memory, we would likely end up creating
something very similar to it.

Furthermore, many components in vm-memory are also a good source of
inspiration for enhancements on the QEMU Rust side.

So, why not give it a try?


Introduction
============

The core idea of this series is simple:
 * Implement the vm_memory::GuestMemoryRegion trait for
   MemoryRegionSection, to represent a (non-overlapping) memory region.

   The vm_memory::GuestMemoryRegion trait itself doesn't provide any
   interface to access memory, so the vm_memory::Bytes trait is also
   needed for MemoryRegionSection to access the memory region.

 * Implement the vm_memory::GuestMemory trait for FlatView, to manage
   guest memory regions (that is, MemoryRegionSections).

   Similarly, the vm_memory::Bytes trait is also needed for FlatView
   to provide methods to write/read/store/load memory.

 * Implement the vm_memory::GuestAddressSpace trait for AddressSpace,
   to provide a safe address space abstraction on the Rust side.


For the above three parts, the most critical piece is
MemoryRegionSection.

Currently, QEMU's memory API is built around MemoryRegion, and
MemoryRegionSection is mostly for internal use.

But the vm_memory::GuestMemoryRegion trait requires us to wrap unsafe C
bindings around MemoryRegionSection, so it's necessary to expose some
MemoryRegionSection-based C memory APIs. These are described in the
next section:


NEW QEMU memory API wrappers at C side
======================================

Around MemoryRegionSection, this series provides these interfaces:
 * Some straightforward wrappers over original C interfaces:
   - section_access_allowed
   - section_covers_region_addr
   - section_fuzz_dma_read
   - section_get_host_addr

 * Critical wrappers for the intermediate memory read/write steps
   (they're still simple wrappers over C functions):
   - section_rust_read_continue_step
   - section_rust_write_continue_step

 * And, special C helpers for memory load/store:
   - section_rust_load

   MemTxResult section_rust_load(MemoryRegionSection *section,
                                 hwaddr mr_offset, uint8_t *buf,
                                 MemTxAttrs attrs, hwaddr len);

   - section_rust_store

   MemTxResult section_rust_store(MemoryRegionSection *section,
                                  hwaddr mr_offset, const uint8_t *buf,
                                  MemTxAttrs attrs, hwaddr len);


   These two load/store helpers are quite different: compared with the
   implementations of address_space_ld{size}/address_space_st{size},
   they aren't bound to a specific access size (l, q, or w), and they
   transfer the value via a byte array (see the sketch below).

   This is because of the AtomicAccess bound in vm_memory::Bytes, which
   makes it difficult to convert an AtomicAccess value to u64. (For more
   details, please refer to the comments on Bytes::store/Bytes::load in
   patch 22 "rust/memory: Implement vm_memory::GuestMemoryRegion for
   MemoryRegionSection".)


Of course, some other wrappers are also needed for FlatView and
AddressSpace:
 * For FlatView:
   - flatview_ref
   - flatview_translate_section
   - flatview_unref

 * For AddressSpace:
   - address_space_lookup_section
   - address_space_memory
   - address_space_to_flatview

They're all simple wrappers.

However, while QEMU's native C memory API can support complex
conditions, such as different endian formats or different memory
attributes (especially for the memory write/read/store/load
interfaces), the Rust side for now limits its support to native
endianness and MEMTXATTRS_UNSPECIFIED only.

This limitation is tied to why and how vm-memory's API needs to be
adjusted for QEMU:


Extra changes for vm-memory (downstream for now)
================================================

(All the patches for vm-memory can be found in patch 10 "subprojects/
 vm-memory: Patch vm-memory for QEMU memory backend".)

As a minimum requirement, vm-memory still needs at least two changes:
 * the 0001.diff file under subprojects/packagefiles/vm-memory-0.16-rs:

   guest_memory: Add a marker tarit to implement Bytes<GuestAddress> for
   GuestMemory

   - This patch allows QEMU to customize its own Bytes trait
     implementation for FlatView, which makes it possible for the
     memory write/read path in Rust to mirror flatview_write/
     flatview_read on the C side.

     So this patch is straightforward and low risk.

 * the 0002.diff file under subprojects/packagefiles/vm-memory-0.16-rs:

   guest_memory: Add is_write argument for GuestMemory::try_access()

   - This patch is about how to extend vm-memory to support more
     complex cases.

     Paolo suggested implementing Bytes<(GuestAddress, MemTxAttrs)> for
     FlatView, but I found that tricky, since memory read/write
     ultimately relies on GuestMemory::try_access() to do the
     iteration, so try_access() needs to know more information.

     1) One option, as in 0002.diff, is simply to add more arguments,
        but this is not very flexible. It is difficult to extend to
        MemTxAttrs because vm-memory has no notion of MemTxAttrs at
        all. (But perhaps vm-memory could support more memory
        attributes?)

     2) Another option is to move try_access() into the Bytes trait, so
        that try_access() could accept a tuple such as (GuestAddress,
        MemTxAttrs) or (GuestAddress, bool, MemTxAttrs), where the
        boolean is is_write. My concern is that I'm not sure it's
        proper for Bytes to have a try_access() method, especially
        since nothing other than GuestMemory needs it.

   Therefore, this is the issue that blocks us from providing more
   flexible (or more complex) memory access interfaces on the Rust
   side.


NEW QEMU memory bindings/APIs based on vm-memory at Rust side
=============================================================

At least now we can have safe bindings for the most basic (and minimal)
memory access on the Rust side: no special memory attributes, and
native endianness only.

Speaking of endianness, this is another open issue. The current
implementation only supports native endianness, i.e., the endianness of
the target. Users can query the current endianness and adjust values
themselves, for example:

```
use qemu_api::memory::{ADDRESS_SPACE_MEMORY, target_is_big_endian};
use vm_memory::GuestAddress;

let addr = GuestAddress(0x123438000);
let val: u32 = 5;
let val_end = if target_is_big_endian() {
    val.to_be()
} else {
    val.to_le()
};

assert!(ADDRESS_SPACE_MEMORY.store(addr, val_end).is_ok());
```

This works, but ideally, in Rust, the type itself should convey enough
information.

Unfortunately, Bytes::store/load accepts any AtomicAccess, which
carries no endianness information (even though vm-memory does have
endian-aware types such as Le32/Be32; see the illustration below).
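
For reference, vm-memory's endian-aware wrappers show what "the type
tells enough information" looks like; a tiny illustration (whether
these can be plugged into Bytes::store/load is exactly the open
question):

```rust
use vm_memory::{Be32, Le32};

// The wrapper type, not the call site, fixes the byte order:
// Le32 keeps its value as little-endian bytes, Be32 as big-endian bytes.
let le = Le32::from(5u32);
let be = Be32::from(5u32);
assert_eq!(u32::from(le), 5);
assert_eq!(u32::from(be), 5);
```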

For broader endian support, I think there are two options:
 1) Implement Bytes<(GuestAddress, bool, MemTxAttrs, DeviceEndian)>,
    and ask the C side to handle the endianness.

 2) Add endian information to AtomicAccess.

Option 1 seems easier, but option 2 seems more reasonable: only
Bytes::store/load cares about endianness and no other method needs it,
so making the entire Bytes implementation endian-aware seems a bit
overkill.


Open issues
===========

Alright, the open issues are discussed in the sections above. Let me
summarize:

* About how to support more MemTxAttrs, please see the section "Extra
  changes for vm-memory (downstream for now)".

* About how to support more endian formats, please see the section
  "NEW QEMU memory bindings/APIs based on vm-memory at Rust side".


Thanks and Best Regards,
Zhao
---
Zhao Liu (26):
  rust/hpet: Fix the error caused by vm-memory
  rust/cargo: Add the support for vm-memory
  subprojects: Add thiserror-impl crate
  subprojects: Add thiserror crate
  subprojects: Add winapi-i686-pc-windows-gnu crate
  subprojects: Add winapi-x86_64-pc-windows-gnu crate
  subprojects: Add winapi crate
  subprojects: Add vm-memory crate
  rust: Add vm-memory in meson
  subprojects/vm-memory: Patch vm-memory for QEMU memory backend
  rust/cargo: Specify the patched vm-memory crate
  rcu: Make rcu_read_lock & rcu_read_unlock not inline
  rust: Add RCU bindings
  memory: Expose interfaces about Flatview reference count to Rust side
  memory: Rename address_space_lookup_region and expose it to Rust side
  memory: Make flatview_do_translate() return a pointer to
    MemoryRegionSection
  memory: Add a translation helper to return MemoryRegionSection
  memory: Rename flatview_access_allowed() to
    memory_region_access_allowed()
  memory: Add MemoryRegionSection based misc helpers
  memory: Add wrappers of intermediate steps for read/write
  memory: Add store/load interfaces for Rust side
  rust/memory: Implement vm_memory::GuestMemoryRegion for
    MemoryRegionSection
  rust/memory: Implement vm_memory::GuestMemory for FlatView
  rust/memory: Provide AddressSpace bindings
  rust/memory: Add binding to check target endian
  rust/hpet: Use safe binding to access address space

 include/qemu/rcu.h                            |  45 +-
 include/system/memory.h                       | 313 +++++-
 rust/Cargo.lock                               |  51 +
 rust/Cargo.toml                               |   3 +
 rust/hw/timer/hpet/src/device.rs              |  29 +-
 rust/meson.build                              |   2 +
 rust/qemu-api/Cargo.toml                      |  11 +
 rust/qemu-api/meson.build                     |   5 +-
 rust/qemu-api/src/lib.rs                      |   1 +
 rust/qemu-api/src/memory.rs                   | 971 +++++++++++++++++-
 rust/qemu-api/src/rcu.rs                      |  26 +
 rust/qemu-api/wrapper.h                       |   2 +
 scripts/archive-source.sh                     |   4 +-
 scripts/make-release                          |   4 +-
 subprojects/.gitignore                        |   6 +
 .../packagefiles/thiserror-1-rs/meson.build   |  23 +
 .../thiserror-impl-1-rs/meson.build           |  41 +
 .../packagefiles/vm-memory-0.16-rs/0001.diff  |  81 ++
 .../packagefiles/vm-memory-0.16-rs/0002.diff  | 111 ++
 .../vm-memory-0.16-rs/meson.build             |  35 +
 .../packagefiles/winapi-0.3-rs/meson.build    |  46 +
 .../meson.build                               |  20 +
 .../meson.build                               |  20 +
 subprojects/thiserror-1-rs.wrap               |  10 +
 subprojects/thiserror-impl-1-rs.wrap          |  10 +
 subprojects/vm-memory-0.16-rs.wrap            |  14 +
 subprojects/winapi-0.3-rs.wrap                |  10 +
 .../winapi-i686-pc-windows-gnu-0.4-rs.wrap    |  10 +
 .../winapi-x86_64-pc-windows-gnu-0.4-rs.wrap  |  10 +
 system/memory-internal.h                      |   1 -
 system/memory.c                               |   7 +-
 system/physmem.c                              | 200 +++-
 util/rcu.c                                    |  43 +
 33 files changed, 2040 insertions(+), 125 deletions(-)
 create mode 100644 rust/qemu-api/src/rcu.rs
 create mode 100644 subprojects/packagefiles/thiserror-1-rs/meson.build
 create mode 100644 subprojects/packagefiles/thiserror-impl-1-rs/meson.build
 create mode 100644 subprojects/packagefiles/vm-memory-0.16-rs/0001.diff
 create mode 100644 subprojects/packagefiles/vm-memory-0.16-rs/0002.diff
 create mode 100644 subprojects/packagefiles/vm-memory-0.16-rs/meson.build
 create mode 100644 subprojects/packagefiles/winapi-0.3-rs/meson.build
 create mode 100644 subprojects/packagefiles/winapi-i686-pc-windows-gnu-0.4-rs/meson.build
 create mode 100644 subprojects/packagefiles/winapi-x86_64-pc-windows-gnu-0.4-rs/meson.build
 create mode 100644 subprojects/thiserror-1-rs.wrap
 create mode 100644 subprojects/thiserror-impl-1-rs.wrap
 create mode 100644 subprojects/vm-memory-0.16-rs.wrap
 create mode 100644 subprojects/winapi-0.3-rs.wrap
 create mode 100644 subprojects/winapi-i686-pc-windows-gnu-0.4-rs.wrap
 create mode 100644 subprojects/winapi-x86_64-pc-windows-gnu-0.4-rs.wrap

-- 
2.34.1




* [RFC 01/26] rust/hpet: Fix the error caused by vm-memory
  2025-08-07 12:30 [RFC 00/26] rust/memory: Integrate the vm-memory API from rust-vmm Zhao Liu
@ 2025-08-07 12:30 ` Zhao Liu
  2025-08-07 13:52   ` Paolo Bonzini
  2025-08-07 12:30 ` [RFC 02/26] rust/cargo: Add the support for vm-memory Zhao Liu
                   ` (26 subsequent siblings)
  27 siblings, 1 reply; 58+ messages in thread
From: Zhao Liu @ 2025-08-07 12:30 UTC (permalink / raw)
  To: Paolo Bonzini, Peter Xu, David Hildenbrand,
	Philippe Mathieu-Daudé, Manos Pitsidianakis,
	Alex Bennée, Thomas Huth, Junjie Mao
  Cc: qemu-devel, qemu-rust, Dapeng Mi, Chuanxiao Dong, Zhao Liu

error[E0283]: type annotations needed
   --> hw/timer/hpet/src/device.rs:884:55
    |
884 |         self.num_timers == self.num_timers_save.get().into()
    |                         --                            ^^^^
    |                         |
    |                         type must be known at this point
    |
    = note: multiple `impl`s satisfying `usize: PartialEq<_>` found in the following crates: `core`, `vm_memory`:
            - impl PartialEq<vm_memory::endian::BeSize> for usize;
            - impl PartialEq<vm_memory::endian::LeSize> for usize;
            - impl<host> PartialEq for usize
              where the constant `host` has type `bool`;
help: try using a fully qualified path to specify the expected types
    |
884 |         self.num_timers == <u8 as Into<T>>::into(self.num_timers_save.get())
    |                            ++++++++++++++++++++++                          ~

For more information about this error, try `rustc --explain E0283`.
error: could not compile `hpet` (lib) due to 1 previous error

Signed-off-by: Zhao Liu <zhao1.liu@intel.com>
---
 rust/hw/timer/hpet/src/device.rs | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/rust/hw/timer/hpet/src/device.rs b/rust/hw/timer/hpet/src/device.rs
index acf7251029e9..9fd75bf096e4 100644
--- a/rust/hw/timer/hpet/src/device.rs
+++ b/rust/hw/timer/hpet/src/device.rs
@@ -881,7 +881,7 @@ fn is_offset_needed(&self) -> bool {
     }
 
     fn validate_num_timers(&self, _version_id: u8) -> bool {
-        self.num_timers == self.num_timers_save.get().into()
+        self.num_timers == Into::<usize>::into(self.num_timers_save.get())
     }
 }
 
-- 
2.34.1




* [RFC 02/26] rust/cargo: Add the support for vm-memory
  2025-08-07 12:30 [RFC 00/26] rust/memory: Integrate the vm-memory API from rust-vmm Zhao Liu
  2025-08-07 12:30 ` [RFC 01/26] rust/hpet: Fix the error caused by vm-memory Zhao Liu
@ 2025-08-07 12:30 ` Zhao Liu
  2025-08-07 12:30 ` [RFC 03/26] subprojects: Add thiserror-impl crate Zhao Liu
                   ` (25 subsequent siblings)
  27 siblings, 0 replies; 58+ messages in thread
From: Zhao Liu @ 2025-08-07 12:30 UTC (permalink / raw)
  To: Paolo Bonzini, Peter Xu, David Hildenbrand,
	Philippe Mathieu-Daudé, Manos Pitsidianakis,
	Alex Bennée, Thomas Huth, Junjie Mao
  Cc: qemu-devel, qemu-rust, Dapeng Mi, Chuanxiao Dong, Zhao Liu

Signed-off-by: Zhao Liu <zhao1.liu@intel.com>
---
 rust/Cargo.lock          | 52 ++++++++++++++++++++++++++++++++++++++++
 rust/qemu-api/Cargo.toml | 11 +++++++++
 2 files changed, 63 insertions(+)

diff --git a/rust/Cargo.lock b/rust/Cargo.lock
index b785c718f315..7aedae239f66 100644
--- a/rust/Cargo.lock
+++ b/rust/Cargo.lock
@@ -133,6 +133,7 @@ dependencies = [
  "foreign",
  "libc",
  "qemu_api_macros",
+ "vm-memory",
 ]
 
 [[package]]
@@ -164,6 +165,26 @@ dependencies = [
  "unicode-ident",
 ]
 
+[[package]]
+name = "thiserror"
+version = "1.0.65"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "5d11abd9594d9b38965ef50805c5e469ca9cc6f197f883f717e0269a3057b3d5"
+dependencies = [
+ "thiserror-impl",
+]
+
+[[package]]
+name = "thiserror-impl"
+version = "1.0.65"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "ae71770322cbd277e69d762a16c444af02aa0575ac0d174f0b9562d3b37f8602"
+dependencies = [
+ "proc-macro2",
+ "quote",
+ "syn",
+]
+
 [[package]]
 name = "unicode-ident"
 version = "1.0.12"
@@ -175,3 +196,34 @@ name = "version_check"
 version = "0.9.4"
 source = "registry+https://github.com/rust-lang/crates.io-index"
 checksum = "49874b5167b65d7193b8aba1567f5c7d93d001cafc34600cee003eda787e483f"
+
+[[package]]
+name = "vm-memory"
+version = "0.16.1"
+source = "git+https://github.com/rust-vmm/vm-memory.git?rev=5eb996a060d7ca3844cbd2f10b1d048c0c91942f#5eb996a060d7ca3844cbd2f10b1d048c0c91942f"
+dependencies = [
+ "thiserror",
+ "winapi",
+]
+
+[[package]]
+name = "winapi"
+version = "0.3.9"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "5c839a674fcd7a98952e593242ea400abe93992746761e38641405d28b00f419"
+dependencies = [
+ "winapi-i686-pc-windows-gnu",
+ "winapi-x86_64-pc-windows-gnu",
+]
+
+[[package]]
+name = "winapi-i686-pc-windows-gnu"
+version = "0.4.0"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "ac3b87c63620426dd9b991e5ce0329eff545bccbbb34f3be09ff6fb6ab51b7b6"
+
+[[package]]
+name = "winapi-x86_64-pc-windows-gnu"
+version = "0.4.0"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "712e227841d057c1ee1cd2fb22fa7e5a5461ae8e48fa2ca79ec42cfc1931183f"
diff --git a/rust/qemu-api/Cargo.toml b/rust/qemu-api/Cargo.toml
index db7000dee441..bbed3d2de514 100644
--- a/rust/qemu-api/Cargo.toml
+++ b/rust/qemu-api/Cargo.toml
@@ -19,6 +19,17 @@ anyhow = "~1.0"
 libc = "0.2.162"
 foreign = "~0.3.1"
 
+[dependencies.vm-memory]
+# The latest v0.16.2 didn't contain Paolo's commit 5f59e29c3d30
+# ("guest_memory: let multiple regions slice one global bitmap").
+# Once a new release has that change, switch to crates.io.
+git = "https://github.com/rust-vmm/vm-memory.git"
+rev = "5eb996a060d7ca3844cbd2f10b1d048c0c91942f"
+# Note "rawfd" (as the only default feature) is disabled by default in
+# meson. It cause compilation failure on Windows and fortunately, we
+# don't need it either.
+default-features = false
+
 [features]
 default = ["debug_cell"]
 allocator = []
-- 
2.34.1




* [RFC 03/26] subprojects: Add thiserror-impl crate
  2025-08-07 12:30 [RFC 00/26] rust/memory: Integrate the vm-memory API from rust-vmm Zhao Liu
  2025-08-07 12:30 ` [RFC 01/26] rust/hpet: Fix the error caused by vm-memory Zhao Liu
  2025-08-07 12:30 ` [RFC 02/26] rust/cargo: Add the support for vm-memory Zhao Liu
@ 2025-08-07 12:30 ` Zhao Liu
  2025-08-07 12:30 ` [RFC 04/26] subprojects: Add thiserror crate Zhao Liu
                   ` (24 subsequent siblings)
  27 siblings, 0 replies; 58+ messages in thread
From: Zhao Liu @ 2025-08-07 12:30 UTC (permalink / raw)
  To: Paolo Bonzini, Peter Xu, David Hildenbrand,
	Philippe Mathieu-Daudé, Manos Pitsidianakis,
	Alex Bennée, Thomas Huth, Junjie Mao
  Cc: qemu-devel, qemu-rust, Dapeng Mi, Chuanxiao Dong, Zhao Liu

Signed-off-by: Zhao Liu <zhao1.liu@intel.com>
---
 scripts/archive-source.sh                     |  2 +-
 scripts/make-release                          |  2 +-
 subprojects/.gitignore                        |  1 +
 .../thiserror-impl-1-rs/meson.build           | 41 +++++++++++++++++++
 subprojects/thiserror-impl-1-rs.wrap          | 10 +++++
 5 files changed, 54 insertions(+), 2 deletions(-)
 create mode 100644 subprojects/packagefiles/thiserror-impl-1-rs/meson.build
 create mode 100644 subprojects/thiserror-impl-1-rs.wrap

diff --git a/scripts/archive-source.sh b/scripts/archive-source.sh
index 035828c532e7..8d8a0d37ecdc 100755
--- a/scripts/archive-source.sh
+++ b/scripts/archive-source.sh
@@ -31,7 +31,7 @@ subprojects="keycodemapdb libvfio-user berkeley-softfloat-3
   bilge-impl-0.2-rs either-1-rs foreign-0.3-rs itertools-0.11-rs
   libc-0.2-rs proc-macro2-1-rs
   proc-macro-error-1-rs proc-macro-error-attr-1-rs quote-1-rs
-  syn-2-rs unicode-ident-1-rs"
+  syn-2-rs thiserror-impl-1-rs unicode-ident-1-rs"
 sub_deinit=""
 
 function cleanup() {
diff --git a/scripts/make-release b/scripts/make-release
index 4509a9fabf50..3d3d8d4a51bc 100755
--- a/scripts/make-release
+++ b/scripts/make-release
@@ -44,7 +44,7 @@ SUBPROJECTS="libvfio-user keycodemapdb berkeley-softfloat-3
   bilge-impl-0.2-rs either-1-rs foreign-0.3-rs itertools-0.11-rs
   libc-0.2-rs proc-macro2-1-rs
   proc-macro-error-1-rs proc-macro-error-attr-1-rs quote-1-rs
-  syn-2-rs unicode-ident-1-rs"
+  syn-2-rs thiserror-impl-1-rs unicode-ident-1-rs"
 
 src="$1"
 version="$2"
diff --git a/subprojects/.gitignore b/subprojects/.gitignore
index f4281934ce11..e6ea570a2286 100644
--- a/subprojects/.gitignore
+++ b/subprojects/.gitignore
@@ -19,4 +19,5 @@
 /proc-macro2-1.0.84
 /quote-1.0.36
 /syn-2.0.66
+/thiserror-impl-1.0.65
 /unicode-ident-1.0.12
diff --git a/subprojects/packagefiles/thiserror-impl-1-rs/meson.build b/subprojects/packagefiles/thiserror-impl-1-rs/meson.build
new file mode 100644
index 000000000000..cc5546264035
--- /dev/null
+++ b/subprojects/packagefiles/thiserror-impl-1-rs/meson.build
@@ -0,0 +1,41 @@
+project('thiserror-impl-1-rs', 'rust',
+ meson_version: '>=1.5.0',
+ version: '1.0.65',
+ license: 'MIT OR Apache-2.0',
+ default_options: [])
+
+subproject('quote-1-rs', required: true)
+subproject('syn-2-rs', required: true)
+subproject('proc-macro2-1-rs', required: true)
+
+quote_dep = dependency('quote-1-rs', native: true)
+syn_dep = dependency('syn-2-rs', native: true)
+proc_macro2_dep = dependency('proc-macro2-1-rs', native: true)
+
+rust = import('rust')
+
+_thiserror_impl_rs = rust.proc_macro(
+  'thiserror_impl',
+  files('src/lib.rs'),
+  override_options: ['rust_std=2021', 'build.rust_std=2021'],
+  rust_args: [
+    '--cfg', 'feature="proc-macro"',
+    '--cfg', 'feature="clone-impls"',
+    '--cfg', 'feature="derive"',
+    '--cfg', 'feature="extra-traits"',
+    '--cfg', 'feature="full"',
+    '--cfg', 'feature="parsing"',
+    '--cfg', 'feature="printing"',
+  ],
+  dependencies: [
+    quote_dep,
+    syn_dep,
+    proc_macro2_dep
+  ],
+)
+
+thiserror_impl_dep = declare_dependency(
+  link_with: _thiserror_impl_rs,
+)
+
+meson.override_dependency('thiserror-impl-1-rs', thiserror_impl_dep)
diff --git a/subprojects/thiserror-impl-1-rs.wrap b/subprojects/thiserror-impl-1-rs.wrap
new file mode 100644
index 000000000000..0f2ca85b8590
--- /dev/null
+++ b/subprojects/thiserror-impl-1-rs.wrap
@@ -0,0 +1,10 @@
+[wrap-file]
+directory = thiserror-impl-1.0.65
+source_url = https://crates.io/api/v1/crates/thiserror-impl/1.0.65/download
+source_filename = thiserror-impl-1.0.65.tar.gz
+source_hash = ae71770322cbd277e69d762a16c444af02aa0575ac0d174f0b9562d3b37f8602
+#method = cargo
+patch_directory = thiserror-impl-1-rs
+
+# bump this version number on every change to meson.build or the patches:
+# v2
-- 
2.34.1




* [RFC 04/26] subprojects: Add thiserror crate
  2025-08-07 12:30 [RFC 00/26] rust/memory: Integrate the vm-memory API from rust-vmm Zhao Liu
                   ` (2 preceding siblings ...)
  2025-08-07 12:30 ` [RFC 03/26] subprojects: Add thiserror-impl crate Zhao Liu
@ 2025-08-07 12:30 ` Zhao Liu
  2025-08-07 12:30 ` [RFC 05/26] subprojects: Add winapi-i686-pc-windows-gnu crate Zhao Liu
                   ` (23 subsequent siblings)
  27 siblings, 0 replies; 58+ messages in thread
From: Zhao Liu @ 2025-08-07 12:30 UTC (permalink / raw)
  To: Paolo Bonzini, Peter Xu, David Hildenbrand,
	Philippe Mathieu-Daudé, Manos Pitsidianakis,
	Alex Bennée, Thomas Huth, Junjie Mao
  Cc: qemu-devel, qemu-rust, Dapeng Mi, Chuanxiao Dong, Zhao Liu

Signed-off-by: Zhao Liu <zhao1.liu@intel.com>
---
 scripts/archive-source.sh                     |  2 +-
 scripts/make-release                          |  2 +-
 subprojects/.gitignore                        |  1 +
 .../packagefiles/thiserror-1-rs/meson.build   | 23 +++++++++++++++++++
 subprojects/thiserror-1-rs.wrap               | 10 ++++++++
 5 files changed, 36 insertions(+), 2 deletions(-)
 create mode 100644 subprojects/packagefiles/thiserror-1-rs/meson.build
 create mode 100644 subprojects/thiserror-1-rs.wrap

diff --git a/scripts/archive-source.sh b/scripts/archive-source.sh
index 8d8a0d37ecdc..3ae064f65263 100755
--- a/scripts/archive-source.sh
+++ b/scripts/archive-source.sh
@@ -31,7 +31,7 @@ subprojects="keycodemapdb libvfio-user berkeley-softfloat-3
   bilge-impl-0.2-rs either-1-rs foreign-0.3-rs itertools-0.11-rs
   libc-0.2-rs proc-macro2-1-rs
   proc-macro-error-1-rs proc-macro-error-attr-1-rs quote-1-rs
-  syn-2-rs thiserror-impl-1-rs unicode-ident-1-rs"
+  syn-2-rs thiserror-1-rs thiserror-impl-1-rs unicode-ident-1-rs"
 sub_deinit=""
 
 function cleanup() {
diff --git a/scripts/make-release b/scripts/make-release
index 3d3d8d4a51bc..73a14c12bdeb 100755
--- a/scripts/make-release
+++ b/scripts/make-release
@@ -44,7 +44,7 @@ SUBPROJECTS="libvfio-user keycodemapdb berkeley-softfloat-3
   bilge-impl-0.2-rs either-1-rs foreign-0.3-rs itertools-0.11-rs
   libc-0.2-rs proc-macro2-1-rs
   proc-macro-error-1-rs proc-macro-error-attr-1-rs quote-1-rs
-  syn-2-rs thiserror-impl-1-rs unicode-ident-1-rs"
+  syn-2-rs thiserror-1-rs thiserror-impl-1-rs unicode-ident-1-rs"
 
 src="$1"
 version="$2"
diff --git a/subprojects/.gitignore b/subprojects/.gitignore
index e6ea570a2286..3b09ab42da08 100644
--- a/subprojects/.gitignore
+++ b/subprojects/.gitignore
@@ -19,5 +19,6 @@
 /proc-macro2-1.0.84
 /quote-1.0.36
 /syn-2.0.66
+/thiserror-1.0.65
 /thiserror-impl-1.0.65
 /unicode-ident-1.0.12
diff --git a/subprojects/packagefiles/thiserror-1-rs/meson.build b/subprojects/packagefiles/thiserror-1-rs/meson.build
new file mode 100644
index 000000000000..bfaf2f8d3eb8
--- /dev/null
+++ b/subprojects/packagefiles/thiserror-1-rs/meson.build
@@ -0,0 +1,23 @@
+project('thiserror-1-rs', 'rust',
+  meson_version: '>=1.5.0',
+  version: '1.0.65',
+  license: 'MIT OR Apache-2.0',
+  default_options: [])
+
+subproject('thiserror-impl-1-rs', required: true)
+thiserror_impl_rs = dependency('thiserror-impl-1-rs')
+
+_thiserror_rs = static_library(
+  'thiserror',
+  files('src/lib.rs'),
+  gnu_symbol_visibility: 'hidden',
+  override_options: ['rust_std=2021', 'build.rust_std=2021'],
+  rust_abi: 'rust',
+  dependencies: [thiserror_impl_rs],
+)
+
+thiserror_dep = declare_dependency(
+  link_with: _thiserror_rs,
+)
+
+meson.override_dependency('thiserror-1-rs', thiserror_dep)
diff --git a/subprojects/thiserror-1-rs.wrap b/subprojects/thiserror-1-rs.wrap
new file mode 100644
index 000000000000..0f9303bebf97
--- /dev/null
+++ b/subprojects/thiserror-1-rs.wrap
@@ -0,0 +1,10 @@
+[wrap-file]
+directory = thiserror-1.0.65
+source_url = https://crates.io/api/v1/crates/thiserror/1.0.65/download
+source_filename = thiserror-1.0.65.tar.gz
+source_hash = 5d11abd9594d9b38965ef50805c5e469ca9cc6f197f883f717e0269a3057b3d5
+#method = cargo
+patch_directory = thiserror-1-rs
+
+# bump this version number on every change to meson.build or the patches:
+# v2
-- 
2.34.1




* [RFC 05/26] subprojects: Add winapi-i686-pc-windows-gnu crate
  2025-08-07 12:30 [RFC 00/26] rust/memory: Integrate the vm-memory API from rust-vmm Zhao Liu
                   ` (3 preceding siblings ...)
  2025-08-07 12:30 ` [RFC 04/26] subprojects: Add thiserror crate Zhao Liu
@ 2025-08-07 12:30 ` Zhao Liu
  2025-08-07 12:30 ` [RFC 06/26] subprojects: Add winapi-x86_64-pc-windows-gnu crate Zhao Liu
                   ` (22 subsequent siblings)
  27 siblings, 0 replies; 58+ messages in thread
From: Zhao Liu @ 2025-08-07 12:30 UTC (permalink / raw)
  To: Paolo Bonzini, Peter Xu, David Hildenbrand,
	Philippe Mathieu-Daudé, Manos Pitsidianakis,
	Alex Bennée, Thomas Huth, Junjie Mao
  Cc: qemu-devel, qemu-rust, Dapeng Mi, Chuanxiao Dong, Zhao Liu

Signed-off-by: Zhao Liu <zhao1.liu@intel.com>
---
 scripts/archive-source.sh                     |  3 ++-
 scripts/make-release                          |  3 ++-
 subprojects/.gitignore                        |  1 +
 .../meson.build                               | 20 +++++++++++++++++++
 .../winapi-i686-pc-windows-gnu-0.4-rs.wrap    | 10 ++++++++++
 5 files changed, 35 insertions(+), 2 deletions(-)
 create mode 100644 subprojects/packagefiles/winapi-i686-pc-windows-gnu-0.4-rs/meson.build
 create mode 100644 subprojects/winapi-i686-pc-windows-gnu-0.4-rs.wrap

diff --git a/scripts/archive-source.sh b/scripts/archive-source.sh
index 3ae064f65263..2dff5d3d89fe 100755
--- a/scripts/archive-source.sh
+++ b/scripts/archive-source.sh
@@ -31,7 +31,8 @@ subprojects="keycodemapdb libvfio-user berkeley-softfloat-3
   bilge-impl-0.2-rs either-1-rs foreign-0.3-rs itertools-0.11-rs
   libc-0.2-rs proc-macro2-1-rs
   proc-macro-error-1-rs proc-macro-error-attr-1-rs quote-1-rs
-  syn-2-rs thiserror-1-rs thiserror-impl-1-rs unicode-ident-1-rs"
+  syn-2-rs thiserror-1-rs thiserror-impl-1-rs unicode-ident-1-rs
+  winapi-i686-pc-windows-gnu-0.4-rs"
 sub_deinit=""
 
 function cleanup() {
diff --git a/scripts/make-release b/scripts/make-release
index 73a14c12bdeb..f7a1481f856a 100755
--- a/scripts/make-release
+++ b/scripts/make-release
@@ -44,7 +44,8 @@ SUBPROJECTS="libvfio-user keycodemapdb berkeley-softfloat-3
   bilge-impl-0.2-rs either-1-rs foreign-0.3-rs itertools-0.11-rs
   libc-0.2-rs proc-macro2-1-rs
   proc-macro-error-1-rs proc-macro-error-attr-1-rs quote-1-rs
-  syn-2-rs thiserror-1-rs thiserror-impl-1-rs unicode-ident-1-rs"
+  syn-2-rs thiserror-1-rs thiserror-impl-1-rs unicode-ident-1-rs
+  winapi-i686-pc-windows-gnu-0.4-rs"
 
 src="$1"
 version="$2"
diff --git a/subprojects/.gitignore b/subprojects/.gitignore
index 3b09ab42da08..838409353fca 100644
--- a/subprojects/.gitignore
+++ b/subprojects/.gitignore
@@ -22,3 +22,4 @@
 /thiserror-1.0.65
 /thiserror-impl-1.0.65
 /unicode-ident-1.0.12
+/winapi-i686-pc-windows-gnu-0.4.0
diff --git a/subprojects/packagefiles/winapi-i686-pc-windows-gnu-0.4-rs/meson.build b/subprojects/packagefiles/winapi-i686-pc-windows-gnu-0.4-rs/meson.build
new file mode 100644
index 000000000000..5ae1b87403d2
--- /dev/null
+++ b/subprojects/packagefiles/winapi-i686-pc-windows-gnu-0.4-rs/meson.build
@@ -0,0 +1,20 @@
+project('winapi-i686-pc-windows-gnu-0.4-rs', 'rust',
+ meson_version: '>=1.5.0',
+ version: '0.4.0',
+ license: 'MIT OR Apache-2.0',
+ default_options: [])
+
+lib = static_library(
+  'winapi-i686-pc-windows-gnu',
+  'src/lib.rs',
+  override_options : ['rust_std=2021', 'build.rust_std=2021'],
+  rust_abi : 'rust',
+  rust_args: ['--cap-lints', 'allow'],
+  dependencies: [thiserror_rs],
+)
+
+dep = declare_dependency(
+  link_with : [lib],
+)
+
+meson.override_dependency('winapi-i686-pc-windows-gnu-0.4-rs', dep)
diff --git a/subprojects/winapi-i686-pc-windows-gnu-0.4-rs.wrap b/subprojects/winapi-i686-pc-windows-gnu-0.4-rs.wrap
new file mode 100644
index 000000000000..8ec2f2351d9e
--- /dev/null
+++ b/subprojects/winapi-i686-pc-windows-gnu-0.4-rs.wrap
@@ -0,0 +1,10 @@
+[wrap-file]
+directory = winapi-i686-pc-windows-gnu-0.4.0
+source_url = https://crates.io/api/v1/crates/winapi-i686-pc-windows-gnu/0.4.0/download
+source_filename = winapi-i686-pc-windows-gnu-0.4.0.tar.gz
+source_hash = ac3b87c63620426dd9b991e5ce0329eff545bccbbb34f3be09ff6fb6ab51b7b6
+#method = cargo
+patch_directory = winapi-i686-pc-windows-gnu-0.4-rs
+
+# bump this version number on every change to meson.build or the patches:
+# v2
-- 
2.34.1




* [RFC 06/26] subprojects: Add winapi-x86_64-pc-windows-gnu crate
  2025-08-07 12:30 [RFC 00/26] rust/memory: Integrate the vm-memory API from rust-vmm Zhao Liu
                   ` (4 preceding siblings ...)
  2025-08-07 12:30 ` [RFC 05/26] subprojects: Add winapi-i686-pc-windows-gnu crate Zhao Liu
@ 2025-08-07 12:30 ` Zhao Liu
  2025-08-07 12:30 ` [RFC 07/26] subprojects: Add winapi crate Zhao Liu
                   ` (21 subsequent siblings)
  27 siblings, 0 replies; 58+ messages in thread
From: Zhao Liu @ 2025-08-07 12:30 UTC (permalink / raw)
  To: Paolo Bonzini, Peter Xu, David Hildenbrand,
	Philippe Mathieu-Daudé, Manos Pitsidianakis,
	Alex Bennée, Thomas Huth, Junjie Mao
  Cc: qemu-devel, qemu-rust, Dapeng Mi, Chuanxiao Dong, Zhao Liu

Signed-off-by: Zhao Liu <zhao1.liu@intel.com>
---
 scripts/archive-source.sh                     |  2 +-
 scripts/make-release                          |  2 +-
 subprojects/.gitignore                        |  1 +
 .../meson.build                               | 20 +++++++++++++++++++
 .../winapi-x86_64-pc-windows-gnu-0.4-rs.wrap  | 10 ++++++++++
 5 files changed, 33 insertions(+), 2 deletions(-)
 create mode 100644 subprojects/packagefiles/winapi-x86_64-pc-windows-gnu-0.4-rs/meson.build
 create mode 100644 subprojects/winapi-x86_64-pc-windows-gnu-0.4-rs.wrap

diff --git a/scripts/archive-source.sh b/scripts/archive-source.sh
index 2dff5d3d89fe..4caf6078f1ac 100755
--- a/scripts/archive-source.sh
+++ b/scripts/archive-source.sh
@@ -32,7 +32,7 @@ subprojects="keycodemapdb libvfio-user berkeley-softfloat-3
   libc-0.2-rs proc-macro2-1-rs
   proc-macro-error-1-rs proc-macro-error-attr-1-rs quote-1-rs
   syn-2-rs thiserror-1-rs thiserror-impl-1-rs unicode-ident-1-rs
-  winapi-i686-pc-windows-gnu-0.4-rs"
+  winapi-i686-pc-windows-gnu-0.4-rs winapi-x86_64-pc-windows-gnu-0.4-rs"
 sub_deinit=""
 
 function cleanup() {
diff --git a/scripts/make-release b/scripts/make-release
index f7a1481f856a..eb8b2446ad3a 100755
--- a/scripts/make-release
+++ b/scripts/make-release
@@ -45,7 +45,7 @@ SUBPROJECTS="libvfio-user keycodemapdb berkeley-softfloat-3
   libc-0.2-rs proc-macro2-1-rs
   proc-macro-error-1-rs proc-macro-error-attr-1-rs quote-1-rs
   syn-2-rs thiserror-1-rs thiserror-impl-1-rs unicode-ident-1-rs
-  winapi-i686-pc-windows-gnu-0.4-rs"
+  winapi-i686-pc-windows-gnu-0.4-rs winapi-x86_64-pc-windows-gnu-0.4-rs"
 
 src="$1"
 version="$2"
diff --git a/subprojects/.gitignore b/subprojects/.gitignore
index 838409353fca..ed51f2012e2c 100644
--- a/subprojects/.gitignore
+++ b/subprojects/.gitignore
@@ -23,3 +23,4 @@
 /thiserror-impl-1.0.65
 /unicode-ident-1.0.12
 /winapi-i686-pc-windows-gnu-0.4.0
+/winapi-x86_64-pc-windows-gnu-0.4.0
diff --git a/subprojects/packagefiles/winapi-x86_64-pc-windows-gnu-0.4-rs/meson.build b/subprojects/packagefiles/winapi-x86_64-pc-windows-gnu-0.4-rs/meson.build
new file mode 100644
index 000000000000..6b06e1d2810e
--- /dev/null
+++ b/subprojects/packagefiles/winapi-x86_64-pc-windows-gnu-0.4-rs/meson.build
@@ -0,0 +1,20 @@
+project('winapi-x86_64-pc-windows-gnu-0.4-rs', 'rust',
+ meson_version: '>=1.5.0',
+ version: '0.4.0',
+ license: 'MIT OR Apache-2.0',
+ default_options: [])
+
+lib = static_library(
+  'winapi-x86_64-pc-windows-gnu',
+  'src/lib.rs',
+  override_options : ['rust_std=2021', 'build.rust_std=2021'],
+  rust_abi : 'rust',
+  rust_args: ['--cap-lints', 'allow'],
+  dependencies: [thiserror_rs],
+)
+
+dep = declare_dependency(
+  link_with : [lib],
+)
+
+meson.override_dependency('winapi-x86_64-pc-windows-gnu-0.4-rs', dep)
diff --git a/subprojects/winapi-x86_64-pc-windows-gnu-0.4-rs.wrap b/subprojects/winapi-x86_64-pc-windows-gnu-0.4-rs.wrap
new file mode 100644
index 000000000000..d75a096980a1
--- /dev/null
+++ b/subprojects/winapi-x86_64-pc-windows-gnu-0.4-rs.wrap
@@ -0,0 +1,10 @@
+[wrap-file]
+directory = winapi-x86_64-pc-windows-gnu-0.4.0
+source_url = https://crates.io/api/v1/crates/winapi-x86_64-pc-windows-gnu/0.4.0/download
+source_filename = winapi-x86_64-pc-windows-gnu-0.4.0.tar.gz
+source_hash = 712e227841d057c1ee1cd2fb22fa7e5a5461ae8e48fa2ca79ec42cfc1931183f
+#method = cargo
+patch_directory = winapi-x86_64-pc-windows-gnu-0.4-rs
+
+# bump this version number on every change to meson.build or the patches:
+# v2
-- 
2.34.1




* [RFC 07/26] subprojects: Add winapi crate
  2025-08-07 12:30 [RFC 00/26] rust/memory: Integrate the vm-memory API from rust-vmm Zhao Liu
                   ` (5 preceding siblings ...)
  2025-08-07 12:30 ` [RFC 06/26] subprojects: Add winapi-x86_64-pc-windows-gnu crate Zhao Liu
@ 2025-08-07 12:30 ` Zhao Liu
  2025-08-07 13:17   ` Paolo Bonzini
  2025-08-07 12:30 ` [RFC 08/26] subprojects: Add vm-memory crate Zhao Liu
                   ` (20 subsequent siblings)
  27 siblings, 1 reply; 58+ messages in thread
From: Zhao Liu @ 2025-08-07 12:30 UTC (permalink / raw)
  To: Paolo Bonzini, Peter Xu, David Hildenbrand,
	Philippe Mathieu-Daudé, Manos Pitsidianakis,
	Alex Bennée, Thomas Huth, Junjie Mao
  Cc: qemu-devel, qemu-rust, Dapeng Mi, Chuanxiao Dong, Zhao Liu

Signed-off-by: Zhao Liu <zhao1.liu@intel.com>
---
 scripts/archive-source.sh                     |  3 +-
 scripts/make-release                          |  3 +-
 subprojects/.gitignore                        |  1 +
 .../packagefiles/winapi-0.3-rs/meson.build    | 46 +++++++++++++++++++
 subprojects/winapi-0.3-rs.wrap                | 10 ++++
 5 files changed, 61 insertions(+), 2 deletions(-)
 create mode 100644 subprojects/packagefiles/winapi-0.3-rs/meson.build
 create mode 100644 subprojects/winapi-0.3-rs.wrap

diff --git a/scripts/archive-source.sh b/scripts/archive-source.sh
index 4caf6078f1ac..99d0d898d010 100755
--- a/scripts/archive-source.sh
+++ b/scripts/archive-source.sh
@@ -32,7 +32,8 @@ subprojects="keycodemapdb libvfio-user berkeley-softfloat-3
   libc-0.2-rs proc-macro2-1-rs
   proc-macro-error-1-rs proc-macro-error-attr-1-rs quote-1-rs
   syn-2-rs thiserror-1-rs thiserror-impl-1-rs unicode-ident-1-rs
-  winapi-i686-pc-windows-gnu-0.4-rs winapi-x86_64-pc-windows-gnu-0.4-rs"
+  winapi-0.3-rs winapi-i686-pc-windows-gnu-0.4-rs
+  winapi-x86_64-pc-windows-gnu-0.4-rs"
 sub_deinit=""
 
 function cleanup() {
diff --git a/scripts/make-release b/scripts/make-release
index eb8b2446ad3a..c53dfa0a7f4f 100755
--- a/scripts/make-release
+++ b/scripts/make-release
@@ -45,7 +45,8 @@ SUBPROJECTS="libvfio-user keycodemapdb berkeley-softfloat-3
   libc-0.2-rs proc-macro2-1-rs
   proc-macro-error-1-rs proc-macro-error-attr-1-rs quote-1-rs
   syn-2-rs thiserror-1-rs thiserror-impl-1-rs unicode-ident-1-rs
-  winapi-i686-pc-windows-gnu-0.4-rs winapi-x86_64-pc-windows-gnu-0.4-rs"
+  winapi-0.3-rs winapi-i686-pc-windows-gnu-0.4-rs
+  winapi-x86_64-pc-windows-gnu-0.4-rs"
 
 src="$1"
 version="$2"
diff --git a/subprojects/.gitignore b/subprojects/.gitignore
index ed51f2012e2c..c42adaa928ec 100644
--- a/subprojects/.gitignore
+++ b/subprojects/.gitignore
@@ -22,5 +22,6 @@
 /thiserror-1.0.65
 /thiserror-impl-1.0.65
 /unicode-ident-1.0.12
+/winapi-0.3.9
 /winapi-i686-pc-windows-gnu-0.4.0
 /winapi-x86_64-pc-windows-gnu-0.4.0
diff --git a/subprojects/packagefiles/winapi-0.3-rs/meson.build b/subprojects/packagefiles/winapi-0.3-rs/meson.build
new file mode 100644
index 000000000000..e2cee17ec2d5
--- /dev/null
+++ b/subprojects/packagefiles/winapi-0.3-rs/meson.build
@@ -0,0 +1,46 @@
+project('winapi-0.3-rs', 'rust',
+  meson_version: '>=1.5.0',
+  version: '0.3.9',
+  license: 'MIT OR Apache-2.0'
+)
+
+if host_machine.cpu_family() == 'x86_64'
+  winapi_arch = 'winapi-x86_64-pc-windows-gnu-0.4-rs'
+elif host_machine.cpu_family() == 'x86'
+  winapi_arch = 'winapi-i686-pc-windows-gnu-0.4-rs'
+else
+  error('Unsupported CPU family for winapi: ' + host_machine.cpu_family())
+endif
+
+subproject(winapi_arch, required: true)
+winapi_arch_dep = dependency(winapi_arch)
+
+winapi_features = [
+  '--cfg', 'feature="errhandlingapi"',
+  '--cfg', 'feature="sysinfoapi"',
+  '--cfg', 'feature="excpt"',
+  '--cfg', 'feature="minwinbase"',
+  '--cfg', 'feature="ntstatus"',
+  '--cfg', 'feature="winnt"',
+  '--cfg', 'feature="basetsd"',
+  '--cfg', 'feature="ktmtypes"',
+  '--cfg', 'feature="minwindef"',
+  '--cfg', 'feature="ntdef"',
+  '--cfg', 'feature="guiddef"',
+  '--cfg', 'feature="vcruntime"'
+]
+
+lib = static_library(
+  'winapi',
+  'src/lib.rs',
+  override_options : ['rust_std=2021'],
+  rust_abi : 'rust',
+  rust_args: ['--cap-lints', 'allow'] + winapi_features,
+  dependencies: [winapi_arch_dep]
+)
+
+dep = declare_dependency(
+  link_with: lib,
+)
+
+meson.override_dependency('winapi-0.3-rs', dep)
diff --git a/subprojects/winapi-0.3-rs.wrap b/subprojects/winapi-0.3-rs.wrap
new file mode 100644
index 000000000000..49a5954ec225
--- /dev/null
+++ b/subprojects/winapi-0.3-rs.wrap
@@ -0,0 +1,10 @@
+[wrap-file]
+directory = winapi-0.3.9
+source_url = https://crates.io/api/v1/crates/winapi/0.3.9/download
+source_filename = winapi-0.3.9.tar.gz
+source_hash = 5c839a674fcd7a98952e593242ea400abe93992746761e38641405d28b00f419
+#method = cargo
+patch_directory = winapi-0.3-rs
+
+# bump this version number on every change to meson.build or the patches:
+# v2
-- 
2.34.1




* [RFC 08/26] subprojects: Add vm-memory crate
  2025-08-07 12:30 [RFC 00/26] rust/memory: Integrate the vm-memory API from rust-vmm Zhao Liu
                   ` (6 preceding siblings ...)
  2025-08-07 12:30 ` [RFC 07/26] subprojects: Add winapi crate Zhao Liu
@ 2025-08-07 12:30 ` Zhao Liu
  2025-08-07 12:30 ` [RFC 09/26] rust: Add vm-memory in meson Zhao Liu
                   ` (19 subsequent siblings)
  27 siblings, 0 replies; 58+ messages in thread
From: Zhao Liu @ 2025-08-07 12:30 UTC (permalink / raw)
  To: Paolo Bonzini, Peter Xu, David Hildenbrand,
	Philippe Mathieu-Daudé, Manos Pitsidianakis,
	Alex Bennée, Thomas Huth, Junjie Mao
  Cc: qemu-devel, qemu-rust, Dapeng Mi, Chuanxiao Dong, Zhao Liu

Signed-off-by: Zhao Liu <zhao1.liu@intel.com>
---
 scripts/archive-source.sh                     |  2 +-
 scripts/make-release                          |  2 +-
 subprojects/.gitignore                        |  1 +
 .../vm-memory-0.16-rs/meson.build             | 35 +++++++++++++++++++
 subprojects/vm-memory-0.16-rs.wrap            | 12 +++++++
 5 files changed, 50 insertions(+), 2 deletions(-)
 create mode 100644 subprojects/packagefiles/vm-memory-0.16-rs/meson.build
 create mode 100644 subprojects/vm-memory-0.16-rs.wrap

diff --git a/scripts/archive-source.sh b/scripts/archive-source.sh
index 99d0d898d010..41cf095ca33d 100755
--- a/scripts/archive-source.sh
+++ b/scripts/archive-source.sh
@@ -32,7 +32,7 @@ subprojects="keycodemapdb libvfio-user berkeley-softfloat-3
   libc-0.2-rs proc-macro2-1-rs
   proc-macro-error-1-rs proc-macro-error-attr-1-rs quote-1-rs
   syn-2-rs thiserror-1-rs thiserror-impl-1-rs unicode-ident-1-rs
-  winapi-0.3-rs winapi-i686-pc-windows-gnu-0.4-rs
+  vm-memory-0.16-rs winapi-0.3-rs winapi-i686-pc-windows-gnu-0.4-rs
   winapi-x86_64-pc-windows-gnu-0.4-rs"
 sub_deinit=""
 
diff --git a/scripts/make-release b/scripts/make-release
index c53dfa0a7f4f..115739d31623 100755
--- a/scripts/make-release
+++ b/scripts/make-release
@@ -45,7 +45,7 @@ SUBPROJECTS="libvfio-user keycodemapdb berkeley-softfloat-3
   libc-0.2-rs proc-macro2-1-rs
   proc-macro-error-1-rs proc-macro-error-attr-1-rs quote-1-rs
   syn-2-rs thiserror-1-rs thiserror-impl-1-rs unicode-ident-1-rs
-  winapi-0.3-rs winapi-i686-pc-windows-gnu-0.4-rs
+  vm-memory-0.16-rs winapi-0.3-rs winapi-i686-pc-windows-gnu-0.4-rs
   winapi-x86_64-pc-windows-gnu-0.4-rs"
 
 src="$1"
diff --git a/subprojects/.gitignore b/subprojects/.gitignore
index c42adaa928ec..518dd39199ab 100644
--- a/subprojects/.gitignore
+++ b/subprojects/.gitignore
@@ -22,6 +22,7 @@
 /thiserror-1.0.65
 /thiserror-impl-1.0.65
 /unicode-ident-1.0.12
+/vm-memory-0.16
 /winapi-0.3.9
 /winapi-i686-pc-windows-gnu-0.4.0
 /winapi-x86_64-pc-windows-gnu-0.4.0
diff --git a/subprojects/packagefiles/vm-memory-0.16-rs/meson.build b/subprojects/packagefiles/vm-memory-0.16-rs/meson.build
new file mode 100644
index 000000000000..f0d99bad5b7f
--- /dev/null
+++ b/subprojects/packagefiles/vm-memory-0.16-rs/meson.build
@@ -0,0 +1,35 @@
+project(
+  'vm-memory-0.16-rs',
+  'rust',
+  meson_version: '>=1.5.0',
+  #version : '0.2.0',
+  license : 'Apache-2.0 or BSD-3-Clause',
+)
+
+all_deps = []
+
+subproject('thiserror-1-rs', required: true)
+all_deps += dependency('thiserror-1-rs')
+
+if host_machine.system() == 'windows'
+  subproject('winapi-0.3-rs', required: true)
+  all_deps += dependency('winapi-0.3-rs')
+endif
+
+# Note "rawfd" (as the only default feature) is disabled by default in
+# meson. It cause compilation failure on Windows and fortunately, we
+# don't need it either.
+lib = static_library(
+  'vm_memory',
+  'src/lib.rs',
+  override_options : ['rust_std=2021', 'build.rust_std=2021'],
+  rust_abi : 'rust',
+  rust_args: ['--cap-lints', 'allow'],
+  dependencies: all_deps,
+)
+
+dep = declare_dependency(
+  link_with : [lib],
+)
+
+meson.override_dependency('vm-memory-0.16-rs', dep)
diff --git a/subprojects/vm-memory-0.16-rs.wrap b/subprojects/vm-memory-0.16-rs.wrap
new file mode 100644
index 000000000000..a057c8c9efc1
--- /dev/null
+++ b/subprojects/vm-memory-0.16-rs.wrap
@@ -0,0 +1,12 @@
+[wrap-git]
+directory = vm-memory-0.16
+# The latest v0.16.2 didn't contain Paolo's commit 5f59e29c3d30
+# ("guest_memory: let multiple regions slice one global bitmap").
+# Once a new release has that change, switch to crates.io.
+url = https://github.com/rust-vmm/vm-memory.git
+revision = 5eb996a060d7ca3844cbd2f10b1d048c0c91942f
+patch_directory = vm-memory-0.16-rs
+depth = 1
+
+# bump this version number on every change to meson.build or the patches:
+# v2
-- 
2.34.1




* [RFC 09/26] rust: Add vm-memory in meson
  2025-08-07 12:30 [RFC 00/26] rust/memory: Integrate the vm-memory API from rust-vmm Zhao Liu
                   ` (7 preceding siblings ...)
  2025-08-07 12:30 ` [RFC 08/26] subprojects: Add vm-memory crate Zhao Liu
@ 2025-08-07 12:30 ` Zhao Liu
  2025-08-07 12:30 ` [RFC 10/26] subprojects/vm-memory: Patch vm-memory for QEMU memory backend Zhao Liu
                   ` (18 subsequent siblings)
  27 siblings, 0 replies; 58+ messages in thread
From: Zhao Liu @ 2025-08-07 12:30 UTC (permalink / raw)
  To: Paolo Bonzini, Peter Xu, David Hildenbrand,
	Philippe Mathieu-Daudé, Manos Pitsidianakis,
	Alex Bennée, Thomas Huth, Junjie Mao
  Cc: qemu-devel, qemu-rust, Dapeng Mi, Chuanxiao Dong, Zhao Liu

Signed-off-by: Zhao Liu <zhao1.liu@intel.com>
---
 rust/meson.build          | 2 ++
 rust/qemu-api/meson.build | 4 ++--
 2 files changed, 4 insertions(+), 2 deletions(-)

diff --git a/rust/meson.build b/rust/meson.build
index 331f11b7e72a..b7d151fb0349 100644
--- a/rust/meson.build
+++ b/rust/meson.build
@@ -3,12 +3,14 @@ subproject('bilge-0.2-rs', required: true)
 subproject('bilge-impl-0.2-rs', required: true)
 subproject('foreign-0.3-rs', required: true)
 subproject('libc-0.2-rs', required: true)
+subproject('vm-memory-0.16-rs', required: true)
 
 anyhow_rs = dependency('anyhow-1-rs')
 bilge_rs = dependency('bilge-0.2-rs')
 bilge_impl_rs = dependency('bilge-impl-0.2-rs')
 foreign_rs = dependency('foreign-0.3-rs')
 libc_rs = dependency('libc-0.2-rs')
+vm_memory_rs = dependency('vm-memory-0.16-rs')
 
 subproject('proc-macro2-1-rs', required: true)
 subproject('quote-1-rs', required: true)
diff --git a/rust/qemu-api/meson.build b/rust/qemu-api/meson.build
index a090297c458b..a362d44ed396 100644
--- a/rust/qemu-api/meson.build
+++ b/rust/qemu-api/meson.build
@@ -79,8 +79,8 @@ _qemu_api_rs = static_library(
   override_options: ['rust_std=2021', 'build.rust_std=2021'],
   rust_abi: 'rust',
   rust_args: _qemu_api_cfg,
-  dependencies: [anyhow_rs, foreign_rs, libc_rs, qemu_api_macros, qemuutil_rs,
-                 qom, hwcore, chardev, migration],
+  dependencies: [anyhow_rs, foreign_rs, libc_rs, vm_memory_rs, qemu_api_macros,
+                 qemuutil_rs, qom, hwcore, chardev, migration],
 )
 
 rust.test('rust-qemu-api-tests', _qemu_api_rs,
-- 
2.34.1




* [RFC 10/26] subprojects/vm-memory: Patch vm-memory for QEMU memory backend
  2025-08-07 12:30 [RFC 00/26] rust/memory: Integrate the vm-memory API from rust-vmm Zhao Liu
                   ` (8 preceding siblings ...)
  2025-08-07 12:30 ` [RFC 09/26] rust: Add vm-memory in meson Zhao Liu
@ 2025-08-07 12:30 ` Zhao Liu
  2025-08-07 13:59   ` Paolo Bonzini
  2025-08-07 12:30 ` [RFC 11/26] rust/cargo: Specify the patched vm-memory crate Zhao Liu
                   ` (17 subsequent siblings)
  27 siblings, 1 reply; 58+ messages in thread
From: Zhao Liu @ 2025-08-07 12:30 UTC (permalink / raw)
  To: Paolo Bonzini, Peter Xu, David Hildenbrand,
	Philippe Mathieu-Daudé, Manos Pitsidianakis,
	Alex Bennée, Thomas Huth, Junjie Mao
  Cc: qemu-devel, qemu-rust, Dapeng Mi, Chuanxiao Dong, Zhao Liu

Add 2 patches to support QEMU memory backend implementation.

Signed-off-by: Zhao Liu <zhao1.liu@intel.com>
---
 .../packagefiles/vm-memory-0.16-rs/0001.diff  |  81 +++++++++++++
 .../packagefiles/vm-memory-0.16-rs/0002.diff  | 111 ++++++++++++++++++
 subprojects/vm-memory-0.16-rs.wrap            |   2 +
 3 files changed, 194 insertions(+)
 create mode 100644 subprojects/packagefiles/vm-memory-0.16-rs/0001.diff
 create mode 100644 subprojects/packagefiles/vm-memory-0.16-rs/0002.diff

diff --git a/subprojects/packagefiles/vm-memory-0.16-rs/0001.diff b/subprojects/packagefiles/vm-memory-0.16-rs/0001.diff
new file mode 100644
index 000000000000..037193108d45
--- /dev/null
+++ b/subprojects/packagefiles/vm-memory-0.16-rs/0001.diff
@@ -0,0 +1,81 @@
+From 298f8ba019b2fe159fa943e0ae4dfd3c83ee64e0 Mon Sep 17 00:00:00 2001
+From: Zhao Liu <zhao1.liu@intel.com>
+Date: Wed, 6 Aug 2025 11:31:11 +0800
+Subject: [PATCH 1/2] guest_memory: Add a marker tarit to implement
+ Bytes<GuestAddress> for GuestMemory
+
+At present, Bytes<GuestAddress> is implemented as a blanket trait for
+all types which implement GuestMemory.
+
+QEMU needs to customize its own Bytes<GuestAddress> implementation.
+
+So add a marker trait to still provide the default implementation for
+GuestRegionCollection and GuestMemoryMmap, and QEMU could have its own
+implementation.
+
+Signed-off-by: Zhao Liu <zhao1.liu@intel.com>
+---
+ src/guest_memory.rs | 8 +++++++-
+ src/lib.rs          | 2 +-
+ src/region.rs       | 6 ++++--
+ 3 files changed, 12 insertions(+), 4 deletions(-)
+
+diff --git a/src/guest_memory.rs b/src/guest_memory.rs
+index 39e4f10a89d6..5b78038c3c92 100644
+--- a/src/guest_memory.rs
++++ b/src/guest_memory.rs
+@@ -457,7 +457,13 @@ pub trait GuestMemory {
+     }
+ }
+ 
+-impl<T: GuestMemory + ?Sized> Bytes<GuestAddress> for T {
++/// A marker trait that if implemented on a type `M` makes available a default
++/// implementation of `Bytes<GuestAddress>` for `M`, based on the assumption
++/// that the entire `GuestMemory` is just traditional Guest memory abstraction
++/// without any special access requirements.
++pub trait GuestMemoryBytes: GuestMemory {}
++
++impl<M: GuestMemoryBytes + ?Sized> Bytes<GuestAddress> for M {
+     type E = Error;
+ 
+     fn write(&self, buf: &[u8], addr: GuestAddress) -> Result<usize> {
+diff --git a/src/lib.rs b/src/lib.rs
+index 2f87f4c8482f..64ed3ec27a36 100644
+--- a/src/lib.rs
++++ b/src/lib.rs
+@@ -47,7 +47,7 @@ pub use endian::{Be16, Be32, Be64, BeSize, Le16, Le32, Le64, LeSize};
+ pub mod guest_memory;
+ pub use guest_memory::{
+     Error as GuestMemoryError, FileOffset, GuestAddress, GuestAddressSpace, GuestMemory,
+-    GuestUsize, MemoryRegionAddress, Result as GuestMemoryResult,
++    GuestMemoryBytes, GuestUsize, MemoryRegionAddress, Result as GuestMemoryResult,
+ };
+ 
+ pub mod region;
+diff --git a/src/region.rs b/src/region.rs
+index e716a6290e75..7114dfbe15a7 100644
+--- a/src/region.rs
++++ b/src/region.rs
+@@ -3,8 +3,8 @@
+ use crate::bitmap::{Bitmap, BS};
+ use crate::guest_memory::Result;
+ use crate::{
+-    Address, AtomicAccess, Bytes, FileOffset, GuestAddress, GuestMemory, GuestMemoryError,
+-    GuestUsize, MemoryRegionAddress, ReadVolatile, VolatileSlice, WriteVolatile,
++    Address, AtomicAccess, Bytes, FileOffset, GuestAddress, GuestMemory, GuestMemoryBytes,
++    GuestMemoryError, GuestUsize, MemoryRegionAddress, ReadVolatile, VolatileSlice, WriteVolatile,
+ };
+ use std::sync::atomic::Ordering;
+ use std::sync::Arc;
+@@ -322,6 +322,8 @@ impl<R: GuestMemoryRegion> GuestMemory for GuestRegionCollection<R> {
+     }
+ }
+ 
++impl<R: GuestMemoryRegion> GuestMemoryBytes for GuestRegionCollection<R> {}
++
+ /// A marker trait that if implemented on a type `R` makes available a default
+ /// implementation of `Bytes<MemoryRegionAddress>` for `R`, based on the assumption
+ /// that the entire `GuestMemoryRegion` is just traditional memory without any
+-- 
+2.34.1
+
diff --git a/subprojects/packagefiles/vm-memory-0.16-rs/0002.diff b/subprojects/packagefiles/vm-memory-0.16-rs/0002.diff
new file mode 100644
index 000000000000..bfef1bf1fee3
--- /dev/null
+++ b/subprojects/packagefiles/vm-memory-0.16-rs/0002.diff
@@ -0,0 +1,111 @@
+From 2af7ea12a589fde619690e5060c01710cb6f2e0e Mon Sep 17 00:00:00 2001
+From: Zhao Liu <zhao1.liu@intel.com>
+Date: Wed, 6 Aug 2025 14:27:14 +0800
+Subject: [PATCH 2/2] guest_memory: Add is_write argument for
+ GuestMemory::try_access()
+
+QEMU needs to know whether the memory access is for write or not, e.g.,
+a memory region may be read-only, or an IOMMU needs to distinguish
+write accesses.
+
+The alternative option is to move try_access() into Bytes trait, and
+implement Bytes<(GuestAddress, is_write)> for QEMU's GuestMemory
+abstraction. However, try_access() seems to lack generality in the
+abstraction of Bytes, as only GuestMemory needs it.
+
+Therefore, just add another argument to try_access() to help handle
+more complex memory backends.
+
+Signed-off-by: Zhao Liu <zhao1.liu@intel.com>
+---
+ src/bitmap/mod.rs   | 17 +++++++++++------
+ src/guest_memory.rs | 10 ++++++----
+ 2 files changed, 17 insertions(+), 10 deletions(-)
+
+diff --git a/src/bitmap/mod.rs b/src/bitmap/mod.rs
+index cf1555b29350..de4203166304 100644
+--- a/src/bitmap/mod.rs
++++ b/src/bitmap/mod.rs
+@@ -287,12 +287,17 @@ pub(crate) mod tests {
+         // Finally, let's invoke the generic tests for `Bytes`.
+         let check_range_closure = |m: &M, start: usize, len: usize, clean: bool| -> bool {
+             let mut check_result = true;
+-            m.try_access(len, GuestAddress(start as u64), |_, size, reg_addr, reg| {
+-                if !check_range(&reg.bitmap(), reg_addr.0 as usize, size, clean) {
+-                    check_result = false;
+-                }
+-                Ok(size)
+-            })
++            m.try_access(
++                len,
++                GuestAddress(start as u64),
++                false,
++                |_, size, reg_addr, reg| {
++                    if !check_range(&reg.bitmap(), reg_addr.0 as usize, size, clean) {
++                        check_result = false;
++                    }
++                    Ok(size)
++                },
++            )
+             .unwrap();
+ 
+             check_result
+diff --git a/src/guest_memory.rs b/src/guest_memory.rs
+index 5b78038c3c92..53981c4e8e94 100644
+--- a/src/guest_memory.rs
++++ b/src/guest_memory.rs
+@@ -353,7 +353,7 @@ pub trait GuestMemory {
+ 
+     /// Check whether the range [base, base + len) is valid.
+     fn check_range(&self, base: GuestAddress, len: usize) -> bool {
+-        match self.try_access(len, base, |_, count, _, _| -> Result<usize> { Ok(count) }) {
++        match self.try_access(len, base, false, |_, count, _, _| -> Result<usize> { Ok(count) }) {
+             Ok(count) => count == len,
+             _ => false,
+         }
+@@ -374,7 +374,7 @@ pub trait GuestMemory {
+     /// - the error code returned by the callback 'f'
+     /// - the size of the already handled data when encountering the first hole
+     /// - the size of the already handled data when the whole range has been handled
+-    fn try_access<F>(&self, count: usize, addr: GuestAddress, mut f: F) -> Result<usize>
++    fn try_access<F>(&self, count: usize, addr: GuestAddress, _is_write: bool, mut f: F) -> Result<usize>
+     where
+         F: FnMut(usize, usize, MemoryRegionAddress, &Self::R) -> Result<usize>,
+     {
+@@ -470,6 +470,7 @@ impl<M: GuestMemoryBytes + ?Sized> Bytes<GuestAddress> for M {
+         self.try_access(
+             buf.len(),
+             addr,
++            true,
+             |offset, _count, caddr, region| -> Result<usize> {
+                 region.write(&buf[offset..], caddr)
+             },
+@@ -480,6 +481,7 @@ impl<M: GuestMemoryBytes + ?Sized> Bytes<GuestAddress> for M {
+         self.try_access(
+             buf.len(),
+             addr,
++            false,
+             |offset, _count, caddr, region| -> Result<usize> {
+                 region.read(&mut buf[offset..], caddr)
+             },
+@@ -547,7 +549,7 @@ impl<M: GuestMemoryBytes + ?Sized> Bytes<GuestAddress> for M {
+     where
+         F: ReadVolatile,
+     {
+-        self.try_access(count, addr, |_, len, caddr, region| -> Result<usize> {
++        self.try_access(count, addr, false, |_, len, caddr, region| -> Result<usize> {
+             region.read_volatile_from(caddr, src, len)
+         })
+     }
+@@ -575,7 +577,7 @@ impl<M: GuestMemoryBytes + ?Sized> Bytes<GuestAddress> for M {
+     where
+         F: WriteVolatile,
+     {
+-        self.try_access(count, addr, |_, len, caddr, region| -> Result<usize> {
++        self.try_access(count, addr, true, |_, len, caddr, region| -> Result<usize> {
+             // For a non-RAM region, reading could have side effects, so we
+             // must use write_all().
+             region.write_all_volatile_to(caddr, dst, len).map(|()| len)
+-- 
+2.34.1
+
diff --git a/subprojects/vm-memory-0.16-rs.wrap b/subprojects/vm-memory-0.16-rs.wrap
index a057c8c9efc1..592271300294 100644
--- a/subprojects/vm-memory-0.16-rs.wrap
+++ b/subprojects/vm-memory-0.16-rs.wrap
@@ -8,5 +8,7 @@ revision = 5eb996a060d7ca3844cbd2f10b1d048c0c91942f
 patch_directory = vm-memory-0.16-rs
 depth = 1
 
+diff_files = vm-memory-0.16-rs/0001.diff, vm-memory-0.16-rs/0002.diff
+
 # bump this version number on every change to meson.build or the patches:
 # v2
-- 
2.34.1



^ permalink raw reply related	[flat|nested] 58+ messages in thread

* [RFC 11/26] rust/cargo: Specify the patched vm-memory crate
  2025-08-07 12:30 [RFC 00/26] rust/memory: Integrate the vm-memory API from rust-vmm Zhao Liu
                   ` (9 preceding siblings ...)
  2025-08-07 12:30 ` [RFC 10/26] subprojects/vm-memory: Patch vm-memory for QEMU memory backend Zhao Liu
@ 2025-08-07 12:30 ` Zhao Liu
  2025-08-07 12:30 ` [RFC 12/26] rcu: Make rcu_read_lock & rcu_read_unlock not inline Zhao Liu
                   ` (16 subsequent siblings)
  27 siblings, 0 replies; 58+ messages in thread
From: Zhao Liu @ 2025-08-07 12:30 UTC (permalink / raw)
  To: Paolo Bonzini, Peter Xu, David Hildenbrand,
	Philippe Mathieu-Daudé, Manos Pitsidianakis,
	Alex Bennée, Thomas Huth, Junjie Mao
  Cc: qemu-devel, qemu-rust, Dapeng Mi, Chuanxiao Dong, Zhao Liu

Signed-off-by: Zhao Liu <zhao1.liu@intel.com>
---
 rust/Cargo.lock | 1 -
 rust/Cargo.toml | 3 +++
 2 files changed, 3 insertions(+), 1 deletion(-)

diff --git a/rust/Cargo.lock b/rust/Cargo.lock
index 7aedae239f66..f1bb2457e133 100644
--- a/rust/Cargo.lock
+++ b/rust/Cargo.lock
@@ -200,7 +200,6 @@ checksum = "49874b5167b65d7193b8aba1567f5c7d93d001cafc34600cee003eda787e483f"
 [[package]]
 name = "vm-memory"
 version = "0.16.1"
-source = "git+https://github.com/rust-vmm/vm-memory.git?rev=5eb996a060d7ca3844cbd2f10b1d048c0c91942f#5eb996a060d7ca3844cbd2f10b1d048c0c91942f"
 dependencies = [
  "thiserror",
  "winapi",
diff --git a/rust/Cargo.toml b/rust/Cargo.toml
index 0868e1b42680..ecb31647f93b 100644
--- a/rust/Cargo.toml
+++ b/rust/Cargo.toml
@@ -15,6 +15,9 @@ license = "GPL-2.0-or-later"
 repository = "https://gitlab.com/qemu-project/qemu/"
 rust-version = "1.77.0"
 
+[patch."https://github.com/rust-vmm/vm-memory.git"]
+vm-memory = { path = "./../subprojects/vm-memory-0.16" }
+
 [workspace.lints.rust]
 unexpected_cfgs = { level = "deny", check-cfg = [
     'cfg(MESON)', 'cfg(HAVE_GLIB_WITH_ALIGNED_ALLOC)',
-- 
2.34.1



^ permalink raw reply related	[flat|nested] 58+ messages in thread

* [RFC 12/26] rcu: Make rcu_read_lock & rcu_read_unlock not inline
  2025-08-07 12:30 [RFC 00/26] rust/memory: Integrate the vm-memory API from rust-vmm Zhao Liu
                   ` (10 preceding siblings ...)
  2025-08-07 12:30 ` [RFC 11/26] rust/cargo: Specify the patched vm-memory crate Zhao Liu
@ 2025-08-07 12:30 ` Zhao Liu
  2025-08-07 13:54   ` Paolo Bonzini
  2025-08-07 12:30 ` [RFC 13/26] rust: Add RCU bindings Zhao Liu
                   ` (15 subsequent siblings)
  27 siblings, 1 reply; 58+ messages in thread
From: Zhao Liu @ 2025-08-07 12:30 UTC (permalink / raw)
  To: Paolo Bonzini, Peter Xu, David Hildenbrand,
	Philippe Mathieu-Daudé, Manos Pitsidianakis,
	Alex Bennée, Thomas Huth, Junjie Mao
  Cc: qemu-devel, qemu-rust, Dapeng Mi, Chuanxiao Dong, Zhao Liu

Make rcu_read_lock & rcu_read_unlock non-inline so that bindgen can
generate the bindings.
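
For reference, once these are real symbols the generated bindings are
expected to look roughly like the following (a sketch of typical
bindgen output, not the actual generated file):

  extern "C" {
      pub fn rcu_read_lock();
      pub fn rcu_read_unlock();
  }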

Suggested-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Zhao Liu <zhao1.liu@intel.com>
---
 include/qemu/rcu.h | 45 ++-------------------------------------------
 util/rcu.c         | 43 +++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 45 insertions(+), 43 deletions(-)

diff --git a/include/qemu/rcu.h b/include/qemu/rcu.h
index 020dbe4d8b77..34d955204b81 100644
--- a/include/qemu/rcu.h
+++ b/include/qemu/rcu.h
@@ -75,49 +75,8 @@ struct rcu_reader_data {
 
 QEMU_DECLARE_CO_TLS(struct rcu_reader_data, rcu_reader)
 
-static inline void rcu_read_lock(void)
-{
-    struct rcu_reader_data *p_rcu_reader = get_ptr_rcu_reader();
-    unsigned ctr;
-
-    if (p_rcu_reader->depth++ > 0) {
-        return;
-    }
-
-    ctr = qatomic_read(&rcu_gp_ctr);
-    qatomic_set(&p_rcu_reader->ctr, ctr);
-
-    /*
-     * Read rcu_gp_ptr and write p_rcu_reader->ctr before reading
-     * RCU-protected pointers.
-     */
-    smp_mb_placeholder();
-}
-
-static inline void rcu_read_unlock(void)
-{
-    struct rcu_reader_data *p_rcu_reader = get_ptr_rcu_reader();
-
-    assert(p_rcu_reader->depth != 0);
-    if (--p_rcu_reader->depth > 0) {
-        return;
-    }
-
-    /* Ensure that the critical section is seen to precede the
-     * store to p_rcu_reader->ctr.  Together with the following
-     * smp_mb_placeholder(), this ensures writes to p_rcu_reader->ctr
-     * are sequentially consistent.
-     */
-    qatomic_store_release(&p_rcu_reader->ctr, 0);
-
-    /* Write p_rcu_reader->ctr before reading p_rcu_reader->waiting.  */
-    smp_mb_placeholder();
-    if (unlikely(qatomic_read(&p_rcu_reader->waiting))) {
-        qatomic_set(&p_rcu_reader->waiting, false);
-        qemu_event_set(&rcu_gp_event);
-    }
-}
-
+void rcu_read_lock(void);
+void rcu_read_unlock(void);
 void synchronize_rcu(void);
 
 /*
diff --git a/util/rcu.c b/util/rcu.c
index b703c86f15a3..2dfd82796e1e 100644
--- a/util/rcu.c
+++ b/util/rcu.c
@@ -141,6 +141,49 @@ static void wait_for_readers(void)
     QLIST_SWAP(&registry, &qsreaders, node);
 }
 
+void rcu_read_lock(void)
+{
+    struct rcu_reader_data *p_rcu_reader = get_ptr_rcu_reader();
+    unsigned ctr;
+
+    if (p_rcu_reader->depth++ > 0) {
+        return;
+    }
+
+    ctr = qatomic_read(&rcu_gp_ctr);
+    qatomic_set(&p_rcu_reader->ctr, ctr);
+
+    /*
+     * Read rcu_gp_ptr and write p_rcu_reader->ctr before reading
+     * RCU-protected pointers.
+     */
+    smp_mb_placeholder();
+}
+
+void rcu_read_unlock(void)
+{
+    struct rcu_reader_data *p_rcu_reader = get_ptr_rcu_reader();
+
+    assert(p_rcu_reader->depth != 0);
+    if (--p_rcu_reader->depth > 0) {
+        return;
+    }
+
+    /* Ensure that the critical section is seen to precede the
+     * store to p_rcu_reader->ctr.  Together with the following
+     * smp_mb_placeholder(), this ensures writes to p_rcu_reader->ctr
+     * are sequentially consistent.
+     */
+    qatomic_store_release(&p_rcu_reader->ctr, 0);
+
+    /* Write p_rcu_reader->ctr before reading p_rcu_reader->waiting.  */
+    smp_mb_placeholder();
+    if (unlikely(qatomic_read(&p_rcu_reader->waiting))) {
+        qatomic_set(&p_rcu_reader->waiting, false);
+        qemu_event_set(&rcu_gp_event);
+    }
+}
+
 void synchronize_rcu(void)
 {
     QEMU_LOCK_GUARD(&rcu_sync_lock);
-- 
2.34.1



^ permalink raw reply related	[flat|nested] 58+ messages in thread

* [RFC 13/26] rust: Add RCU bindings
  2025-08-07 12:30 [RFC 00/26] rust/memory: Integrate the vm-memory API from rust-vmm Zhao Liu
                   ` (11 preceding siblings ...)
  2025-08-07 12:30 ` [RFC 12/26] rcu: Make rcu_read_lock & rcu_read_unlock not inline Zhao Liu
@ 2025-08-07 12:30 ` Zhao Liu
  2025-08-07 12:29   ` Manos Pitsidianakis
  2025-08-07 12:30 ` [RFC 14/26] memory: Expose interfaces about Flatview reference count to Rust side Zhao Liu
                   ` (14 subsequent siblings)
  27 siblings, 1 reply; 58+ messages in thread
From: Zhao Liu @ 2025-08-07 12:30 UTC (permalink / raw)
  To: Paolo Bonzini, Peter Xu, David Hildenbrand,
	Philippe Mathieu-Daudé, Manos Pitsidianakis,
	Alex Bennée, Thomas Huth, Junjie Mao
  Cc: qemu-devel, qemu-rust, Dapeng Mi, Chuanxiao Dong, Zhao Liu

Add rcu_read_lock() & rcu_read_unlock() bindings so that they can be
used in the memory access path.
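
A minimal usage sketch of the new module (illustrative only; real
callers live in the memory access path):

  use qemu_api::rcu::{rcu_read_lock, rcu_read_unlock};

  fn access_guest_memory() {
      rcu_read_lock();
      // ... dereference RCU-protected data, e.g. the current FlatView ...
      rcu_read_unlock();
  }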

Suggested-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Zhao Liu <zhao1.liu@intel.com>
---
 rust/qemu-api/meson.build |  1 +
 rust/qemu-api/src/lib.rs  |  1 +
 rust/qemu-api/src/rcu.rs  | 26 ++++++++++++++++++++++++++
 rust/qemu-api/wrapper.h   |  1 +
 4 files changed, 29 insertions(+)
 create mode 100644 rust/qemu-api/src/rcu.rs

diff --git a/rust/qemu-api/meson.build b/rust/qemu-api/meson.build
index a362d44ed396..d40472092248 100644
--- a/rust/qemu-api/meson.build
+++ b/rust/qemu-api/meson.build
@@ -68,6 +68,7 @@ _qemu_api_rs = static_library(
       'src/prelude.rs',
       'src/qdev.rs',
       'src/qom.rs',
+      'src/rcu.rs',
       'src/sysbus.rs',
       'src/timer.rs',
       'src/uninit.rs',
diff --git a/rust/qemu-api/src/lib.rs b/rust/qemu-api/src/lib.rs
index 86dcd8ef17a9..4705cf9ccbc5 100644
--- a/rust/qemu-api/src/lib.rs
+++ b/rust/qemu-api/src/lib.rs
@@ -26,6 +26,7 @@
 pub mod module;
 pub mod qdev;
 pub mod qom;
+pub mod rcu;
 pub mod sysbus;
 pub mod timer;
 pub mod uninit;
diff --git a/rust/qemu-api/src/rcu.rs b/rust/qemu-api/src/rcu.rs
new file mode 100644
index 000000000000..30d8b9e43967
--- /dev/null
+++ b/rust/qemu-api/src/rcu.rs
@@ -0,0 +1,26 @@
+// Copyright (C) 2025 Intel Corporation.
+// Author(s): Zhao Liu <zhao1.liu@intel.com>
+// SPDX-License-Identifier: GPL-2.0-or-later
+
+//! Bindings for `rcu_read_lock` and `rcu_read_unlock`.
+//! For more details about RCU in QEMU, please refer to docs/devel/rcu.rst.
+
+use crate::bindings;
+
+/// Used by a reader to inform the reclaimer that the reader is
+/// entering an RCU read-side critical section.
+pub fn rcu_read_lock() {
+    // SAFETY: no return and no argument, everything is done at C side.
+    unsafe { bindings::rcu_read_lock() }
+}
+
+/// Used by a reader to inform the reclaimer that the reader is
+/// exiting an RCU read-side critical section.  Note that RCU
+/// read-side critical sections may be nested and/or overlapping.
+pub fn rcu_read_unlock() {
+    // SAFETY: no return and no argument, everything is done at C side.
+    unsafe { bindings::rcu_read_unlock() }
+}
+
+// FIXME: maybe we need rcu_read_lock_held() to check the rcu context,
+// then make it possible to add assertion at any RCU critical section.
diff --git a/rust/qemu-api/wrapper.h b/rust/qemu-api/wrapper.h
index 15a1b19847f2..ce0ac8d3f550 100644
--- a/rust/qemu-api/wrapper.h
+++ b/rust/qemu-api/wrapper.h
@@ -69,3 +69,4 @@ typedef enum memory_order {
 #include "qemu/timer.h"
 #include "system/address-spaces.h"
 #include "hw/char/pl011.h"
+#include "qemu/rcu.h"
-- 
2.34.1



^ permalink raw reply related	[flat|nested] 58+ messages in thread

* [RFC 14/26] memory: Expose interfaces about Flatview reference count to Rust side
  2025-08-07 12:30 [RFC 00/26] rust/memory: Integrate the vm-memory API from rust-vmm Zhao Liu
                   ` (12 preceding siblings ...)
  2025-08-07 12:30 ` [RFC 13/26] rust: Add RCU bindings Zhao Liu
@ 2025-08-07 12:30 ` Zhao Liu
  2025-08-07 12:30 ` [RFC 15/26] memory: Rename address_space_lookup_region and expose it " Zhao Liu
                   ` (13 subsequent siblings)
  27 siblings, 0 replies; 58+ messages in thread
From: Zhao Liu @ 2025-08-07 12:30 UTC (permalink / raw)
  To: Paolo Bonzini, Peter Xu, David Hildenbrand,
	Philippe Mathieu-Daudé, Manos Pitsidianakis,
	Alex Bennée, Thomas Huth, Junjie Mao
  Cc: qemu-devel, qemu-rust, Dapeng Mi, Chuanxiao Dong, Zhao Liu

Expose the following interfaces in include/system/memory.h without
`inline`:

* address_space_to_flatview
* flatview_ref
* flatview_unref

Then the Rust side can generate the related bindings.

In addition, add documentation for these three interfaces.
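
A sketch of how the Rust side might pair the RCU "peek" with a strong
reference (the bindings:: names are assumptions based on the symbols
exposed here, not actual code from this series):

  use qemu_api::bindings;

  /// # Safety
  /// Must be called inside an RCU read-side critical section.
  unsafe fn hold_current_flatview(
      as_: *mut bindings::AddressSpace,
  ) -> *mut bindings::FlatView {
      // Peek at the current map under RCU ...
      let fv = bindings::address_space_to_flatview(as_);
      // ... then upgrade to a strong reference; if that fails, the view
      // is already being torn down and must not be used.
      if bindings::flatview_ref(fv) {
          fv
      } else {
          core::ptr::null_mut()
      }
  }

The caller would then be responsible for a matching flatview_unref()
once it is done with the view.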

Signed-off-by: Zhao Liu <zhao1.liu@intel.com>
---
 include/system/memory.h  | 69 +++++++++++++++++++++++++++++++++++++---
 system/memory-internal.h |  1 -
 system/memory.c          |  7 +++-
 3 files changed, 71 insertions(+), 6 deletions(-)

diff --git a/include/system/memory.h b/include/system/memory.h
index e2cd6ed12614..4b9a2f528d86 100644
--- a/include/system/memory.h
+++ b/include/system/memory.h
@@ -1203,10 +1203,71 @@ struct FlatView {
     MemoryRegion *root;
 };
 
-static inline FlatView *address_space_to_flatview(AddressSpace *as)
-{
-    return qatomic_rcu_read(&as->current_map);
-}
+/**
+ * address_space_to_flatview: Get a transient RCU-protected pointer to
+ * the current FlatView.
+ *
+ * @as: The #AddressSpace to be accessed.
+ *
+ * This function retrieves a pointer to the current #FlatView for the
+ * given #AddressSpace.
+ *
+ * Note: This is a low-level, RCU-based accessor. It DOES NOT increment
+ * the FlatView's reference count. The returned pointer is only
+ * guaranteed to be valid within an RCU read-side critical section.
+ *
+ * Difference from address_space_get_flatview():
+ *
+ * For address_space_to_flatview() (this function), it is a lightweight
+ * "peek" operation. It is fast but unsafe for long-term use. Use it
+ * only for very short-lived access where performance is critical.
+ *
+ * For address_space_get_flatview(), it acquires a "strong" reference
+ * by safely incrementing the reference count. The returned pointer is
+ * stable and can be used for long-lived operations, even outside an
+ * RCU lock. It is the safer and generally preferred method, but it
+ * MUST be paired with a call to flatview_unref() after the use of
+ * #FlatView.
+ *
+ * Returns:
+ * A transient pointer to the current #FlatView, valid only under RCU
+ * protection.
+ */
+FlatView *address_space_to_flatview(AddressSpace *as);
+
+/**
+ * flatview_ref: Atomically increment the reference count of #FlatView.
+ *
+ * @view: The #FlatView whose reference count is to be incremented.
+ *
+ * This function attempts to atomically increment the reference count
+ * of the given @view. This operation is conditional and will only
+ * succeed if the current reference count is non-zero.
+ *
+ * A non-zero reference count indicates that the FlatView is live and
+ * in use. If the reference count is already zero, it indicates that the
+ * FlatView is being deinitialized, and no new references can be
+ * acquired.
+ *
+ * Returns:
+ * 'true' if the reference count was successfully incremented (i.e., it
+ * was non-zero before the call).
+ * 'false' if the reference count was already zero and could not be
+ * incremented.
+ */
+bool flatview_ref(FlatView *view);
+
+/**
+ * flatview_unref: Atomically decrement the reference count of
+ * #FlatView.
+ *
+ * @view: The #FlatView to be unreferenced.
+ *
+ * This function atomically decrements the reference count of the given
+ * @view. When the reference count drops to zero, #FlatView will be
+ * destroyed via RCU.
+ */
+void flatview_unref(FlatView *view);
 
 /**
  * typedef flatview_cb: callback for flatview_for_each_range()
diff --git a/system/memory-internal.h b/system/memory-internal.h
index 46f758fa7e47..b0870a6359c3 100644
--- a/system/memory-internal.h
+++ b/system/memory-internal.h
@@ -26,7 +26,6 @@ static inline AddressSpaceDispatch *address_space_to_dispatch(AddressSpace *as)
 }
 
 FlatView *address_space_get_flatview(AddressSpace *as);
-void flatview_unref(FlatView *view);
 
 extern const MemoryRegionOps unassigned_mem_ops;
 
diff --git a/system/memory.c b/system/memory.c
index 56465479406f..2a749081fb50 100644
--- a/system/memory.c
+++ b/system/memory.c
@@ -304,7 +304,7 @@ static void flatview_destroy(FlatView *view)
     g_free(view);
 }
 
-static bool flatview_ref(FlatView *view)
+bool flatview_ref(FlatView *view)
 {
     return qatomic_fetch_inc_nonzero(&view->ref) > 0;
 }
@@ -818,6 +818,11 @@ static void address_space_add_del_ioeventfds(AddressSpace *as,
     }
 }
 
+FlatView *address_space_to_flatview(AddressSpace *as)
+{
+    return qatomic_rcu_read(&as->current_map);
+}
+
 FlatView *address_space_get_flatview(AddressSpace *as)
 {
     FlatView *view;
-- 
2.34.1



^ permalink raw reply related	[flat|nested] 58+ messages in thread

* [RFC 15/26] memory: Rename address_space_lookup_region and expose it to Rust side
  2025-08-07 12:30 [RFC 00/26] rust/memory: Integrate the vm-memory API from rust-vmm Zhao Liu
                   ` (13 preceding siblings ...)
  2025-08-07 12:30 ` [RFC 14/26] memory: Expose interfaces about Flatview reference count to Rust side Zhao Liu
@ 2025-08-07 12:30 ` Zhao Liu
  2025-08-07 12:30 ` [RFC 16/26] memory: Make flatview_do_translate() return a pointer to MemoryRegionSection Zhao Liu
                   ` (12 subsequent siblings)
  27 siblings, 0 replies; 58+ messages in thread
From: Zhao Liu @ 2025-08-07 12:30 UTC (permalink / raw)
  To: Paolo Bonzini, Peter Xu, David Hildenbrand,
	Philippe Mathieu-Daudé, Manos Pitsidianakis,
	Alex Bennée, Thomas Huth, Junjie Mao
  Cc: qemu-devel, qemu-rust, Dapeng Mi, Chuanxiao Dong, Zhao Liu

address_space_lookup_region() returns a pointer to a
MemoryRegionSection, not to a MemoryRegion, so it's better to rename it
to address_space_lookup_section().

Also add its declaration to memory.h so that bindgen can generate its
binding.

This interface will be used to implement GuestMemory::find_region() of
the vm-memory crate.

In addition, add its documentation in memory.h.

Signed-off-by: Zhao Liu <zhao1.liu@intel.com>
---
 include/system/memory.h | 21 +++++++++++++++++++++
 system/physmem.c        |  8 ++++----
 2 files changed, 25 insertions(+), 4 deletions(-)

diff --git a/include/system/memory.h b/include/system/memory.h
index 4b9a2f528d86..f492e1fc78bf 100644
--- a/include/system/memory.h
+++ b/include/system/memory.h
@@ -1203,6 +1203,27 @@ struct FlatView {
     MemoryRegion *root;
 };
 
+/**
+ * address_space_lookup_section: Find the MemoryRegionSection for an
+ * address within a given #AddressSpaceDispatch.
+ *
+ * @d: The AddressSpaceDispatch to search within.
+ * @addr: The address to look up.
+ * @resolve_subpage: If 'true', resolve to a subpage section if the
+ * region is a subpage container.
+ *
+ * This function translates an address (@addr) into its corresponding
+ * #MemoryRegionSection within a given address space dispatch (@d).
+ * Called within RCU critical section.
+ *
+ * Returns:
+ * A pointer to the #MemoryRegionSection. If the address is not
+ * mapped, this will be a pointer to the 'unassigned' section.
+ */
+MemoryRegionSection *address_space_lookup_section(AddressSpaceDispatch *d,
+                                                  hwaddr addr,
+                                                  bool resolve_subpage);
+
 /**
  * address_space_to_flatview: Get a transient RCU-protected pointer to
  * the current FlatView.
diff --git a/system/physmem.c b/system/physmem.c
index e5dd760e0bca..785c9a4050c6 100644
--- a/system/physmem.c
+++ b/system/physmem.c
@@ -341,9 +341,9 @@ static MemoryRegionSection *phys_page_find(AddressSpaceDispatch *d, hwaddr addr)
 }
 
 /* Called from RCU critical section */
-static MemoryRegionSection *address_space_lookup_region(AddressSpaceDispatch *d,
-                                                        hwaddr addr,
-                                                        bool resolve_subpage)
+MemoryRegionSection *address_space_lookup_section(AddressSpaceDispatch *d,
+                                                  hwaddr addr,
+                                                  bool resolve_subpage)
 {
     MemoryRegionSection *section = qatomic_read(&d->mru_section);
     subpage_t *subpage;
@@ -369,7 +369,7 @@ address_space_translate_internal(AddressSpaceDispatch *d, hwaddr addr, hwaddr *x
     MemoryRegion *mr;
     Int128 diff;
 
-    section = address_space_lookup_region(d, addr, resolve_subpage);
+    section = address_space_lookup_section(d, addr, resolve_subpage);
     /* Compute offset within MemoryRegionSection */
     addr -= section->offset_within_address_space;
 
-- 
2.34.1



^ permalink raw reply related	[flat|nested] 58+ messages in thread

* [RFC 16/26] memory: Make flatview_do_translate() return a pointer to MemoryRegionSection
  2025-08-07 12:30 [RFC 00/26] rust/memory: Integrate the vm-memory API from rust-vmm Zhao Liu
                   ` (14 preceding siblings ...)
  2025-08-07 12:30 ` [RFC 15/26] memory: Rename address_space_lookup_region and expose it " Zhao Liu
@ 2025-08-07 12:30 ` Zhao Liu
  2025-08-07 13:57   ` Paolo Bonzini
  2025-08-07 12:30 ` [RFC 17/26] memory: Add a translation helper to return MemoryRegionSection Zhao Liu
                   ` (11 subsequent siblings)
  27 siblings, 1 reply; 58+ messages in thread
From: Zhao Liu @ 2025-08-07 12:30 UTC (permalink / raw)
  To: Paolo Bonzini, Peter Xu, David Hildenbrand,
	Philippe Mathieu-Daudé, Manos Pitsidianakis,
	Alex Bennée, Thomas Huth, Junjie Mao
  Cc: qemu-devel, qemu-rust, Dapeng Mi, Chuanxiao Dong, Zhao Liu

The Rust side will use cell::Opaque<> to hide the details of the C
structure, which helps avoid direct operations on C memory from the
Rust side.

Therefore, it's necessary to wrap a translation binding that returns
only a pointer to the MemoryRegionSection, instead of a copy.

As the first step, make flatview_do_translate return a pointer to
MemoryRegionSection, so that we can build a wrapper based on it.

In addition, add a static variable `unassigned_section` to provide a
pointer to an invalid ('unassigned') MemoryRegionSection.

Signed-off-by: Zhao Liu <zhao1.liu@intel.com>
---
 system/physmem.c | 51 ++++++++++++++++++++++--------------------------
 1 file changed, 23 insertions(+), 28 deletions(-)

diff --git a/system/physmem.c b/system/physmem.c
index 785c9a4050c6..4af29ea2168e 100644
--- a/system/physmem.c
+++ b/system/physmem.c
@@ -103,6 +103,9 @@ AddressSpace address_space_io;
 AddressSpace address_space_memory;
 
 static MemoryRegion io_mem_unassigned;
+static MemoryRegionSection unassigned_section = {
+    .mr = &io_mem_unassigned
+};
 
 typedef struct PhysPageEntry PhysPageEntry;
 
@@ -418,14 +421,11 @@ address_space_translate_internal(AddressSpaceDispatch *d, hwaddr addr, hwaddr *x
  * This function is called from RCU critical section.  It is the common
  * part of flatview_do_translate and address_space_translate_cached.
  */
-static MemoryRegionSection address_space_translate_iommu(IOMMUMemoryRegion *iommu_mr,
-                                                         hwaddr *xlat,
-                                                         hwaddr *plen_out,
-                                                         hwaddr *page_mask_out,
-                                                         bool is_write,
-                                                         bool is_mmio,
-                                                         AddressSpace **target_as,
-                                                         MemTxAttrs attrs)
+static MemoryRegionSection *
+address_space_translate_iommu(IOMMUMemoryRegion *iommu_mr, hwaddr *xlat,
+                              hwaddr *plen_out, hwaddr *page_mask_out,
+                              bool is_write, bool is_mmio,
+                              AddressSpace **target_as, MemTxAttrs attrs)
 {
     MemoryRegionSection *section;
     hwaddr page_mask = (hwaddr)-1;
@@ -463,10 +463,10 @@ static MemoryRegionSection address_space_translate_iommu(IOMMUMemoryRegion *iomm
     if (page_mask_out) {
         *page_mask_out = page_mask;
     }
-    return *section;
+    return section;
 
 unassigned:
-    return (MemoryRegionSection) { .mr = &io_mem_unassigned };
+    return &unassigned_section;
 }
 
 /**
@@ -489,15 +489,10 @@ unassigned:
  *
  * This function is called from RCU critical section
  */
-static MemoryRegionSection flatview_do_translate(FlatView *fv,
-                                                 hwaddr addr,
-                                                 hwaddr *xlat,
-                                                 hwaddr *plen_out,
-                                                 hwaddr *page_mask_out,
-                                                 bool is_write,
-                                                 bool is_mmio,
-                                                 AddressSpace **target_as,
-                                                 MemTxAttrs attrs)
+static MemoryRegionSection *
+flatview_do_translate(FlatView *fv, hwaddr addr, hwaddr *xlat, hwaddr *plen_out,
+                      hwaddr *page_mask_out, bool is_write, bool is_mmio,
+                      AddressSpace **target_as, MemTxAttrs attrs)
 {
     MemoryRegionSection *section;
     IOMMUMemoryRegion *iommu_mr;
@@ -523,14 +518,14 @@ static MemoryRegionSection flatview_do_translate(FlatView *fv,
         *page_mask_out = ~TARGET_PAGE_MASK;
     }
 
-    return *section;
+    return section;
 }
 
 /* Called from RCU critical section */
 IOMMUTLBEntry address_space_get_iotlb_entry(AddressSpace *as, hwaddr addr,
                                             bool is_write, MemTxAttrs attrs)
 {
-    MemoryRegionSection section;
+    MemoryRegionSection *section;
     hwaddr xlat, page_mask;
 
     /*
@@ -542,13 +537,13 @@ IOMMUTLBEntry address_space_get_iotlb_entry(AddressSpace *as, hwaddr addr,
                                     attrs);
 
     /* Illegal translation */
-    if (section.mr == &io_mem_unassigned) {
+    if (section->mr == &io_mem_unassigned) {
         goto iotlb_fail;
     }
 
     /* Convert memory region offset into address space offset */
-    xlat += section.offset_within_address_space -
-        section.offset_within_region;
+    xlat += section->offset_within_address_space -
+        section->offset_within_region;
 
     return (IOMMUTLBEntry) {
         .target_as = as,
@@ -569,13 +564,13 @@ MemoryRegion *flatview_translate(FlatView *fv, hwaddr addr, hwaddr *xlat,
                                  MemTxAttrs attrs)
 {
     MemoryRegion *mr;
-    MemoryRegionSection section;
+    MemoryRegionSection *section;
     AddressSpace *as = NULL;
 
     /* This can be MMIO, so setup MMIO bit. */
     section = flatview_do_translate(fv, addr, xlat, plen, NULL,
                                     is_write, true, &as, attrs);
-    mr = section.mr;
+    mr = section->mr;
 
     if (xen_enabled() && memory_access_is_direct(mr, is_write, attrs)) {
         hwaddr page = ((addr & TARGET_PAGE_MASK) + TARGET_PAGE_SIZE) - addr;
@@ -3618,7 +3613,7 @@ static inline MemoryRegion *address_space_translate_cached(
     MemoryRegionCache *cache, hwaddr addr, hwaddr *xlat,
     hwaddr *plen, bool is_write, MemTxAttrs attrs)
 {
-    MemoryRegionSection section;
+    MemoryRegionSection *section;
     MemoryRegion *mr;
     IOMMUMemoryRegion *iommu_mr;
     AddressSpace *target_as;
@@ -3636,7 +3631,7 @@ static inline MemoryRegion *address_space_translate_cached(
     section = address_space_translate_iommu(iommu_mr, xlat, plen,
                                             NULL, is_write, true,
                                             &target_as, attrs);
-    return section.mr;
+    return section->mr;
 }
 
 /* Called within RCU critical section.  */
-- 
2.34.1



^ permalink raw reply related	[flat|nested] 58+ messages in thread

* [RFC 17/26] memory: Add a translation helper to return MemoryRegionSection
  2025-08-07 12:30 [RFC 00/26] rust/memory: Integrate the vm-memory API from rust-vmm Zhao Liu
                   ` (15 preceding siblings ...)
  2025-08-07 12:30 ` [RFC 16/26] memory: Make flatview_do_translate() return a pointer to MemoryRegionSection Zhao Liu
@ 2025-08-07 12:30 ` Zhao Liu
  2025-08-07 12:30 ` [RFC 18/26] memory: Rename flatview_access_allowed() to memory_region_access_allowed() Zhao Liu
                   ` (10 subsequent siblings)
  27 siblings, 0 replies; 58+ messages in thread
From: Zhao Liu @ 2025-08-07 12:30 UTC (permalink / raw)
  To: Paolo Bonzini, Peter Xu, David Hildenbrand,
	Philippe Mathieu-Daudé, Manos Pitsidianakis,
	Alex Bennée, Thomas Huth, Junjie Mao
  Cc: qemu-devel, qemu-rust, Dapeng Mi, Chuanxiao Dong, Zhao Liu

The Rust side will use MemoryRegionSection to organize non-overlapping
memory "region" abstractions, so it's necessary to provide a
translation helper variant that returns the MemoryRegionSection
directly.

Additionally, refine and complete the documentation for the
translation helpers.
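
A sketch of the call the Rust side is expected to make (the bindings::
names and types are assumptions):

  use qemu_api::bindings;

  /// # Safety
  /// Must be called inside an RCU read-side critical section.
  unsafe fn translate_section(
      fv: *mut bindings::FlatView,
      addr: u64,
      want_len: u64,
      is_write: bool,
      attrs: bindings::MemTxAttrs,
  ) -> (*mut bindings::MemoryRegionSection, u64, u64) {
      let mut xlat = 0u64;
      // On return, len is clamped to what is contiguously accessible.
      let mut len = want_len;
      let section = bindings::flatview_translate_section(
          fv, addr, &mut xlat, &mut len, is_write, attrs);
      (section, xlat, len)
  }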

Signed-off-by: Zhao Liu <zhao1.liu@intel.com>
---
 include/system/memory.h | 68 +++++++++++++++++++++++++++++++++++------
 system/physmem.c        | 22 ++++++++++---
 2 files changed, 77 insertions(+), 13 deletions(-)

diff --git a/include/system/memory.h b/include/system/memory.h
index f492e1fc78bf..eab69e15e10f 100644
--- a/include/system/memory.h
+++ b/include/system/memory.h
@@ -3053,24 +3053,74 @@ void address_space_cache_destroy(MemoryRegionCache *cache);
 IOMMUTLBEntry address_space_get_iotlb_entry(AddressSpace *as, hwaddr addr,
                                             bool is_write, MemTxAttrs attrs);
 
-/* address_space_translate: translate an address range into an address space
- * into a MemoryRegion and an address range into that section.  Should be
+/**
+ * flatview_translate_section: translate a guest physical address range
+ * to the corresponding MemoryRegionSection in the FlatView. Should be
  * called from an RCU critical section, to avoid that the last reference
- * to the returned region disappears after address_space_translate returns.
- *
- * @fv: #FlatView to be accessed
- * @addr: address within that address space
- * @xlat: pointer to address within the returned memory region section's
- * #MemoryRegion.
- * @len: pointer to length
+ * to the memory region (pointed by returned section) disappears after
+ * flatview_translate_section returns.
+ *
+ * @fv: the flat view to be accessed.
+ * @addr: the address to be translated in above address space.
+ * @xlat: the translated address offset within the returned section's
+ *        #MemoryRegion.
+ * @len: pointer to length, and it will be changed to valid read/write
+ *       length of the translated address after this function returns.
  * @is_write: indicates the transfer direction
  * @attrs: memory attributes
+ *
+ * Returns:
+ * The #MemoryRegionSection that contains the translated address
+ */
+MemoryRegionSection *flatview_translate_section(FlatView *fv, hwaddr addr,
+                                                hwaddr *xlat, hwaddr *len,
+                                                bool is_write, MemTxAttrs attrs);
+
+/**
+ * flatview_translate: translate a guest physical address range
+ * to the corresponding MemoryRegion in the FlatView. Should be
+ * called from an RCU critical section, to avoid that the last reference
+ * to the returned memory region disappears after flatview_translate
+ * returns.
+ *
+ * This function is the variant of flatview_translate_section(), with the
+ * difference that it returns the MemoryRegion contained in the
+ * MemoryRegionSection.
+ *
+ * @fv: the flat view to be accessed.
+ * @addr: the address to be translated in above address space.
+ * @xlat: the translated address offset within memory region.
+ * @len: pointer to length, and it will be changed to valid read/write
+ *       length of the translated address after this function returns.
+ * @is_write: whether the translation operation is for write.
+ * @attrs: memory transaction attributes.
+ *
+ * Returns:
+ * The #MemoryRegion that contains the translated address.
  */
 MemoryRegion *flatview_translate(FlatView *fv,
                                  hwaddr addr, hwaddr *xlat,
                                  hwaddr *len, bool is_write,
                                  MemTxAttrs attrs);
 
+/**
+ * address_space_translate: translate a guest physical address range
+ * to the corresponding MemoryRegion in the AddressSpace's current
+ * FlatView. Should be called from an RCU critical section, to avoid
+ * that the last reference to the returned memory region disappears
+ * after address_space_translate returns.
+ *
+ * This function is the variant of flatview_translate(), with the difference
+ * that it accesses the AddressSpace which contains FlatView.
+ *
+ * @as: #AddressSpace to be accessed
+ * @addr: the address to be translated in above address space.
+ * @xlat: the translated address offset within memory region.
+ * @len: pointer to length, and it will be changed to valid read/write
+ *       length of the translated address after this function returns.
+ * @is_write: whether the translation operation is for write.
+ * @attrs: memory transaction attributes.
+ */
 static inline MemoryRegion *address_space_translate(AddressSpace *as,
                                                     hwaddr addr, hwaddr *xlat,
                                                     hwaddr *len, bool is_write,
diff --git a/system/physmem.c b/system/physmem.c
index 4af29ea2168e..d2106d0ffa87 100644
--- a/system/physmem.c
+++ b/system/physmem.c
@@ -559,9 +559,9 @@ iotlb_fail:
 }
 
 /* Called from RCU critical section */
-MemoryRegion *flatview_translate(FlatView *fv, hwaddr addr, hwaddr *xlat,
-                                 hwaddr *plen, bool is_write,
-                                 MemTxAttrs attrs)
+MemoryRegionSection *flatview_translate_section(FlatView *fv, hwaddr addr,
+                                                hwaddr *xlat, hwaddr *plen,
+                                                bool is_write, MemTxAttrs attrs)
 {
     MemoryRegion *mr;
     MemoryRegionSection *section;
@@ -577,7 +577,21 @@ MemoryRegion *flatview_translate(FlatView *fv, hwaddr addr, hwaddr *xlat,
         *plen = MIN(page, *plen);
     }
 
-    return mr;
+    return section;
+}
+
+/* Called from RCU critical section */
+MemoryRegion *flatview_translate(FlatView *fv, hwaddr addr, hwaddr *xlat,
+                                 hwaddr *plen, bool is_write,
+                                 MemTxAttrs attrs)
+{
+    MemoryRegionSection *section;
+
+    /* This can be MMIO, so setup MMIO bit. */
+    section = flatview_translate_section(fv, addr, xlat, plen,
+                                         is_write, attrs);
+
+    return section->mr;
 }
 
 #ifdef CONFIG_TCG
-- 
2.34.1



^ permalink raw reply related	[flat|nested] 58+ messages in thread

* [RFC 18/26] memory: Rename flatview_access_allowed() to memory_region_access_allowed()
  2025-08-07 12:30 [RFC 00/26] rust/memory: Integrate the vm-memory API from rust-vmm Zhao Liu
                   ` (16 preceding siblings ...)
  2025-08-07 12:30 ` [RFC 17/26] memory: Add a translation helper to return MemoryRegionSection Zhao Liu
@ 2025-08-07 12:30 ` Zhao Liu
  2025-08-07 12:41   ` Manos Pitsidianakis
  2025-08-07 12:30 ` [RFC 19/26] memory: Add MemoryRegionSection based misc helpers Zhao Liu
                   ` (9 subsequent siblings)
  27 siblings, 1 reply; 58+ messages in thread
From: Zhao Liu @ 2025-08-07 12:30 UTC (permalink / raw)
  To: Paolo Bonzini, Peter Xu, David Hildenbrand,
	Philippe Mathieu-Daudé, Manos Pitsidianakis,
	Alex Bennée, Thomas Huth, Junjie Mao
  Cc: qemu-devel, qemu-rust, Dapeng Mi, Chuanxiao Dong, Zhao Liu

flatview_access_allowed() takes a `MemoryRegion *mr` argument, so it
operates on a MemoryRegion and should be named
memory_region_access_allowed().

Signed-off-by: Zhao Liu <zhao1.liu@intel.com>
---
 system/physmem.c | 14 +++++++-------
 1 file changed, 7 insertions(+), 7 deletions(-)

diff --git a/system/physmem.c b/system/physmem.c
index d2106d0ffa87..8aaaab4d3a74 100644
--- a/system/physmem.c
+++ b/system/physmem.c
@@ -2921,7 +2921,7 @@ bool prepare_mmio_access(MemoryRegion *mr)
 }
 
 /**
- * flatview_access_allowed
+ * memory_region_access_allowed
  * @mr: #MemoryRegion to be accessed
  * @attrs: memory transaction attributes
  * @addr: address within that memory region
@@ -2931,8 +2931,8 @@ bool prepare_mmio_access(MemoryRegion *mr)
  *
  * Returns: true if transaction is allowed, false if denied.
  */
-static bool flatview_access_allowed(MemoryRegion *mr, MemTxAttrs attrs,
-                                    hwaddr addr, hwaddr len)
+static bool memory_region_access_allowed(MemoryRegion *mr, MemTxAttrs attrs,
+                                         hwaddr addr, hwaddr len)
 {
     if (likely(!attrs.memory)) {
         return true;
@@ -2952,7 +2952,7 @@ static MemTxResult flatview_write_continue_step(MemTxAttrs attrs,
                                                 hwaddr len, hwaddr mr_addr,
                                                 hwaddr *l, MemoryRegion *mr)
 {
-    if (!flatview_access_allowed(mr, attrs, mr_addr, *l)) {
+    if (!memory_region_access_allowed(mr, attrs, mr_addr, *l)) {
         return MEMTX_ACCESS_ERROR;
     }
 
@@ -3036,7 +3036,7 @@ static MemTxResult flatview_write(FlatView *fv, hwaddr addr, MemTxAttrs attrs,
 
     l = len;
     mr = flatview_translate(fv, addr, &mr_addr, &l, true, attrs);
-    if (!flatview_access_allowed(mr, attrs, addr, len)) {
+    if (!memory_region_access_allowed(mr, attrs, addr, len)) {
         return MEMTX_ACCESS_ERROR;
     }
     return flatview_write_continue(fv, addr, attrs, buf, len,
@@ -3048,7 +3048,7 @@ static MemTxResult flatview_read_continue_step(MemTxAttrs attrs, uint8_t *buf,
                                                hwaddr *l,
                                                MemoryRegion *mr)
 {
-    if (!flatview_access_allowed(mr, attrs, mr_addr, *l)) {
+    if (!memory_region_access_allowed(mr, attrs, mr_addr, *l)) {
         return MEMTX_ACCESS_ERROR;
     }
 
@@ -3127,7 +3127,7 @@ static MemTxResult flatview_read(FlatView *fv, hwaddr addr,
 
     l = len;
     mr = flatview_translate(fv, addr, &mr_addr, &l, false, attrs);
-    if (!flatview_access_allowed(mr, attrs, addr, len)) {
+    if (!memory_region_access_allowed(mr, attrs, addr, len)) {
         return MEMTX_ACCESS_ERROR;
     }
     return flatview_read_continue(fv, addr, attrs, buf, len,
-- 
2.34.1



^ permalink raw reply related	[flat|nested] 58+ messages in thread

* [RFC 19/26] memory: Add MemoryRegionSection based misc helpers
  2025-08-07 12:30 [RFC 00/26] rust/memory: Integrate the vm-memory API from rust-vmm Zhao Liu
                   ` (17 preceding siblings ...)
  2025-08-07 12:30 ` [RFC 18/26] memory: Rename flatview_access_allowed() to memory_region_access_allowed() Zhao Liu
@ 2025-08-07 12:30 ` Zhao Liu
  2025-08-07 12:30 ` [RFC 20/26] memory: Add wrappers of intermediate steps for read/write Zhao Liu
                   ` (8 subsequent siblings)
  27 siblings, 0 replies; 58+ messages in thread
From: Zhao Liu @ 2025-08-07 12:30 UTC (permalink / raw)
  To: Paolo Bonzini, Peter Xu, David Hildenbrand,
	Philippe Mathieu-Daudé, Manos Pitsidianakis,
	Alex Bennée, Thomas Huth, Junjie Mao
  Cc: qemu-devel, qemu-rust, Dapeng Mi, Chuanxiao Dong, Zhao Liu

Add the following helpers:
 * section_access_allowed()
   - used to check access in GuestMemory::try_access().

 * section_covers_region_addr()
   - used to implement GuestMemoryRegion::check_address().

 * section_get_host_addr()
   - used to implement GuestMemoryRegion::get_host_address().

 * section_fuzz_dma_read()
   - used to insert fuzz hook before read/load.
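
A sketch of how these helpers could back the vm-memory trait methods
named above (the wrapper type and bindings:: names are assumptions, not
the actual Rust code in this series):

  use qemu_api::bindings;

  // Hypothetical thin wrapper around a C MemoryRegionSection pointer.
  struct Section(*const bindings::MemoryRegionSection);

  impl Section {
      // Would back GuestMemoryRegion::check_address().
      fn check_address(&self, region_addr: u64) -> bool {
          unsafe { bindings::section_covers_region_addr(self.0, region_addr) }
      }

      // Would back GuestMemoryRegion::get_host_address() for RAM-backed
      // regions.
      fn host_address(&self, region_addr: u64) -> *mut u8 {
          unsafe { bindings::section_get_host_addr(self.0, region_addr) }
      }
  }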

Signed-off-by: Zhao Liu <zhao1.liu@intel.com>
---
 include/system/memory.h | 56 +++++++++++++++++++++++++++++++++++++++++
 system/physmem.c        | 30 ++++++++++++++++++++++
 2 files changed, 86 insertions(+)

diff --git a/include/system/memory.h b/include/system/memory.h
index eab69e15e10f..110ad0a3b590 100644
--- a/include/system/memory.h
+++ b/include/system/memory.h
@@ -3357,6 +3357,62 @@ address_space_write_cached(MemoryRegionCache *cache, hwaddr addr,
 MemTxResult address_space_set(AddressSpace *as, hwaddr addr,
                               uint8_t c, hwaddr len, MemTxAttrs attrs);
 
+/**
+ * section_access_allowed
+ *
+ * @section: #MemoryRegionSection to be accessed.
+ * @attrs: memory transaction attributes.
+ * @addr: address within that memory region.
+ * @len: the number of bytes to access.
+ *
+ * Check if a memory transaction is allowed.
+ *
+ * Returns: true if transaction is allowed, false if denied.
+ */
+bool section_access_allowed(MemoryRegionSection *section,
+                            MemTxAttrs attrs, hwaddr addr,
+                            hwaddr len);
+
+/**
+ * section_covers_region_addr
+ *
+ * @section: #MemoryRegionSection to be accessed.
+ * @region_addr: memory region address within the region, which is
+ *               pointed by #MemoryRegionSection.
+ *
+ * Check if a region address is covered by the #MemoryRegionSection.
+ *
+ * Returns: true if the address is covered by the section, else false.
+ */
+bool section_covers_region_addr(const MemoryRegionSection *section,
+                                hwaddr region_addr);
+
+/**
+ * section_get_host_addr
+ *
+ * @section: #MemoryRegionSection to be accessed.
+ * @region_addr: memory region address within the region, which is
+ *               pointed by #MemoryRegionSection.
+ *
+ * Get the pointer to the host address.
+ *
+ * Returns: pointer to the host address.
+ */
+uint8_t *section_get_host_addr(const MemoryRegionSection *section,
+                               hwaddr region_addr);
+
+/**
+ * section_fuzz_dma_read
+ *
+ * @section: #MemoryRegionSection to be accessed.
+ * @addr: memory address to be fuzzed.
+ * @len: length of the memory
+ *
+ * This function is wrapper of fuzz_dma_read_cb().
+ */
+void section_fuzz_dma_read(MemoryRegionSection *section,
+                           hwaddr addr, hwaddr len);
+
 /*
  * Inhibit technologies that require discarding of pages in RAM blocks, e.g.,
  * to manage the actual amount of memory consumed by the VM (then, the memory
diff --git a/system/physmem.c b/system/physmem.c
index 8aaaab4d3a74..e06633f4d8a2 100644
--- a/system/physmem.c
+++ b/system/physmem.c
@@ -320,6 +320,29 @@ static inline bool section_covers_addr(const MemoryRegionSection *section,
                              int128_getlo(section->size), addr);
 }
 
+bool section_covers_region_addr(const MemoryRegionSection *section,
+                                hwaddr region_addr)
+{
+    return section->offset_within_region <= region_addr &&
+           section->offset_within_region + int128_get64(section->size) >= region_addr;
+}
+
+uint8_t *section_get_host_addr(const MemoryRegionSection *section,
+                               hwaddr region_addr)
+{
+    MemoryRegion *mr = section->mr;
+    assert(mr && mr->ram_block);
+
+    return qemu_map_ram_ptr(mr->ram_block,
+                            section->offset_within_region + region_addr);
+}
+
+void section_fuzz_dma_read(MemoryRegionSection *section,
+                           hwaddr addr, hwaddr len)
+{
+    fuzz_dma_read_cb(addr, len, section->mr);
+}
+
 static MemoryRegionSection *phys_page_find(AddressSpaceDispatch *d, hwaddr addr)
 {
     PhysPageEntry lp = d->phys_map, *p;
@@ -2947,6 +2970,13 @@ static bool memory_region_access_allowed(MemoryRegion *mr, MemTxAttrs attrs,
     return false;
 }
 
+bool section_access_allowed(MemoryRegionSection *section,
+                            MemTxAttrs attrs, hwaddr addr,
+                            hwaddr len)
+{
+    return memory_region_access_allowed(section->mr, attrs, addr, len);
+}
+
 static MemTxResult flatview_write_continue_step(MemTxAttrs attrs,
                                                 const uint8_t *buf,
                                                 hwaddr len, hwaddr mr_addr,
-- 
2.34.1



^ permalink raw reply related	[flat|nested] 58+ messages in thread

* [RFC 20/26] memory: Add wrappers of intermediate steps for read/write
  2025-08-07 12:30 [RFC 00/26] rust/memory: Integrate the vm-memory API from rust-vmm Zhao Liu
                   ` (18 preceding siblings ...)
  2025-08-07 12:30 ` [RFC 19/26] memory: Add MemoryRegionSection based misc helpers Zhao Liu
@ 2025-08-07 12:30 ` Zhao Liu
  2025-08-07 12:30 ` [RFC 21/26] memory: Add store/load interfaces for Rust side Zhao Liu
                   ` (7 subsequent siblings)
  27 siblings, 0 replies; 58+ messages in thread
From: Zhao Liu @ 2025-08-07 12:30 UTC (permalink / raw)
  To: Paolo Bonzini, Peter Xu, David Hildenbrand,
	Philippe Mathieu-Daudé, Manos Pitsidianakis,
	Alex Bennée, Thomas Huth, Junjie Mao
  Cc: qemu-devel, qemu-rust, Dapeng Mi, Chuanxiao Dong, Zhao Liu

Add these 2 wrappers to allow bindgen to generate the bindings based on
MemoryRegionSection:
 * section_rust_write_continue_step()
 * section_rust_read_continue_step()

Then the Rust side is able to rebuild the full write/read process as
address_space_write()/address_space_read_full() do.
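
A sketch of the write loop the Rust side could rebuild on top of the
new wrapper (bindings:: names are assumptions; error handling is
simplified):

  use qemu_api::bindings;

  /// # Safety
  /// Must be called inside an RCU read-side critical section.
  unsafe fn write_to_section(
      section: *mut bindings::MemoryRegionSection,
      attrs: bindings::MemTxAttrs,
      mut buf: &[u8],
      mut mr_addr: u64,
  ) -> bindings::MemTxResult {
      while !buf.is_empty() {
          let mut l = buf.len() as u64;
          let r = bindings::section_rust_write_continue_step(
              section, attrs, buf.as_ptr(), buf.len() as u64, mr_addr, &mut l);
          if r != 0 {
              // Anything other than MEMTX_OK (0) aborts the transfer.
              return r;
          }
          // The step reports how much it actually handled in `l`.
          buf = &buf[l as usize..];
          mr_addr += l;
      }
      0 // MEMTX_OK
  }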

Signed-off-by: Zhao Liu <zhao1.liu@intel.com>
---
 include/system/memory.h | 51 +++++++++++++++++++++++++++++++++++++++++
 system/physmem.c        | 16 +++++++++++++
 2 files changed, 67 insertions(+)

diff --git a/include/system/memory.h b/include/system/memory.h
index 110ad0a3b590..a75c8c348f58 100644
--- a/include/system/memory.h
+++ b/include/system/memory.h
@@ -3413,6 +3413,57 @@ uint8_t *section_get_host_addr(const MemoryRegionSection *section,
 void section_fuzz_dma_read(MemoryRegionSection *section,
                            hwaddr addr, hwaddr len);
 
+/**
+ * section_rust_write_continue_step: write to #MemoryRegionSection.
+ *
+ * Note: This function should only be used by the Rust side; users
+ * shouldn't call it directly!
+ *
+ * This function provides a wrapper of flatview_write_continue_step(),
+ * and allows Rust side to re-build a full write process as
+ * address_space_write() did.
+ *
+ * Should be called from an RCU critical section.
+ *
+ * @section: #MemoryRegionSection to be accessed.
+ * @attrs: memory transaction attributes.
+ * @buf: buffer with the data to be written.
+ * @len: the number of bytes to write.
+ * @mr_addr: address within that memory region.
+ * @l: the actual length of the data written, set after the function returns.
+ *
+ * Return a MemTxResult indicating whether the operation succeeded
+ * or failed (eg unassigned memory, device rejected the transaction,
+ * IOMMU fault).
+ */
+MemTxResult section_rust_write_continue_step(MemoryRegionSection *section,
+    MemTxAttrs attrs, const uint8_t *buf, hwaddr len, hwaddr mr_addr, hwaddr *l);
+
+/**
+ * section_read_continue_step: read from #MemoryRegionSection.
+ *
+ * Note: This function should only be used by the Rust side; users
+ * shouldn't call it directly!
+ *
+ * This function provides a wrapper of flatview_read_continue_step(),
+ * and allows Rust side to re-build a full write process as
+ * address_space_read_full() did.
+ *
+ * Should be called from an RCU critical section.
+ *
+ * @section: #MemoryRegionSection to be accessed.
+ * @attrs: memory transaction attributes.
+ * @buf: buffer to be written.
+ * @len: the number of bytes expected to be read.
+ * @mr_addr: address within that memory region.
+ * @l: the actual length of the data read, set after the function returns.
+ *
+ * Return a MemTxResult indicating whether the operation succeeded
+ * or failed.
+ */
+MemTxResult section_read_continue_step(MemoryRegionSection *section,
+    MemTxAttrs attrs, uint8_t *buf, hwaddr len, hwaddr mr_addr, hwaddr *l);
+
 /*
  * Inhibit technologies that require discarding of pages in RAM blocks, e.g.,
  * to manage the actual amount of memory consumed by the VM (then, the memory
diff --git a/system/physmem.c b/system/physmem.c
index e06633f4d8a2..0c30dea775ca 100644
--- a/system/physmem.c
+++ b/system/physmem.c
@@ -3119,6 +3119,14 @@ static MemTxResult flatview_read_continue_step(MemTxAttrs attrs, uint8_t *buf,
     }
 }
 
+MemTxResult
+section_read_continue_step(MemoryRegionSection *section, MemTxAttrs attrs,
+                           uint8_t *buf, hwaddr len, hwaddr mr_addr,
+                           hwaddr *l)
+{
+    return flatview_read_continue_step(attrs, buf, len, mr_addr, l, section->mr);
+}
+
 /* Called within RCU critical section.  */
 MemTxResult flatview_read_continue(FlatView *fv, hwaddr addr,
                                    MemTxAttrs attrs, void *ptr,
@@ -3707,6 +3715,14 @@ static MemTxResult address_space_write_continue_cached(MemTxAttrs attrs,
     return result;
 }
 
+MemTxResult
+section_rust_write_continue_step(MemoryRegionSection *section, MemTxAttrs attrs,
+                                 const uint8_t *buf, hwaddr len, hwaddr mr_addr,
+                                 hwaddr *l)
+{
+    return flatview_write_continue_step(attrs, buf, len, mr_addr, l, section->mr);
+}
+
 /* Called within RCU critical section.  */
 static MemTxResult address_space_read_continue_cached(MemTxAttrs attrs,
                                                       void *ptr, hwaddr len,
-- 
2.34.1



^ permalink raw reply related	[flat|nested] 58+ messages in thread

* [RFC 21/26] memory: Add store/load interfaces for Rust side
  2025-08-07 12:30 [RFC 00/26] rust/memory: Integrate the vm-memory API from rust-vmm Zhao Liu
                   ` (19 preceding siblings ...)
  2025-08-07 12:30 ` [RFC 20/26] memory: Add wrappers of intermediate steps for read/write Zhao Liu
@ 2025-08-07 12:30 ` Zhao Liu
  2025-08-07 12:30 ` [RFC 22/26] rust/memory: Implement vm_memory::GuestMemoryRegion for MemoryRegionSection Zhao Liu
                   ` (6 subsequent siblings)
  27 siblings, 0 replies; 58+ messages in thread
From: Zhao Liu @ 2025-08-07 12:30 UTC (permalink / raw)
  To: Paolo Bonzini, Peter Xu, David Hildenbrand,
	Philippe Mathieu-Daudé, Manos Pitsidianakis,
	Alex Bennée, Thomas Huth, Junjie Mao
  Cc: qemu-devel, qemu-rust, Dapeng Mi, Chuanxiao Dong, Zhao Liu

At present, there are many store/load variants defined in
memory_ldst.c.inc.

However, Bytes::store() and Bytes::load() in vm-memory are bound to the
AtomicAccess trait, which makes it (almost) impossible to select the
proper interface (for l, w or q) based on a specific type.

So it's necessary to provide interfaces that hide the type details as
much as possible. Compared with address_space_st{size} or
address_space_ld{size}, the differences include:

 * No translation, only memory access.

 * Only native endian is supported, so the Rust side must handle the
   endianness before/after calling store()/load().

 * A byte array is used instead of a single uint64_t for the value to
   be written or read, so the Rust side doesn't need to convert a
   generic type to u64.

   - But the extra cost is that the interfaces internally need to
     convert between the byte array and uint64_t.

 * The cross-region case is not handled via MMIO access; the Rust side
   will handle such abnormal cases.
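
As a rough sketch of how the Rust side is expected to call these
interfaces (the real callers arrive in a later patch; `section`,
`mr_offset` and the use of MEMTXATTRS_UNSPECIFIED here are
placeholders), a u32 is passed as a native-endian byte array:

  // Hypothetical caller, run inside an RCU critical section:
  // `section` is a valid *mut MemoryRegionSection and `mr_offset`
  // an offset inside its MemoryRegion.
  let val: u32 = 0x1234_5678;
  let buf = val.to_ne_bytes(); // native endian, as required above
  let ret = unsafe {
      bindings::section_rust_store(section, mr_offset, buf.as_ptr(),
                                   MEMTXATTRS_UNSPECIFIED, buf.len() as u64)
  };
  assert_eq!(ret, bindings::MEMTX_OK);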

Signed-off-by: Zhao Liu <zhao1.liu@intel.com>
---
 include/system/memory.h | 52 +++++++++++++++++++++++++++++++--
 system/physmem.c        | 65 +++++++++++++++++++++++++++++++++++++++--
 2 files changed, 112 insertions(+), 5 deletions(-)

diff --git a/include/system/memory.h b/include/system/memory.h
index a75c8c348f58..f0f51f1c4c89 100644
--- a/include/system/memory.h
+++ b/include/system/memory.h
@@ -3440,7 +3440,7 @@ MemTxResult section_rust_write_continue_step(MemoryRegionSection *section,
     MemTxAttrs attrs, const uint8_t *buf, hwaddr len, hwaddr mr_addr, hwaddr *l);
 
 /**
- * section_read_continue_step: read from #MemoryRegionSection.
+ * section_rust_read_continue_step: read from #MemoryRegionSection.
  *
  * Note: This function should only be used by the Rust side; users
  * shouldn't call it directly!
@@ -3461,9 +3461,57 @@ MemTxResult section_rust_write_continue_step(MemoryRegionSection *section,
  * Return a MemTxResult indicating whether the operation succeeded
  * or failed.
  */
-MemTxResult section_read_continue_step(MemoryRegionSection *section,
+MemTxResult section_rust_read_continue_step(MemoryRegionSection *section,
     MemTxAttrs attrs, uint8_t *buf, hwaddr len, hwaddr mr_addr, hwaddr *l);
 
+/**
+ * section_rust_store: store data to #MemoryRegionSection.
+ *
+ * Note: This function should only be used by the Rust side; users
+ * shouldn't call it directly!
+ *
+ * This function provides a wrapper for address_space_st{size} without
+ * translation, and only supports native endian by default.
+ *
+ * Should be called from an RCU critical section.
+ *
+ * @section: #MemoryRegionSection to be accessed.
+ * @mr_offset: address within that memory region.
+ * @buf: buffer containing the data to be written.
+ * @attrs: memory transaction attributes.
+ * @len: the number of bytes to be written.
+ *
+ * Return a MemTxResult indicating whether the operation succeeded
+ * or failed.
+ */
+MemTxResult section_rust_store(MemoryRegionSection *section,
+                               hwaddr mr_offset, const uint8_t *buf,
+                               MemTxAttrs attrs, hwaddr len);
+
+/**
+ * section_rust_load: load data from #MemoryRegionSection.
+ *
+ * Note: This function should only be used by the Rust side; users
+ * shouldn't call it directly!
+ *
+ * This function provides a wrapper for address_space_ld{size} without
+ * translation, and only supports native endian by default.
+ *
+ * Should be called from an RCU critical section.
+ *
+ * @section: #MemoryRegionSection to be accessed.
+ * @mr_offset: address within that memory region.
+ * @buf: buffer to store the loaded data.
+ * @attrs: memory transaction attributes.
+ * @len: the number of bytes to be read.
+ *
+ * Return a MemTxResult indicating whether the operation succeeded
+ * or failed.
+ */
+MemTxResult section_rust_load(MemoryRegionSection *section,
+                              hwaddr mr_offset, uint8_t *buf,
+                              MemTxAttrs attrs, hwaddr len);
+
 /*
  * Inhibit technologies that require discarding of pages in RAM blocks, e.g.,
  * to manage the actual amount of memory consumed by the VM (then, the memory
diff --git a/system/physmem.c b/system/physmem.c
index 0c30dea775ca..6048d5faac8c 100644
--- a/system/physmem.c
+++ b/system/physmem.c
@@ -3120,9 +3120,9 @@ static MemTxResult flatview_read_continue_step(MemTxAttrs attrs, uint8_t *buf,
 }
 
 MemTxResult
-section_read_continue_step(MemoryRegionSection *section, MemTxAttrs attrs,
-                           uint8_t *buf, hwaddr len, hwaddr mr_addr,
-                           hwaddr *l)
+section_rust_read_continue_step(MemoryRegionSection *section, MemTxAttrs attrs,
+                                uint8_t *buf, hwaddr len, hwaddr mr_addr,
+                                hwaddr *l)
 {
     return flatview_read_continue_step(attrs, buf, len, mr_addr, l, section->mr);
 }
@@ -3239,6 +3239,65 @@ void cpu_physical_memory_rw(hwaddr addr, void *buf,
                      buf, len, is_write);
 }
 
+MemTxResult section_rust_store(MemoryRegionSection *section,
+                               hwaddr mr_offset, const uint8_t *buf,
+                               MemTxAttrs attrs, hwaddr len)
+{
+    MemoryRegion *mr = section->mr;
+    MemTxResult r;
+    uint64_t val;
+
+    val = ldn_he_p(buf, len);
+    if (!memory_access_is_direct(mr, true, attrs)) {
+        bool release_lock = false;
+
+        release_lock |= prepare_mmio_access(mr);
+        r = memory_region_dispatch_write(mr, mr_offset, val,
+                                         size_memop(len) |
+                                         devend_memop(DEVICE_NATIVE_ENDIAN),
+                                         attrs);
+        if (release_lock) {
+            bql_unlock();
+        }
+    } else {
+        uint8_t *ptr = qemu_map_ram_ptr(mr->ram_block, mr_offset);
+        stn_p(ptr, len, val);
+        invalidate_and_set_dirty(mr, mr_offset, len);
+        r = MEMTX_OK;
+    }
+
+    return r;
+}
+
+MemTxResult section_rust_load(MemoryRegionSection *section,
+                              hwaddr mr_offset, uint8_t *buf,
+                              MemTxAttrs attrs, hwaddr len)
+{
+    MemoryRegion *mr = section->mr;
+    MemTxResult r;
+    uint64_t val;
+
+    if (!memory_access_is_direct(mr, false, attrs)) {
+        bool release_lock = false;
+
+        release_lock |= prepare_mmio_access(mr);
+        r = memory_region_dispatch_read(mr, mr_offset, &val,
+                                        size_memop(len) |
+                                        devend_memop(DEVICE_NATIVE_ENDIAN),
+                                        attrs);
+        if (release_lock) {
+            bql_unlock();
+        }
+    } else {
+        uint8_t *ptr = qemu_map_ram_ptr(mr->ram_block, mr_offset);
+        val = ldn_p(ptr, len);
+        r = MEMTX_OK;
+    }
+
+    stn_he_p(buf, len, val);
+    return r;
+}
+
 enum write_rom_type {
     WRITE_DATA,
     FLUSH_CACHE,
-- 
2.34.1



^ permalink raw reply related	[flat|nested] 58+ messages in thread

* [RFC 22/26] rust/memory: Implement vm_memory::GuestMemoryRegion for MemoryRegionSection
  2025-08-07 12:30 [RFC 00/26] rust/memory: Integrate the vm-memory API from rust-vmm Zhao Liu
                   ` (20 preceding siblings ...)
  2025-08-07 12:30 ` [RFC 21/26] memory: Add store/load interfaces for Rust side Zhao Liu
@ 2025-08-07 12:30 ` Zhao Liu
  2025-08-07 12:30 ` [RFC 23/26] rust/memory: Implement vm_memory::GuestMemory for FlatView Zhao Liu
                   ` (5 subsequent siblings)
  27 siblings, 0 replies; 58+ messages in thread
From: Zhao Liu @ 2025-08-07 12:30 UTC (permalink / raw)
  To: Paolo Bonzini, Peter Xu, David Hildenbrand,
	Philippe Mathieu-Daudé, Manos Pitsidianakis,
	Alex Bennée, Thomas Huth, Junjie Mao
  Cc: qemu-devel, qemu-rust, Dapeng Mi, Chuanxiao Dong, Zhao Liu

Although QEMU already has a native memory region abstraction,
MemoryRegion, it supports overlapping regions.

But vm-memory doesn't support overlapping memory, so MemoryRegionSection
is a better fit for implementing the vm_memory::GuestMemoryRegion trait.

Implement vm_memory::GuestMemoryRegion for MemoryRegionSection, and
provide low-level memory write/read/store/load bindings based on
MemoryRegionSection.

Additionally, add necessary helpers (fuzz_dma_read() and
is_access_allowed()) for MemoryRegionSection.
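
Below is a minimal usage sketch (not part of this patch) of how the
trait implementation is meant to be consumed; the section itself would
normally come from a FlatView lookup added later in the series, and
`peek_byte` is just an illustrative name:

  use vm_memory::{Bytes, MemoryRegionAddress};

  // Addresses are relative to the *section*; the implementation adds
  // offset_within_region internally before calling into C.
  fn peek_byte(section: &MemoryRegionSection) -> Option<u8> {
      let mut byte = [0u8; 1];
      section.read(&mut byte, MemoryRegionAddress(0)).ok()?;
      Some(byte[0])
  }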

Signed-off-by: Zhao Liu <zhao1.liu@intel.com>
---
 rust/qemu-api/src/memory.rs | 393 +++++++++++++++++++++++++++++++++++-
 1 file changed, 391 insertions(+), 2 deletions(-)

diff --git a/rust/qemu-api/src/memory.rs b/rust/qemu-api/src/memory.rs
index e40fad6cf19e..c8faa3b9c1e9 100644
--- a/rust/qemu-api/src/memory.rs
+++ b/rust/qemu-api/src/memory.rs
@@ -2,17 +2,33 @@
 // Author(s): Paolo Bonzini <pbonzini@redhat.com>
 // SPDX-License-Identifier: GPL-2.0-or-later
 
-//! Bindings for `MemoryRegion`, `MemoryRegionOps` and `MemTxAttrs`
+//! Bindings for `MemoryRegion`, `MemoryRegionOps`, `MemTxAttrs` and
+//! `MemoryRegionSection`.
 
 use std::{
     ffi::{c_uint, c_void, CStr, CString},
+    io::ErrorKind,
     marker::PhantomData,
+    mem::size_of,
+    ops::Deref,
+    sync::atomic::Ordering,
 };
 
+// FIXME: Convert hwaddr to GuestAddress
 pub use bindings::{hwaddr, MemTxAttrs};
+pub use vm_memory::GuestAddress;
+use vm_memory::{
+    bitmap::BS, Address, AtomicAccess, Bytes, GuestMemoryError, GuestMemoryRegion,
+    GuestMemoryResult, GuestUsize, MemoryRegionAddress, ReadVolatile, VolatileSlice, WriteVolatile,
+};
 
 use crate::{
-    bindings::{self, device_endian, memory_region_init_io},
+    bindings::{
+        self, device_endian, memory_region_init_io, section_access_allowed,
+        section_covers_region_addr, section_fuzz_dma_read, section_get_host_addr,
+        section_rust_load, section_rust_read_continue_step, section_rust_store,
+        section_rust_write_continue_step, MEMTX_OK,
+    },
     callbacks::FnCall,
     cell::Opaque,
     prelude::*,
@@ -202,3 +218,376 @@ unsafe impl ObjectType for MemoryRegion {
     unspecified: true,
     ..Zeroable::ZERO
 };
+
+/// A safe wrapper around [`bindings::MemoryRegionSection`].
+///
+/// This struct is fundamental for integrating QEMU's memory model with
+/// the [`vm-memory`] ecosystem.  It directly maps to the concept of
+/// [`GuestMemoryRegion`](vm_memory::GuestMemoryRegion) and implements
+/// that trait.
+///
+/// ### `MemoryRegion` vs. `MemoryRegionSection`
+///
+/// Although QEMU already has a native memory region abstraction,
+/// [`MemoryRegion`], it supports overlapping.  But `vm-memory` doesn't
+/// support overlapping memory, so `MemoryRegionSection` is a better fit
+/// for implementing the [`GuestMemoryRegion`](vm_memory::GuestMemoryRegion)
+/// trait.
+///
+/// One point to pay attention to:
+/// [`MemoryRegionAddress`](vm_memory::MemoryRegionAddress) represents the
+/// address or offset within the `MemoryRegionSection`.  But the traditional
+/// C bindings treat the memory region address or offset as the offset within
+/// `MemoryRegion`.
+///
+/// Therefore, it's necessary to do conversion when calling C bindings
+/// with `MemoryRegionAddress` from the context of `MemoryRegionSection`.
+///
+/// ### Usage
+///
+/// Considering that memory access in QEMU is almost always through
+/// `AddressSpace`, `MemoryRegionSection` is intended for **internal use only**
+/// within the `vm-memory` backend implementation.
+///
+/// Device and other external users should **not** use or create
+/// `MemoryRegionSection`s directly.  Instead, they should work with the
+/// higher-level `MemoryRegion` API to create and manage their device's
+/// memory.  This separation of concerns mirrors the C API and avoids
+/// confusion about different memory abstractions.
+#[repr(transparent)]
+#[derive(qemu_api_macros::Wrapper)]
+pub struct MemoryRegionSection(Opaque<bindings::MemoryRegionSection>);
+
+unsafe impl Send for MemoryRegionSection {}
+unsafe impl Sync for MemoryRegionSection {}
+
+impl Deref for MemoryRegionSection {
+    type Target = bindings::MemoryRegionSection;
+
+    fn deref(&self) -> &Self::Target {
+        // SAFETY: Opaque<> wraps a pointer from C side. The validity
+        // of the pointer is confirmed at the creation of Opaque<>.
+        unsafe { &*self.0.as_ptr() }
+    }
+}
+
+impl MemoryRegionSection {
+    /// A fuzz testing hook for DMA read.
+    ///
+    /// When CONFIG_FUZZ is not set, this hook will do nothing.
+    #[allow(dead_code)]
+    fn fuzz_dma_read(&self, addr: GuestAddress, len: GuestUsize) -> &Self {
+        // SAFETY: Opaque<> ensures the pointer is valid, and here it
+        // takes into account the offset conversion between MemoryRegionSection
+        // and MemoryRegion.
+        unsafe {
+            section_fuzz_dma_read(
+                self.as_mut_ptr(),
+                addr.checked_add(self.deref().offset_within_region)
+                    .unwrap()
+                    .raw_value(),
+                len,
+            )
+        };
+        self
+    }
+
+    /// A helper to check if the memory access is allowed.
+    ///
+    /// This is needed for memory write/read.
+    #[allow(dead_code)]
+    fn is_access_allowed(&self, addr: MemoryRegionAddress, len: GuestUsize) -> bool {
+        // SAFETY: Opaque<> ensures the pointer is valid, and here it
+        // takes into account the offset conversion between MemoryRegionSection
+        // and MemoryRegion.
+        let allowed = unsafe {
+            section_access_allowed(
+                self.as_mut_ptr(),
+                MEMTXATTRS_UNSPECIFIED,
+                addr.checked_add(self.deref().offset_within_region)
+                    .unwrap()
+                    .raw_value(),
+                len,
+            )
+        };
+        allowed
+    }
+}
+
+impl Bytes<MemoryRegionAddress> for MemoryRegionSection {
+    type E = GuestMemoryError;
+
+    /// The memory write interface based on `MemoryRegionSection`.
+    ///
+    /// This function - as an intermediate step - is called by FlatView's
+    /// write(). And it shouldn't be called to access memory directly.
+    fn write(&self, buf: &[u8], addr: MemoryRegionAddress) -> GuestMemoryResult<usize> {
+        let len = buf.len() as u64;
+        let mut remain = len;
+
+        // SAFETY: the pointers and reference are convertible and the
+        // offset conversion is considered.
+        let ret = unsafe {
+            section_rust_write_continue_step(
+                self.as_mut_ptr(),
+                MEMTXATTRS_UNSPECIFIED,
+                buf.as_ptr(),
+                len,
+                addr.checked_add(self.deref().offset_within_region)
+                    .unwrap()
+                    .raw_value(),
+                &mut remain,
+            )
+        };
+
+        if ret == MEMTX_OK {
+            return Ok(remain as usize);
+        } else {
+            return Err(GuestMemoryError::InvalidBackendAddress);
+        }
+    }
+
+    /// The memory read interface based on `MemoryRegionSection`.
+    ///
+    /// This function - as an intermediate step - is called by FlatView's
+    /// read(). And it shouldn't be called to access memory directly.
+    fn read(&self, buf: &mut [u8], addr: MemoryRegionAddress) -> GuestMemoryResult<usize> {
+        let len = buf.len() as u64;
+        let mut remain = len;
+
+        // SAFETY: the pointers and reference are convertible and the
+        // offset conversion is considered.
+        let ret = unsafe {
+            section_rust_read_continue_step(
+                self.as_mut_ptr(),
+                MEMTXATTRS_UNSPECIFIED,
+                buf.as_mut_ptr(),
+                len,
+                addr.checked_add(self.deref().offset_within_region)
+                    .unwrap()
+                    .raw_value(),
+                &mut remain,
+            )
+        };
+
+        if ret == MEMTX_OK {
+            return Ok(remain as usize);
+        } else {
+            return Err(GuestMemoryError::InvalidBackendAddress);
+        }
+    }
+
+    /// The memory store interface based on `MemoryRegionSection`.
+    ///
+    /// This function - as the low-level store implementation - is
+    /// called by FlatView's store(). And it shouldn't be called to
+    /// access memory directly.
+    fn store<T: AtomicAccess>(
+        &self,
+        val: T,
+        addr: MemoryRegionAddress,
+        _order: Ordering,
+    ) -> GuestMemoryResult<()> {
+        let len = size_of::<T>();
+
+        if len > size_of::<u64>() {
+            return Err(GuestMemoryError::IOError(std::io::Error::new(
+                ErrorKind::InvalidInput,
+                "failed to store the data more than 8 bytes",
+            )));
+        }
+
+        // Note: section_rust_store() accepts `const uint8_t *buf`.
+        //
+        // This is a "compromise" solution: vm-memory requires AtomicAccess
+        // but QEMU uses uint64_t as the default type. Here we can't convert
+        // AtomicAccess to u64, since the compiler will complain "an `as`
+        // expression can only be used to convert between primitive types or
+        // to coerce to a specific trait object", or other endless errors
+        // about conversion to u64.
+        //
+        // Fortunately, we can use a byte array to bridge the Rust wrapper
+        // and the C binding. This approach is not without a trade-off,
+        // however: the section_rust_store() function requires an additional
+        // conversion from bytes to a uint64_t. This performance overhead is
+        // considered acceptable.
+        //
+        // SAFETY: the pointers are convertible and the offset conversion is
+        // considered.
+        let res = unsafe {
+            section_rust_store(
+                self.as_mut_ptr(),
+                addr.checked_add(self.deref().offset_within_region)
+                    .unwrap()
+                    .raw_value(),
+                val.as_slice().as_ptr(),
+                MEMTXATTRS_UNSPECIFIED,
+                len as u64,
+            )
+        };
+
+        match res {
+            MEMTX_OK => Ok(()),
+            _ => Err(GuestMemoryError::InvalidBackendAddress),
+        }
+    }
+
+    /// The memory load interface based on `MemoryRegionSection`.
+    ///
+    /// This function - as the low-level load implementation - is
+    /// called by FlatView's load(). And it shouldn't be called to
+    /// access memory directly.
+    fn load<T: AtomicAccess>(
+        &self,
+        addr: MemoryRegionAddress,
+        _order: Ordering,
+    ) -> GuestMemoryResult<T> {
+        let len = size_of::<T>();
+
+        if len > size_of::<u64>() {
+            return Err(GuestMemoryError::IOError(std::io::Error::new(
+                ErrorKind::InvalidInput,
+                "failed to load the data more than 8 bytes",
+            )));
+        }
+
+        let mut val: T = T::zeroed();
+
+        // Note: section_rust_load() accepts `uint8_t *buf`.
+        //
+        // It has a similar rationale to store(), with the slight difference
+        // that section_rust_load() requires an additional conversion from
+        // uint64_t to bytes.
+        //
+        // SAFETY: the pointers are convertible and the offset conversion is
+        // considered.
+        let res = unsafe {
+            section_rust_load(
+                self.as_mut_ptr(),
+                addr.checked_add(self.deref().offset_within_region)
+                    .unwrap()
+                    .raw_value(),
+                val.as_mut_slice().as_mut_ptr(),
+                MEMTXATTRS_UNSPECIFIED,
+                size_of::<T>() as u64,
+            )
+        };
+
+        match res {
+            MEMTX_OK => Ok(val),
+            _ => Err(GuestMemoryError::InvalidBackendAddress),
+        }
+    }
+
+    fn write_slice(&self, _buf: &[u8], _addr: MemoryRegionAddress) -> GuestMemoryResult<()> {
+        unimplemented!()
+    }
+
+    fn read_slice(&self, _buf: &mut [u8], _addr: MemoryRegionAddress) -> GuestMemoryResult<()> {
+        unimplemented!()
+    }
+
+    fn read_volatile_from<F>(
+        &self,
+        _addr: MemoryRegionAddress,
+        _src: &mut F,
+        _count: usize,
+    ) -> GuestMemoryResult<usize>
+    where
+        F: ReadVolatile,
+    {
+        unimplemented!()
+    }
+
+    fn read_exact_volatile_from<F>(
+        &self,
+        _addr: MemoryRegionAddress,
+        _src: &mut F,
+        _count: usize,
+    ) -> GuestMemoryResult<()>
+    where
+        F: ReadVolatile,
+    {
+        unimplemented!()
+    }
+
+    fn write_volatile_to<F>(
+        &self,
+        _addr: MemoryRegionAddress,
+        _dst: &mut F,
+        _count: usize,
+    ) -> GuestMemoryResult<usize>
+    where
+        F: WriteVolatile,
+    {
+        unimplemented!()
+    }
+
+    fn write_all_volatile_to<F>(
+        &self,
+        _addr: MemoryRegionAddress,
+        _dst: &mut F,
+        _count: usize,
+    ) -> GuestMemoryResult<()>
+    where
+        F: WriteVolatile,
+    {
+        unimplemented!()
+    }
+}
+
+impl GuestMemoryRegion for MemoryRegionSection {
+    type B = ();
+
+    /// Get the memory size covered by this MemoryRegionSection.
+    fn len(&self) -> GuestUsize {
+        self.deref().size as GuestUsize
+    }
+
+    /// Return the minimum (inclusive) Guest physical address managed by
+    /// this MemoryRegionSection.
+    fn start_addr(&self) -> GuestAddress {
+        GuestAddress(self.deref().offset_within_address_space)
+    }
+
+    fn bitmap(&self) -> BS<'_, Self::B> {
+        ()
+    }
+
+    /// Check whether the @addr is covered by this MemoryRegionSection.
+    fn check_address(&self, addr: MemoryRegionAddress) -> Option<MemoryRegionAddress> {
+        // SAFETY: the pointer is convertible and the offset conversion is
+        // considered.
+        if unsafe {
+            section_covers_region_addr(
+                self.as_mut_ptr(),
+                addr.checked_add(self.deref().offset_within_region)
+                    .unwrap()
+                    .raw_value(),
+            )
+        } {
+            Some(addr)
+        } else {
+            None
+        }
+    }
+
+    /// Get the host virtual address from the offset of this MemoryRegionSection
+    /// (@addr).
+    fn get_host_address(&self, addr: MemoryRegionAddress) -> GuestMemoryResult<*mut u8> {
+        self.check_address(addr)
+            .ok_or(GuestMemoryError::InvalidBackendAddress)
+            .map(|addr|
+                // SAFETY: the pointers are convertible and the offset
+                // conversion is considered.
+                unsafe { section_get_host_addr(self.as_mut_ptr(), addr.raw_value()) })
+    }
+
+    fn get_slice(
+        &self,
+        _offset: MemoryRegionAddress,
+        _count: usize,
+    ) -> GuestMemoryResult<VolatileSlice<BS<Self::B>>> {
+        unimplemented!()
+    }
+}
-- 
2.34.1



^ permalink raw reply related	[flat|nested] 58+ messages in thread

* [RFC 23/26] rust/memory: Implement vm_memory::GuestMemory for FlatView
  2025-08-07 12:30 [RFC 00/26] rust/memory: Integrate the vm-memory API from rust-vmm Zhao Liu
                   ` (21 preceding siblings ...)
  2025-08-07 12:30 ` [RFC 22/26] rust/memory: Implement vm_memory::GuestMemoryRegion for MemoryRegionSection Zhao Liu
@ 2025-08-07 12:30 ` Zhao Liu
  2025-08-07 12:30 ` [RFC 24/26] rust/memory: Provide AddressSpace bindings Zhao Liu
                   ` (4 subsequent siblings)
  27 siblings, 0 replies; 58+ messages in thread
From: Zhao Liu @ 2025-08-07 12:30 UTC (permalink / raw)
  To: Paolo Bonzini, Peter Xu, David Hildenbrand,
	Philippe Mathieu-Daudé, Manos Pitsidianakis,
	Alex Bennée, Thomas Huth, Junjie Mao
  Cc: qemu-devel, qemu-rust, Dapeng Mi, Chuanxiao Dong, Zhao Liu

Implement vm_memory::GuestMemory for FlatView, and provide memory
write/read/store/load bindings.

Meanwhile, add a RAII guard to help manage the FlatView's lifetime
on the Rust side.
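
A minimal sketch of the intended guard usage (assuming `view` is a
&FlatView obtained from an AddressSpace, which a later patch wires up,
and that the caller already holds the RCU read lock):

  use vm_memory::Bytes;

  if let Some(guard) = FlatViewRefGuard::new(view) {
      // The guard derefs to &FlatView, so the Bytes methods are usable.
      let mut buf = [0u8; 4];
      let _ = guard.read(&mut buf, GuestAddress(0x1000));
  } // reference count dropped here, flatview_unref() runs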

Signed-off-by: Zhao Liu <zhao1.liu@intel.com>
---
 rust/qemu-api/src/memory.rs | 433 +++++++++++++++++++++++++++++++++++-
 1 file changed, 429 insertions(+), 4 deletions(-)

diff --git a/rust/qemu-api/src/memory.rs b/rust/qemu-api/src/memory.rs
index c8faa3b9c1e9..23347f35e5da 100644
--- a/rust/qemu-api/src/memory.rs
+++ b/rust/qemu-api/src/memory.rs
@@ -2,8 +2,8 @@
 // Author(s): Paolo Bonzini <pbonzini@redhat.com>
 // SPDX-License-Identifier: GPL-2.0-or-later
 
-//! Bindings for `MemoryRegion`, `MemoryRegionOps`, `MemTxAttrs` and
-//! `MemoryRegionSection`.
+//! Bindings for `MemoryRegion`, `MemoryRegionOps`, `MemTxAttrs`
+//! `MemoryRegionSection` and `FlatView`.
 
 use std::{
     ffi::{c_uint, c_void, CStr, CString},
@@ -11,6 +11,7 @@
     marker::PhantomData,
     mem::size_of,
     ops::Deref,
+    ptr::NonNull,
     sync::atomic::Ordering,
 };
 
@@ -18,13 +19,14 @@
 pub use bindings::{hwaddr, MemTxAttrs};
 pub use vm_memory::GuestAddress;
 use vm_memory::{
-    bitmap::BS, Address, AtomicAccess, Bytes, GuestMemoryError, GuestMemoryRegion,
+    bitmap::BS, Address, AtomicAccess, Bytes, GuestMemory, GuestMemoryError, GuestMemoryRegion,
     GuestMemoryResult, GuestUsize, MemoryRegionAddress, ReadVolatile, VolatileSlice, WriteVolatile,
 };
 
 use crate::{
     bindings::{
-        self, device_endian, memory_region_init_io, section_access_allowed,
+        self, address_space_lookup_section, device_endian, flatview_ref,
+        flatview_translate_section, flatview_unref, memory_region_init_io, section_access_allowed,
         section_covers_region_addr, section_fuzz_dma_read, section_get_host_addr,
         section_rust_load, section_rust_read_continue_step, section_rust_store,
         section_rust_write_continue_step, MEMTX_OK,
@@ -591,3 +593,426 @@ fn get_slice(
         unimplemented!()
     }
 }
+
+/// A safe wrapper around [`bindings::FlatView`].
+///
+/// [`FlatView`] represents a collection of memory regions, and maps to
+/// [`GuestMemory`](vm_memory::GuestMemory).
+///
+/// The memory details are hidden beneath this wrapper. Direct memory access
+/// is not allowed.  Instead, memory access, e.g., write/read/store/load
+/// should go through [`Bytes<GuestAddress>`].
+#[repr(transparent)]
+#[derive(qemu_api_macros::Wrapper)]
+pub struct FlatView(Opaque<bindings::FlatView>);
+
+unsafe impl Send for FlatView {}
+unsafe impl Sync for FlatView {}
+
+impl Deref for FlatView {
+    type Target = bindings::FlatView;
+
+    fn deref(&self) -> &Self::Target {
+        // SAFETY: Opaque<> wraps a pointer from C side. The validity
+        // of the pointer is confirmed at the creation of Opaque<>.
+        unsafe { &*self.0.as_ptr() }
+    }
+}
+
+impl FlatView {
+    /// Translate guest address to the offset within a MemoryRegionSection.
+    ///
+    /// Ideally, this helper should be integrated into
+    /// GuestMemory::to_region_addr(), but we are not there yet.
+    fn translate(
+        &self,
+        addr: GuestAddress,
+        len: GuestUsize,
+        is_write: bool,
+    ) -> Option<(&MemoryRegionSection, MemoryRegionAddress, GuestUsize)> {
+        let mut remain = len as hwaddr;
+        let mut raw_addr: hwaddr = 0;
+
+        // SAFETY: the pointers and reference are convertible and the
+        // offset conversion is considered.
+        let ptr = unsafe {
+            flatview_translate_section(
+                self.as_mut_ptr(),
+                addr.raw_value(),
+                &mut raw_addr,
+                &mut remain,
+                is_write,
+                MEMTXATTRS_UNSPECIFIED,
+            )
+        };
+
+        if ptr.is_null() {
+            return None;
+        }
+
+        // SAFETY: the pointer is valid and not NULL.
+        let s = unsafe { <FlatView as GuestMemory>::R::from_raw(ptr) };
+        Some((
+            s,
+            MemoryRegionAddress(raw_addr)
+                .checked_sub(s.deref().offset_within_region)
+                .unwrap(),
+            remain as GuestUsize,
+        ))
+    }
+}
+
+impl Bytes<GuestAddress> for FlatView {
+    type E = GuestMemoryError;
+
+    /// The memory write interface based on `FlatView`.
+    ///
+    /// This function is similar to `flatview_write` in C side, but it
+    /// only supports MEMTXATTRS_UNSPECIFIED for now.
+    ///
+    /// Note: This function should be called within an RCU critical section.
+    /// Furthermore, it is only for internal use and should not be called
+    /// directly.
+    fn write(&self, buf: &[u8], addr: GuestAddress) -> GuestMemoryResult<usize> {
+        self.try_access(
+            buf.len(),
+            addr,
+            true,
+            |offset, count, caddr, region| -> GuestMemoryResult<usize> {
+                // vm-memory provides an elegant way to advance (See
+                // ReadVolatile::read_volatile), but at this moment,
+                // this simple way is enough.
+                let sub_buf = &buf[offset..offset + count];
+                region.write(sub_buf, caddr)
+            },
+        )
+    }
+
+    /// The memory read interface based on `FlatView`.
+    ///
+    /// This function is similar to `flatview_read` in C side, but it
+    /// only supports MEMTXATTRS_UNSPECIFIED for now.
+    ///
+    /// Note: This function should be called within an RCU critical section.
+    /// Furthermore, it is only for internal use and should not be called
+    /// directly.
+    fn read(&self, buf: &mut [u8], addr: GuestAddress) -> GuestMemoryResult<usize> {
+        if buf.len() == 0 {
+            return Ok(0);
+        }
+
+        self.try_access(
+            buf.len(),
+            addr,
+            false,
+            |offset, count, caddr, region| -> GuestMemoryResult<usize> {
+                // vm-memory provides an elegant way to advance (See
+                // ReadVolatile::write_volatile), but at this moment,
+                // this simple way is enough.
+                let sub_buf = &mut buf[offset..offset + count];
+                region
+                    .fuzz_dma_read(addr, sub_buf.len() as GuestUsize)
+                    .read(sub_buf, caddr)
+            },
+        )
+    }
+
+    /// The memory store interface based on `FlatView`.
+    ///
+    /// This function supports MEMTXATTRS_UNSPECIFIED, and only supports
+    /// native endian, which means that before calling this function, make
+    /// sure the endianness of the value follows the target's endianness.
+    ///
+    /// Note: This function should be called within an RCU critical section.
+    /// Furthermore, it is only for internal use and should not be called
+    /// directly.
+    fn store<T: AtomicAccess>(
+        &self,
+        val: T,
+        addr: GuestAddress,
+        order: Ordering,
+    ) -> GuestMemoryResult<()> {
+        self.translate(addr, size_of::<T>() as GuestUsize, true)
+            .ok_or(GuestMemoryError::InvalidGuestAddress(addr))
+            .and_then(|(region, region_addr, remain)| {
+                // Though C side handles this cross region case via MMIO
+                // by default, it still looks very suspicious for store/
+                // load. It happens that Bytes::store() doesn't support more
+                // arguments to identify this case, so report an error
+                // directly!
+                if remain < size_of::<T>() as GuestUsize {
+                    return Err(GuestMemoryError::InvalidBackendAddress);
+                }
+
+                region.store(val, region_addr, order)
+            })
+    }
+
+    /// The memory load interface based on `FlatView`.
+    ///
+    /// This function supports MEMTXATTRS_UNSPECIFIED, and only supports
+    /// native endian, which means the value returned by this function
+    /// follows the target's endianness.
+    ///
+    /// Note: This function should be called within an RCU critical section.
+    /// Furthermore, it is only for internal use and should not be called
+    /// directly.
+    fn load<T: AtomicAccess>(&self, addr: GuestAddress, order: Ordering) -> GuestMemoryResult<T> {
+        self.translate(addr, size_of::<T>() as GuestUsize, false)
+            .ok_or(GuestMemoryError::InvalidGuestAddress(addr))
+            .and_then(|(region, region_addr, remain)| {
+                // Though C side handles this cross region case via MMIO
+                // by default, it still looks very suspicious for store/
+                // load. It happens that Bytes::load() doesn't support more
+                // arguments to identify this case, so report an error
+                // directly!
+                if remain < size_of::<T>() as GuestUsize {
+                    return Err(GuestMemoryError::InvalidBackendAddress);
+                }
+
+                region
+                    .fuzz_dma_read(addr, size_of::<T>() as GuestUsize)
+                    .load(region_addr, order)
+            })
+    }
+
+    fn write_slice(&self, _buf: &[u8], _addr: GuestAddress) -> GuestMemoryResult<()> {
+        unimplemented!()
+    }
+
+    fn read_slice(&self, _buf: &mut [u8], _addr: GuestAddress) -> GuestMemoryResult<()> {
+        unimplemented!()
+    }
+
+    fn read_volatile_from<F>(
+        &self,
+        _addr: GuestAddress,
+        _src: &mut F,
+        _count: usize,
+    ) -> GuestMemoryResult<usize>
+    where
+        F: ReadVolatile,
+    {
+        unimplemented!()
+    }
+
+    fn read_exact_volatile_from<F>(
+        &self,
+        _addr: GuestAddress,
+        _src: &mut F,
+        _count: usize,
+    ) -> GuestMemoryResult<()>
+    where
+        F: ReadVolatile,
+    {
+        unimplemented!()
+    }
+
+    fn write_volatile_to<F>(
+        &self,
+        _addr: GuestAddress,
+        _dst: &mut F,
+        _count: usize,
+    ) -> GuestMemoryResult<usize>
+    where
+        F: WriteVolatile,
+    {
+        unimplemented!()
+    }
+
+    fn write_all_volatile_to<F>(
+        &self,
+        _addr: GuestAddress,
+        _dst: &mut F,
+        _count: usize,
+    ) -> GuestMemoryResult<()>
+    where
+        F: WriteVolatile,
+    {
+        unimplemented!()
+    }
+}
+
+impl GuestMemory for FlatView {
+    type R = MemoryRegionSection;
+
+    /// Get the number of `MemoryRegionSection`s managed by this `FlatView`.
+    fn num_regions(&self) -> usize {
+        self.deref().nr.try_into().unwrap()
+    }
+
+    /// Find the `MemoryRegionSection` which covers @addr
+    fn find_region(&self, addr: GuestAddress) -> Option<&Self::R> {
+        // set resolve_subpage as true by default
+        //
+        // SAFETY: bindings::FlatView has a `dispatch` field and the pointer is
+        // valid, although accessing the field of a C structure is ugly.
+        let raw =
+            unsafe { address_space_lookup_section(self.deref().dispatch, addr.raw_value(), true) };
+
+        if !raw.is_null() {
+            let s = unsafe { Self::R::from_raw(raw) };
+            Some(s)
+        } else {
+            None
+        }
+    }
+
+    /// Return an empty iterator.
+    ///
+    /// This function always triggers a panic and must not be used.
+    fn iter(&self) -> impl Iterator<Item = &Self::R> {
+        assert!(false); // Do not use this iter()!
+
+        // QEMU has a linear iteration in C side named `flatview_for_each_range`,
+        // but it iterates `FlatRange` instead of `MemoryRegionSection`.
+        //
+        // It is still possible to have a `Iterator` based on `MemoryRegionSection`,
+        // by iterating `FlatView::dispatch::map::sections`.
+        //
+        // However, it is not worth it. QEMU has implemented the two-level "page"
+        // walk in `phys_page_find`, which is more efficient than linear
+        // iteration. Therefore, there is no need to reinvent the wheel on the
+        // Rust side, at least for now.
+        //
+        // Just return an empty iterator to satisfy the trait's contract.
+        // This makes the code compile, but the iterator won't yield
+        // any items.
+        std::iter::empty()
+    }
+
+    fn to_region_addr(&self, _addr: GuestAddress) -> Option<(&Self::R, MemoryRegionAddress)> {
+        // Note: This method should implement FlatView::translate(), but
+        // its function signature is unfriendly to QEMU's translation. QEMU
+        // needs to distinguish whether the access is a write, and care about
+        // the remaining bytes of the region.
+        //
+        // FIXME: Once GuestMemory::to_region_addr() could meet QEMU's
+        // requirements, move FlatView::translate() here.
+        unimplemented!()
+    }
+
+    /// Try to access a contiguous block of guest memory, executing a callback
+    /// for each memory region that backs the requested address range.
+    ///
+    /// This method is the core of memory access.  It iterates through each
+    /// `MemoryRegionSection` that corresponds to the guest address
+    /// range [`addr`, `addr` + `count`) and invokes the provided closure `f`
+    /// for each section.
+    fn try_access<F>(
+        &self,
+        count: usize,
+        addr: GuestAddress,
+        is_write: bool,
+        mut f: F,
+    ) -> GuestMemoryResult<usize>
+    where
+        F: FnMut(usize, usize, MemoryRegionAddress, &Self::R) -> GuestMemoryResult<usize>,
+    {
+        // FIXME: it's tricky to add more arguments to try_access(), e.g.,
+        // attrs. Or maybe it's possible to move try_access() to Bytes trait,
+        // then it can accept a generic type which contains the address and
+        // other arguments.
+
+        if count == 0 {
+            return Ok(count);
+        }
+
+        let mut total = 0;
+        let mut curr = addr;
+
+        while total < count {
+            let len = (count - total) as GuestUsize;
+            let (region, start, remain) = self.translate(curr, len, is_write).unwrap();
+
+            if !region.is_access_allowed(start, remain) {
+                // FIXME: could we return something like MEMTX_ACCESS_ERROR?
+                return Err(GuestMemoryError::InvalidGuestAddress(addr));
+            }
+
+            match f(total as usize, remain as usize, start, region) {
+                // no more data
+                Ok(0) => return Ok(total),
+                // made some progress
+                Ok(res) => {
+                    if res as GuestUsize > remain {
+                        return Err(GuestMemoryError::CallbackOutOfRange);
+                    }
+
+                    total = match total.checked_add(res) {
+                        Some(x) if x < count => x,
+                        Some(x) if x == count => return Ok(x),
+                        _ => return Err(GuestMemoryError::CallbackOutOfRange),
+                    };
+
+                    curr = match curr.overflowing_add(res as GuestUsize) {
+                        (x @ GuestAddress(0), _) | (x, false) => x,
+                        (_, true) => return Err(GuestMemoryError::GuestAddressOverflow),
+                    };
+                }
+                // error happened
+                e => return e,
+            }
+        }
+
+        if total == 0 {
+            Err(GuestMemoryError::InvalidGuestAddress(addr))
+        } else {
+            Ok(total)
+        }
+    }
+}
+
+/// A RAII guard that provides temporary access to a `FlatView`.
+///
+/// Upon creation, this guard increments the reference count of the
+/// underlying `FlatView`.  When the guard goes out of scope, it
+/// automatically decrements the count.
+///
+/// As long as the guard lives, the access to `FlatView` is valid.
+#[derive(Debug)]
+pub struct FlatViewRefGuard(NonNull<FlatView>);
+
+impl Drop for FlatViewRefGuard {
+    fn drop(&mut self) {
+        // SAFETY: the pointer is convertible.
+        unsafe { flatview_unref(self.0.as_ref().as_mut_ptr()) };
+    }
+}
+
+impl FlatViewRefGuard {
+    /// Attempt to create a new RAII guard for the given `FlatView`.
+    ///
+    /// This may fail if the `FlatView`'s reference count is already zero.
+    pub fn new(flat: &FlatView) -> Option<Self> {
+        // SAFETY: the pointer is convertible.
+        if unsafe { flatview_ref(flat.as_mut_ptr()) } {
+            Some(FlatViewRefGuard(NonNull::from(flat)))
+        } else {
+            None
+        }
+    }
+}
+
+impl Deref for FlatViewRefGuard {
+    type Target = FlatView;
+
+    fn deref(&self) -> &Self::Target {
+        // SAFETY: the pointer and reference are convertible.
+        unsafe { &*self.0.as_ptr() }
+    }
+}
+
+impl Clone for FlatViewRefGuard {
+    /// Clone the guard, which involves incrementing the reference
+    /// count again.
+    ///
+    /// This method will **panic** if the reference count of the underlying
+    /// `FlatView` cannot be incremented (e.g., if it is zero, meaning the
+    /// object is being destroyed).  This can happen in concurrent scenarios.
+    fn clone(&self) -> Self {
+        FlatViewRefGuard::new(self.deref()).expect(
+            "Failed to clone FlatViewRefGuard: the FlatView may have been destroyed concurrently.",
+        )
+    }
+}
-- 
2.34.1



^ permalink raw reply related	[flat|nested] 58+ messages in thread

* [RFC 24/26] rust/memory: Provide AddressSpace bindings
  2025-08-07 12:30 [RFC 00/26] rust/memory: Integrate the vm-memory API from rust-vmm Zhao Liu
                   ` (22 preceding siblings ...)
  2025-08-07 12:30 ` [RFC 23/26] rust/memory: Implement vm_memory::GuestMemory for FlatView Zhao Liu
@ 2025-08-07 12:30 ` Zhao Liu
  2025-08-07 13:50   ` Paolo Bonzini
  2025-08-07 12:30 ` [RFC 25/26] rust/memory: Add binding to check target endian Zhao Liu
                   ` (3 subsequent siblings)
  27 siblings, 1 reply; 58+ messages in thread
From: Zhao Liu @ 2025-08-07 12:30 UTC (permalink / raw)
  To: Paolo Bonzini, Peter Xu, David Hildenbrand,
	Philippe Mathieu-Daudé, Manos Pitsidianakis,
	Alex Bennée, Thomas Huth, Junjie Mao
  Cc: qemu-devel, qemu-rust, Dapeng Mi, Chuanxiao Dong, Zhao Liu

QEMU's AddressSpace matches vm_memory::GuestAddressSpace very well,
so it's straightforward to implement vm_memory::GuestAddressSpace trait
for AddressSpace structure.

And since QEMU's memory is almost entirely accessed through
AddressSpace, provide high-level memory write/read/store/load
interfaces for the Rust side.

Additionally, provide the safe binding for address_space_memory.
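
A minimal sketch of the resulting device-facing API (the address is a
placeholder and error handling is shortened with unwrap()):

  use qemu_api::memory::{GuestAddress, ADDRESS_SPACE_MEMORY};

  let addr = GuestAddress(0x1000);
  ADDRESS_SPACE_MEMORY.write(&[1u8, 2, 3, 4], addr).unwrap();

  let mut buf = [0u8; 4];
  ADDRESS_SPACE_MEMORY.read(&mut buf, addr).unwrap();
  assert_eq!(buf, [1, 2, 3, 4]);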

Signed-off-by: Zhao Liu <zhao1.liu@intel.com>
---
 rust/qemu-api/src/memory.rs | 149 +++++++++++++++++++++++++++++++++---
 1 file changed, 140 insertions(+), 9 deletions(-)

diff --git a/rust/qemu-api/src/memory.rs b/rust/qemu-api/src/memory.rs
index 23347f35e5da..42bba23cf3f8 100644
--- a/rust/qemu-api/src/memory.rs
+++ b/rust/qemu-api/src/memory.rs
@@ -3,7 +3,7 @@
 // SPDX-License-Identifier: GPL-2.0-or-later
 
 //! Bindings for `MemoryRegion`, `MemoryRegionOps`, `MemTxAttrs`
-//! `MemoryRegionSection` and `FlatView`.
+//! `MemoryRegionSection`, `FlatView` and `AddressSpace`.
 
 use std::{
     ffi::{c_uint, c_void, CStr, CString},
@@ -11,7 +11,7 @@
     marker::PhantomData,
     mem::size_of,
     ops::Deref,
-    ptr::NonNull,
+    ptr::{addr_of, NonNull},
     sync::atomic::Ordering,
 };
 
@@ -19,21 +19,25 @@
 pub use bindings::{hwaddr, MemTxAttrs};
 pub use vm_memory::GuestAddress;
 use vm_memory::{
-    bitmap::BS, Address, AtomicAccess, Bytes, GuestMemory, GuestMemoryError, GuestMemoryRegion,
-    GuestMemoryResult, GuestUsize, MemoryRegionAddress, ReadVolatile, VolatileSlice, WriteVolatile,
+    bitmap::BS, Address, AtomicAccess, Bytes, GuestAddressSpace, GuestMemory, GuestMemoryError,
+    GuestMemoryRegion, GuestMemoryResult, GuestUsize, MemoryRegionAddress, ReadVolatile,
+    VolatileSlice, WriteVolatile,
 };
 
 use crate::{
     bindings::{
-        self, address_space_lookup_section, device_endian, flatview_ref,
-        flatview_translate_section, flatview_unref, memory_region_init_io, section_access_allowed,
-        section_covers_region_addr, section_fuzz_dma_read, section_get_host_addr,
-        section_rust_load, section_rust_read_continue_step, section_rust_store,
-        section_rust_write_continue_step, MEMTX_OK,
+        self, address_space_lookup_section, address_space_memory, address_space_to_flatview,
+        device_endian, flatview_ref, flatview_translate_section, flatview_unref,
+        memory_region_init_io, section_access_allowed, section_covers_region_addr,
+        section_fuzz_dma_read, section_get_host_addr, section_rust_load,
+        section_rust_read_continue_step, section_rust_store, section_rust_write_continue_step,
+        MEMTX_OK,
     },
     callbacks::FnCall,
     cell::Opaque,
+    error::{Error, Result},
     prelude::*,
+    rcu::{rcu_read_lock, rcu_read_unlock},
     uninit::MaybeUninitField,
     zeroable::Zeroable,
 };
@@ -1016,3 +1020,130 @@ fn clone(&self) -> Self {
         )
     }
 }
+
+/// A safe wrapper around [`bindings::AddressSpace`].
+///
+/// [`AddressSpace`] is the address space abstraction in QEMU, which
+/// provides memory access for the Guest memory it manages.
+#[repr(transparent)]
+#[derive(qemu_api_macros::Wrapper)]
+pub struct AddressSpace(Opaque<bindings::AddressSpace>);
+
+unsafe impl Send for AddressSpace {}
+unsafe impl Sync for AddressSpace {}
+
+impl GuestAddressSpace for AddressSpace {
+    type M = FlatView;
+    type T = FlatViewRefGuard;
+
+    /// Get the memory of the [`AddressSpace`].
+    ///
+    /// This function retrieves the [`FlatView`] for the current
+    /// [`AddressSpace`].  It should be called from an RCU
+    /// critical section.  The returned [`FlatView`] is used for
+    /// short-term memory access.
+    ///
+    /// Note: this method may **panic** if the [`FlatView`] is being
+    /// destroyed.  For this case, we should consider providing a more
+    /// stable binding based on [`bindings::address_space_get_flatview`].
+    fn memory(&self) -> Self::T {
+        let flatp = unsafe { address_space_to_flatview(self.0.as_mut_ptr()) };
+        FlatViewRefGuard::new(unsafe { Self::M::from_raw(flatp) }).expect(
+            "Failed to create FlatViewRefGuard: the FlatView may have been destroyed concurrently.",
+        )
+    }
+}
+
+/// The helper to convert [`vm_memory::GuestMemoryError`] to
+/// [`crate::error::Error`].
+#[track_caller]
+fn guest_mem_err_to_qemu_err(err: GuestMemoryError) -> Error {
+    match err {
+        GuestMemoryError::InvalidGuestAddress(addr) => {
+            Error::from(format!("Invalid guest address: {:#x}", addr.raw_value()))
+        }
+        GuestMemoryError::InvalidBackendAddress => Error::from("Invalid backend memory address"),
+        GuestMemoryError::GuestAddressOverflow => {
+            Error::from("Guest address addition resulted in an overflow")
+        }
+        GuestMemoryError::CallbackOutOfRange => {
+            Error::from("Callback accessed memory out of range")
+        }
+        GuestMemoryError::IOError(io_err) => Error::with_error("Guest memory I/O error", io_err),
+        other_err => Error::with_error("An unexpected guest memory error occurred", other_err),
+    }
+}
+
+impl AddressSpace {
+    /// The write interface of `AddressSpace`.
+    ///
+    /// This function is similar to `address_space_write` in C side.
+    ///
+    /// But it assumes the memory attributes are MEMTXATTRS_UNSPECIFIED.
+    pub fn write(&self, buf: &[u8], addr: GuestAddress) -> Result<usize> {
+        rcu_read_lock();
+        let r = self.memory().deref().write(buf, addr);
+        rcu_read_unlock();
+        r.map_err(guest_mem_err_to_qemu_err)
+    }
+
+    /// The read interface of `AddressSpace`.
+    ///
+    /// This function is similar to `address_space_read_full` in C side.
+    ///
+    /// But it assumes the memory attributes are MEMTXATTRS_UNSPECIFIED.
+    ///
+    /// It should also be noted that this function does not support the fast
+    /// path like `address_space_read` in C side.
+    pub fn read(&self, buf: &mut [u8], addr: GuestAddress) -> Result<usize> {
+        rcu_read_lock();
+        let r = self.memory().deref().read(buf, addr);
+        rcu_read_unlock();
+        r.map_err(guest_mem_err_to_qemu_err)
+    }
+
+    /// The store interface of `AddressSpace`.
+    ///
+    /// This function is similar to `address_space_st{size}` in C side.
+    ///
+    /// But it assumes @val is in target-endian format by default. So ensure
+    /// the endianness of `val` matches the target before using this method.
+    ///
+    /// And it assumes the memory attributes are MEMTXATTRS_UNSPECIFIED.
+    pub fn store<T: AtomicAccess>(&self, addr: GuestAddress, val: T) -> Result<()> {
+        rcu_read_lock();
+        let r = self.memory().deref().store(val, addr, Ordering::Relaxed);
+        rcu_read_unlock();
+        r.map_err(guest_mem_err_to_qemu_err)
+    }
+
+    /// The load interface of `AddressSpace`.
+    ///
+    /// This function is similar to `address_space_ld{size}` in C side.
+    ///
+    /// But it only supports target-endian by default.  The returned value is
+    /// with target-endian.
+    ///
+    /// And it assumes the memory attributes are MEMTXATTRS_UNSPECIFIED.
+    pub fn load<T: AtomicAccess>(&self, addr: GuestAddress) -> Result<T> {
+        rcu_read_lock();
+        let r = self.memory().deref().load(addr, Ordering::Relaxed);
+        rcu_read_unlock();
+        r.map_err(guest_mem_err_to_qemu_err)
+    }
+}
+
+/// The safe binding around [`bindings::address_space_memory`].
+///
+/// `ADDRESS_SPACE_MEMORY` provides the complete address space
+/// abstraction for the whole Guest memory.
+pub static ADDRESS_SPACE_MEMORY: &AddressSpace = unsafe {
+    let ptr: *const bindings::AddressSpace = addr_of!(address_space_memory);
+
+    // SAFETY: AddressSpace is #[repr(transparent)].
+    let wrapper_ptr: *const AddressSpace = ptr.cast();
+
+    // SAFETY: `address_space_memory` structure is valid in C side during
+    // the whole QEMU life.
+    &*wrapper_ptr
+};
-- 
2.34.1



^ permalink raw reply related	[flat|nested] 58+ messages in thread

* [RFC 25/26] rust/memory: Add binding to check target endian
  2025-08-07 12:30 [RFC 00/26] rust/memory: Integrate the vm-memory API from rust-vmm Zhao Liu
                   ` (23 preceding siblings ...)
  2025-08-07 12:30 ` [RFC 24/26] rust/memory: Provide AddressSpace bindings Zhao Liu
@ 2025-08-07 12:30 ` Zhao Liu
  2025-08-07 12:44   ` Manos Pitsidianakis
  2025-08-07 12:30 ` [RFC 26/26] rust/hpet: Use safe binding to access address space Zhao Liu
                   ` (2 subsequent siblings)
  27 siblings, 1 reply; 58+ messages in thread
From: Zhao Liu @ 2025-08-07 12:30 UTC (permalink / raw)
  To: Paolo Bonzini, Peter Xu, David Hildenbrand,
	Philippe Mathieu-Daudé, Manos Pitsidianakis,
	Alex Bennée, Thomas Huth, Junjie Mao
  Cc: qemu-devel, qemu-rust, Dapeng Mi, Chuanxiao Dong, Zhao Liu

Add a binding (target_is_big_endian()) to check whether the target is
big endian or not. This helps users adjust the endianness before calling
AddressSpace::store() or after calling AddressSpace::load().

Add an example to the documentation of AddressSpace::store() to help
explain how to use it.
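
For load() the conversion goes the other way; a minimal sketch (the
address is a placeholder):

  use qemu_api::memory::{target_is_big_endian, GuestAddress,
                         ADDRESS_SPACE_MEMORY};

  let raw: u32 = ADDRESS_SPACE_MEMORY.load(GuestAddress(0x1000)).unwrap();
  let val = if target_is_big_endian() {
      u32::from_be(raw)
  } else {
      u32::from_le(raw)
  };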

Signed-off-by: Zhao Liu <zhao1.liu@intel.com>
---
 rust/qemu-api/src/memory.rs | 28 +++++++++++++++++++++++++---
 rust/qemu-api/wrapper.h     |  1 +
 2 files changed, 26 insertions(+), 3 deletions(-)

diff --git a/rust/qemu-api/src/memory.rs b/rust/qemu-api/src/memory.rs
index 42bba23cf3f8..a8eb83c95ead 100644
--- a/rust/qemu-api/src/memory.rs
+++ b/rust/qemu-api/src/memory.rs
@@ -31,7 +31,7 @@
         memory_region_init_io, section_access_allowed, section_covers_region_addr,
         section_fuzz_dma_read, section_get_host_addr, section_rust_load,
         section_rust_read_continue_step, section_rust_store, section_rust_write_continue_step,
-        MEMTX_OK,
+        target_big_endian, MEMTX_OK,
     },
     callbacks::FnCall,
     cell::Opaque,
@@ -1107,9 +1107,25 @@ pub fn read(&self, buf: &mut [u8], addr: GuestAddress) -> Result<usize> {
     /// This function is similar to `address_space_st{size}` in C side.
     ///
     /// But it assumes @val is in target-endian format by default. So ensure
-    /// the endianness of `val` matches the target before using this method.
+    /// the endianness of `val` matches the target before using this method.  The
+    /// target endianness can be checked with [`target_is_big_endian`].
     ///
     /// And it assumes the memory attributes are MEMTXATTRS_UNSPECIFIED.
+    ///
+    /// # Examples
+    ///
+    /// ```
+    /// use qemu_api::memory::{GuestAddress, ADDRESS_SPACE_MEMORY, target_is_big_endian};
+    ///
+    /// let addr = GuestAddress(0x123438000);
+    /// let val: u32 = 5;
+    /// let val_end = if target_is_big_endian() {
+    ///     val.to_be()
+    /// } else {
+    ///     val.to_le()
+    /// };
+    /// assert!(ADDRESS_SPACE_MEMORY.store(addr, val_end).is_ok());
+    /// ```
     pub fn store<T: AtomicAccess>(&self, addr: GuestAddress, val: T) -> Result<()> {
         rcu_read_lock();
         let r = self.memory().deref().store(val, addr, Ordering::Relaxed);
@@ -1122,7 +1138,8 @@ pub fn store<T: AtomicAccess>(&self, addr: GuestAddress, val: T) -> Result<()> {
     /// This function is similar to `address_space_ld{size}` in C side.
     ///
     /// But it only supports target-endian by default.  The returned value is
-    /// with target-endian.
+    /// with target-endian.  The target endianness can be checked with
+    /// [`target_is_big_endian`].
     ///
     /// And it assumes the memory attributes are MEMTXATTRS_UNSPECIFIED.
     pub fn load<T: AtomicAccess>(&self, addr: GuestAddress) -> Result<T> {
@@ -1147,3 +1164,8 @@ pub fn load<T: AtomicAccess>(&self, addr: GuestAddress) -> Result<T> {
     // the whole QEMU life.
     &*wrapper_ptr
 };
+
+pub fn target_is_big_endian() -> bool {
+    // SAFETY: target_big_endian() takes no arguments and has no side
+    // effects; it only queries the target info.
+    unsafe { target_big_endian() }
+}
diff --git a/rust/qemu-api/wrapper.h b/rust/qemu-api/wrapper.h
index ce0ac8d3f550..c466b93054aa 100644
--- a/rust/qemu-api/wrapper.h
+++ b/rust/qemu-api/wrapper.h
@@ -70,3 +70,4 @@ typedef enum memory_order {
 #include "system/address-spaces.h"
 #include "hw/char/pl011.h"
 #include "qemu/rcu.h"
+#include "qemu/target-info.h"
-- 
2.34.1



^ permalink raw reply related	[flat|nested] 58+ messages in thread

* [RFC 26/26] rust/hpet: Use safe binding to access address space
  2025-08-07 12:30 [RFC 00/26] rust/memory: Integrate the vm-memory API from rust-vmm Zhao Liu
                   ` (24 preceding siblings ...)
  2025-08-07 12:30 ` [RFC 25/26] rust/memory: Add binding to check target endian Zhao Liu
@ 2025-08-07 12:30 ` Zhao Liu
  2025-08-07 12:42 ` [RFC 00/26] rust/memory: Integrate the vm-memory API from rust-vmm Zhao Liu
  2025-08-07 14:13 ` Paolo Bonzini
  27 siblings, 0 replies; 58+ messages in thread
From: Zhao Liu @ 2025-08-07 12:30 UTC (permalink / raw)
  To: Paolo Bonzini, Peter Xu, David Hildenbrand,
	Philippe Mathieu-Daudé, Manos Pitsidianakis,
	Alex Bennée, Thomas Huth, Junjie Mao
  Cc: qemu-devel, qemu-rust, Dapeng Mi, Chuanxiao Dong, Zhao Liu

Currently, HPET uses the unsafe address_space_stl_le() to store the MSI
message.

Considering that HPET is used on x86 machines, which are little endian,
address_space_stl_le() is equivalent to address_space_stl(). This makes
it possible to replace address_space_stl_le() with AddressSpace::store().

Therefore, use the safe binding AddressSpace::store() to access the
address space.

With this, the last unsafe piece of HPET has been converted to safe code.

Signed-off-by: Zhao Liu <zhao1.liu@intel.com>
---
 rust/hw/timer/hpet/src/device.rs | 27 ++++++++++-----------------
 1 file changed, 10 insertions(+), 17 deletions(-)

diff --git a/rust/hw/timer/hpet/src/device.rs b/rust/hw/timer/hpet/src/device.rs
index 9fd75bf096e4..e7d5b57f2fe2 100644
--- a/rust/hw/timer/hpet/src/device.rs
+++ b/rust/hw/timer/hpet/src/device.rs
@@ -6,19 +6,17 @@
     ffi::{c_int, c_void, CStr},
     mem::MaybeUninit,
     pin::Pin,
-    ptr::{addr_of_mut, null_mut, NonNull},
+    ptr::NonNull,
     slice::from_ref,
 };
 
 use qemu_api::{
-    bindings::{
-        address_space_memory, address_space_stl_le, qdev_prop_bit, qdev_prop_bool,
-        qdev_prop_uint32, qdev_prop_usize,
-    },
+    bindings::{qdev_prop_bit, qdev_prop_bool, qdev_prop_uint32, qdev_prop_usize},
     cell::{BqlCell, BqlRefCell},
     irq::InterruptSource,
     memory::{
-        hwaddr, MemoryRegion, MemoryRegionOps, MemoryRegionOpsBuilder, MEMTXATTRS_UNSPECIFIED,
+        hwaddr, GuestAddress, MemoryRegion, MemoryRegionOps, MemoryRegionOpsBuilder,
+        ADDRESS_SPACE_MEMORY,
     },
     prelude::*,
     qdev::{DeviceImpl, DeviceState, Property, ResetType, ResettablePhasesImpl},
@@ -327,17 +325,12 @@ fn set_irq(&mut self, set: bool) {
 
         if set && self.is_int_enabled() && self.get_state().is_hpet_enabled() {
             if self.is_fsb_route_enabled() {
-                // SAFETY:
-                // the parameters are valid.
-                unsafe {
-                    address_space_stl_le(
-                        addr_of_mut!(address_space_memory),
-                        self.fsb >> 32,  // Timer N FSB int addr
-                        self.fsb as u32, // Timer N FSB int value, truncate!
-                        MEMTXATTRS_UNSPECIFIED,
-                        null_mut(),
-                    );
-                }
+                ADDRESS_SPACE_MEMORY
+                    .store(
+                        GuestAddress(self.fsb >> 32), // Timer N FSB int addr
+                        self.fsb as u32,              // Timer N FSB int value, truncate!
+                    )
+                    .expect("Failed to store into ADDRESS_SPACE_MEMORY.");
             } else if self.is_int_level_triggered() {
                 self.get_state().irqs[route].raise();
             } else {
-- 
2.34.1



^ permalink raw reply related	[flat|nested] 58+ messages in thread

* Re: [RFC 18/26] memory: Rename flatview_access_allowed() to memory_region_access_allowed()
  2025-08-07 12:30 ` [RFC 18/26] memory: Rename flatview_access_allowed() to memory_region_access_allowed() Zhao Liu
@ 2025-08-07 12:41   ` Manos Pitsidianakis
  0 siblings, 0 replies; 58+ messages in thread
From: Manos Pitsidianakis @ 2025-08-07 12:41 UTC (permalink / raw)
  To: Zhao Liu
  Cc: Paolo Bonzini, Peter Xu, David Hildenbrand,
	Philippe Mathieu-Daudé, Alex Bennée, Thomas Huth,
	Junjie Mao, qemu-devel, qemu-rust, Dapeng Mi, Chuanxiao Dong

Reviewed-by: Manos Pitsidianakis <manos.pitsidianakis@linaro.org>

On Thu, Aug 7, 2025 at 3:10 PM Zhao Liu <zhao1.liu@intel.com> wrote:
>
> flatview_access_allowed() accepts `MemoryRegion *mr` as an argument, so
> it's based on MemoryRegion and should be named as
> memory_region_access_allowed().
>
> Signed-off-by: Zhao Liu <zhao1.liu@intel.com>
> ---
>  system/physmem.c | 14 +++++++-------
>  1 file changed, 7 insertions(+), 7 deletions(-)
>
> diff --git a/system/physmem.c b/system/physmem.c
> index d2106d0ffa87..8aaaab4d3a74 100644
> --- a/system/physmem.c
> +++ b/system/physmem.c
> @@ -2921,7 +2921,7 @@ bool prepare_mmio_access(MemoryRegion *mr)
>  }
>
>  /**
> - * flatview_access_allowed
> + * memory_region_access_allowed
>   * @mr: #MemoryRegion to be accessed
>   * @attrs: memory transaction attributes
>   * @addr: address within that memory region
> @@ -2931,8 +2931,8 @@ bool prepare_mmio_access(MemoryRegion *mr)
>   *
>   * Returns: true if transaction is allowed, false if denied.
>   */
> -static bool flatview_access_allowed(MemoryRegion *mr, MemTxAttrs attrs,
> -                                    hwaddr addr, hwaddr len)
> +static bool memory_region_access_allowed(MemoryRegion *mr, MemTxAttrs attrs,
> +                                         hwaddr addr, hwaddr len)
>  {
>      if (likely(!attrs.memory)) {
>          return true;
> @@ -2952,7 +2952,7 @@ static MemTxResult flatview_write_continue_step(MemTxAttrs attrs,
>                                                  hwaddr len, hwaddr mr_addr,
>                                                  hwaddr *l, MemoryRegion *mr)
>  {
> -    if (!flatview_access_allowed(mr, attrs, mr_addr, *l)) {
> +    if (!memory_region_access_allowed(mr, attrs, mr_addr, *l)) {
>          return MEMTX_ACCESS_ERROR;
>      }
>
> @@ -3036,7 +3036,7 @@ static MemTxResult flatview_write(FlatView *fv, hwaddr addr, MemTxAttrs attrs,
>
>      l = len;
>      mr = flatview_translate(fv, addr, &mr_addr, &l, true, attrs);
> -    if (!flatview_access_allowed(mr, attrs, addr, len)) {
> +    if (!memory_region_access_allowed(mr, attrs, addr, len)) {
>          return MEMTX_ACCESS_ERROR;
>      }
>      return flatview_write_continue(fv, addr, attrs, buf, len,
> @@ -3048,7 +3048,7 @@ static MemTxResult flatview_read_continue_step(MemTxAttrs attrs, uint8_t *buf,
>                                                 hwaddr *l,
>                                                 MemoryRegion *mr)
>  {
> -    if (!flatview_access_allowed(mr, attrs, mr_addr, *l)) {
> +    if (!memory_region_access_allowed(mr, attrs, mr_addr, *l)) {
>          return MEMTX_ACCESS_ERROR;
>      }
>
> @@ -3127,7 +3127,7 @@ static MemTxResult flatview_read(FlatView *fv, hwaddr addr,
>
>      l = len;
>      mr = flatview_translate(fv, addr, &mr_addr, &l, false, attrs);
> -    if (!flatview_access_allowed(mr, attrs, addr, len)) {
> +    if (!memory_region_access_allowed(mr, attrs, addr, len)) {
>          return MEMTX_ACCESS_ERROR;
>      }
>      return flatview_read_continue(fv, addr, attrs, buf, len,
> --
> 2.34.1
>


^ permalink raw reply	[flat|nested] 58+ messages in thread

* Re: [RFC 00/26] rust/memory: Integrate the vm-memory API from rust-vmm
  2025-08-07 12:30 [RFC 00/26] rust/memory: Integrate the vm-memory API from rust-vmm Zhao Liu
                   ` (25 preceding siblings ...)
  2025-08-07 12:30 ` [RFC 26/26] rust/hpet: Use safe binding to access address space Zhao Liu
@ 2025-08-07 12:42 ` Zhao Liu
  2025-08-07 14:13 ` Paolo Bonzini
  27 siblings, 0 replies; 58+ messages in thread
From: Zhao Liu @ 2025-08-07 12:42 UTC (permalink / raw)
  To: Paolo Bonzini, Peter Xu, David Hildenbrand,
	Philippe Mathieu-Daudé, Manos Pitsidianakis,
	Alex Bennée, Thomas Huth, Junjie Mao
  Cc: qemu-devel, qemu-rust, Dapeng Mi, Chuanxiao Dong, Zhao Liu

On Thu, Aug 07, 2025 at 08:30:01PM +0800, Zhao Liu wrote:
> Date: Thu, 7 Aug 2025 20:30:01 +0800
> From: Zhao Liu <zhao1.liu@intel.com>
> Subject: [RFC 00/26] rust/memory: Integrate the vm-memory API from rust-vmm
> X-Mailer: git-send-email 2.34.1
> 
> Hi,
> 
> This RFC series explores integrating the vm-memory API into QEMU's
> rust/memory bindings.
> 
> Thanks to Paolo and Manos's many suggestions and feedback, I have
> resolved many issues over the past few months, but there are still
> some open issues that I would like to discuss.
> 
> This series finally provides the following safe interfaces in Rust:
>  * AddressSpace::write in Rust <=> address_space_write in C
>    - **but only** supports MEMTXATTRS_UNSPECIFIED
> 
>  * AddressSpace::read in Rust <=> address_space_read_full in C
>    - **but only** supports MEMTXATTRS_UNSPECIFIED.
> 
>  * AddressSpace::store in Rust <=> address_space_st{size} in C
>    - **but only** supports MEMTXATTRS_UNSPECIFIED and native endian.
> 
>  * AddressSpace::load in Rust <=> address_space_ld{size} in C
>    - **but only** supports MEMTXATTRS_UNSPECIFIED and native endian.
> 
> And this series involves changes mainly to these three parts:
>  * NEW QEMU memory APIs wrapper at C side.
>  * Extra changes for vm-memory (downstream for now).
>  * NEW QEMU memory bindings/APIs based on vm-memory at Rust side.
> 
> Although the number of line changes appears to be significant, more
> than half of them are documentation and comments.
> 
> (Note: the latest vm-memory v0.16.2 crate didn't contain Paolo's
>  commit 5f59e29c3d30 ("guest_memory: let multiple regions slice one
>  global bitmap"), so I have to pull the vm-memory from github directly.)
> 
> Thanks for your feedback!

BTW, this is my branch which includes all the patches:

https://gitlab.com/zhao.liu/qemu/-/tree/rust-vm-memory-v1-08-04-2025

Regards,
Zhao



^ permalink raw reply	[flat|nested] 58+ messages in thread

* Re: [RFC 25/26] rust/memory: Add binding to check target endian
  2025-08-07 12:30 ` [RFC 25/26] rust/memory: Add binding to check target endian Zhao Liu
@ 2025-08-07 12:44   ` Manos Pitsidianakis
  2025-08-13 14:48     ` Zhao Liu
  0 siblings, 1 reply; 58+ messages in thread
From: Manos Pitsidianakis @ 2025-08-07 12:44 UTC (permalink / raw)
  To: Zhao Liu
  Cc: Paolo Bonzini, Peter Xu, David Hildenbrand,
	Philippe Mathieu-Daudé, Alex Bennée, Thomas Huth,
	Junjie Mao, qemu-devel, qemu-rust, Dapeng Mi, Chuanxiao Dong

On Thu, Aug 7, 2025 at 3:10 PM Zhao Liu <zhao1.liu@intel.com> wrote:
>
> Add a binding (target_is_big_endian()) to check whether target is big
> endian or not. This could help user to adjust endian before calling

s/adjust endian/adjust endianness/

> AddressSpace::store() or after calling AddressSpace::load().

No strong preference, but maybe we can keep the same name as C,
target_big_endian()? Just for consistency.

Either way:

Reviewed-by: Manos Pitsidianakis <manos.pitsidianakis@linaro.org>

>
> Add the example in the documentation of AddressSpace::store() to help
> explain how to use it.
>
> Signed-off-by: Zhao Liu <zhao1.liu@intel.com>
> ---
>  rust/qemu-api/src/memory.rs | 28 +++++++++++++++++++++++++---
>  rust/qemu-api/wrapper.h     |  1 +
>  2 files changed, 26 insertions(+), 3 deletions(-)
>
> diff --git a/rust/qemu-api/src/memory.rs b/rust/qemu-api/src/memory.rs
> index 42bba23cf3f8..a8eb83c95ead 100644
> --- a/rust/qemu-api/src/memory.rs
> +++ b/rust/qemu-api/src/memory.rs
> @@ -31,7 +31,7 @@
>          memory_region_init_io, section_access_allowed, section_covers_region_addr,
>          section_fuzz_dma_read, section_get_host_addr, section_rust_load,
>          section_rust_read_continue_step, section_rust_store, section_rust_write_continue_step,
> -        MEMTX_OK,
> +        target_big_endian, MEMTX_OK,
>      },
>      callbacks::FnCall,
>      cell::Opaque,
> @@ -1107,9 +1107,25 @@ pub fn read(&self, buf: &mut [u8], addr: GuestAddress) -> Result<usize> {
>      /// This function is similar to `address_space_st{size}` in C side.
>      ///
>      /// But it only assumes @val follows target-endian by default. So ensure
> -    /// the endian of `val` aligned with target, before using this method.
> +    /// the endian of `val` aligned with target, before using this method.  The
> > +    /// target-endian can be checked with [`target_is_big_endian`].
>      ///
>      /// And it assumes the memory attributes is MEMTXATTRS_UNSPECIFIED.
> +    ///
> +    /// # Examples
> +    ///
> +    /// ```
> +    /// use qemu_api::memory::{ADDRESS_SPACE_MEMORY, target_is_big_endian};
> +    ///
> +    /// let addr = GuestAddress(0x123438000);
> +    /// let val: u32 = 5;
> +    /// let val_end = if target_is_big_endian() {
> +    ///     val.to_be()
> +    /// } else {
> +    ///     val.to_le()
> +    /// }
> +    ///
> +    /// assert!(ADDRESS_SPACE_MEMORY.store(addr, val_end).is_ok());
>      pub fn store<T: AtomicAccess>(&self, addr: GuestAddress, val: T) -> Result<()> {
>          rcu_read_lock();
>          let r = self.memory().deref().store(val, addr, Ordering::Relaxed);
> @@ -1122,7 +1138,8 @@ pub fn store<T: AtomicAccess>(&self, addr: GuestAddress, val: T) -> Result<()> {
>      /// This function is similar to `address_space_ld{size}` in C side.
>      ///
>      /// But it only support target-endian by default.  The returned value is
> -    /// with target-endian.
> > +    /// with target-endian.  The target-endian can be checked with
> +    /// [`target_is_big_endian`].
>      ///
>      /// And it assumes the memory attributes is MEMTXATTRS_UNSPECIFIED.
>      pub fn load<T: AtomicAccess>(&self, addr: GuestAddress) -> Result<T> {
> @@ -1147,3 +1164,8 @@ pub fn load<T: AtomicAccess>(&self, addr: GuestAddress) -> Result<T> {
>      // the whole QEMU life.
>      &*wrapper_ptr
>  };
> +
> +pub fn target_is_big_endian() -> bool {
> +    // SAFETY: the return value is boolean, so it is always valid.
> +    unsafe { target_big_endian() }
> +}
> diff --git a/rust/qemu-api/wrapper.h b/rust/qemu-api/wrapper.h
> index ce0ac8d3f550..c466b93054aa 100644
> --- a/rust/qemu-api/wrapper.h
> +++ b/rust/qemu-api/wrapper.h
> @@ -70,3 +70,4 @@ typedef enum memory_order {
>  #include "system/address-spaces.h"
>  #include "hw/char/pl011.h"
>  #include "qemu/rcu.h"
> +#include "qemu/target-info.h"
> --
> 2.34.1
>


^ permalink raw reply	[flat|nested] 58+ messages in thread

* Re: [RFC 07/26] subprojects: Add winapi crate
  2025-08-07 12:30 ` [RFC 07/26] subprojects: Add winapi crate Zhao Liu
@ 2025-08-07 13:17   ` Paolo Bonzini
  2025-08-08  7:33     ` Zhao Liu
  0 siblings, 1 reply; 58+ messages in thread
From: Paolo Bonzini @ 2025-08-07 13:17 UTC (permalink / raw)
  To: Zhao Liu, Peter Xu, David Hildenbrand,
	Philippe Mathieu-Daudé, Manos Pitsidianakis,
	Alex Bennée, Thomas Huth, Junjie Mao
  Cc: qemu-devel, qemu-rust, Dapeng Mi, Chuanxiao Dong

On 8/7/25 14:30, Zhao Liu wrote:
> Signed-off-by: Zhao Liu <zhao1.liu@intel.com>

I created https://github.com/rust-vmm/vm-memory/pull/335 so this is not 
needed.

Paolo



^ permalink raw reply	[flat|nested] 58+ messages in thread

* Re: [RFC 13/26] rust: Add RCU bindings
  2025-08-07 12:29   ` Manos Pitsidianakis
@ 2025-08-07 13:38     ` Paolo Bonzini
  2025-08-09  7:21       ` Zhao Liu
  2025-08-12 10:31       ` Zhao Liu
  0 siblings, 2 replies; 58+ messages in thread
From: Paolo Bonzini @ 2025-08-07 13:38 UTC (permalink / raw)
  To: Manos Pitsidianakis, Zhao Liu
  Cc: Peter Xu, David Hildenbrand, Philippe Mathieu-Daudé,
	Alex Bennée, Thomas Huth, Junjie Mao, qemu-devel, qemu-rust,
	Dapeng Mi, Chuanxiao Dong

On 8/7/25 14:29, Manos Pitsidianakis wrote:

>> +//! Bindings for `rcu_read_lock` and `rcu_read_unlock`.
>> +//! More details about RCU in QEMU, please refer docs/devel/rcu.rst.
>> +
> 
> How about a RAII guard type? e.g. RCUGuard and runs `rcu_read_unlock` on Drop.

Clippy says Rcu not RCU.  :)

You're right, not just because it's nice but also because it bounds the 
dereference of the FlatView.  Something like this builds on top of the
guard object:

pub struct RcuCell<T> {
     data: AtomicPtr<T>
}

impl<T> RcuCell<T> {
     pub fn raw_get(&self) -> *mut T {
         self.data.load(Ordering::Acquire)
     }

     pub fn get<'g>(&self, _: &'g RcuGuard) -> Option<&'g T> {
         unsafe {
             self.raw_get().as_ref()
         }
     }
}

Using this is a bit ugly, because you need transmute, but it's isolated:

impl AddressSpace {
    pub fn get_flatview(&self, rcu: &'g Guard) -> &'g FlatView {
        let flatp = unsafe {
            std::mem::transmute::<&*mut FlatView, &RcuCell<FlatView>>(
                &self.0.as_ptr().current_map)
        };
        flatp.get(rcu)
    }
}

impl GuestAddressSpace for AddressSpace {
     fn memory(&self) -> Self::T {
         let rcu = RcuGuard::guard();
         FlatViewRefGuard::new(self.get_flatview(rcu))
     }
}

> Destructors are not guaranteed to run or run only once, but the former
> should happen when things go wrong e.g. crashes/aborts. You can add a
> flag in the RCUGuard to make sure Drop runs unlock only once (since it
> takes &mut and not ownership)

Yeah I think many things would go wrong if Arc could run its drop 
implementation more than once.

Paolo



^ permalink raw reply	[flat|nested] 58+ messages in thread

* Re: [RFC 24/26] rust/memory: Provide AddressSpace bindings
  2025-08-07 12:30 ` [RFC 24/26] rust/memory: Provide AddressSpace bindings Zhao Liu
@ 2025-08-07 13:50   ` Paolo Bonzini
  2025-08-13 14:47     ` Zhao Liu
  0 siblings, 1 reply; 58+ messages in thread
From: Paolo Bonzini @ 2025-08-07 13:50 UTC (permalink / raw)
  To: Zhao Liu, Peter Xu, David Hildenbrand,
	Philippe Mathieu-Daudé, Manos Pitsidianakis,
	Alex Bennée, Thomas Huth, Junjie Mao
  Cc: qemu-devel, qemu-rust, Dapeng Mi, Chuanxiao Dong

On 8/7/25 14:30, Zhao Liu wrote:
> +impl GuestAddressSpace for AddressSpace {
> +    type M = FlatView;
> +    type T = FlatViewRefGuard;
> +
> +    /// Get the memory of the [`AddressSpace`].
> +    ///
> +    /// This function retrieves the [`FlatView`] for the current
> +    /// [`AddressSpace`].  And it should be called from an RCU
> +    /// critical section.  The returned [`FlatView`] is used for
> +    /// short-term memory access.
> +    ///
> +    /// Note, this method may **panic** if the [`FlatView`] is
> +    /// being destroyed.  In that case, we should consider providing
> +    /// a more stable binding with [`bindings::address_space_get_flatview`].
> +    fn memory(&self) -> Self::T {
> +        let flatp = unsafe { address_space_to_flatview(self.0.as_mut_ptr()) };
> +        FlatViewRefGuard::new(unsafe { Self::M::from_raw(flatp) }).expect(
> +            "Failed to clone FlatViewRefGuard: the FlatView may have been destroyed concurrently.",
> +        )

This is essentially address_space_get_flatview().  You can call it 
directly, or you need to loop if FlatViewRefGuard finds a zero reference 
count.
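
As a rough illustration of that retry, here is a minimal, self-contained
sketch; FlatView, FlatViewRefGuard and the reference counting below are
simplified stand-ins, not the real QEMU bindings:

use std::sync::atomic::{AtomicUsize, Ordering};

// Stand-in for the real FlatView: only the reference count matters here.
struct FlatView {
    refs: AtomicUsize,
}

struct FlatViewRefGuard<'a>(&'a FlatView);

impl<'a> FlatViewRefGuard<'a> {
    // Returns None if the reference count already dropped to zero,
    // i.e. the FlatView is being destroyed concurrently.
    fn new(fv: &'a FlatView) -> Option<Self> {
        let mut cur = fv.refs.load(Ordering::Acquire);
        loop {
            if cur == 0 {
                return None;
            }
            match fv.refs.compare_exchange_weak(
                cur, cur + 1, Ordering::AcqRel, Ordering::Acquire,
            ) {
                Ok(_) => return Some(FlatViewRefGuard(fv)),
                Err(actual) => cur = actual,
            }
        }
    }
}

impl Drop for FlatViewRefGuard<'_> {
    fn drop(&mut self) {
        self.0.refs.fetch_sub(1, Ordering::AcqRel);
    }
}

fn main() {
    let fv = FlatView { refs: AtomicUsize::new(1) };
    // Instead of panicking, loop: if the FlatView died between the
    // RCU-protected load and the ref-count bump, try again.
    let _guard = loop {
        if let Some(g) = FlatViewRefGuard::new(&fv) {
            break g;
        }
        // In QEMU, the current FlatView would be re-read under the RCU
        // read lock here before retrying.
    };
}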

> +    }
> +}
> +
> +impl AddressSpace {
> +    /// The write interface of `AddressSpace`.
> +    ///
> +    /// This function is similar to `address_space_write` in C side.
> +    ///
> +    /// But it assumes the memory attributes is MEMTXATTRS_UNSPECIFIED.
> +    pub fn write(&self, buf: &[u8], addr: GuestAddress) -> Result<usize> {
> +        rcu_read_lock();
> +        let r = self.memory().deref().write(buf, addr);
> +        rcu_read_unlock();

self.memory() must not need rcu_read_lock/unlock around it, they should 
be called by the memory() function itself.

> +        r.map_err(guest_mem_err_to_qemu_err)
> +    }

I think it's ok to return the vm-memory error.  Ultimately, the error 
will be either ignored or turned into a device error condition, but I 
don't think it's ever going to become an Error**.

> +    /// The store interface of `AddressSpace`.
> +    ///
> +    /// This function is similar to `address_space_st{size}` in C side.
> +    ///
> +    /// But it only assumes @val follows target-endian by default. So ensure
> +    /// the endian of `val` aligned with target, before using this method.

QEMU is trying to get rid of target endianness.  We should use the 
vm-memory BeNN and LeNN as much as possible.  It would be great if you 
could write either

     ADDRESS_SPACE_MEMORY.store::<Le32>(addr, 42);

or

     let n = Le32(42);
     ADDRESS_SPACE_MEMORY.store(addr, n);

but not

     ADDRESS_SPACE_MEMORY.store(addr, 42);

(Also I've not looked at the patches closely enough, but wouldn't 
store() use *host* endianness? Same in patch 23).

Paolo

> +    /// And it assumes the memory attributes is MEMTXATTRS_UNSPECIFIED.
> +    pub fn store<T: AtomicAccess>(&self, addr: GuestAddress, val: T) -> Result<()> {
> +        rcu_read_lock();
> +        let r = self.memory().deref().store(val, addr, Ordering::Relaxed);
> +        rcu_read_unlock();
> +        r.map_err(guest_mem_err_to_qemu_err)
> +    }
> +
> +    /// The load interface of `AddressSpace`.
> +    ///
> +    /// This function is similar to `address_space_ld{size}` in C side.
> +    ///
> +    /// But it only supports target-endian by default.  The returned value is
> +    /// with target-endian.
> +    ///
> +    /// And it assumes the memory attributes is MEMTXATTRS_UNSPECIFIED.
> +    pub fn load<T: AtomicAccess>(&self, addr: GuestAddress) -> Result<T> {
> +        rcu_read_lock();
> +        let r = self.memory().deref().load(addr, Ordering::Relaxed);
> +        rcu_read_unlock();
> +        r.map_err(guest_mem_err_to_qemu_err)
> +    }
> +}
> +
> +/// The safe binding around [`bindings::address_space_memory`].
> +///
> +/// `ADDRESS_SPACE_MEMORY` provides the complete address space
> +/// abstraction for the whole Guest memory.
> +pub static ADDRESS_SPACE_MEMORY: &AddressSpace = unsafe {
> +    let ptr: *const bindings::AddressSpace = addr_of!(address_space_memory);
> +
> +    // SAFETY: AddressSpace is #[repr(transparent)].
> +    let wrapper_ptr: *const AddressSpace = ptr.cast();
> +
> +    // SAFETY: `address_space_memory` structure is valid in C side during
> +    // the whole QEMU life.
> +    &*wrapper_ptr
> +};



^ permalink raw reply	[flat|nested] 58+ messages in thread

* Re: [RFC 01/26] rust/hpet: Fix the error caused by vm-memory
  2025-08-07 12:30 ` [RFC 01/26] rust/hpet: Fix the error caused by vm-memory Zhao Liu
@ 2025-08-07 13:52   ` Paolo Bonzini
  2025-08-08  7:27     ` Zhao Liu
  0 siblings, 1 reply; 58+ messages in thread
From: Paolo Bonzini @ 2025-08-07 13:52 UTC (permalink / raw)
  To: Zhao Liu, Peter Xu, David Hildenbrand,
	Philippe Mathieu-Daudé, Manos Pitsidianakis,
	Alex Bennée, Thomas Huth, Junjie Mao
  Cc: qemu-devel, qemu-rust, Dapeng Mi, Chuanxiao Dong

On 8/7/25 14:30, Zhao Liu wrote:
> error[E0283]: type annotations needed
>     --> hw/timer/hpet/src/device.rs:884:55
>      |
> 884 |         self.num_timers == self.num_timers_save.get().into()
>      |                         --                            ^^^^
>      |                         |
>      |                         type must be known at this point
>      |
>      = note: multiple `impl`s satisfying `usize: PartialEq<_>` found in the following crates: `core`, `vm_memory`:
>              - impl PartialEq<vm_memory::endian::BeSize> for usize;
>              - impl PartialEq<vm_memory::endian::LeSize> for usize;
>              - impl<host> PartialEq for usize
>                where the constant `host` has type `bool`;
> help: try using a fully qualified path to specify the expected types
>      |
> 884 |         self.num_timers == <u8 as Into<T>>::into(self.num_timers_save.get())
>      |                            ++++++++++++++++++++++                          ~

Oh, interesting.  In this case, you can write:

     usize::from(self.num_timers_save.get())

Paolo



^ permalink raw reply	[flat|nested] 58+ messages in thread

* Re: [RFC 12/26] rcu: Make rcu_read_lock & rcu_read_unlock not inline
  2025-08-07 12:30 ` [RFC 12/26] rcu: Make rcu_read_lock & rcu_read_unlock not inline Zhao Liu
@ 2025-08-07 13:54   ` Paolo Bonzini
  2025-08-08  8:19     ` Zhao Liu
  0 siblings, 1 reply; 58+ messages in thread
From: Paolo Bonzini @ 2025-08-07 13:54 UTC (permalink / raw)
  To: Zhao Liu, Peter Xu, David Hildenbrand,
	Philippe Mathieu-Daudé, Manos Pitsidianakis,
	Alex Bennée, Thomas Huth, Junjie Mao
  Cc: qemu-devel, qemu-rust, Dapeng Mi, Chuanxiao Dong

On 8/7/25 14:30, Zhao Liu wrote:
> Make rcu_read_lock & rcu_read_unlock not inline, then bindgen could
> generate the bindings.
> 
> Suggested-by: Paolo Bonzini <pbonzini@redhat.com>
> Signed-off-by: Zhao Liu <zhao1.liu@intel.com>

Either this, or keep it inline and add wrappers rust_rcu_read_lock() and 
rust_rcu_read_unlock().

Paolo

> ---
>   include/qemu/rcu.h | 45 ++-------------------------------------------
>   util/rcu.c         | 43 +++++++++++++++++++++++++++++++++++++++++++
>   2 files changed, 45 insertions(+), 43 deletions(-)
> 
> diff --git a/include/qemu/rcu.h b/include/qemu/rcu.h
> index 020dbe4d8b77..34d955204b81 100644
> --- a/include/qemu/rcu.h
> +++ b/include/qemu/rcu.h
> @@ -75,49 +75,8 @@ struct rcu_reader_data {
>   
>   QEMU_DECLARE_CO_TLS(struct rcu_reader_data, rcu_reader)
>   
> -static inline void rcu_read_lock(void)
> -{
> -    struct rcu_reader_data *p_rcu_reader = get_ptr_rcu_reader();
> -    unsigned ctr;
> -
> -    if (p_rcu_reader->depth++ > 0) {
> -        return;
> -    }
> -
> -    ctr = qatomic_read(&rcu_gp_ctr);
> -    qatomic_set(&p_rcu_reader->ctr, ctr);
> -
> -    /*
> -     * Read rcu_gp_ptr and write p_rcu_reader->ctr before reading
> -     * RCU-protected pointers.
> -     */
> -    smp_mb_placeholder();
> -}
> -
> -static inline void rcu_read_unlock(void)
> -{
> -    struct rcu_reader_data *p_rcu_reader = get_ptr_rcu_reader();
> -
> -    assert(p_rcu_reader->depth != 0);
> -    if (--p_rcu_reader->depth > 0) {
> -        return;
> -    }
> -
> -    /* Ensure that the critical section is seen to precede the
> -     * store to p_rcu_reader->ctr.  Together with the following
> -     * smp_mb_placeholder(), this ensures writes to p_rcu_reader->ctr
> -     * are sequentially consistent.
> -     */
> -    qatomic_store_release(&p_rcu_reader->ctr, 0);
> -
> -    /* Write p_rcu_reader->ctr before reading p_rcu_reader->waiting.  */
> -    smp_mb_placeholder();
> -    if (unlikely(qatomic_read(&p_rcu_reader->waiting))) {
> -        qatomic_set(&p_rcu_reader->waiting, false);
> -        qemu_event_set(&rcu_gp_event);
> -    }
> -}
> -
> +void rcu_read_lock(void);
> +void rcu_read_unlock(void);
>   void synchronize_rcu(void);
>   
>   /*
> diff --git a/util/rcu.c b/util/rcu.c
> index b703c86f15a3..2dfd82796e1e 100644
> --- a/util/rcu.c
> +++ b/util/rcu.c
> @@ -141,6 +141,49 @@ static void wait_for_readers(void)
>       QLIST_SWAP(&registry, &qsreaders, node);
>   }
>   
> +void rcu_read_lock(void)
> +{
> +    struct rcu_reader_data *p_rcu_reader = get_ptr_rcu_reader();
> +    unsigned ctr;
> +
> +    if (p_rcu_reader->depth++ > 0) {
> +        return;
> +    }
> +
> +    ctr = qatomic_read(&rcu_gp_ctr);
> +    qatomic_set(&p_rcu_reader->ctr, ctr);
> +
> +    /*
> +     * Read rcu_gp_ptr and write p_rcu_reader->ctr before reading
> +     * RCU-protected pointers.
> +     */
> +    smp_mb_placeholder();
> +}
> +
> +void rcu_read_unlock(void)
> +{
> +    struct rcu_reader_data *p_rcu_reader = get_ptr_rcu_reader();
> +
> +    assert(p_rcu_reader->depth != 0);
> +    if (--p_rcu_reader->depth > 0) {
> +        return;
> +    }
> +
> +    /* Ensure that the critical section is seen to precede the
> +     * store to p_rcu_reader->ctr.  Together with the following
> +     * smp_mb_placeholder(), this ensures writes to p_rcu_reader->ctr
> +     * are sequentially consistent.
> +     */
> +    qatomic_store_release(&p_rcu_reader->ctr, 0);
> +
> +    /* Write p_rcu_reader->ctr before reading p_rcu_reader->waiting.  */
> +    smp_mb_placeholder();
> +    if (unlikely(qatomic_read(&p_rcu_reader->waiting))) {
> +        qatomic_set(&p_rcu_reader->waiting, false);
> +        qemu_event_set(&rcu_gp_event);
> +    }
> +}
> +
>   void synchronize_rcu(void)
>   {
>       QEMU_LOCK_GUARD(&rcu_sync_lock);



^ permalink raw reply	[flat|nested] 58+ messages in thread

* Re: [RFC 16/26] memory: Make flatview_do_translate() return a pointer to MemoryRegionSection
  2025-08-07 12:30 ` [RFC 16/26] memory: Make flatview_do_translate() return a pointer to MemoryRegionSection Zhao Liu
@ 2025-08-07 13:57   ` Paolo Bonzini
  2025-08-12 15:39     ` Zhao Liu
  0 siblings, 1 reply; 58+ messages in thread
From: Paolo Bonzini @ 2025-08-07 13:57 UTC (permalink / raw)
  To: Zhao Liu, Peter Xu, David Hildenbrand,
	Philippe Mathieu-Daudé, Manos Pitsidianakis,
	Alex Bennée, Thomas Huth, Junjie Mao
  Cc: qemu-devel, qemu-rust, Dapeng Mi, Chuanxiao Dong

On 8/7/25 14:30, Zhao Liu wrote:
> The Rust side will use cell::Opaque<> to hide the details of the C
> structure, which helps avoid direct operations on C memory from the Rust
> side.
> 
> Therefore, it's necessary to wrap a translation binding and make it only
> return a pointer to the MemoryRegionSection, instead of a copy.
> 
> As the first step, make flatview_do_translate return a pointer to
> MemoryRegionSection, so that we can build a wrapper based on it.

Independent of Rust, doing the copy as late as possible is good, but 
make it return a "const MemoryRegionSection*" so that there's no risk of 
overwriting data.  Hopefully this does not show a bigger problem!

Paolo

> In addition, add a global variable `unassigned_section` to help get a
> pointer to an invalid MemoryRegionSection.



^ permalink raw reply	[flat|nested] 58+ messages in thread

* Re: [RFC 10/26] subprojects/vm-memory: Patch vm-memory for QEMU memory backend
  2025-08-07 12:30 ` [RFC 10/26] subprojects/vm-memory: Patch vm-memory for QEMU memory backend Zhao Liu
@ 2025-08-07 13:59   ` Paolo Bonzini
  2025-08-08  8:17     ` Zhao Liu
  0 siblings, 1 reply; 58+ messages in thread
From: Paolo Bonzini @ 2025-08-07 13:59 UTC (permalink / raw)
  To: Zhao Liu, Peter Xu, David Hildenbrand,
	Philippe Mathieu-Daudé, Manos Pitsidianakis,
	Alex Bennée, Thomas Huth, Junjie Mao
  Cc: qemu-devel, qemu-rust, Dapeng Mi, Chuanxiao Dong

On 8/7/25 14:30, Zhao Liu wrote:
> Add 2 patches to support QEMU memory backend implementation.
> 
> Signed-off-by: Zhao Liu <zhao1.liu@intel.com>
> ---
>   .../packagefiles/vm-memory-0.16-rs/0001.diff  |  81 +++++++++++++
>   .../packagefiles/vm-memory-0.16-rs/0002.diff  | 111 ++++++++++++++++++
>   subprojects/vm-memory-0.16-rs.wrap            |   2 +
>   3 files changed, 194 insertions(+)
>   create mode 100644 subprojects/packagefiles/vm-memory-0.16-rs/0001.diff
>   create mode 100644 subprojects/packagefiles/vm-memory-0.16-rs/0002.diff
> 
> diff --git a/subprojects/packagefiles/vm-memory-0.16-rs/0001.diff b/subprojects/packagefiles/vm-memory-0.16-rs/0001.diff
> new file mode 100644
> index 000000000000..037193108d45
> --- /dev/null
> +++ b/subprojects/packagefiles/vm-memory-0.16-rs/0001.diff
> @@ -0,0 +1,81 @@
> +From 298f8ba019b2fe159fa943e0ae4dfd3c83ee64e0 Mon Sep 17 00:00:00 2001
> +From: Zhao Liu <zhao1.liu@intel.com>
> +Date: Wed, 6 Aug 2025 11:31:11 +0800
> > +Subject: [PATCH 1/2] guest_memory: Add a marker trait to implement
> + Bytes<GuestAddress> for GuestMemory

This was a bit surprising.  Maybe this is something where GuestMemory 
needs some extra flexibility.

> @@ -0,0 +1,111 @@
> +From 2af7ea12a589fde619690e5060c01710cb6f2e0e Mon Sep 17 00:00:00 2001
> +From: Zhao Liu <zhao1.liu@intel.com>
> +Date: Wed, 6 Aug 2025 14:27:14 +0800
> +Subject: [PATCH 2/2] guest_memory: Add is_write argument for
> + GuestMemory::try_access()

This should be fine.  But Hanna is also working on IOMMU so maybe this 
won't be needed!

Paolo



^ permalink raw reply	[flat|nested] 58+ messages in thread

* Re: [RFC 00/26] rust/memory: Integrate the vm-memory API from rust-vmm
  2025-08-07 12:30 [RFC 00/26] rust/memory: Integrate the vm-memory API from rust-vmm Zhao Liu
                   ` (26 preceding siblings ...)
  2025-08-07 12:42 ` [RFC 00/26] rust/memory: Integrate the vm-memory API from rust-vmm Zhao Liu
@ 2025-08-07 14:13 ` Paolo Bonzini
  2025-08-13 14:56   ` Zhao Liu
  27 siblings, 1 reply; 58+ messages in thread
From: Paolo Bonzini @ 2025-08-07 14:13 UTC (permalink / raw)
  To: Zhao Liu, Peter Xu, David Hildenbrand,
	Philippe Mathieu-Daudé, Manos Pitsidianakis,
	Alex Bennée, Thomas Huth, Junjie Mao, Hanna Reitz
  Cc: qemu-devel, qemu-rust, Dapeng Mi, Chuanxiao Dong

[Adding Hanna who's been working on vm-memory]

On 8/7/25 14:30, Zhao Liu wrote:
> Hi,
> 
> This RFC series explores integrating the vm-memory API into QEMU's
> rust/memory bindings.
> 
> Thanks to Paolo and Manos's many suggestions and feedback, I have
> resolved many issues over the past few months, but there are still
> some open issues that I would like to discuss.
> 
> This series finally provides the following safe interfaces in Rust:
>   * AddressSpace::write in Rust <=> address_space_write in C
>     - **but only** supports MEMTXATTRS_UNSPECIFIED
> 
>   * AddressSpace::read in Rust <=> address_space_read_full in C
>     - **but only** supports MEMTXATTRS_UNSPECIFIED.
> 
>   * AddressSpace::store in Rust <=> address_space_st{size} in C
>     - **but only** supports MEMTXATTRS_UNSPECIFIED and native endian.
> 
>   * AddressSpace::load in Rust <=> address_space_ld{size} in C
>     - **but only** supports MEMTXATTRS_UNSPECIFIED and native endian.

Endianness can be handled by BeNN and LeNN.  For MemTxAttrs we can use 
Bytes<(GuestAddress, MemTxAttrs)> (a variant on something you mention 
below).

Thinking out loud: maybe if we do our implementation in 
Bytes<(GuestAddress, MemTxAttrs)>, and Bytes<GuestAddress>::try_access 
wraps Bytes<(GuestAddress, MemTxAttrs)>, your downstream-only changes 
are not needed anymore?
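
As a rough, self-contained illustration of that idea (the Bytes trait,
MemTxAttrs and AddressSpace below are simplified local stand-ins, not the
real vm-memory or QEMU definitions):

#[derive(Clone, Copy, Default)]
struct MemTxAttrs {
    secure: bool,
}

#[derive(Clone, Copy)]
struct GuestAddress(u64);

trait Bytes<A> {
    fn write(&self, buf: &[u8], addr: A) -> Result<usize, String>;
}

struct AddressSpace;

// Full implementation: the address carries explicit transaction attributes.
impl Bytes<(GuestAddress, MemTxAttrs)> for AddressSpace {
    fn write(&self, buf: &[u8], addr: (GuestAddress, MemTxAttrs)) -> Result<usize, String> {
        let (ga, attrs) = addr;
        // This is where the access using `attrs` would actually happen.
        println!("write {} bytes at {:#x}, secure={}", buf.len(), ga.0, attrs.secure);
        Ok(buf.len())
    }
}

// Convenience implementation: a plain GuestAddress forwards default attrs.
impl Bytes<GuestAddress> for AddressSpace {
    fn write(&self, buf: &[u8], addr: GuestAddress) -> Result<usize, String> {
        Bytes::<(GuestAddress, MemTxAttrs)>::write(self, buf, (addr, MemTxAttrs::default()))
    }
}

fn main() {
    let aspace = AddressSpace;
    let buf = [0u8; 4];
    aspace.write(&buf, GuestAddress(0x1000)).unwrap();                                // unspecified attrs
    aspace.write(&buf, (GuestAddress(0x1000), MemTxAttrs { secure: true })).unwrap(); // explicit attrs
}

In this shape the plain-GuestAddress path stays as simple as today, while
attribute-carrying accesses do not need a second set of methods.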

> And this series involves changes mainly to these three parts:
>   * NEW QEMU memory APIs wrapper at C side.
>   * Extra changes for vm-memory (downstream for now).
>   * NEW QEMU memory bindings/APIs based on vm-memory at Rust side.
> 
> Although the number of line changes appears to be significant, more
> than half of them are documentation and comments.
Yep, thanks for writing them.

This is a good RFC, it's complete enough to show the challenges and the 
things that are missing stand up easily.

I'll look into what vm-memory is missing so that we can simplify QEMU's 
code further, but the basic traits match which is nice.  And the final 
outcome, which is essentially:

     let (addr, value) = (GuestAddress(self.fsb >> 32),
                          Le32(self.fsb as u32));
     ADDRESS_SPACE_MEMORY.memory().store(addr, value);

is as clean as it can be, if anything a bit wordy due to the 
GuestAddress "newtype" wrapper.  (If we decide it's too bad, the 
convenience methods in AddressSpace can automatically do the 
GuestAddress conversion...)
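
For example, a convenience wrapper along these lines could hide the newtype
at the call site (again with simplified stand-in types, not the real API):

#[derive(Clone, Copy)]
struct GuestAddress(u64);

impl From<u64> for GuestAddress {
    fn from(a: u64) -> Self {
        GuestAddress(a)
    }
}

struct AddressSpace;

impl AddressSpace {
    fn store_u32(&self, addr: GuestAddress, val: u32) {
        // The real thing would go through memory().store() here.
        println!("store {:#x} at {:#x}", val, addr.0);
    }

    // Convenience method: accept anything convertible into a GuestAddress,
    // so callers can pass a plain u64 guest physical address.
    fn store_u32_at(&self, addr: impl Into<GuestAddress>, val: u32) {
        self.store_u32(addr.into(), val);
    }
}

fn main() {
    let aspace = AddressSpace;
    aspace.store_u32_at(0x1234_8000u64, 5); // no explicit GuestAddress(...) needed
}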

Paolo



^ permalink raw reply	[flat|nested] 58+ messages in thread

* Re: [RFC 01/26] rust/hpet: Fix the error caused by vm-memory
  2025-08-07 13:52   ` Paolo Bonzini
@ 2025-08-08  7:27     ` Zhao Liu
  0 siblings, 0 replies; 58+ messages in thread
From: Zhao Liu @ 2025-08-08  7:27 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Peter Xu, David Hildenbrand, Philippe Mathieu-Daudé,
	Manos Pitsidianakis, Alex Bennée, Thomas Huth, Junjie Mao,
	qemu-devel, qemu-rust, Dapeng Mi, Chuanxiao Dong

On Thu, Aug 07, 2025 at 03:52:37PM +0200, Paolo Bonzini wrote:
> Date: Thu, 7 Aug 2025 15:52:37 +0200
> From: Paolo Bonzini <pbonzini@redhat.com>
> Subject: Re: [RFC 01/26] rust/hpet: Fix the error caused by vm-memory
> 
> On 8/7/25 14:30, Zhao Liu wrote:
> > error[E0283]: type annotations needed
> >     --> hw/timer/hpet/src/device.rs:884:55
> >      |
> > 884 |         self.num_timers == self.num_timers_save.get().into()
> >      |                         --                            ^^^^
> >      |                         |
> >      |                         type must be known at this point
> >      |
> >      = note: multiple `impl`s satisfying `usize: PartialEq<_>` found in the following crates: `core`, `vm_memory`:
> >              - impl PartialEq<vm_memory::endian::BeSize> for usize;
> >              - impl PartialEq<vm_memory::endian::LeSize> for usize;
> >              - impl<host> PartialEq for usize
> >                where the constant `host` has type `bool`;
> > help: try using a fully qualified path to specify the expected types
> >      |
> > 884 |         self.num_timers == <u8 as Into<T>>::into(self.num_timers_save.get())
> >      |                            ++++++++++++++++++++++                          ~
> 
> Oh, interesting.  In this case, you can write:
> 
>     usize::from(self.num_timers_save.get())

Ah, yes, this way is simpler! Thanks.

-Zhao




^ permalink raw reply	[flat|nested] 58+ messages in thread

* Re: [RFC 07/26] subprojects: Add winapi crate
  2025-08-07 13:17   ` Paolo Bonzini
@ 2025-08-08  7:33     ` Zhao Liu
  0 siblings, 0 replies; 58+ messages in thread
From: Zhao Liu @ 2025-08-08  7:33 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Peter Xu, David Hildenbrand, Philippe Mathieu-Daudé,
	Manos Pitsidianakis, Alex Bennée, Thomas Huth, Junjie Mao,
	qemu-devel, qemu-rust, Dapeng Mi, Chuanxiao Dong

On Thu, Aug 07, 2025 at 03:17:52PM +0200, Paolo Bonzini wrote:
> Date: Thu, 7 Aug 2025 15:17:52 +0200
> From: Paolo Bonzini <pbonzini@redhat.com>
> Subject: Re: [RFC 07/26] subprojects: Add winapi crate
> 
> On 8/7/25 14:30, Zhao Liu wrote:
> > Signed-off-by: Zhao Liu <zhao1.liu@intel.com>
> 
> I created https://github.com/rust-vmm/vm-memory/pull/335 so this is not
> needed.

Nice! This is better than what I had previously considered for fixing
Windows compilation. Thanks.

-Zhao



^ permalink raw reply	[flat|nested] 58+ messages in thread

* Re: [RFC 10/26] subprojects/vm-memory: Patch vm-memory for QEMU memory backend
  2025-08-08  8:17     ` Zhao Liu
@ 2025-08-08  8:17       ` Paolo Bonzini
  2025-08-08  8:51         ` Zhao Liu
  0 siblings, 1 reply; 58+ messages in thread
From: Paolo Bonzini @ 2025-08-08  8:17 UTC (permalink / raw)
  To: Zhao Liu, Hanna Reitz
  Cc: Peter Xu, David Hildenbrand, Philippe Mathieu-Daudé,
	Manos Pitsidianakis, Alex Bennée, Thomas Huth, Junjie Mao,
	qemu-devel, qemu-rust, Dapeng Mi, Chuanxiao Dong

On 8/8/25 10:17, Zhao Liu wrote:
> (+Hanna: I would like to align with Hanna on 0002.diff patch :-))
> 
> On Thu, Aug 07, 2025 at 03:59:26PM +0200, Paolo Bonzini wrote:
>> Date: Thu, 7 Aug 2025 15:59:26 +0200
>> From: Paolo Bonzini <pbonzini@redhat.com>
>> Subject: Re: [RFC 10/26] subprojects/vm-memory: Patch vm-memory for QEMU
>>   memory backend
>>
>> On 8/7/25 14:30, Zhao Liu wrote:
>>> Add 2 patches to support QEMU memory backend implementation.
>>>
>>> Signed-off-by: Zhao Liu <zhao1.liu@intel.com>
>>> ---
>>>    .../packagefiles/vm-memory-0.16-rs/0001.diff  |  81 +++++++++++++
>>>    .../packagefiles/vm-memory-0.16-rs/0002.diff  | 111 ++++++++++++++++++
>>>    subprojects/vm-memory-0.16-rs.wrap            |   2 +
>>>    3 files changed, 194 insertions(+)
>>>    create mode 100644 subprojects/packagefiles/vm-memory-0.16-rs/0001.diff
>>>    create mode 100644 subprojects/packagefiles/vm-memory-0.16-rs/0002.diff
>>>
>>> diff --git a/subprojects/packagefiles/vm-memory-0.16-rs/0001.diff b/subprojects/packagefiles/vm-memory-0.16-rs/0001.diff
>>> new file mode 100644
>>> index 000000000000..037193108d45
>>> --- /dev/null
>>> +++ b/subprojects/packagefiles/vm-memory-0.16-rs/0001.diff
>>> @@ -0,0 +1,81 @@
>>> +From 298f8ba019b2fe159fa943e0ae4dfd3c83ee64e0 Mon Sep 17 00:00:00 2001
>>> +From: Zhao Liu <zhao1.liu@intel.com>
>>> +Date: Wed, 6 Aug 2025 11:31:11 +0800
>>> +Subject: [PATCH 1/2] guest_memory: Add a marker trait to implement
>>> + Bytes<GuestAddress> for GuestMemory
>>
>> This was a bit surprising.  Maybe this is something where GuestMemory needs
>> some extra flexibility.
> 
> At least, the default GuestMemory::try_access() need to re-implement in
> QEMU, and this is because GuestMemory::iter() doesn't fit for QEMU's
> case, and GuestMemory::to_region_addr() also needs adjustment to support
> complete translation.
> 
> For details,
> 
> 1) iter() - QEMU has implemented the two-level "page" walk in
>     `phys_page_find`, which is more efficient than linear iteration.
> 
> 2) to_region_addr() - its function signature is:
> 
>      fn to_region_addr(
>          &self,
> 	addr: GuestAddress
>      ) -> Option<(&Self::R, MemoryRegionAddress)>;
> 
> but QEMU currently wants:
> 
>      fn translate(
>          &self,
>          addr: GuestAddress,
>          len: GuestUsize,
>          is_write: bool,
>      ) -> Option<(&MemoryRegionSection, MemoryRegionAddress, GuestUsize)>
> 
> `is_write` is mainly about IOMMU (and read-only case, but that could be
> worked around, I think).
> 
> And the 3rd member `GuestUsize` of (&MemoryRegionSection,
> MemoryRegionAddress, GuestUsize) indicates the remaining size, which is
> used to detect cross-region case. Maybe this `GuestUsize` is not
> necessary in its return, since we can check the size of `MemoryRegionSection`
> later. But this would be a bit repetitive.
> 
> But at least, this marker trait is acceptable, right? :-)
> 
> The marker trait for GuestMemoryRegion is introduced at commit 66ff347
> ("refactor: use matches! instead of to_string() for tests").
> 
>>> @@ -0,0 +1,111 @@
>>> +From 2af7ea12a589fde619690e5060c01710cb6f2e0e Mon Sep 17 00:00:00 2001
>>> +From: Zhao Liu <zhao1.liu@intel.com>
>>> +Date: Wed, 6 Aug 2025 14:27:14 +0800
>>> +Subject: [PATCH 2/2] guest_memory: Add is_write argument for
>>> + GuestMemory::try_access()
>>
>> This should be fine.  But Hanna is also working on IOMMU so maybe this won't
>> be needed!
> 
> I'm not sure what method could align with Hanna's design. If there's
> another interface/method, I can have a try.

For example she already has similar fixes in 
https://github.com/rust-vmm/vm-memory/pull/327:

https://github.com/rust-vmm/vm-memory/pull/327/commits/9bcd5ac9b9ae37d1fb421f86f0aff310411933af
    Bytes: Fix read() and write()

    read() and write() must not ignore the `count` parameter: The
    mappings passed into the `try_access()` closure are only valid for up
    to `count` bytes, not more.

https://github.com/rust-vmm/vm-memory/pull/327/commits/2b83c72be656e5d46b83cb3a66d580e56cf33d5b
     Bytes: Do not use to_region_addr()

     When we switch to a (potentially) virtual memory model [...]
     the one memory-region-referencing part we are going to keep is
     `try_access()` [...] switch `Bytes::load()` and `store()` from using
     `to_region_addr()` to `try_access()`.

With some luck, your custom implementation of Bytes<GuestAddress> is not 
needed once vm-memory supports iommu.

Paolo



^ permalink raw reply	[flat|nested] 58+ messages in thread

* Re: [RFC 10/26] subprojects/vm-memory: Patch vm-memory for QEMU memory backend
  2025-08-07 13:59   ` Paolo Bonzini
@ 2025-08-08  8:17     ` Zhao Liu
  2025-08-08  8:17       ` Paolo Bonzini
  0 siblings, 1 reply; 58+ messages in thread
From: Zhao Liu @ 2025-08-08  8:17 UTC (permalink / raw)
  To: Paolo Bonzini, Hanna Reitz
  Cc: Peter Xu, David Hildenbrand, Philippe Mathieu-Daudé,
	Manos Pitsidianakis, Alex Bennée, Thomas Huth, Junjie Mao,
	qemu-devel, qemu-rust, Dapeng Mi, Chuanxiao Dong

(+Hanna: I would like to align with Hanna on 0002.diff patch :-))

On Thu, Aug 07, 2025 at 03:59:26PM +0200, Paolo Bonzini wrote:
> Date: Thu, 7 Aug 2025 15:59:26 +0200
> From: Paolo Bonzini <pbonzini@redhat.com>
> Subject: Re: [RFC 10/26] subprojects/vm-memory: Patch vm-memory for QEMU
>  memory backend
> 
> On 8/7/25 14:30, Zhao Liu wrote:
> > Add 2 patches to support QEMU memory backend implementation.
> > 
> > Signed-off-by: Zhao Liu <zhao1.liu@intel.com>
> > ---
> >   .../packagefiles/vm-memory-0.16-rs/0001.diff  |  81 +++++++++++++
> >   .../packagefiles/vm-memory-0.16-rs/0002.diff  | 111 ++++++++++++++++++
> >   subprojects/vm-memory-0.16-rs.wrap            |   2 +
> >   3 files changed, 194 insertions(+)
> >   create mode 100644 subprojects/packagefiles/vm-memory-0.16-rs/0001.diff
> >   create mode 100644 subprojects/packagefiles/vm-memory-0.16-rs/0002.diff
> > 
> > diff --git a/subprojects/packagefiles/vm-memory-0.16-rs/0001.diff b/subprojects/packagefiles/vm-memory-0.16-rs/0001.diff
> > new file mode 100644
> > index 000000000000..037193108d45
> > --- /dev/null
> > +++ b/subprojects/packagefiles/vm-memory-0.16-rs/0001.diff
> > @@ -0,0 +1,81 @@
> > +From 298f8ba019b2fe159fa943e0ae4dfd3c83ee64e0 Mon Sep 17 00:00:00 2001
> > +From: Zhao Liu <zhao1.liu@intel.com>
> > +Date: Wed, 6 Aug 2025 11:31:11 +0800
> > +Subject: [PATCH 1/2] guest_memory: Add a marker trait to implement
> > + Bytes<GuestAddress> for GuestMemory
> 
> This was a bit surprising.  Maybe this is something where GuestMemory needs
> some extra flexibility.

At least, the default GuestMemory::try_access() needs to be re-implemented
in QEMU, because GuestMemory::iter() doesn't fit QEMU's case, and
GuestMemory::to_region_addr() also needs adjustment to support complete
translation.

For details,

1) iter() - QEMU has implemented the two-level "page" walk in
   `phys_page_find`, which is more efficient than linear iteration.

2) to_region_addr() - its function signature is:

    fn to_region_addr(
        &self,
	addr: GuestAddress
    ) -> Option<(&Self::R, MemoryRegionAddress)>;

but QEMU currently wants:

    fn translate(
        &self,
        addr: GuestAddress,
        len: GuestUsize,
        is_write: bool,
    ) -> Option<(&MemoryRegionSection, MemoryRegionAddress, GuestUsize)>

`is_write` is mainly about the IOMMU (and the read-only case, but that could
be worked around, I think).

And the 3rd member `GuestUsize` of (&MemoryRegionSection,
MemoryRegionAddress, GuestUsize) indicates the remaining size, which is
used to detect the cross-region case. Maybe this `GuestUsize` is not
necessary in the return value, since we can check the size of the
`MemoryRegionSection` later. But this would be a bit repetitive.
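
As a rough illustration of what such a translate() enables, here is a small,
self-contained sketch of a try_access()-style loop; Section, translate() and
the memory layout below are made-up stand-ins, not the real QEMU types:

#[derive(Clone, Copy)]
struct GuestAddress(u64);

struct Section {
    base: u64,
    len: u64,
}

// Pretend guest layout: two back-to-back 4 KiB sections starting at 0.
fn translate(addr: GuestAddress, _is_write: bool) -> Option<(Section, u64, u64)> {
    let section_base = addr.0 & !0xfff;
    if section_base >= 0x2000 {
        return None; // unmapped
    }
    let offset = addr.0 - section_base;
    let remaining = 0x1000 - offset;
    Some((Section { base: section_base, len: 0x1000 }, offset, remaining))
}

// Walk the access chunk by chunk, calling `f(section, offset_in_section, len)`.
fn try_access<F>(addr: GuestAddress, count: u64, mut f: F) -> Option<u64>
where
    F: FnMut(&Section, u64, u64),
{
    let mut done = 0;
    while done < count {
        let cur = GuestAddress(addr.0 + done);
        let (section, offset, remaining) = translate(cur, true)?;
        let chunk = remaining.min(count - done);
        f(&section, offset, chunk);
        done += chunk;
    }
    Some(done)
}

fn main() {
    // A 6 KiB access starting at 0x800 crosses the section boundary at 0x1000.
    let total = try_access(GuestAddress(0x800), 0x1800, |s, off, len| {
        println!("section@{:#x}/{:#x}: off={:#x} len={:#x}", s.base, s.len, off, len);
    });
    assert_eq!(total, Some(0x1800));
}

Here the remaining size returned by translate() is what lets the loop clamp
each chunk at the section boundary, so the cross-region case falls out
naturally.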

But at least, this marker trait is acceptable, right? :-)

The marker trait for GuestMemoryRegion is introduced at commit 66ff347
("refactor: use matches! instead of to_string() for tests").

> > @@ -0,0 +1,111 @@
> > +From 2af7ea12a589fde619690e5060c01710cb6f2e0e Mon Sep 17 00:00:00 2001
> > +From: Zhao Liu <zhao1.liu@intel.com>
> > +Date: Wed, 6 Aug 2025 14:27:14 +0800
> > +Subject: [PATCH 2/2] guest_memory: Add is_write argument for
> > + GuestMemory::try_access()
> 
> This should be fine.  But Hanna is also working on IOMMU so maybe this won't
> be needed!

I'm not sure what method could align with Hanna's design. If there's
another interface/method, I can have a try.

Or, should I just ignore all the IOMMU code paths directly? This may require
decoupling some C code. For example, I can split flatview_do_translate
into a non-IOMMU case and an IOMMU case.

Thanks,
Zhao



^ permalink raw reply	[flat|nested] 58+ messages in thread

* Re: [RFC 12/26] rcu: Make rcu_read_lock & rcu_read_unlock not inline
  2025-08-07 13:54   ` Paolo Bonzini
@ 2025-08-08  8:19     ` Zhao Liu
  0 siblings, 0 replies; 58+ messages in thread
From: Zhao Liu @ 2025-08-08  8:19 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Peter Xu, David Hildenbrand, Philippe Mathieu-Daudé,
	Manos Pitsidianakis, Alex Bennée, Thomas Huth, Junjie Mao,
	qemu-devel, qemu-rust, Dapeng Mi, Chuanxiao Dong

On Thu, Aug 07, 2025 at 03:54:06PM +0200, Paolo Bonzini wrote:
> Date: Thu, 7 Aug 2025 15:54:06 +0200
> From: Paolo Bonzini <pbonzini@redhat.com>
> Subject: Re: [RFC 12/26] rcu: Make rcu_read_lock & rcu_read_unlock not
>  inline
> 
> On 8/7/25 14:30, Zhao Liu wrote:
> > Make rcu_read_lock & rcu_read_unlock not inline, then bindgen could
> > generate the bindings.
> > 
> > Suggested-by: Paolo Bonzini <pbonzini@redhat.com>
> > Signed-off-by: Zhao Liu <zhao1.liu@intel.com>
> 
> Either this, or keep it inline and add wrappers rust_rcu_read_lock() and
> rust_rcu_read_unlock().

I see, the wrappers are better - we can keep the performance gain from
inlining on the C side.

Thanks,
Zhao



^ permalink raw reply	[flat|nested] 58+ messages in thread

* Re: [RFC 10/26] subprojects/vm-memory: Patch vm-memory for QEMU memory backend
  2025-08-08  8:17       ` Paolo Bonzini
@ 2025-08-08  8:51         ` Zhao Liu
  0 siblings, 0 replies; 58+ messages in thread
From: Zhao Liu @ 2025-08-08  8:51 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Hanna Reitz, Peter Xu, David Hildenbrand,
	Philippe Mathieu-Daudé, Manos Pitsidianakis,
	Alex Bennée, Thomas Huth, Junjie Mao, qemu-devel, qemu-rust,
	Dapeng Mi, Chuanxiao Dong

On Fri, Aug 08, 2025 at 10:17:51AM +0200, Paolo Bonzini wrote:
> Date: Fri, 8 Aug 2025 10:17:51 +0200
> From: Paolo Bonzini <pbonzini@redhat.com>
> Subject: Re: [RFC 10/26] subprojects/vm-memory: Patch vm-memory for QEMU
>  memory backend
> 
> On 8/8/25 10:17, Zhao Liu wrote:
> > (+Hanna: I would like to align with Hanna on 0002.diff patch :-))
> > 
> > On Thu, Aug 07, 2025 at 03:59:26PM +0200, Paolo Bonzini wrote:
> > > Date: Thu, 7 Aug 2025 15:59:26 +0200
> > > From: Paolo Bonzini <pbonzini@redhat.com>
> > > Subject: Re: [RFC 10/26] subprojects/vm-memory: Patch vm-memory for QEMU
> > >   memory backend
> > > 
> > > On 8/7/25 14:30, Zhao Liu wrote:
> > > > Add 2 patches to support QEMU memory backend implementation.
> > > > 
> > > > Signed-off-by: Zhao Liu <zhao1.liu@intel.com>
> > > > ---
> > > >    .../packagefiles/vm-memory-0.16-rs/0001.diff  |  81 +++++++++++++
> > > >    .../packagefiles/vm-memory-0.16-rs/0002.diff  | 111 ++++++++++++++++++
> > > >    subprojects/vm-memory-0.16-rs.wrap            |   2 +
> > > >    3 files changed, 194 insertions(+)
> > > >    create mode 100644 subprojects/packagefiles/vm-memory-0.16-rs/0001.diff
> > > >    create mode 100644 subprojects/packagefiles/vm-memory-0.16-rs/0002.diff
> > > > 
> > > > diff --git a/subprojects/packagefiles/vm-memory-0.16-rs/0001.diff b/subprojects/packagefiles/vm-memory-0.16-rs/0001.diff
> > > > new file mode 100644
> > > > index 000000000000..037193108d45
> > > > --- /dev/null
> > > > +++ b/subprojects/packagefiles/vm-memory-0.16-rs/0001.diff
> > > > @@ -0,0 +1,81 @@
> > > > +From 298f8ba019b2fe159fa943e0ae4dfd3c83ee64e0 Mon Sep 17 00:00:00 2001
> > > > +From: Zhao Liu <zhao1.liu@intel.com>
> > > > +Date: Wed, 6 Aug 2025 11:31:11 +0800
> > > > +Subject: [PATCH 1/2] guest_memory: Add a marker trait to implement
> > > > + Bytes<GuestAddress> for GuestMemory
> > > 
> > > This was a bit surprising.  Maybe this is something where GuestMemory needs
> > > some extra flexibility.
> > 
> > At least, the default GuestMemory::try_access() need to re-implement in
> > QEMU, and this is because GuestMemory::iter() doesn't fit for QEMU's
> > case, and GuestMemory::to_region_addr() also needs adjustment to support
> > complete translation.
> > 
> > For details,
> > 
> > 1) iter() - QEMU has implemented the two-level "page" walk in
> >     `phys_page_find`, which is more efficient than linear iteration.
> > 
> > 2) to_region_addr() - its function signature is:
> > 
> >      fn to_region_addr(
> >          &self,
> > 	addr: GuestAddress
> >      ) -> Option<(&Self::R, MemoryRegionAddress)>;
> > 
> > but QEMU currently wants:
> > 
> >      fn translate(
> >          &self,
> >          addr: GuestAddress,
> >          len: GuestUsize,
> >          is_write: bool,
> >      ) -> Option<(&MemoryRegionSection, MemoryRegionAddress, GuestUsize)>
> > 
> > `is_write` is mainly about IOMMU (and read-only case, but that could be
> > worked around, I think).
> > 
> > And the 3rd member `GuestUsize` of (&MemoryRegionSection,
> > MemoryRegionAddress, GuestUsize) indicates the remaining size, which is
> > used to detect cross-region case. Maybe this `GuestUsize` is not
> > necessary in its return, since we can check the size of `MemoryRegionSection`
> > later. But this would be a bit repetitive.
> > 
> > But at least, this marker trait is acceptable, right? :-)
> > 
> > The marker trait for GuestMemoryRegion is introduced at commit 66ff347
> > ("refactor: use matches! instead of to_string() for tests").
> > 
> > > > @@ -0,0 +1,111 @@
> > > > +From 2af7ea12a589fde619690e5060c01710cb6f2e0e Mon Sep 17 00:00:00 2001
> > > > +From: Zhao Liu <zhao1.liu@intel.com>
> > > > +Date: Wed, 6 Aug 2025 14:27:14 +0800
> > > > +Subject: [PATCH 2/2] guest_memory: Add is_write argument for
> > > > + GuestMemory::try_access()
> > > 
> > > This should be fine.  But Hanna is also working on IOMMU so maybe this won't
> > > be needed!
> > 
> > I'm not sure what method could align with Hanna's design. If there's
> > another interface/method, I can have a try.
> 
> For example she already has similar fixes in
> https://github.com/rust-vmm/vm-memory/pull/327:
> 
> https://github.com/rust-vmm/vm-memory/pull/327/commits/9bcd5ac9b9ae37d1fb421f86f0aff310411933af
>    Bytes: Fix read() and write()
> 
>    read() and write() must not ignore the `count` parameter: The
>    mappings passed into the `try_access()` closure are only valid for up
>    to `count` bytes, not more.
> 
> https://github.com/rust-vmm/vm-memory/pull/327/commits/2b83c72be656e5d46b83cb3a66d580e56cf33d5b
>     Bytes: Do not use to_region_addr()
> 
>     When we switch to a (potentially) virtual memory model [...]
>     the one memory-region-referencing part we are going to keep is
>     `try_access()` [...] switch `Bytes::load()` and `store()` from using
>     `to_region_addr()` to `try_access()`.
> 
> With some luck, your custom implementation of Bytes<GuestAddress> is not
> needed once vm-memory supports iommu.

Nice! I took a quick look, and these patches seem to match what
this RFC wants. I'll give them a try.

Thanks,
Zhao



^ permalink raw reply	[flat|nested] 58+ messages in thread

* Re: [RFC 13/26] rust: Add RCU bindings
  2025-08-07 13:38     ` Paolo Bonzini
@ 2025-08-09  7:21       ` Zhao Liu
  2025-08-09  9:13         ` Paolo Bonzini
  2025-08-12 10:31       ` Zhao Liu
  1 sibling, 1 reply; 58+ messages in thread
From: Zhao Liu @ 2025-08-09  7:21 UTC (permalink / raw)
  To: Paolo Bonzini, Manos Pitsidianakis, Peter Xu, David Hildenbrand,
	Philippe Mathieu-Daudé, Alex Bennée, Thomas Huth,
	Junjie Mao, qemu-devel, qemu-rust, Dapeng Mi, Chuanxiao Dong

Thank you both!!

Please correct me if I'm wrong :).

On Thu, Aug 07, 2025 at 03:38:52PM +0200, Paolo Bonzini wrote:
> Date: Thu, 7 Aug 2025 15:38:52 +0200
> From: Paolo Bonzini <pbonzini@redhat.com>
> Subject: Re: [RFC 13/26] rust: Add RCU bindings
> 
> On 8/7/25 14:29, Manos Pitsidianakis wrote:
> 
> > > +//! Bindings for `rcu_read_lock` and `rcu_read_unlock`.
> > > +//! More details about RCU in QEMU, please refer docs/devel/rcu.rst.
> > > +
> > 
> > How about a RAII guard type? e.g. RCUGuard and runs `rcu_read_unlock` on Drop.
> 
> Clippy says Rcu not RCU.  :)
> 
> You're right, not just because it's nice but also because it bounds the
> dereference of the FlatView.  Something like this builds on top of the guard
> object:
> 
> pub struct RcuCell<T> {
>     data: AtomicPtr<T>
> }
> 
> impl<T> RcuCell<T> {
>     pub fn raw_get(&self) -> *mut T {
>         self.data.load(Ordering::Acquire)

I understand this tries to provide an equivalent to qatomic_rcu_read.
Ordering::Acquire is especially necessary, because on the C side
qatomic_rcu_read has a barrier.

>     }
> 
>     pub fn get<'g>(&self, _: &'g RcuGuard) -> Option<&'g T> {
>         unsafe {
>             self.raw_get().as_ref()
>         }
>     }
> }
> 
> Using this is a bit ugly, because you need transmute, but it's isolated:
> 
> impl AddressSpace {
>    pub fn get_flatview(&self, rcu: &'g Guard) -> &'g FlatView {

IIUC, this lifetime is using the "branded type" pattern as ParentInit.

>        let flatp = unsafe {
>            std::mem::transmute::<&*mut FlatView, &RcuCell<FlatView>>(
>                &self.0.as_ptr().current_map)
>        };
>        flatp.get(rcu)
>    }
> }
> 
> impl GuestAddressSpace for AddressSpace {
>     fn memory(&self) -> Self::T {
>         let rcu = RcuGuard::guard();
>         FlatViewRefGuard::new(self.get_flatview(rcu))
>     }
> }

With RcuGuard, we are actually calling qatomic_rcu_read inside the
RCU critical section, which greatly enhances safety. This is a good
design for the RCU binding.

> > Destructors are not guaranteed to run or run only once, but the former
> > should happen when things go wrong e.g. crashes/aborts. You can add a
> > flag in the RCUGuard to make sure Drop runs unlock only once (since it
> > takes &mut and not ownership)
> 
> Yeah I think many things would go wrong if Arc could run its drop
> implementation more than once.
 
Good point.

In addition, about rcu_read_lock_held(): I noticed that on the C side
there are many comments saying "Called within RCU critical section" but
without any check.

So I wonder whether we should do some check for the RCU critical section,
just like the BQL check via `assert!(bql_locked())`. Maybe we can have an
RCU debug feature to cover all these checks.

Thanks,
Zhao




^ permalink raw reply	[flat|nested] 58+ messages in thread

* Re: [RFC 13/26] rust: Add RCU bindings
  2025-08-09  7:21       ` Zhao Liu
@ 2025-08-09  9:13         ` Paolo Bonzini
  2025-08-09  9:26           ` Manos Pitsidianakis
  0 siblings, 1 reply; 58+ messages in thread
From: Paolo Bonzini @ 2025-08-09  9:13 UTC (permalink / raw)
  To: Zhao Liu
  Cc: Manos Pitsidianakis, Peter Xu, David Hildenbrand,
	Philippe Mathieu-Daudé, Alex Bennée, Thomas Huth,
	Junjie Mao, qemu-devel, qemu-rust, Dapeng Mi, Chuanxiao Dong

[-- Attachment #1: Type: text/plain, Size: 1387 bytes --]

Il sab 9 ago 2025, 09:00 Zhao Liu <zhao1.liu@intel.com> ha scritto:

> >     pub fn get<'g>(&self, _: &'g RcuGuard) -> Option<&'g T> {
> >         unsafe {
> >             self.raw_get().as_ref()
> >         }
> >     }
> > }
> >
> > Using this is a bit ugly, because you need transmute, but it's isolated:
> >
> > impl AddressSpace {
> >    pub fn get_flatview(&self, rcu: &'g Guard) -> &'g FlatView {
>
> IIUC, this lifetime is using the "branded type" pattern as ParentInit.
>

No, it's much simpler (that one uses the combination of for<'identity> and
PhantomData as explained in the comment). It says that the lifetime of the
returned reference cannot exceed the guard. It's just like

pub fn get_item(&self, array: &'g [u8]) -> &'g u8 {
   &array[self.0]
}

Except that the guard is only there to limit the lifetime and not to hold
data.

> In addition, about rcu_read_lock_held(): I noticed that on the C side
> there are many comments saying "Called within RCU critical section" but
> without any check.
>
> So I wonder whether we should do some check for the RCU critical section,
> just like the BQL check via `assert!(bql_locked())`. Maybe we can have an
> RCU debug feature to cover all these checks.
>

In Rust you would just pass a &RcuGuard into the function (or store it in a
struct) for a zero-cost assertion that you are in the RCU critical section.
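
As a minimal, runnable sketch of that pattern (the lock/unlock stubs below
stand in for the real C bindings, only so the example runs on its own):

use std::sync::atomic::{AtomicUsize, Ordering};

// Stand-in for the C-side RCU state, only so the sketch is self-contained.
static RCU_DEPTH: AtomicUsize = AtomicUsize::new(0);

pub struct RcuGuard(());

impl RcuGuard {
    pub fn new() -> Self {
        RCU_DEPTH.fetch_add(1, Ordering::SeqCst); // rcu_read_lock() in QEMU
        RcuGuard(())
    }
}

impl Drop for RcuGuard {
    fn drop(&mut self) {
        RCU_DEPTH.fetch_sub(1, Ordering::SeqCst); // rcu_read_unlock() in QEMU
    }
}

// Taking &RcuGuard is the zero-cost assertion: this function cannot be
// called outside a read-side critical section, with no runtime check.
fn read_rcu_protected(_rcu: &RcuGuard) -> u32 {
    // An optional debug-only sanity check can still be layered on top.
    debug_assert!(RCU_DEPTH.load(Ordering::SeqCst) > 0);
    42
}

fn main() {
    let rcu = RcuGuard::new();
    assert_eq!(read_rcu_protected(&rcu), 42);
    // rcu_read_unlock() runs automatically when `rcu` goes out of scope.
}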

Paolo


> Thanks,
> Zhao
>
>
>


^ permalink raw reply	[flat|nested] 58+ messages in thread

* Re: [RFC 13/26] rust: Add RCU bindings
  2025-08-09  9:13         ` Paolo Bonzini
@ 2025-08-09  9:26           ` Manos Pitsidianakis
  2025-08-12 10:43             ` Zhao Liu
  0 siblings, 1 reply; 58+ messages in thread
From: Manos Pitsidianakis @ 2025-08-09  9:26 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Zhao Liu, Peter Xu, David Hildenbrand,
	Philippe Mathieu-Daudé, Alex Bennée, Thomas Huth,
	Junjie Mao, qemu-devel, qemu-rust, Dapeng Mi, Chuanxiao Dong

[-- Attachment #1: Type: text/plain, Size: 1596 bytes --]

On Sat, 9 Aug 2025, 12:13 Paolo Bonzini, <pbonzini@redhat.com> wrote:

>
>
> Il sab 9 ago 2025, 09:00 Zhao Liu <zhao1.liu@intel.com> ha scritto:
>
>> >     pub fn get<'g>(&self, _: &'g RcuGuard) -> Option<&'g T> {
>> >         unsafe {
>> >             self.raw_get().as_ref()
>> >         }
>> >     }
>> > }
>> >
>> > Using this is a bit ugly, because you need transmute, but it's isolated:
>> >
>> > impl AddressSpace {
>> >    pub fn get_flatview(&self, rcu: &'g Guard) -> &'g FlatView {
>>
>> IIUC, this lifetime is using the "branded type" pattern as ParentInit.
>>
>
> No, it's much simpler (that one uses the combination of for<'identity> and
> PhantomData as explained in the comment). It says that the lifetime of the
> returned reference cannot exceed the guard. It's just like
>
> pub fn get_item(&self, array: &'g [u8]) -> &'g u8 {
>    &array[self.0]
> }
>
> Except that the guard is only there to limit the lifetime and not to hold
> data.
>
>> In addition, about rcu_read_lock_held(), I thought at C side, there're
>> so many comments are saying "Called within RCU critical section" but
>> without any check.
>>
>> So I wonder whether we should do some check for RCU critical section,
>> just like bql check via `assert!(bql_locked())`. Maybe we can have a
>> Rcu debug feature to cover all these checks.
>>
>
> In Rust you would just pass a &RcuGuard into the function (or store it in
> a struct) for a zero-cost assertion that you are in the RCU critical
> section.
>

Agreed. You could put debug_asserts in as a sanity check, for good measure.

> Paolo
>
>
>> Thanks,
>> Zhao
>>
>>
>>

[-- Attachment #2: Type: text/html, Size: 3428 bytes --]

^ permalink raw reply	[flat|nested] 58+ messages in thread

* Re: [RFC 13/26] rust: Add RCU bindings
  2025-08-07 13:38     ` Paolo Bonzini
  2025-08-09  7:21       ` Zhao Liu
@ 2025-08-12 10:31       ` Zhao Liu
  1 sibling, 0 replies; 58+ messages in thread
From: Zhao Liu @ 2025-08-12 10:31 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Manos Pitsidianakis, Peter Xu, David Hildenbrand,
	Philippe Mathieu-Daudé, Alex Bennée, Thomas Huth,
	Junjie Mao, qemu-devel, qemu-rust, Dapeng Mi, Chuanxiao Dong

On Thu, Aug 07, 2025 at 03:38:52PM +0200, Paolo Bonzini wrote:
> Date: Thu, 7 Aug 2025 15:38:52 +0200
> From: Paolo Bonzini <pbonzini@redhat.com>
> Subject: Re: [RFC 13/26] rust: Add RCU bindings
> 
> On 8/7/25 14:29, Manos Pitsidianakis wrote:
> 
> > > +//! Bindings for `rcu_read_lock` and `rcu_read_unlock`.
> > > +//! More details about RCU in QEMU, please refer docs/devel/rcu.rst.
> > > +
> > 
> > How about a RAII guard type? e.g. RCUGuard and runs `rcu_read_unlock` on Drop.
> 
> Clippy says Rcu not RCU.  :)
> 
> You're right, not just because it's nice but also because it bounds the
> dereference of the FlatView.  Something like this build on top of the guard
> object:
> 
> pub struct RcuCell<T> {
>     data: AtomicPtr<T>
> }
> 
> impl<T> RcuCell {
>     pub fn raw_get(&self) -> *mut T {
>         self.data.load(Ordering::Acquire)
>     }
> 
>     pub fn get<'g>(&self, _: &'g RcuGuard) -> Option<&'g T> {
>         unsafe {
>             self.raw_get().as_ref()
>         }
>     }
> }

I just implemented a simple RcuGuard (this doesn't consider the reference
count or flag; I would like to talk more about this at the end of this
reply):

pub struct RcuGuard;

impl RcuGuard {
    pub fn new() -> Self {
        unsafe { bindings::rcu_read_lock() };
        Self
    }
}

impl Drop for RcuGuard {
    fn drop(&mut self) {
        unsafe { bindings::rcu_read_unlock() };
    }
}
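
A quick usage sketch for the guard above (just to show the intended
scoping, not the final API):

fn read_rcu_protected_data() {
    let _rcu = RcuGuard::new();  // rcu_read_lock()
    // ... access RCU-protected data while `_rcu` is alive ...
}                                // rcu_read_unlock() runs once, via Drop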

> Using this is a bit ugly, because you need transmute, but it's isolated:
> 
> impl AddressSpace {
>    pub fn get_flatview(&self, rcu: &'g Guard) -> &'g FlatView {
>        let flatp = unsafe {
>            std::mem::transmute::<&*mut FlatView, &RcuCell<FlatView>>(
>                &self.0.as_ptr().current_map)
>        };
>        flatp.get(rcu)
>    }
> }
>
> impl GuestAddressSpace for AddressSpace {
>     fn memory(&self) -> Self::T {
>         let rcu = RcuGuard::guard();
>         FlatViewRefGuard::new(self.get_flatview(rcu))
>     }
> }

Why not use a constructor RcuCell::new() to replace transmute()? Then
we only need to work with the pointer, without touching the underlying memory.

impl<T> RcuCell<T> {
    pub fn new(p: *mut T) -> Self {
        Self {
            data: AtomicPtr::new(p),
        }
    }
}

Then we could:

impl Deref for AddressSpace {
    type Target = bindings::AddressSpace;

    fn deref(&self) -> &Self::Target {
        unsafe { &*self.0.as_ptr() }
    }
}

impl AddressSpace {
    pub fn get_flatview<'g>(&self, rcu: &'g RcuGuard) -> &'g FlatView {
        let flatp = RcuCell::new(self.deref().current_map);
        unsafe { FlatView::from_raw(flatp.get(rcu).unwrap()) }
    }
}

impl GuestAddressSpace for AddressSpace {
    fn memory(&self) -> Self::T {
        let rcu = RcuGuard::new();
        FlatViewRefGuard::new(self.get_flatview(&rcu)).unwrap()
    }
}

> > Destructors are not guaranteed to run or run only once, but the former
> > should happen when things go wrong e.g. crashes/aborts. You can add a
> > flag in the RCUGuard to make sure Drop runs unlock only once (since it
> > takes &mut and not ownership)
> 
> Yeah I think many things would go wrong if Arc could run its drop
> implementation more than once.

Wait, isn't RCU held in a thread-local way? We shouldn't share an RCU
guard/cell via Arc<>. Furthermore, it seems necessary to introduce
`NotThreadSafe` into QEMU from the kernel.

pub type NotThreadSafe = PhantomData<*mut ()>;

Then we could have stronger restrictions on the RCU stuff, just like
the kernel's RCU:

pub struct RcuGuard(NotThreadSafe);

Maybe we can also add `NotThreadSafe` to RcuCell. But the lifetime
already restricts its use.
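
As a self-contained sanity check of that marker (the RcuGuard below is a
stand-in, not the real binding): PhantomData<*mut ()> removes the auto
Send/Sync impls, so such a guard cannot leave the thread that took the
read lock.

use std::marker::PhantomData;

pub type NotThreadSafe = PhantomData<*mut ()>;

pub struct RcuGuard(NotThreadSafe);

fn assert_send<T: Send>() {}

fn main() {
    assert_send::<u32>();
    // assert_send::<RcuGuard>(); // fails to compile: `*mut ()` is !Send
}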

As for the other consideration, about destructors being "guaranteed to run"
(the crashes/aborts case): the QEMU process will stop and the OS will clean
everything up, so we don't need to care about the state of RCU then, right?

Thanks,
Zhao



^ permalink raw reply	[flat|nested] 58+ messages in thread

* Re: [RFC 13/26] rust: Add RCU bindings
  2025-08-09  9:26           ` Manos Pitsidianakis
@ 2025-08-12 10:43             ` Zhao Liu
  0 siblings, 0 replies; 58+ messages in thread
From: Zhao Liu @ 2025-08-12 10:43 UTC (permalink / raw)
  To: Paolo Bonzini, Manos Pitsidianakis
  Cc: Peter Xu, David Hildenbrand, Philippe Mathieu-Daudé,
	Alex Bennée, Thomas Huth, Junjie Mao, qemu-devel, qemu-rust,
	Dapeng Mi, Chuanxiao Dong

> >> > impl AddressSpace {
> >> >    pub fn get_flatview(&self, rcu: &'g Guard) -> &'g FlatView {
> >>
> >> IIUC, this lifetime is using the "branded type" pattern as ParentInit.
> >>
> >
> > No, it's much simpler (that one uses the combination of for<'identity> and
> > PhantomData as explained in the comment). It says that the lifetime of the
> > returned reference cannot exceed the guard. It's just like
> >
> > pub fn get_item(&self, array: &'g [u8]) -> &'g u8 {
> >    &array[self.0]
> > }
> >
> > Except that the guard is only there to limit the lifetime and not to hold
> > data.

I see. It's clear to me now. Thank you!

> > In addition, about rcu_read_lock_held(), I thought at C side, there're
> >> so many comments are saying "Called within RCU critical section" but
> >> without any check.
> >>
> >> So I wonder whether we should do some check for RCU critical section,
> >> just like bql check via `assert!(bql_locked())`. Maybe we can have a
> >> Rcu debug feature to cover all these checks.
> >>
> >
> > In Rust you would just pass a &RcuGuard into the function (or store it in
> > a struct) for a zero-cost assertion that you are in the RCU critical
> > section.
> >
> 
> Agreed. You could put debug_asserts for sanity check for good measure.

Thanks!

Then I see: the most RCU-critical part is accessing the FlatView through
the AddressSpace.

Here, requiring RCU via the function signature (&RcuGuard) is very convenient:

pub fn get_flatview<'g>(&self, rcu: &'g RcuGuard) -> &'g FlatView;

As for the methods of FlatView itself and the lower-level interfaces
(e.g., the detailed write & read methods), although RCU critical sections
are also required there, we don't need to worry about them because the
upper-level access already ensures the RCU lock is held. Of course, it
would be better to add &RcuGuard to some structures and check them with
debug_assert/assert.
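
For example, a small sketch of the "add &RcuGuard to some structures"
idea, with stand-in types (not the series' actual FlatViewRefGuard):
holding the borrowed guard is compile-time proof that the structure is
only usable inside the critical section.

pub struct RcuGuard;   // stand-in for the real guard
pub struct FlatView;   // stand-in for the real FlatView

pub struct FlatViewRef<'g> {
    pub view: &'g FlatView,
    _rcu: &'g RcuGuard, // proof that we are inside a read-side section
}

impl<'g> FlatViewRef<'g> {
    pub fn new(view: &'g FlatView, rcu: &'g RcuGuard) -> Self {
        FlatViewRef { view, _rcu: rcu }
    }
}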

Regards,
Zhao



^ permalink raw reply	[flat|nested] 58+ messages in thread

* Re: [RFC 16/26] memory: Make flatview_do_translate() return a pointer to MemoryRegionSection
  2025-08-07 13:57   ` Paolo Bonzini
@ 2025-08-12 15:39     ` Zhao Liu
  2025-08-12 15:42       ` Manos Pitsidianakis
  2025-08-12 19:23       ` Paolo Bonzini
  0 siblings, 2 replies; 58+ messages in thread
From: Zhao Liu @ 2025-08-12 15:39 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Peter Xu, David Hildenbrand, Philippe Mathieu-Daudé,
	Manos Pitsidianakis, Alex Bennée, Thomas Huth, Junjie Mao,
	qemu-devel, qemu-rust, Dapeng Mi, Chuanxiao Dong

On Thu, Aug 07, 2025 at 03:57:17PM +0200, Paolo Bonzini wrote:
> Date: Thu, 7 Aug 2025 15:57:17 +0200
> From: Paolo Bonzini <pbonzini@redhat.com>
> Subject: Re: [RFC 16/26] memory: Make flatview_do_translate() return a
>  pointer to MemoryRegionSection
> 
> On 8/7/25 14:30, Zhao Liu wrote:
> > Rust side will use cell::Opaque<> to hide details of C structure, and
> > this could help avoid the direct operation on C memory from Rust side.
> > 
> > Therefore, it's necessary to wrap a translation binding and make it only
> > return the pointer to MemoryRegionSection, instead of the copy.
> > 
> > As the first step, make flatview_do_translate return a pointer to
> > MemoryRegionSection, so that we can build a wrapper based on it.
> 
> Independent of Rust, doing the copy as late as possible is good, but make it
> return a "const MemoryRegionSection*" so that there's no risk of overwriting
> data.

Yes, const MemoryRegionSection* is helpful...

> Hopefully this does not show a bigger problem!

...then we will get `*const bindings::MemoryRegionSection` from
flatview_translate_section().

This is mainly about how to construct Opaque<T> from `*const T`:

impl FlatView {
    fn translate(
        &self,
        addr: GuestAddress,
        len: GuestUsize,
        is_write: bool,
    ) -> Option<(&MemoryRegionSection, MemoryRegionAddress, GuestUsize)> {
        ...
        let ptr = unsafe {
            flatview_translate_section(
                self.as_mut_ptr(),
                addr.raw_value(),
                &mut raw_addr,
                &mut remain,
                is_write,
                MEMTXATTRS_UNSPECIFIED,
            )
        };

        ...

------> // Note here, Opaque<>::from_raw() requires *mut T.
	// And we can definitely convert *const T to *mut T!
        let s = unsafe { <FlatView as GuestMemory>::R::from_raw(ptr as *mut _) };
        ...
    }

But looking closer at Opaque<>, it has 2 safe methods: as_mut_ptr() &
raw_get().

These 2 methods indicate that the T pointed to by Opaque<T> is mutable,
which conflicts with the original `*const bindings::MemoryRegionSection`.

So from this point of view, it seems unsafe to use Opaque<> in this case.

To address this, I think we need to:
 - add rich comments stating that this MemoryRegionSection is actually
   immutable;
 - modify other C functions to accept `const MemoryRegionSection *` as
   an argument.

What do you think?

Thanks,
Zhao



^ permalink raw reply	[flat|nested] 58+ messages in thread

* Re: [RFC 16/26] memory: Make flatview_do_translate() return a pointer to MemoryRegionSection
  2025-08-12 15:39     ` Zhao Liu
@ 2025-08-12 15:42       ` Manos Pitsidianakis
  2025-08-13 15:12         ` Zhao Liu
  2025-08-12 19:23       ` Paolo Bonzini
  1 sibling, 1 reply; 58+ messages in thread
From: Manos Pitsidianakis @ 2025-08-12 15:42 UTC (permalink / raw)
  To: Zhao Liu
  Cc: Paolo Bonzini, Peter Xu, David Hildenbrand,
	Philippe Mathieu-Daudé, Alex Bennée, Thomas Huth,
	Junjie Mao, open list:ARM SMMU <qemu-arm@nongnu.org>,
	qemu-rust, Dapeng Mi, Chuanxiao Dong

[-- Attachment #1: Type: text/plain, Size: 2822 bytes --]

On Tue, 12 Aug 2025, 18:17 Zhao Liu, <zhao1.liu@intel.com> wrote:

> On Thu, Aug 07, 2025 at 03:57:17PM +0200, Paolo Bonzini wrote:
> > Date: Thu, 7 Aug 2025 15:57:17 +0200
> > From: Paolo Bonzini <pbonzini@redhat.com>
> > Subject: Re: [RFC 16/26] memory: Make flatview_do_translate() return a
> >  pointer to MemoryRegionSection
> >
> > On 8/7/25 14:30, Zhao Liu wrote:
> > > Rust side will use cell::Opaque<> to hide details of C structure, and
> > > this could help avoid the direct operation on C memory from Rust side.
> > >
> > > Therefore, it's necessary to wrap a translation binding and make it
> only
> > > return the pointer to MemoryRegionSection, instead of the copy.
> > >
> > > As the first step, make flatview_do_translate return a pointer to
> > > MemoryRegionSection, so that we can build a wrapper based on it.
> >
> > Independent of Rust, doing the copy as late as possible is good, but
> make it
> > return a "const MemoryRegionSection*" so that there's no risk of
> overwriting
> > data.
>
> Yes, const MemoryRegionSection* is helpful...
>
> > Hopefully this does not show a bigger problem!
>
> ...then we will get `*const bindings::MemoryRegionSection` from
> flatview_translate_section().
>
> This is mainly about how to construct Opaque<T> from `*const T`:
>
> impl FlatView {
>     fn translate(
>         &self,
>         addr: GuestAddress,
>         len: GuestUsize,
>         is_write: bool,
>     ) -> Option<(&MemoryRegionSection, MemoryRegionAddress, GuestUsize)> {
>         ...
>         let ptr = unsafe {
>             flatview_translate_section(
>                 self.as_mut_ptr(),
>                 addr.raw_value(),
>                 &mut raw_addr,
>                 &mut remain,
>                 is_write,
>                 MEMTXATTRS_UNSPECIFIED,
>             )
>         };
>
>         ...
>
> ------> // Note here, Opaque<>::from_raw() requires *mut T.
>         // And we can definitely convert *cont T to *mut T!
>         let s = unsafe { <FlatView as GuestMemory>::R::from_raw(ptr as
> *mut _) };
>         ...
>     }
>
> But look closer to Opaque<>, it has 2 safe methods: as_mut_ptr() &
> raw_get().
>
> These 2 methods indicate that the T pointed by Opaque<T> is mutable,
> which has the conflict with the original `*const
> bindings::MemoryRegionSection`.
>
> So from this point, it seems unsafe to use Opaque<> on this case.
>

Yes, the usual approach is to have a Ref and a RefMut type e.g. Opaque and
OpaqueMut, and the OpaqueMut type can dereference immutably as an Opaque.

See std::cell::{Ref, RefMut} for inspiration.
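
A minimal, self-contained sketch of that split (OpaqueRef/OpaqueMut are
only names suggested in this thread): the shared-only wrapper hands out
&T and never exposes an as_mut_ptr(); a mutable variant could add &mut
access while still coercing to the shared view.

use std::marker::PhantomData;
use std::ops::Deref;
use std::ptr::NonNull;

/// Shared-only view of a C object.
pub struct OpaqueRef<'a, T> {
    ptr: NonNull<T>,
    _marker: PhantomData<&'a T>,
}

impl<'a, T> OpaqueRef<'a, T> {
    /// # Safety
    /// `ptr` must be valid for shared reads for the whole lifetime 'a.
    pub unsafe fn new(ptr: *const T) -> Option<Self> {
        NonNull::new(ptr.cast_mut()).map(|ptr| OpaqueRef {
            ptr,
            _marker: PhantomData,
        })
    }
}

impl<'a, T> Deref for OpaqueRef<'a, T> {
    type Target = T;
    fn deref(&self) -> &T {
        // SAFETY: guaranteed by the contract of OpaqueRef::new().
        unsafe { self.ptr.as_ref() }
    }
}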


> To address this, I think we need:
>  - rich comments about this MemoryRegionSection is actually immutable.
>  - modify other C functions to accept `const *MemoryRegionSection` as
>    argument.
>
> What do you think?
>
> Thanks,
> Zhao
>
>

[-- Attachment #2: Type: text/html, Size: 4084 bytes --]

^ permalink raw reply	[flat|nested] 58+ messages in thread

* Re: [RFC 16/26] memory: Make flatview_do_translate() return a pointer to MemoryRegionSection
  2025-08-12 15:39     ` Zhao Liu
  2025-08-12 15:42       ` Manos Pitsidianakis
@ 2025-08-12 19:23       ` Paolo Bonzini
  2025-08-13 15:10         ` Zhao Liu
  1 sibling, 1 reply; 58+ messages in thread
From: Paolo Bonzini @ 2025-08-12 19:23 UTC (permalink / raw)
  To: Zhao Liu
  Cc: Peter Xu, David Hildenbrand, Philippe Mathieu-Daudé,
	Manos Pitsidianakis, Alex Bennée, Thomas Huth, Junjie Mao,
	qemu-devel, qemu-rust, Dapeng Mi, Chuanxiao Dong

[-- Attachment #1: Type: text/plain, Size: 895 bytes --]

Il mar 12 ago 2025, 17:17 Zhao Liu <zhao1.liu@intel.com> ha scritto:

> But look closer to Opaque<>, it has 2 safe methods: as_mut_ptr() &
> raw_get().
>
> These 2 methods indicate that the T pointed by Opaque<T> is mutable,
> which has the conflict with the original `*const
> bindings::MemoryRegionSection`.
>
> So from this point, it seems unsafe to use Opaque<> on this case.
>

Yes, it's similar to NonNull<>. I am not sure that you need Opaque<> here;
since the pointer is const, maybe you can just dereference it to a
&bindings::MemoryRegionSection. Is it useful to have the Opaque<> wrapper
here?

> To address this, I think we need:
>  - rich comments about this MemoryRegionSection is actually immutable.
>  - modify other C functions to accept `const *MemoryRegionSection` as
>    argument.
>

Yes, adding const is useful in this case.

Paolo

> What do you think?
>
> Thanks,
> Zhao
>
>

[-- Attachment #2: Type: text/html, Size: 1920 bytes --]

^ permalink raw reply	[flat|nested] 58+ messages in thread

* Re: [RFC 24/26] rust/memory: Provide AddressSpace bindings
  2025-08-07 13:50   ` Paolo Bonzini
@ 2025-08-13 14:47     ` Zhao Liu
  0 siblings, 0 replies; 58+ messages in thread
From: Zhao Liu @ 2025-08-13 14:47 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Peter Xu, David Hildenbrand, Philippe Mathieu-Daudé,
	Manos Pitsidianakis, Alex Bennée, Thomas Huth, Junjie Mao,
	qemu-devel, qemu-rust, Dapeng Mi, Chuanxiao Dong

On Thu, Aug 07, 2025 at 03:50:45PM +0200, Paolo Bonzini wrote:
> Date: Thu, 7 Aug 2025 15:50:45 +0200
> From: Paolo Bonzini <pbonzini@redhat.com>
> Subject: Re: [RFC 24/26] rust/memory: Provide AddressSpace bindings
> 
> On 8/7/25 14:30, Zhao Liu wrote:
> > +impl GuestAddressSpace for AddressSpace {
> > +    type M = FlatView;
> > +    type T = FlatViewRefGuard;
> > +
> > +    /// Get the memory of the [`AddressSpace`].
> > +    ///
> > +    /// This function retrieves the [`FlatView`] for the current
> > +    /// [`AddressSpace`].  And it should be called from an RCU
> > +    /// critical section.  The returned [`FlatView`] is used for
> > +    /// short-term memory access.
> > +    ///
> > +    /// Note, this function method may **panic** if [`FlatView`] is
> > +    /// being distroying.  Fo this case, we should consider to providing
> > +    /// the more stable binding with [`bindings::address_space_get_flatview`].
> > +    fn memory(&self) -> Self::T {
> > +        let flatp = unsafe { address_space_to_flatview(self.0.as_mut_ptr()) };
> > +        FlatViewRefGuard::new(unsafe { Self::M::from_raw(flatp) }).expect(
> > +            "Failed to clone FlatViewRefGuard: the FlatView may have been destroyed concurrently.",
> > +        )
> 
> This is essentially address_space_get_flatview().  You can call it directly,
> or you need to loop if FlatViewRefGuard finds a zero reference count.

Yes. Here address_space_get_flatview() is better.

> > +    }
> > +}
> > +
> > +impl AddressSpace {
> > +    /// The write interface of `AddressSpace`.
> > +    ///
> > +    /// This function is similar to `address_space_write` in C side.
> > +    ///
> > +    /// But it assumes the memory attributes is MEMTXATTRS_UNSPECIFIED.
> > +    pub fn write(&self, buf: &[u8], addr: GuestAddress) -> Result<usize> {
> > +        rcu_read_lock();
> > +        let r = self.memory().deref().write(buf, addr);
> > +        rcu_read_unlock();
> 
> self.memory() must not need rcu_read_lock/unlock around it, they should be
> called by the memory() function itself.

Ah, then RCU just ensures the &FlatView is valid, since we increment its
ref count during the RCU critical section.

But RCU will no longer cover the entire write process!

Combining this with the RcuGuard proposal in the reply to patch 13:

https://lore.kernel.org/qemu-devel/aJsX9HH%2FJwblZEYO@intel.com/

impl AddressSpace {
    pub fn get_flatview<'g>(&self, rcu: &'g RcuGuard) -> &'g FlatView {
        let flatp = RcuCell::new(self.deref().current_map);
        unsafe { FlatView::from_raw(flatp.get(rcu).unwrap()) }
    }
}

impl GuestAddressSpace for AddressSpace {
    fn memory(&self) -> Self::T {
        let rcu = RcuGuard::new();
        FlatViewRefGuard::new(self.get_flatview(&rcu)).unwrap()
    }
}

The rcu guard is dropped at the end of memory(), so a `&'g RcuGuard` is
not enough for this case.
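
One possible direction (only a fragment, reusing the names sketched
earlier in this thread rather than the series' actual API) is to let
write() itself own the guard, so the critical section spans both the
FlatView lookup and the data copy:

impl AddressSpace {
    pub fn write(&self, buf: &[u8], addr: GuestAddress) -> Result<usize> {
        let rcu = RcuGuard::new();          // lock for the whole access
        let view = self.get_flatview(&rcu); // &FlatView bounded by `rcu`
        view.write(buf, addr)               // unlock when `rcu` drops
            .map_err(guest_mem_err_to_qemu_err)
    }
}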

> > +        r.map_err(guest_mem_err_to_qemu_err)
> > +    }
> 
> I think it's ok to return the vm-memory error.  Ultimately, the error will
> be either ignored or turned into a device error condition, but I don't think
> it's ever going to become an Error**.

Sure, and for HPET, the error isn't handled except by panicking...

> > +    /// The store interface of `AddressSpace`.
> > +    ///
> > +    /// This function is similar to `address_space_st{size}` in C side.
> > +    ///
> > +    /// But it only assumes @val follows target-endian by default. So ensure
> > +    /// the endian of `val` aligned with target, before using this method.
> 
> QEMU is trying to get rid of target endianness.  We should use the vm-memory
> BeNN and LeNN as much as possible.  It would be great if you could write
> either

Yes, this is the ideal way. 

This will involve changes in both vm-memory and QEMU:

* vm-memory: we need to implement the AtomicAccess trait for BeNN and LeNN
  in vm-memory (but this is not a big deal).

* For QEMU,

Now to handle AtomicAccess, I've abstracted a uniform C store() binding
in patch 21:

MemTxResult section_rust_store(MemoryRegionSection *section,
                               hwaddr mr_offset, const uint8_t *buf,
                               MemTxAttrs attrs, hwaddr len);

If you haven't looked at this, you can see the comments in:

impl Bytes<MemoryRegionAddress> for MemoryRegionSection {
    fn store<T: AtomicAccess>(
        &self,
        val: T,
        addr: MemoryRegionAddress,
        _order: Ordering,
    ) -> GuestMemoryResult<()> {}
}

section_rust_store() supports target endianness by default, like
address_space_st(). If we want to add LE & BE support, I think we have
2 options:
 1) Add another endianness argument to section_rust_store(), but this also
    requires passing endianness information through the Bytes trait. Either
    we need to implement Bytes<(MemoryRegionAddress, DeviceEndian)>, or we
    need to add endianness info to AtomicAccess.

 2) Simplify section_rust_store() further: ignore the endianness handling
    and just store the data from *buf to MMIO/RAM. In this case, we would
    need to make adjust_endianness() do nothing:
    section_rust_store()
     -> memory_region_dispatch_write()
      -> adjust_endianness()

    However, adjust_endianness() is still very useful, especially since in
    QEMU the caller of store() doesn't know whether the access targets MMIO
    or RAM.

So I prefer 1) for now, and maybe it's better to add endianness info to
AtomicAccess.
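
A rough, self-contained sketch of what option 1) could mean for the
access type (everything here - the Endianness enum, the extra trait and
the local Le32 stand-in - is hypothetical; neither vm-memory nor QEMU
defines these today):

#[derive(Clone, Copy)]
pub enum Endianness {
    Little,
    Big,
    Native,
}

/// Hypothetical extension: each access type declares its endianness so
/// the store path can forward it to the C helper.
pub trait EndianAccess {
    const ENDIANNESS: Endianness;
    fn to_bytes(self) -> Vec<u8>;
}

/// Local stand-in for vm-memory's Le32 wrapper.
#[derive(Clone, Copy)]
pub struct Le32(pub u32);

impl EndianAccess for Le32 {
    const ENDIANNESS: Endianness = Endianness::Little;
    fn to_bytes(self) -> Vec<u8> {
        self.0.to_le_bytes().to_vec()
    }
}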

>     ADDRESS_SPACE_MEMORY.store::<Le32>(addr, 42);
> 
> or
> 
>     let n = Le32(42);
>     ADDRESS_SPACE_MEMORY.store(addr, n);
> 
> but not
> 
>     ADDRESS_SPACE_MEMORY.store(addr, 42);

Yes, this way is similar to my previous attempt... but I don't know
what's the best way to handle LE/BE, so this RFC just omitted these cases
and chose the simplest case - native endianness.

> (Also I've not looked at the patches closely enough, but wouldn't store()
> use *host* endianness? Same in patch 23).

It seems QEMU's interfaces don't use *host* endianness?

I'm referring to address_space_ld*() & address_space_st*(), and their doc
says:

/* address_space_ld*: load from an address space
 * address_space_st*: store to an address space
 *
 * These functions perform a load or store of the byte, word,
 * longword or quad to the specified address within the AddressSpace.
 * The _le suffixed functions treat the data as little endian;
 * _be indicates big endian; no suffix indicates "same endianness
 * as guest CPU".
 *
 * The "guest CPU endianness" accessors are deprecated for use outside
 * target-* code; devices should be CPU-agnostic and use either the LE
 * or the BE accessors.
 */

I also considered host endianness. But host endianness doesn't align
with the C side... the C side only supports little/big/native (target/guest)
endianness.

So, do you think the Rust side should consider host endianness? Or maybe
we can add DEVICE_HOST_ENDIAN to device_endian. But is there a way to
check what the specific endianness of the host is?

Thanks,
Zhao



^ permalink raw reply	[flat|nested] 58+ messages in thread

* Re: [RFC 25/26] rust/memory: Add binding to check target endian
  2025-08-07 12:44   ` Manos Pitsidianakis
@ 2025-08-13 14:48     ` Zhao Liu
  0 siblings, 0 replies; 58+ messages in thread
From: Zhao Liu @ 2025-08-13 14:48 UTC (permalink / raw)
  To: Manos Pitsidianakis
  Cc: Paolo Bonzini, Peter Xu, David Hildenbrand,
	Philippe Mathieu-Daudé, Alex Bennée, Thomas Huth,
	Junjie Mao, qemu-devel, qemu-rust, Dapeng Mi, Chuanxiao Dong

On Thu, Aug 07, 2025 at 03:44:57PM +0300, Manos Pitsidianakis wrote:
> Date: Thu, 7 Aug 2025 15:44:57 +0300
> From: Manos Pitsidianakis <manos.pitsidianakis@linaro.org>
> Subject: Re: [RFC 25/26] rust/memory: Add binding to check target endian
> 
> On Thu, Aug 7, 2025 at 3:10 PM Zhao Liu <zhao1.liu@intel.com> wrote:
> >
> > Add a binding (target_is_big_endian()) to check whether target is big
> > endian or not. This could help user to adjust endian before calling
> 
> s/adjust endian/adjust endianness/
> 
> > AddresssSpace::store() or after calling AddressSpace::load().
> 
> No strong preference, but maybe we can keep the same name as C,
> target_big_endian()? Just for consistency.
> 
> Either way:
> 
> Reviewed-by: Manos Pitsidianakis <manos.pitsidianakis@linaro.org>

Thanks! If the next version still supports target-endian, I'll keep
the same name.

Regards,
Zhao



^ permalink raw reply	[flat|nested] 58+ messages in thread

* Re: [RFC 00/26] rust/memory: Integrate the vm-memory API from rust-vmm
  2025-08-07 14:13 ` Paolo Bonzini
@ 2025-08-13 14:56   ` Zhao Liu
  0 siblings, 0 replies; 58+ messages in thread
From: Zhao Liu @ 2025-08-13 14:56 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Peter Xu, David Hildenbrand, Philippe Mathieu-Daudé,
	Manos Pitsidianakis, Alex Bennée, Thomas Huth, Junjie Mao,
	Hanna Reitz, qemu-devel, qemu-rust, Dapeng Mi, Chuanxiao Dong

On Thu, Aug 07, 2025 at 04:13:00PM +0200, Paolo Bonzini wrote:
> Date: Thu, 7 Aug 2025 16:13:00 +0200
> From: Paolo Bonzini <pbonzini@redhat.com>
> Subject: Re: [RFC 00/26] rust/memory: Integrate the vm-memory API from
>  rust-vmm
> 
> [Adding Hanna who's been working on vm-memory]
> 
> On 8/7/25 14:30, Zhao Liu wrote:
> > Hi,
> > 
> > This RFC series explores integrating the vm-memory API into QEMU's
> > rust/memory bindings.
> > 
> > Thanks to Paolo and Manos's many suggestions and feedback, I have
> > resolved many issues over the past few months, but there are still
> > some open issues that I would like to discuss.
> > 
> > This series finally provides the following safe interfaces in Rust:
> >   * AddressSpace::write in Rust <=> address_space_write in C
> >     - **but only** supports MEMTXATTRS_UNSPECIFIED
> > 
> >   * AddressSpace::read in Rust <=> address_space_read_full in C
> >     - **but only** supports MEMTXATTRS_UNSPECIFIED.
> > 
> >   * AddressSpace::store in Rust <=> address_space_st{size} in C
> >     - **but only** supports MEMTXATTRS_UNSPECIFIED and native endian.
> > 
> >   * AddressSpace::load in Rust <=> address_space_ld{size} in C
> >     - **but only** supports MEMTXATTRS_UNSPECIFIED and native endian.
> 
> Endianness can be handled by BeNN and LeNN.

About endianness, I have more thoughts in the reply to patch 24.

> For MemTxAttrs we can use
> Bytes<(GuestAddress, MemTxAttrs)> (a variant on something you mention
> below).
> 
> Thinking out loud: maybe if we do our implementation in Bytes<(GuestAddress,
> MemTxAttrs)>, and Bytes<GuestAddress>::try_access wraps Bytes<(GuestAddress,
> MemTxAttrs)>, your downstream-only changes are not needed anymore?

With IOMMU support, the downstream patches are not necessary :-).

> > And this series involves changes mainly to these three parts:
> >   * NEW QEMU memory APIs wrapper at C side.
> >   * Extra changes for vm-memory (downstream for now).
> >   * NEW QEMU memory bindings/APIs based on vm-memory at Rust side.
> > 
> > Although the number of line changes appears to be significant, more
> > than half of them are documentation and comments.
> Yep, thanks for writing them.
> 
> This is a good RFC, it's complete enough to show the challenges and the
> things that are missing stand up easily.

Thank you for your quick feedback.

> I'll look into what vm-memory is missing so that we can simplify QEMU's code
> further, but the basic traits match which is nice.  And the final outcome,
> which is essentially:
> 
>     let (addr, value) = (GuestAddress(self.fsb >> 32),
>                          Le32(self.fsb as u32));
>     ADDRESS_SPACE_MEMORY.memory().store(addr, value);
> 
> is as clean as it can be, if anything a bit wordy due to the GuestAddress
> "newtype" wrapper.  (If we decide it's too bad, the convenience methods in
> AddressSpace can automatically do the GuestAddress conversion...)

Yes! For such a single function, there are a lot of changes.

Thanks,
Zhao



^ permalink raw reply	[flat|nested] 58+ messages in thread

* Re: [RFC 16/26] memory: Make flatview_do_translate() return a pointer to MemoryRegionSection
  2025-08-12 19:23       ` Paolo Bonzini
@ 2025-08-13 15:10         ` Zhao Liu
  0 siblings, 0 replies; 58+ messages in thread
From: Zhao Liu @ 2025-08-13 15:10 UTC (permalink / raw)
  To: Paolo Bonzini, Manos Pitsidianakis
  Cc: Peter Xu, David Hildenbrand, Philippe Mathieu-Daudé,
	Alex Bennée, Thomas Huth, Junjie Mao, qemu-devel, qemu-rust,
	Dapeng Mi, Chuanxiao Dong

On Tue, Aug 12, 2025 at 09:23:59PM +0200, Paolo Bonzini wrote:
> Date: Tue, 12 Aug 2025 21:23:59 +0200
> From: Paolo Bonzini <pbonzini@redhat.com>
> Subject: Re: [RFC 16/26] memory: Make flatview_do_translate() return a
>  pointer to MemoryRegionSection
> 
> Il mar 12 ago 2025, 17:17 Zhao Liu <zhao1.liu@intel.com> ha scritto:
> 
> > But look closer to Opaque<>, it has 2 safe methods: as_mut_ptr() &
> > raw_get().
> >
> > These 2 methods indicate that the T pointed by Opaque<T> is mutable,
> > which has the conflict with the original `*const
> > bindings::MemoryRegionSection`.
> >
> > So from this point, it seems unsafe to use Opaque<> on this case.
> >
> 
> Yes, it's similar to NonNull<>. I am not sure that you need Opaque<> here;
> since the pointer is const, maybe you can just dereference it to a
> &bindings::MemoryRegionSection. Is it useful to have the Opaque<> wrapper
> here?

I agree. Opaque<> is not necessary here. We can have a simple wrapper:

pub struct MemoryRegionSection(*const bindings::MemoryRegionSection);

or

pub struct MemoryRegionSection(NonNull<bindings::MemoryRegionSection>);

with immutable-only use.

In the future, if there are more similar cases, then we can have an
OpaqueRef<> like Manos suggested.
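
A self-contained sketch of that NonNull variant (CSection stands in for
bindings::MemoryRegionSection so the snippet compiles on its own); the
point is that only shared access is ever handed out:

use std::ptr::NonNull;

pub struct CSection; // stand-in for bindings::MemoryRegionSection

pub struct MemoryRegionSection(NonNull<CSection>);

impl MemoryRegionSection {
    /// # Safety
    /// `ptr` must be non-null and valid for shared reads while the
    /// wrapper (and any reference obtained from it) is in use.
    pub unsafe fn from_raw(ptr: *const CSection) -> Self {
        MemoryRegionSection(NonNull::new(ptr.cast_mut()).expect("non-null"))
    }

    /// No as_mut_ptr()/raw_get() counterpart: the section stays immutable.
    pub fn as_ref(&self) -> &CSection {
        // SAFETY: guaranteed by the contract of from_raw().
        unsafe { self.0.as_ref() }
    }
}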

Thanks,
Zhao



^ permalink raw reply	[flat|nested] 58+ messages in thread

* Re: [RFC 16/26] memory: Make flatview_do_translate() return a pointer to MemoryRegionSection
  2025-08-12 15:42       ` Manos Pitsidianakis
@ 2025-08-13 15:12         ` Zhao Liu
  0 siblings, 0 replies; 58+ messages in thread
From: Zhao Liu @ 2025-08-13 15:12 UTC (permalink / raw)
  To: Manos Pitsidianakis
  Cc: Paolo Bonzini, Peter Xu, David Hildenbrand,
	Philippe Mathieu-Daudé, Alex Bennée, Thomas Huth,
	Junjie Mao, open list:ARM SMMU <qemu-arm@nongnu.org>,
	qemu-rust, Dapeng Mi, Chuanxiao Dong

> Yes, the usual approach is to have a Ref and a RefMut type e.g. Opaque and
> OpaqueMut, and the OpaqueMut type can dereference immutably as an Opaque.
> 
> See std::cell::{Ref, RefMut} for inspiration.
> 

Thanks! I'll drop Opaque directly for this case. If there are more similar
cases, then we can have an OpaqueRef<>.

Regards,
Zhao



^ permalink raw reply	[flat|nested] 58+ messages in thread

end of thread, other threads:[~2025-08-13 14:52 UTC | newest]

Thread overview: 58+ messages (download: mbox.gz follow: Atom feed
-- links below jump to the message on this page --
2025-08-07 12:30 [RFC 00/26] rust/memory: Integrate the vm-memory API from rust-vmm Zhao Liu
2025-08-07 12:30 ` [RFC 01/26] rust/hpet: Fix the error caused by vm-memory Zhao Liu
2025-08-07 13:52   ` Paolo Bonzini
2025-08-08  7:27     ` Zhao Liu
2025-08-07 12:30 ` [RFC 02/26] rust/cargo: Add the support for vm-memory Zhao Liu
2025-08-07 12:30 ` [RFC 03/26] subprojects: Add thiserror-impl crate Zhao Liu
2025-08-07 12:30 ` [RFC 04/26] subprojects: Add thiserror crate Zhao Liu
2025-08-07 12:30 ` [RFC 05/26] subprojects: Add winapi-i686-pc-windows-gnu crate Zhao Liu
2025-08-07 12:30 ` [RFC 06/26] subprojects: Add winapi-x86_64-pc-windows-gnu crate Zhao Liu
2025-08-07 12:30 ` [RFC 07/26] subprojects: Add winapi crate Zhao Liu
2025-08-07 13:17   ` Paolo Bonzini
2025-08-08  7:33     ` Zhao Liu
2025-08-07 12:30 ` [RFC 08/26] subprojects: Add vm-memory crate Zhao Liu
2025-08-07 12:30 ` [RFC 09/26] rust: Add vm-memory in meson Zhao Liu
2025-08-07 12:30 ` [RFC 10/26] subprojects/vm-memory: Patch vm-memory for QEMU memory backend Zhao Liu
2025-08-07 13:59   ` Paolo Bonzini
2025-08-08  8:17     ` Zhao Liu
2025-08-08  8:17       ` Paolo Bonzini
2025-08-08  8:51         ` Zhao Liu
2025-08-07 12:30 ` [RFC 11/26] rust/cargo: Specify the patched vm-memory crate Zhao Liu
2025-08-07 12:30 ` [RFC 12/26] rcu: Make rcu_read_lock & rcu_read_unlock not inline Zhao Liu
2025-08-07 13:54   ` Paolo Bonzini
2025-08-08  8:19     ` Zhao Liu
2025-08-07 12:30 ` [RFC 13/26] rust: Add RCU bindings Zhao Liu
2025-08-07 12:29   ` Manos Pitsidianakis
2025-08-07 13:38     ` Paolo Bonzini
2025-08-09  7:21       ` Zhao Liu
2025-08-09  9:13         ` Paolo Bonzini
2025-08-09  9:26           ` Manos Pitsidianakis
2025-08-12 10:43             ` Zhao Liu
2025-08-12 10:31       ` Zhao Liu
2025-08-07 12:30 ` [RFC 14/26] memory: Expose interfaces about Flatview reference count to Rust side Zhao Liu
2025-08-07 12:30 ` [RFC 15/26] memory: Rename address_space_lookup_region and expose it " Zhao Liu
2025-08-07 12:30 ` [RFC 16/26] memory: Make flatview_do_translate() return a pointer to MemoryRegionSection Zhao Liu
2025-08-07 13:57   ` Paolo Bonzini
2025-08-12 15:39     ` Zhao Liu
2025-08-12 15:42       ` Manos Pitsidianakis
2025-08-13 15:12         ` Zhao Liu
2025-08-12 19:23       ` Paolo Bonzini
2025-08-13 15:10         ` Zhao Liu
2025-08-07 12:30 ` [RFC 17/26] memory: Add a translation helper to return MemoryRegionSection Zhao Liu
2025-08-07 12:30 ` [RFC 18/26] memory: Rename flatview_access_allowed() to memory_region_access_allowed() Zhao Liu
2025-08-07 12:41   ` Manos Pitsidianakis
2025-08-07 12:30 ` [RFC 19/26] memory: Add MemoryRegionSection based misc helpers Zhao Liu
2025-08-07 12:30 ` [RFC 20/26] memory: Add wrappers of intermediate steps for read/write Zhao Liu
2025-08-07 12:30 ` [RFC 21/26] memory: Add store/load interfaces for Rust side Zhao Liu
2025-08-07 12:30 ` [RFC 22/26] rust/memory: Implement vm_memory::GuestMemoryRegion for MemoryRegionSection Zhao Liu
2025-08-07 12:30 ` [RFC 23/26] rust/memory: Implement vm_memory::GuestMemory for FlatView Zhao Liu
2025-08-07 12:30 ` [RFC 24/26] rust/memory: Provide AddressSpace bindings Zhao Liu
2025-08-07 13:50   ` Paolo Bonzini
2025-08-13 14:47     ` Zhao Liu
2025-08-07 12:30 ` [RFC 25/26] rust/memory: Add binding to check target endian Zhao Liu
2025-08-07 12:44   ` Manos Pitsidianakis
2025-08-13 14:48     ` Zhao Liu
2025-08-07 12:30 ` [RFC 26/26] rust/hpet: Use safe binding to access address space Zhao Liu
2025-08-07 12:42 ` [RFC 00/26] rust/memory: Integrate the vm-memory API from rust-vmm Zhao Liu
2025-08-07 14:13 ` Paolo Bonzini
2025-08-13 14:56   ` Zhao Liu
