* [PATCH 01/22] rust/migration: Add Sync implementation for Migratable<>
2025-11-13 5:19 [PATCH 00/22] rust/hpet: Move towards lockless IO, partly Zhao Liu
@ 2025-11-13 5:19 ` Zhao Liu
2025-11-13 5:19 ` [PATCH 02/22] rust/migration: Fix missing name in the VMSD of Migratable<> Zhao Liu
` (20 subsequent siblings)
21 siblings, 0 replies; 30+ messages in thread
From: Zhao Liu @ 2025-11-13 5:19 UTC (permalink / raw)
To: Paolo Bonzini, Manos Pitsidianakis, Marc-André Lureau
Cc: Igor Mammedov, qemu-devel, qemu-rust, Zhao Liu
It's common to define MemoryRegionOps<T> and VMStateDescription<T> as
static variables, and this requires T to implement Sync.
Migratable<T> is usually embedded in device state, and the device state
is Sync only if all of its fields are, so it's necessary to implement
Sync for Migratable<T>.
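To see why, here is a minimal, self-contained sketch in plain Rust (not
QEMU code; `Wrapper` only stands in for Migratable<T>): interior
mutability suppresses the automatic Sync impl, and a static of such a
type only compiles once Sync is implemented by hand:

    use std::cell::UnsafeCell;

    // Stand-in for Migratable<T>: the UnsafeCell suppresses auto-Sync,
    // just as Migratable<T>'s BQL-protected interior does.
    struct Wrapper<T>(UnsafeCell<T>);

    // SAFETY: illustration only; in QEMU the BQL serializes access.
    unsafe impl<T: Sync> Sync for Wrapper<T> {}

    // Without the impl above, rustc rejects this static:
    // "`UnsafeCell<u64>` cannot be shared between threads safely".
    static GLOBAL: Wrapper<u64> = Wrapper(UnsafeCell::new(0));

    fn main() {
        // SAFETY: single-threaded demo, no concurrent access.
        unsafe { println!("{}", *GLOBAL.0.get()) }
    }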
Signed-off-by: Zhao Liu <zhao1.liu@intel.com>
---
rust/migration/src/migratable.rs | 3 +++
1 file changed, 3 insertions(+)
diff --git a/rust/migration/src/migratable.rs b/rust/migration/src/migratable.rs
index ded6fe8f4a6c..5c47c7d1c2fa 100644
--- a/rust/migration/src/migratable.rs
+++ b/rust/migration/src/migratable.rs
@@ -340,6 +340,9 @@ pub struct Migratable<T: ToMigrationStateShared> {
runtime_state: T,
}
+// SAFETY: the migration_state asserts via `BqlCell` that the BQL is taken.
+unsafe impl<T: ToMigrationStateShared + Sync> Sync for Migratable<T> {}
+
impl<T: ToMigrationStateShared> std::ops::Deref for Migratable<T> {
type Target = T;
--
2.34.1
* [PATCH 02/22] rust/migration: Fix missing name in the VMSD of Migratable<>
2025-11-13 5:19 [PATCH 00/22] rust/hpet: Move towards lockless IO, partly Zhao Liu
2025-11-13 5:19 ` [PATCH 01/22] rust/migration: Add Sync implementation for Migratable<> Zhao Liu
@ 2025-11-13 5:19 ` Zhao Liu
2025-11-13 5:19 ` [PATCH 03/22] rust/migration: Check name field in VMStateDescriptionBuilder Zhao Liu
` (19 subsequent siblings)
21 siblings, 0 replies; 30+ messages in thread
From: Zhao Liu @ 2025-11-13 5:19 UTC (permalink / raw)
To: Paolo Bonzini, Manos Pitsidianakis, Marc-André Lureau
Cc: Igor Mammedov, qemu-devel, qemu-rust, Zhao Liu
The VMStateDescription of Migratable<T> was missing the name field, and
this caused a segmentation fault in vmstate_save_state_v() when it tried
to write the name field via json_writer_str().
Due to const limitations, generating a custom name based on the type
would be difficult. Instead, a straightforward and simple approach is to
have all Migratable<T> instances use the same VMSD name -
"migratable-wrapper".
This works because Migratable<T> is always a field within a VMSD, and
its parent VMSD should have a distinct name.
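As a sketch (the "foo" device and its field are hypothetical; the
builder calls are the ones used elsewhere in this series), the distinct
name lives only in the parent VMSD, while every embedded Migratable<T>
child serializes under "migratable-wrapper":

    const VMSTATE_FOO: VMStateDescription<FooState> =
        VMStateDescriptionBuilder::<FooState>::new()
            .name(c"foo") // distinct parent name identifies the stream
            .version_id(1)
            .minimum_version_id(1)
            .fields(vmstate_fields! {
                // this child's VMSD name is always "migratable-wrapper"
                vmstate_of!(FooState, migratable_field),
            })
            .build();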
Signed-off-by: Zhao Liu <zhao1.liu@intel.com>
---
rust/migration/src/migratable.rs | 4 ++++
1 file changed, 4 insertions(+)
diff --git a/rust/migration/src/migratable.rs b/rust/migration/src/migratable.rs
index 5c47c7d1c2fa..76e524774a3c 100644
--- a/rust/migration/src/migratable.rs
+++ b/rust/migration/src/migratable.rs
@@ -421,7 +421,11 @@ impl<T: 'static + ToMigrationStateShared> Migratable<T> {
Migratable::<T>::FIELD
};
+ // All Migratable<T> instances share the same name. This is fine because
+ // Migratable<T> is always a field within a VMSD. The parent VMSD has a
+ // distinct name to distinguish the child Migratable<T>.
const VMSD: &'static bindings::VMStateDescription = VMStateDescriptionBuilder::<Self>::new()
+ .name(c"migratable-wrapper")
.version_id(1)
.minimum_version_id(1)
.pre_save(&Self::pre_save)
--
2.34.1
* [PATCH 03/22] rust/migration: Check name field in VMStateDescriptionBuilder
2025-11-13 5:19 [PATCH 00/22] rust/hpet: Move towards lockless IO, partly Zhao Liu
2025-11-13 5:19 ` [PATCH 01/22] rust/migration: Add Sync implementation for Migratable<> Zhao Liu
2025-11-13 5:19 ` [PATCH 02/22] rust/migration: Fix missing name in the VMSD of Migratable<> Zhao Liu
@ 2025-11-13 5:19 ` Zhao Liu
2025-11-13 5:19 ` [PATCH 04/22] rust/bql: Add BqlGuard to provide BQL context Zhao Liu
` (18 subsequent siblings)
21 siblings, 0 replies; 30+ messages in thread
From: Zhao Liu @ 2025-11-13 5:19 UTC (permalink / raw)
To: Paolo Bonzini, Manos Pitsidianakis, Marc-André Lureau
Cc: Igor Mammedov, qemu-devel, qemu-rust, Zhao Liu
The name field is mandatory for VMStateDescription, so it's necessary
to check that it is set when building a VMStateDescription.
Since is_null()/as_ref() only became const in rustc v1.84 and pointers
cannot be cast to integers in const context, instead track the name in a
new Option<> field of VMStateDescriptionBuilder and check it in build().
This can be simplified in the future when QEMU bumps its minimum rustc
version.
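A standalone illustration of the workaround (plain Rust, made-up names):
Option-based tracking keeps the check usable in const fn even on
toolchains where the pointer methods are not yet const:

    // `Option::is_some()` has long been const-usable, while
    // `<*const T>::is_null()` only became const in rustc 1.84.
    const fn build_name(name: Option<*const u8>) -> *const u8 {
        assert!(name.is_some(), "VMStateDescription requires name field!");
        match name {
            Some(p) => p,
            None => panic!("unreachable: just asserted Some"),
        }
    }

    fn main() {
        let name = b"migratable-wrapper\0";
        let _ptr = build_name(Some(name.as_ptr()));
    }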
Signed-off-by: Zhao Liu <zhao1.liu@intel.com>
---
rust/migration/src/vmstate.rs | 15 +++++++++++----
1 file changed, 11 insertions(+), 4 deletions(-)
diff --git a/rust/migration/src/vmstate.rs b/rust/migration/src/vmstate.rs
index 267f9c8e053a..8087f10832f1 100644
--- a/rust/migration/src/vmstate.rs
+++ b/rust/migration/src/vmstate.rs
@@ -530,7 +530,11 @@ macro_rules! vmstate_subsections {
unsafe impl<T: Sync> Sync for VMStateDescription<T> {}
#[derive(Clone)]
-pub struct VMStateDescriptionBuilder<T>(bindings::VMStateDescription, PhantomData<fn(&T)>);
+pub struct VMStateDescriptionBuilder<T>(
+ bindings::VMStateDescription,
+ Option<*const std::os::raw::c_char>, // the name of VMStateDescription
+ PhantomData<fn(&T)>,
+);
#[derive(Debug)]
pub struct InvalidError;
@@ -591,7 +595,7 @@ fn from(_value: InvalidError) -> Errno {
impl<T> VMStateDescriptionBuilder<T> {
#[must_use]
pub const fn name(mut self, name_str: &CStr) -> Self {
- self.0.name = ::std::ffi::CStr::as_ptr(name_str);
+ self.1 = Some(::std::ffi::CStr::as_ptr(name_str));
self
}
@@ -717,13 +721,16 @@ pub const fn subsections(mut self, subs: &'static VMStateSubsections) -> Self {
}
#[must_use]
- pub const fn build(self) -> VMStateDescription<T> {
+ pub const fn build(mut self) -> VMStateDescription<T> {
+ // FIXME: is_null()/as_ref() became const in rustc v1.84; simplify then.
+ assert!(self.1.is_some(), "VMStateDescription requires name field!");
+ self.0.name = self.1.unwrap();
VMStateDescription::<T>(self.0, PhantomData)
}
#[must_use]
pub const fn new() -> Self {
- Self(bindings::VMStateDescription::ZERO, PhantomData)
+ Self(bindings::VMStateDescription::ZERO, None, PhantomData)
}
}
--
2.34.1
* [PATCH 04/22] rust/bql: Add BqlGuard to provide BQL context
2025-11-13 5:19 [PATCH 00/22] rust/hpet: Move towards lockless IO, partly Zhao Liu
` (2 preceding siblings ...)
2025-11-13 5:19 ` [PATCH 03/22] rust/migration: Check name field in VMStateDescriptionBuilder Zhao Liu
@ 2025-11-13 5:19 ` Zhao Liu
2025-11-13 5:19 ` [PATCH 05/22] rust/bql: Ensure BQL locked early at BqlRefCell borrowing Zhao Liu
` (17 subsequent siblings)
21 siblings, 0 replies; 30+ messages in thread
From: Zhao Liu @ 2025-11-13 5:19 UTC (permalink / raw)
To: Paolo Bonzini, Manos Pitsidianakis, Marc-André Lureau
Cc: Igor Mammedov, qemu-devel, qemu-rust, Zhao Liu
There are cases that require a BQL context, for example, accessing
BqlCell/BqlRefCell or operating on IRQs in lockless IO.
Note that although BqlCell/BqlRefCell ensure the BQL won't be released
while a reference exists, they don't create a BQL context if the BQL is
not locked; instead, they just assert and panic when the BQL is not
locked.
Therefore, it's necessary to provide a way to create a BQL context by
hand on the Rust side. BqlGuard fills this need.
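The RAII shape of BqlGuard can be sketched standalone (an AtomicBool
stands in for the real BQL; the actual implementation defers to
rust_bql_lock()/rust_bql_unlock() and bql_block_unlock(), as in the diff
below): acquire only if not already held, and on drop undo exactly what
new() did:

    use std::sync::atomic::{AtomicBool, Ordering};

    static LOCKED: AtomicBool = AtomicBool::new(false);

    struct Guard {
        acquired: bool, // true if *this* guard took the lock
    }

    impl Guard {
        fn new() -> Self {
            let acquired = LOCKED
                .compare_exchange(false, true, Ordering::Acquire, Ordering::Relaxed)
                .is_ok();
            Guard { acquired }
        }
    }

    impl Drop for Guard {
        fn drop(&mut self) {
            if self.acquired {
                LOCKED.store(false, Ordering::Release);
            }
            // Otherwise an outer guard still owns the lock; do nothing.
        }
    }

    fn main() {
        let _outer = Guard::new(); // takes the "lock"
        let _inner = Guard::new(); // already held: neither locks nor unlocks
        assert!(LOCKED.load(Ordering::Relaxed));
    }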
Signed-off-by: Zhao Liu <zhao1.liu@intel.com>
---
include/qemu/main-loop.h | 22 ++++++++-
rust/bql/src/lib.rs | 101 ++++++++++++++++++++++++++++++++++++++-
stubs/iothread-lock.c | 11 +++++
system/cpus.c | 10 ++++
4 files changed, 142 insertions(+), 2 deletions(-)
diff --git a/include/qemu/main-loop.h b/include/qemu/main-loop.h
index 0d55c636b21a..7dd7c8211f02 100644
--- a/include/qemu/main-loop.h
+++ b/include/qemu/main-loop.h
@@ -250,7 +250,7 @@ AioContext *iohandler_get_aio_context(void);
/**
* rust_bql_mock_lock:
*
- * Called from Rust doctests to make bql_lock() return true.
+ * Called from Rust doctests to make bql_locked() return true.
* Do not touch.
*/
void rust_bql_mock_lock(void);
@@ -368,6 +368,16 @@ bool qemu_in_main_thread(void);
#define bql_lock() bql_lock_impl(__FILE__, __LINE__)
void bql_lock_impl(const char *file, int line);
+/**
+ * @rust_bql_lock: A wrapper over bql_lock().
+ *
+ * This function is used to allow bindgen to generate bql_lock()
+ * binding.
+ *
+ * Do not call this function directly! Use bql_lock() instead.
+ */
+void rust_bql_lock(void);
+
/**
* bql_unlock: Unlock the Big QEMU Lock (BQL).
*
@@ -383,6 +393,16 @@ void bql_lock_impl(const char *file, int line);
*/
void bql_unlock(void);
+/**
+ * @rust_bql_unlock: A wrapper over bql_unlock().
+ *
+ * This function is used to allow bindgen to generate bql_unlock()
+ * binding.
+ *
+ * Do not call this function directly! Use bql_unlock() instead.
+ */
+void rust_bql_unlock(void);
+
/**
* BQL_LOCK_GUARD
*
diff --git a/rust/bql/src/lib.rs b/rust/bql/src/lib.rs
index ef08221e9c1a..48cc76a7557c 100644
--- a/rust/bql/src/lib.rs
+++ b/rust/bql/src/lib.rs
@@ -1,7 +1,7 @@
// SPDX-License-Identifier: GPL-2.0-or-later
mod bindings;
-use bindings::{bql_block_unlock, bql_locked, rust_bql_mock_lock};
+use bindings::{bql_block_unlock, bql_locked, rust_bql_lock, rust_bql_mock_lock, rust_bql_unlock};
mod cell;
pub use cell::*;
@@ -27,3 +27,102 @@ pub fn block_unlock(increase: bool) {
bql_block_unlock(increase);
}
}
+
+// This function is private since users should get a BQL context via BqlGuard.
+fn lock() {
+ // SAFETY: the function locks the BQL, whose lifetime is enough to
+ // cover the device's entire lifetime.
+ unsafe {
+ rust_bql_lock();
+ }
+}
+
+// This function is private since users should get a BQL context via BqlGuard.
+fn unlock() {
+ // SAFETY: the function unlocks the BQL, whose lifetime is enough to
+ // cover the device's entire lifetime.
+ unsafe {
+ rust_bql_unlock();
+ }
+}
+
+/// An RAII guard to ensure a block of code runs within the BQL context.
+///
+/// It checks if the BQL is already locked at its creation:
+/// * If not, it locks the BQL and will release it when it is dropped.
+/// * If yes, it blocks BQL unlocking until its lifetime ends.
+#[must_use]
+pub struct BqlGuard {
+ locked: bool,
+}
+
+impl BqlGuard {
+ /// Creates a new `BqlGuard` to ensure BQL is locked during its
+ /// lifetime.
+ ///
+ /// # Examples
+ ///
+ /// ```
+ /// use bql::{BqlCell, BqlGuard};
+ ///
+ /// fn foo() {
+ /// let _guard = BqlGuard::new(); // BQL is locked
+ ///
+ /// let c = BqlCell::new(5);
+ /// assert_eq!(c.get(), 5);
+ /// } // BQL could be unlocked
+ /// ```
+ pub fn new() -> Self {
+ if !is_locked() {
+ lock();
+ Self { locked: true }
+ } else {
+ block_unlock(true);
+ Self { locked: false }
+ }
+ }
+}
+
+impl Default for BqlGuard {
+ fn default() -> Self {
+ Self::new()
+ }
+}
+
+impl Drop for BqlGuard {
+ fn drop(&mut self) {
+ if self.locked {
+ unlock();
+ } else {
+ block_unlock(false);
+ }
+ }
+}
+
+/// Executes a closure (function) within the BQL context.
+///
+/// This function creates a `BqlGuard`.
+///
+/// # Examples
+///
+/// ```should_panic
+/// use bql::{with_guard, BqlRefCell};
+///
+/// let c = BqlRefCell::new(5);
+///
+/// with_guard(|| {
+/// // BQL is locked
+/// let m = c.borrow();
+///
+/// assert_eq!(*m, 5);
+/// }); // BQL could be unlocked
+///
+/// let b = c.borrow(); // this causes a panic
+/// ```
+pub fn with_guard<F, R>(f: F) -> R
+where
+ F: FnOnce() -> R,
+{
+ let _guard = BqlGuard::new();
+ f()
+}
diff --git a/stubs/iothread-lock.c b/stubs/iothread-lock.c
index c89c9c7228f3..e2ebce565a06 100644
--- a/stubs/iothread-lock.c
+++ b/stubs/iothread-lock.c
@@ -23,6 +23,17 @@ void bql_unlock(void)
assert(!bql_unlock_blocked);
}
+void rust_bql_lock(void)
+{
+ rust_bql_mock_lock();
+}
+
+void rust_bql_unlock(void)
+{
+ bql_unlock();
+ bql_is_locked = false;
+}
+
void bql_block_unlock(bool increase)
{
uint32_t new_value;
diff --git a/system/cpus.c b/system/cpus.c
index ef2d2f241faa..da31bda59444 100644
--- a/system/cpus.c
+++ b/system/cpus.c
@@ -578,6 +578,11 @@ void bql_lock_impl(const char *file, int line)
bql_lock_fn(&bql, file, line);
}
+void rust_bql_lock(void)
+{
+ bql_lock();
+}
+
void bql_unlock(void)
{
g_assert(bql_locked());
@@ -585,6 +590,11 @@ void bql_unlock(void)
qemu_mutex_unlock(&bql);
}
+void rust_bql_unlock(void)
+{
+ bql_unlock();
+}
+
void qemu_cond_wait_bql(QemuCond *cond)
{
qemu_cond_wait(cond, &bql);
--
2.34.1
* [PATCH 05/22] rust/bql: Ensure BQL locked early at BqlRefCell borrowing
2025-11-13 5:19 [PATCH 00/22] rust/hpet: Move towards lockless IO, partly Zhao Liu
` (3 preceding siblings ...)
2025-11-13 5:19 ` [PATCH 04/22] rust/bql: Add BqlGuard to provide BQL context Zhao Liu
@ 2025-11-13 5:19 ` Zhao Liu
2025-11-13 5:19 ` [PATCH 06/22] rust/memory: Add enable_lockless_io binding Zhao Liu
` (16 subsequent siblings)
21 siblings, 0 replies; 30+ messages in thread
From: Zhao Liu @ 2025-11-13 5:19 UTC (permalink / raw)
To: Paolo Bonzini, Manos Pitsidianakis, Marc-André Lureau
Cc: Igor Mammedov, qemu-devel, qemu-rust, Zhao Liu
At present, BqlRefCell checks whether the BQL is locked only when it
blocks BQL unlocking (in bql_block_unlock).
But such a check should be done earlier - at the beginning of
BqlRefCell borrowing.
So convert the BqlRefCell::borrow field from Cell<> to BqlCell<>, to
guarantee the BQL is locked from the beginning when someone tries to
borrow a BqlRefCell.
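The effect can be sketched standalone (a thread-local flag stands in
for bql_locked(); CheckedCell is a made-up analogue of BqlCell): the
precondition is asserted on every access, so misuse panics at the first
borrow rather than at a later, less obvious point:

    use std::cell::Cell;

    thread_local! {
        static LOCK_HELD: Cell<bool> = const { Cell::new(false) };
    }

    struct CheckedCell<T: Copy> {
        value: Cell<T>,
    }

    impl<T: Copy> CheckedCell<T> {
        const fn new(v: T) -> Self {
            CheckedCell { value: Cell::new(v) }
        }

        fn get(&self) -> T {
            // Runs on every access, like BqlCell's BQL assertion.
            assert!(LOCK_HELD.with(Cell::get), "lock not held");
            self.value.get()
        }
    }

    fn main() {
        LOCK_HELD.with(|f| f.set(true));
        let c = CheckedCell::new(7);
        assert_eq!(c.get(), 7); // without the flag set, get() panics eagerly
    }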
Signed-off-by: Zhao Liu <zhao1.liu@intel.com>
---
rust/bql/src/cell.rs | 16 +++++++++-------
1 file changed, 9 insertions(+), 7 deletions(-)
diff --git a/rust/bql/src/cell.rs b/rust/bql/src/cell.rs
index 8ade7db629cf..0a436f7eb431 100644
--- a/rust/bql/src/cell.rs
+++ b/rust/bql/src/cell.rs
@@ -141,8 +141,10 @@
//! Multiple immutable borrows are allowed via [`borrow`](BqlRefCell::borrow),
//! or a single mutable borrow via [`borrow_mut`](BqlRefCell::borrow_mut). The
//! thread will panic if these rules are violated or if the BQL is not held.
+#[cfg(feature = "debug_cell")]
+use std::cell::Cell;
use std::{
- cell::{Cell, UnsafeCell},
+ cell::UnsafeCell,
cmp::Ordering,
fmt,
marker::PhantomData,
@@ -377,7 +379,7 @@ pub struct BqlRefCell<T> {
// for std::cell::BqlRefCell), so that we can use offset_of! on it.
// UnsafeCell and repr(C) both prevent usage of niches.
value: UnsafeCell<T>,
- borrow: Cell<BorrowFlag>,
+ borrow: BqlCell<BorrowFlag>,
// Stores the location of the earliest currently active borrow.
// This gets updated whenever we go from having zero borrows
// to having a single borrow. When a borrow occurs, this gets included
@@ -426,7 +428,7 @@ impl<T> BqlRefCell<T> {
pub const fn new(value: T) -> BqlRefCell<T> {
BqlRefCell {
value: UnsafeCell::new(value),
- borrow: Cell::new(UNUSED),
+ borrow: BqlCell::new(UNUSED),
#[cfg(feature = "debug_cell")]
borrowed_at: Cell::new(None),
}
@@ -688,12 +690,12 @@ fn from(t: T) -> BqlRefCell<T> {
}
struct BorrowRef<'b> {
- borrow: &'b Cell<BorrowFlag>,
+ borrow: &'b BqlCell<BorrowFlag>,
}
impl<'b> BorrowRef<'b> {
#[inline]
- fn new(borrow: &'b Cell<BorrowFlag>) -> Option<BorrowRef<'b>> {
+ fn new(borrow: &'b BqlCell<BorrowFlag>) -> Option<BorrowRef<'b>> {
let b = borrow.get().wrapping_add(1);
if !is_reading(b) {
// Incrementing borrow can result in a non-reading value (<= 0) in these cases:
@@ -789,12 +791,12 @@ fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
}
struct BorrowRefMut<'b> {
- borrow: &'b Cell<BorrowFlag>,
+ borrow: &'b BqlCell<BorrowFlag>,
}
impl<'b> BorrowRefMut<'b> {
#[inline]
- fn new(borrow: &'b Cell<BorrowFlag>) -> Option<BorrowRefMut<'b>> {
+ fn new(borrow: &'b BqlCell<BorrowFlag>) -> Option<BorrowRefMut<'b>> {
// There must currently be no existing references when borrow_mut() is
// called, so we explicitly only allow going from UNUSED to UNUSED - 1.
match borrow.get() {
--
2.34.1
* [PATCH 06/22] rust/memory: Add enable_lockless_io binding
2025-11-13 5:19 [PATCH 00/22] rust/hpet: Move towards lockless IO, partly Zhao Liu
` (4 preceding siblings ...)
2025-11-13 5:19 ` [PATCH 05/22] rust/bql: Ensure BQL locked early at BqlRefCell borrowing Zhao Liu
@ 2025-11-13 5:19 ` Zhao Liu
2025-11-13 5:19 ` [PATCH 07/22] rust/hpet: Reduce unnecessary mutable self argument Zhao Liu
` (15 subsequent siblings)
21 siblings, 0 replies; 30+ messages in thread
From: Zhao Liu @ 2025-11-13 5:19 UTC (permalink / raw)
To: Paolo Bonzini, Manos Pitsidianakis, Marc-André Lureau
Cc: Igor Mammedov, qemu-devel, qemu-rust, Zhao Liu
Directly wrap the memory_region_enable_lockless_io() in
MemoryRegion::enable_lockless_io.
Although memory_region_enable_lockless_io() just sets 2 fields on the C
side, it's not necessary to reimplement it entirely on the Rust side;
instead, simply wrap and reuse the existing C interface.
This will save some effort when modifying the interface in the future
(changes are likely to occur, as there is a TODO in the C code).
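A hypothetical usage sketch (FooState and its mmio field are
assumptions; only enable_lockless_io() comes from this patch): a device
whose MMIO callbacks handle their own locking can opt the region into
BQL-free dispatch after init_io():

    impl FooState {
        fn enable_fast_mmio(&self) {
            // Valid only if the read/write callbacks registered via
            // init_io() are lockless-safe.
            self.mmio.enable_lockless_io();
        }
    }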
Signed-off-by: Zhao Liu <zhao1.liu@intel.com>
---
rust/system/src/memory.rs | 15 ++++++++++++++-
1 file changed, 14 insertions(+), 1 deletion(-)
diff --git a/rust/system/src/memory.rs b/rust/system/src/memory.rs
index 4b3316bf767f..e4bfbed1febe 100644
--- a/rust/system/src/memory.rs
+++ b/rust/system/src/memory.rs
@@ -12,7 +12,9 @@
use common::{callbacks::FnCall, uninit::MaybeUninitField, zeroable::Zeroable, Opaque};
use qom::prelude::*;
-use crate::bindings::{self, device_endian, memory_region_init_io};
+use crate::bindings::{
+ self, device_endian, memory_region_enable_lockless_io, memory_region_init_io,
+};
pub use crate::bindings::{hwaddr, MemTxAttrs};
pub struct MemoryRegionOps<T>(
@@ -172,6 +174,17 @@ pub fn init_io<T: IsA<Object>>(
);
}
}
+
+ /// Enable lockless (BQL-free) access.
+ ///
+ /// Enable BQL-free access for devices that are well prepared to handle
+ /// locking during I/O themselves: either by doing fine grained locking or
+ /// by providing lock-free I/O schemes.
+ pub fn enable_lockless_io(&self) {
+ // SAFETY: MemoryRegion is wrapped by Opaque<>, so it's safe to get a
+ // raw pointer as *mut MemoryRegion.
+ unsafe { memory_region_enable_lockless_io(self.as_mut_ptr()) }
+ }
}
unsafe impl ObjectType for MemoryRegion {
--
2.34.1
* [PATCH 07/22] rust/hpet: Reduce unnecessary mutable self argument
2025-11-13 5:19 [PATCH 00/22] rust/hpet: Move towards lockless IO, partly Zhao Liu
` (5 preceding siblings ...)
2025-11-13 5:19 ` [PATCH 06/22] rust/memory: Add enable_lockless_io binding Zhao Liu
@ 2025-11-13 5:19 ` Zhao Liu
2025-11-13 5:19 ` [PATCH 08/22] rust/hpet: Rename HPETRegister to DecodedRegister Zhao Liu
` (14 subsequent siblings)
21 siblings, 0 replies; 30+ messages in thread
From: Zhao Liu @ 2025-11-13 5:19 UTC (permalink / raw)
To: Paolo Bonzini, Manos Pitsidianakis, Marc-André Lureau
Cc: Igor Mammedov, qemu-devel, qemu-rust, Zhao Liu
Methods of Timer and InterruptSource have been made safe, so make the
related methods of HPETTimer accept an immutable self reference.
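A tiny standalone sketch of why &self suffices (Cell stands in for the
interior-mutable QEMU wrappers; Irq is a made-up name): mutation happens
through interior mutability, so callers never need &mut:

    use std::cell::Cell;

    struct Irq {
        level: Cell<bool>, // interior mutability: set() takes &self
    }

    impl Irq {
        fn set(&self, level: bool) {
            self.level.set(level);
        }
    }

    fn main() {
        let irq = Irq { level: Cell::new(false) };
        irq.set(true); // no &mut needed
        assert!(irq.level.get());
    }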
Signed-off-by: Zhao Liu <zhao1.liu@intel.com>
---
rust/hw/timer/hpet/src/device.rs | 10 +++++-----
1 file changed, 5 insertions(+), 5 deletions(-)
diff --git a/rust/hw/timer/hpet/src/device.rs b/rust/hw/timer/hpet/src/device.rs
index 3564aa79c6e5..5e08b91494cf 100644
--- a/rust/hw/timer/hpet/src/device.rs
+++ b/rust/hw/timer/hpet/src/device.rs
@@ -324,7 +324,7 @@ fn get_int_route(&self) -> usize {
}
}
- fn set_irq(&mut self, set: bool) {
+ fn set_irq(&self, set: bool) {
let route = self.get_int_route();
if set && self.is_int_enabled() && self.get_state().is_hpet_enabled() {
@@ -350,7 +350,7 @@ fn set_irq(&mut self, set: bool) {
}
}
- fn update_irq(&mut self, set: bool) {
+ fn update_irq(&self, set: bool) {
// If Timer N Interrupt Enable bit is 0, "the timer will
// still operate and generate appropriate status bits, but
// will not cause an interrupt"
@@ -388,7 +388,7 @@ fn set_timer(&mut self) {
self.arm_timer(self.cmp64);
}
- fn del_timer(&mut self) {
+ fn del_timer(&self) {
// Just remove the timer from the timer_list without destroying
// this timer instance.
self.qemu_timer.delete();
@@ -657,7 +657,7 @@ fn set_cfg_reg(&self, shift: u32, len: u32, val: u64) {
self.counter.set(self.get_ticks());
for timer in self.timers.iter().take(self.num_timers) {
- timer.borrow_mut().del_timer();
+ timer.borrow().del_timer();
}
}
@@ -681,7 +681,7 @@ fn set_int_status_reg(&self, shift: u32, _len: u32, val: u64) {
for (index, timer) in self.timers.iter().take(self.num_timers).enumerate() {
if cleared & (1 << index) != 0 {
- timer.borrow_mut().update_irq(false);
+ timer.borrow().update_irq(false);
}
}
}
--
2.34.1
* [PATCH 08/22] rust/hpet: Rename HPETRegister to DecodedRegister
2025-11-13 5:19 [PATCH 00/22] rust/hpet: Move towards lockless IO, partly Zhao Liu
` (6 preceding siblings ...)
2025-11-13 5:19 ` [PATCH 07/22] rust/hpet: Reduce unnecessary mutable self argument Zhao Liu
@ 2025-11-13 5:19 ` Zhao Liu
2025-11-13 5:19 ` [PATCH 09/22] rust/hpet: Rename decoded "reg" enumeration to "target" Zhao Liu
` (13 subsequent siblings)
21 siblings, 0 replies; 30+ messages in thread
From: Zhao Liu @ 2025-11-13 5:19 UTC (permalink / raw)
To: Paolo Bonzini, Manos Pitsidianakis, Marc-André Lureau
Cc: Igor Mammedov, qemu-devel, qemu-rust, Zhao Liu
HPETRegister represents the layout of the register spaces of the HPET
timer block and timer N, and is used to decode a register address into a
register enumeration.
To avoid confusion with the subsequently introduced HPETRegisters (which
will be used to maintain the values of HPET registers), rename
HPETRegister to DecodedRegister.
Signed-off-by: Zhao Liu <zhao1.liu@intel.com>
---
rust/hw/timer/hpet/src/device.rs | 22 +++++++++++-----------
1 file changed, 11 insertions(+), 11 deletions(-)
diff --git a/rust/hw/timer/hpet/src/device.rs b/rust/hw/timer/hpet/src/device.rs
index 5e08b91494cf..c76f52a374de 100644
--- a/rust/hw/timer/hpet/src/device.rs
+++ b/rust/hw/timer/hpet/src/device.rs
@@ -102,7 +102,7 @@
#[derive(common::TryInto)]
#[repr(u64)]
#[allow(non_camel_case_types)]
-/// Timer registers, masked by 0x18
+/// Timer register enumerations, masked by 0x18
enum TimerRegister {
/// Timer N Configuration and Capability Register
CFG = 0,
@@ -115,7 +115,7 @@ enum TimerRegister {
#[derive(common::TryInto)]
#[repr(u64)]
#[allow(non_camel_case_types)]
-/// Global registers
+/// Global register enumerations
enum GlobalRegister {
/// General Capabilities and ID Register
CAP = 0,
@@ -127,7 +127,7 @@ enum GlobalRegister {
COUNTER = 0xF0,
}
-enum HPETRegister<'a> {
+enum DecodedRegister<'a> {
/// Global register in the range from `0` to `0xff`
Global(GlobalRegister),
@@ -142,7 +142,7 @@ enum HPETRegister<'a> {
struct HPETAddrDecode<'a> {
shift: u32,
len: u32,
- reg: HPETRegister<'a>,
+ reg: DecodedRegister<'a>,
}
const fn hpet_next_wrap(cur_tick: u64) -> u64 {
@@ -783,22 +783,22 @@ fn decode(&self, mut addr: hwaddr, size: u32) -> HPETAddrDecode<'_> {
addr &= !4;
let reg = if (0..=0xff).contains(&addr) {
- GlobalRegister::try_from(addr).map(HPETRegister::Global)
+ GlobalRegister::try_from(addr).map(DecodedRegister::Global)
} else {
let timer_id: usize = ((addr - 0x100) / 0x20) as usize;
if timer_id < self.num_timers {
// TODO: Add trace point - trace_hpet_ram_[read|write]_timer_id(timer_id)
TimerRegister::try_from(addr & 0x18)
- .map(|reg| HPETRegister::Timer(&self.timers[timer_id], reg))
+ .map(|reg| DecodedRegister::Timer(&self.timers[timer_id], reg))
} else {
// TODO: Add trace point - trace_hpet_timer_id_out_of_range(timer_id)
Err(addr)
}
};
- // reg is now a Result<HPETRegister, hwaddr>
- // convert the Err case into HPETRegister as well
- let reg = reg.unwrap_or_else(HPETRegister::Unknown);
+ // reg is now a Result<DecodedRegister, hwaddr>
+ // convert the Err case into DecodedRegister as well
+ let reg = reg.unwrap_or_else(DecodedRegister::Unknown);
HPETAddrDecode { shift, len, reg }
}
@@ -807,7 +807,7 @@ fn read(&self, addr: hwaddr, size: u32) -> u64 {
let HPETAddrDecode { shift, reg, .. } = self.decode(addr, size);
use GlobalRegister::*;
- use HPETRegister::*;
+ use DecodedRegister::*;
(match reg {
Timer(timer, tn_reg) => timer.borrow_mut().read(tn_reg),
Global(CAP) => self.capability.get(), /* including HPET_PERIOD 0x004 */
@@ -834,7 +834,7 @@ fn write(&self, addr: hwaddr, value: u64, size: u32) {
// TODO: Add trace point - trace_hpet_ram_write(addr, value)
use GlobalRegister::*;
- use HPETRegister::*;
+ use DecodedRegister::*;
match reg {
Timer(timer, tn_reg) => timer.borrow_mut().write(tn_reg, value, shift, len),
Global(CAP) => {} // General Capabilities and ID Register: Read Only
--
2.34.1
* [PATCH 09/22] rust/hpet: Rename decoded "reg" enumeration to "target"
2025-11-13 5:19 [PATCH 00/22] rust/hpet: Move towards lockless IO, partly Zhao Liu
` (7 preceding siblings ...)
2025-11-13 5:19 ` [PATCH 08/22] rust/hpet: Rename HPETRegister to DecodedRegister Zhao Liu
@ 2025-11-13 5:19 ` Zhao Liu
2025-11-13 5:19 ` [PATCH 10/22] rust/hpet: Abstract HPETTimerRegisters struct Zhao Liu
` (12 subsequent siblings)
21 siblings, 0 replies; 30+ messages in thread
From: Zhao Liu @ 2025-11-13 5:19 UTC (permalink / raw)
To: Paolo Bonzini, Manos Pitsidianakis, Marc-André Lureau
Cc: Igor Mammedov, qemu-devel, qemu-rust, Zhao Liu
HPETAddrDecode has a `reg` field, and as a result there are many
variables named "reg" in the MMIO read/write/decode functions.
In the future, there'll be other HPETRegisters/HPETTimerRegisters
structs containing the values of HPET registers, and related variables
or arguments will be named "regs".
To avoid potential confusion between the many "reg" and "regs" names,
rename HPETAddrDecode::reg to HPETAddrDecode::target, and rename
decoding-related variables from "reg" to "target".
"target" is picked as the name since it clearly reflects that the field
or variable is the target of register decoding.
Signed-off-by: Zhao Liu <zhao1.liu@intel.com>
---
rust/hw/timer/hpet/src/device.rs | 32 ++++++++++++++++----------------
1 file changed, 16 insertions(+), 16 deletions(-)
diff --git a/rust/hw/timer/hpet/src/device.rs b/rust/hw/timer/hpet/src/device.rs
index c76f52a374de..2105538cffe6 100644
--- a/rust/hw/timer/hpet/src/device.rs
+++ b/rust/hw/timer/hpet/src/device.rs
@@ -142,7 +142,7 @@ enum DecodedRegister<'a> {
struct HPETAddrDecode<'a> {
shift: u32,
len: u32,
- reg: DecodedRegister<'a>,
+ target: DecodedRegister<'a>,
}
const fn hpet_next_wrap(cur_tick: u64) -> u64 {
@@ -501,18 +501,18 @@ fn callback(&mut self) {
self.update_irq(true);
}
- const fn read(&self, reg: TimerRegister) -> u64 {
+ const fn read(&self, target: TimerRegister) -> u64 {
use TimerRegister::*;
- match reg {
+ match target {
CFG => self.config, // including interrupt capabilities
CMP => self.cmp, // comparator register
ROUTE => self.fsb,
}
}
- fn write(&mut self, reg: TimerRegister, value: u64, shift: u32, len: u32) {
+ fn write(&mut self, target: TimerRegister, value: u64, shift: u32, len: u32) {
use TimerRegister::*;
- match reg {
+ match target {
CFG => self.set_tn_cfg_reg(shift, len, value),
CMP => self.set_tn_cmp_reg(shift, len, value),
ROUTE => self.set_tn_fsb_route_reg(shift, len, value),
@@ -782,34 +782,34 @@ fn decode(&self, mut addr: hwaddr, size: u32) -> HPETAddrDecode<'_> {
let len = std::cmp::min(size * 8, 64 - shift);
addr &= !4;
- let reg = if (0..=0xff).contains(&addr) {
+ let target = if (0..=0xff).contains(&addr) {
GlobalRegister::try_from(addr).map(DecodedRegister::Global)
} else {
let timer_id: usize = ((addr - 0x100) / 0x20) as usize;
if timer_id < self.num_timers {
// TODO: Add trace point - trace_hpet_ram_[read|write]_timer_id(timer_id)
TimerRegister::try_from(addr & 0x18)
- .map(|reg| DecodedRegister::Timer(&self.timers[timer_id], reg))
+ .map(|target| DecodedRegister::Timer(&self.timers[timer_id], target))
} else {
// TODO: Add trace point - trace_hpet_timer_id_out_of_range(timer_id)
Err(addr)
}
};
- // reg is now a Result<DecodedRegister, hwaddr>
+ // `target` is now a Result<DecodedRegister, hwaddr>
// convert the Err case into DecodedRegister as well
- let reg = reg.unwrap_or_else(DecodedRegister::Unknown);
- HPETAddrDecode { shift, len, reg }
+ let target = target.unwrap_or_else(DecodedRegister::Unknown);
+ HPETAddrDecode { shift, len, target }
}
fn read(&self, addr: hwaddr, size: u32) -> u64 {
// TODO: Add trace point - trace_hpet_ram_read(addr)
- let HPETAddrDecode { shift, reg, .. } = self.decode(addr, size);
+ let HPETAddrDecode { shift, target, .. } = self.decode(addr, size);
use GlobalRegister::*;
use DecodedRegister::*;
- (match reg {
- Timer(timer, tn_reg) => timer.borrow_mut().read(tn_reg),
+ (match target {
+ Timer(timer, tn_target) => timer.borrow_mut().read(tn_target),
Global(CAP) => self.capability.get(), /* including HPET_PERIOD 0x004 */
Global(CFG) => self.config.get(),
Global(INT_STATUS) => self.int_status.get(),
@@ -830,13 +830,13 @@ fn read(&self, addr: hwaddr, size: u32) -> u64 {
}
fn write(&self, addr: hwaddr, value: u64, size: u32) {
- let HPETAddrDecode { shift, len, reg } = self.decode(addr, size);
+ let HPETAddrDecode { shift, len, target } = self.decode(addr, size);
// TODO: Add trace point - trace_hpet_ram_write(addr, value)
use GlobalRegister::*;
use DecodedRegister::*;
- match reg {
- Timer(timer, tn_reg) => timer.borrow_mut().write(tn_reg, value, shift, len),
+ match target {
+ Timer(timer, tn_target) => timer.borrow_mut().write(tn_target, value, shift, len),
Global(CAP) => {} // General Capabilities and ID Register: Read Only
Global(CFG) => self.set_cfg_reg(shift, len, value),
Global(INT_STATUS) => self.set_int_status_reg(shift, len, value),
--
2.34.1
* [PATCH 10/22] rust/hpet: Abstract HPETTimerRegisters struct
2025-11-13 5:19 [PATCH 00/22] rust/hpet: Move towards lockless IO, partly Zhao Liu
` (8 preceding siblings ...)
2025-11-13 5:19 ` [PATCH 09/22] rust/hpet: Rename decoded "reg" enumeration to "target" Zhao Liu
@ 2025-11-13 5:19 ` Zhao Liu
2025-11-13 11:24 ` Paolo Bonzini
2025-11-13 5:19 ` [PATCH 11/22] rust/hpet: Make timer register accessors as methods of HPETTimerRegisters Zhao Liu
` (11 subsequent siblings)
21 siblings, 1 reply; 30+ messages in thread
From: Zhao Liu @ 2025-11-13 5:19 UTC (permalink / raw)
To: Paolo Bonzini, Manos Pitsidianakis, Marc-André Lureau
Cc: Igor Mammedov, qemu-devel, qemu-rust, Zhao Liu
Place all of timer N's registers in a HPETTimerRegisters struct.
This allows all Timer N registers to be grouped together with the global
registers and managed using a single lock (BqlRefCell or Mutex) in the
future. It also makes it easier to apply the ToMigrationState macro.
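A standalone sketch of the direction this enables (an assumption drawn
from the cover letter and the review below, not code in this patch):
once the per-timer registers are a plain struct, global and per-timer
state can sit behind a single lock:

    use std::sync::Mutex;

    #[derive(Default)]
    struct HPETTimerRegisters {
        config: u64,
        cmp: u64,
        fsb: u64,
    }

    #[derive(Default)]
    struct HPETRegisters {
        config: u64,
        int_status: u64,
        counter: u64,
        tn_regs: [HPETTimerRegisters; 3], // 3 timers, demo only
    }

    fn main() {
        let regs = Mutex::new(HPETRegisters::default());
        let mut r = regs.lock().unwrap();
        r.tn_regs[0].config |= 1 << 2; // one lock covers both levels
        r.int_status |= 1;
    }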
Signed-off-by: Zhao Liu <zhao1.liu@intel.com>
---
rust/hw/timer/hpet/src/device.rs | 101 ++++++++++++++++++-------------
1 file changed, 60 insertions(+), 41 deletions(-)
diff --git a/rust/hw/timer/hpet/src/device.rs b/rust/hw/timer/hpet/src/device.rs
index 2105538cffe6..c7c0987aeb71 100644
--- a/rust/hw/timer/hpet/src/device.rs
+++ b/rust/hw/timer/hpet/src/device.rs
@@ -180,6 +180,18 @@ fn timer_handler(timer_cell: &BqlRefCell<HPETTimer>) {
timer_cell.borrow_mut().callback()
}
+#[repr(C)]
+#[derive(Debug, Default)]
+pub struct HPETTimerRegisters {
+ // Memory-mapped, software visible timer registers
+ /// Timer N Configuration and Capability Register
+ config: u64,
+ /// Timer N Comparator Value Register
+ cmp: u64,
+ /// Timer N FSB Interrupt Route Register
+ fsb: u64,
+}
+
/// HPET Timer Abstraction
#[repr(C)]
#[derive(Debug)]
@@ -191,14 +203,7 @@ pub struct HPETTimer {
/// timer block abstraction containing this timer
state: NonNull<HPETState>,
- // Memory-mapped, software visible timer registers
- /// Timer N Configuration and Capability Register
- config: u64,
- /// Timer N Comparator Value Register
- cmp: u64,
- /// Timer N FSB Interrupt Route Register
- fsb: u64,
-
+ regs: HPETTimerRegisters,
// Hidden register state
/// comparator (extended to counter width)
cmp64: u64,
@@ -223,9 +228,7 @@ fn new(index: u8, state: *const HPETState) -> HPETTimer {
// is initialized below.
qemu_timer: unsafe { Timer::new() },
state: NonNull::new(state.cast_mut()).unwrap(),
- config: 0,
- cmp: 0,
- fsb: 0,
+ regs: Default::default(),
cmp64: 0,
period: 0,
wrap_flag: 0,
@@ -252,32 +255,32 @@ fn is_int_active(&self) -> bool {
}
const fn is_fsb_route_enabled(&self) -> bool {
- self.config & (1 << HPET_TN_CFG_FSB_ENABLE_SHIFT) != 0
+ self.regs.config & (1 << HPET_TN_CFG_FSB_ENABLE_SHIFT) != 0
}
const fn is_periodic(&self) -> bool {
- self.config & (1 << HPET_TN_CFG_PERIODIC_SHIFT) != 0
+ self.regs.config & (1 << HPET_TN_CFG_PERIODIC_SHIFT) != 0
}
const fn is_int_enabled(&self) -> bool {
- self.config & (1 << HPET_TN_CFG_INT_ENABLE_SHIFT) != 0
+ self.regs.config & (1 << HPET_TN_CFG_INT_ENABLE_SHIFT) != 0
}
const fn is_32bit_mod(&self) -> bool {
- self.config & (1 << HPET_TN_CFG_32BIT_SHIFT) != 0
+ self.regs.config & (1 << HPET_TN_CFG_32BIT_SHIFT) != 0
}
const fn is_valset_enabled(&self) -> bool {
- self.config & (1 << HPET_TN_CFG_SETVAL_SHIFT) != 0
+ self.regs.config & (1 << HPET_TN_CFG_SETVAL_SHIFT) != 0
}
fn clear_valset(&mut self) {
- self.config &= !(1 << HPET_TN_CFG_SETVAL_SHIFT);
+ self.regs.config &= !(1 << HPET_TN_CFG_SETVAL_SHIFT);
}
/// True if timer interrupt is level triggered; otherwise, edge triggered.
const fn is_int_level_triggered(&self) -> bool {
- self.config & (1 << HPET_TN_CFG_INT_TYPE_SHIFT) != 0
+ self.regs.config & (1 << HPET_TN_CFG_INT_TYPE_SHIFT) != 0
}
/// calculate next value of the general counter that matches the
@@ -296,7 +299,7 @@ fn calculate_cmp64(&self, cur_tick: u64, target: u64) -> u64 {
}
const fn get_individual_route(&self) -> usize {
- ((self.config & HPET_TN_CFG_INT_ROUTE_MASK) >> HPET_TN_CFG_INT_ROUTE_SHIFT) as usize
+ ((self.regs.config & HPET_TN_CFG_INT_ROUTE_MASK) >> HPET_TN_CFG_INT_ROUTE_SHIFT) as usize
}
fn get_int_route(&self) -> usize {
@@ -334,8 +337,8 @@ fn set_irq(&self, set: bool) {
unsafe {
address_space_stl_le(
addr_of_mut!(address_space_memory),
- self.fsb >> 32, // Timer N FSB int addr
- self.fsb as u32, // Timer N FSB int value, truncate!
+ self.regs.fsb >> 32, // Timer N FSB int addr
+ self.regs.fsb as u32, // Timer N FSB int value, truncate!
MEMTXATTRS_UNSPECIFIED,
null_mut(),
);
@@ -375,7 +378,7 @@ fn set_timer(&mut self) {
let cur_tick: u64 = self.get_state().get_ticks();
self.wrap_flag = 0;
- self.cmp64 = self.calculate_cmp64(cur_tick, self.cmp);
+ self.cmp64 = self.calculate_cmp64(cur_tick, self.regs.cmp);
if self.is_32bit_mod() {
// HPET spec says in one-shot 32-bit mode, generate an interrupt when
// counter wraps in addition to an interrupt with comparator match.
@@ -403,25 +406,25 @@ fn del_timer(&self) {
/// Configuration and Capability Register
fn set_tn_cfg_reg(&mut self, shift: u32, len: u32, val: u64) {
// TODO: Add trace point - trace_hpet_ram_write_tn_cfg(addr & 4)
- let old_val: u64 = self.config;
+ let old_val: u64 = self.regs.config;
let mut new_val: u64 = old_val.deposit(shift, len, val);
new_val = hpet_fixup_reg(new_val, old_val, HPET_TN_CFG_WRITE_MASK);
// Switch level-type interrupt to edge-type.
if deactivating_bit(old_val, new_val, HPET_TN_CFG_INT_TYPE_SHIFT) {
- // Do this before changing timer.config; otherwise, if
+ // Do this before changing timer.regs.config; otherwise, if
// HPET_TN_FSB is set, update_irq will not lower the qemu_irq.
self.update_irq(false);
}
- self.config = new_val;
+ self.regs.config = new_val;
if activating_bit(old_val, new_val, HPET_TN_CFG_INT_ENABLE_SHIFT) && self.is_int_active() {
self.update_irq(true);
}
if self.is_32bit_mod() {
- self.cmp = u64::from(self.cmp as u32); // truncate!
+ self.regs.cmp = u64::from(self.regs.cmp as u32); // truncate!
self.period = u64::from(self.period as u32); // truncate!
}
@@ -447,7 +450,7 @@ fn set_tn_cmp_reg(&mut self, shift: u32, len: u32, val: u64) {
}
if !self.is_periodic() || self.is_valset_enabled() {
- self.cmp = self.cmp.deposit(shift, length, value);
+ self.regs.cmp = self.regs.cmp.deposit(shift, length, value);
}
if self.is_periodic() {
@@ -462,18 +465,19 @@ fn set_tn_cmp_reg(&mut self, shift: u32, len: u32, val: u64) {
/// FSB Interrupt Route Register
fn set_tn_fsb_route_reg(&mut self, shift: u32, len: u32, val: u64) {
- self.fsb = self.fsb.deposit(shift, len, val);
+ self.regs.fsb = self.regs.fsb.deposit(shift, len, val);
}
fn reset(&mut self) {
self.del_timer();
- self.cmp = u64::MAX; // Comparator Match Registers reset to all 1's.
- self.config = (1 << HPET_TN_CFG_PERIODIC_CAP_SHIFT) | (1 << HPET_TN_CFG_SIZE_CAP_SHIFT);
+ self.regs.cmp = u64::MAX; // Comparator Match Registers reset to all 1's.
+ self.regs.config =
+ (1 << HPET_TN_CFG_PERIODIC_CAP_SHIFT) | (1 << HPET_TN_CFG_SIZE_CAP_SHIFT);
if self.get_state().has_msi_flag() {
- self.config |= 1 << HPET_TN_CFG_FSB_CAP_SHIFT;
+ self.regs.config |= 1 << HPET_TN_CFG_FSB_CAP_SHIFT;
}
// advertise availability of ioapic int
- self.config |=
+ self.regs.config |=
(u64::from(self.get_state().int_route_cap)) << HPET_TN_CFG_INT_ROUTE_CAP_SHIFT;
self.period = 0;
self.wrap_flag = 0;
@@ -489,9 +493,9 @@ fn callback(&mut self) {
self.cmp64 += period;
}
if self.is_32bit_mod() {
- self.cmp = u64::from(self.cmp64 as u32); // truncate!
+ self.regs.cmp = u64::from(self.cmp64 as u32); // truncate!
} else {
- self.cmp = self.cmp64;
+ self.regs.cmp = self.cmp64;
}
self.arm_timer(self.cmp64);
} else if self.wrap_flag != 0 {
@@ -504,9 +508,9 @@ fn callback(&mut self) {
const fn read(&self, target: TimerRegister) -> u64 {
use TimerRegister::*;
match target {
- CFG => self.config, // including interrupt capabilities
- CMP => self.cmp, // comparator register
- ROUTE => self.fsb,
+ CFG => self.regs.config, // including interrupt capabilities
+ CMP => self.regs.cmp, // comparator register
+ ROUTE => self.regs.fsb,
}
}
@@ -865,7 +869,7 @@ fn post_load(&self, _version_id: u8) -> Result<(), migration::Infallible> {
for timer in self.timers.iter().take(self.num_timers) {
let mut t = timer.borrow_mut();
- t.cmp64 = t.calculate_cmp64(t.get_state().counter.get(), t.cmp);
+ t.cmp64 = t.calculate_cmp64(t.get_state().counter.get(), t.regs.cmp);
t.last = CLOCK_VIRTUAL.get_ns() - NANOSECONDS_PER_SECOND;
}
@@ -929,6 +933,22 @@ impl ObjectImpl for HPETState {
})
.build();
+// In fact, the version_id and minimum_version_id of HPETTimerRegisters are
+// unrelated to HPETTimer's version IDs and do not affect compatibility.
+impl_vmstate_struct!(
+ HPETTimerRegisters,
+ VMStateDescriptionBuilder::<HPETTimerRegisters>::new()
+ .name(c"hpet_timer/regs")
+ .version_id(1)
+ .minimum_version_id(1)
+ .fields(vmstate_fields! {
+ vmstate_of!(HPETTimerRegisters, config),
+ vmstate_of!(HPETTimerRegisters, cmp),
+ vmstate_of!(HPETTimerRegisters, fsb),
+ })
+ .build()
+);
+
const VMSTATE_HPET_TIMER: VMStateDescription<HPETTimer> =
VMStateDescriptionBuilder::<HPETTimer>::new()
.name(c"hpet_timer")
@@ -936,14 +956,13 @@ impl ObjectImpl for HPETState {
.minimum_version_id(1)
.fields(vmstate_fields! {
vmstate_of!(HPETTimer, index),
- vmstate_of!(HPETTimer, config),
- vmstate_of!(HPETTimer, cmp),
- vmstate_of!(HPETTimer, fsb),
+ vmstate_of!(HPETTimer, regs),
vmstate_of!(HPETTimer, period),
vmstate_of!(HPETTimer, wrap_flag),
vmstate_of!(HPETTimer, qemu_timer),
})
.build();
+
impl_vmstate_struct!(HPETTimer, VMSTATE_HPET_TIMER);
const VALIDATE_TIMERS_NAME: &CStr = c"num_timers must match";
--
2.34.1
* Re: [PATCH 10/22] rust/hpet: Abstract HPETTimerRegisters struct
2025-11-13 5:19 ` [PATCH 10/22] rust/hpet: Abstract HPETTimerRegisters struct Zhao Liu
@ 2025-11-13 11:24 ` Paolo Bonzini
2025-11-14 4:37 ` Zhao Liu
0 siblings, 1 reply; 30+ messages in thread
From: Paolo Bonzini @ 2025-11-13 11:24 UTC (permalink / raw)
To: Zhao Liu, Manos Pitsidianakis, Marc-André Lureau
Cc: Igor Mammedov, qemu-devel, qemu-rust
On 11/13/25 06:19, Zhao Liu wrote:
> Place all of timer N's registers in a HPETTimerRegisters struct.
>
> This allows all Timer N registers to be grouped together with the global
> registers and managed using a single lock (BqlRefCell or Mutex) in the
> future. It also makes it easier to apply the ToMigrationState macro.
This is pretty much the crucial patch in the series and it's the only
one that needs more work, or some fixup at the end.
In particular, more fields of HPETTimer need to be moved to the
HPETTimerRegisters. It's possible that it would be a problem to move
the timer itself inside the mutex but, at least, the HPETTimer could be
changed to just
pub struct HPETTimer {
timer: QemuTimer,
state: NonNull<HPETState>,
index: u8,
}
as in the patch included at the end (compile-tested only). Then, the
BqlRefCell<HPETTimer> can be changed to just HPETTimer because all the
fields handle their own interior mutability.
Preserving the old migration format can then be solved in two ways:
1) with a handwritten ToMigrationStateShared implementation for
HPETTimer (and marking the tn_regs array as #[migration_state(omit)])
2) by also adding num_timers_save and the timer's expiration to
HPETRegisters and HPETTimerRegisters, respectively.
I think I prefer the former, but I haven't written the code so it's
not my choice. :)
I'm okay with doing these changes on top of these patches as well.
Paolo
diff --git a/rust/hw/timer/hpet/src/device.rs b/rust/hw/timer/hpet/src/device.rs
index 75a6fd8a050..5ff02c01539 100644
--- a/rust/hw/timer/hpet/src/device.rs
+++ b/rust/hw/timer/hpet/src/device.rs
@@ -172,7 +172,7 @@ const fn deactivating_bit(old: u64, new: u64, shift: usize) -> bool {
}
fn timer_handler(timer_cell: &BqlRefCell<HPETTimer>) {
- let mut t = timer_cell.borrow_mut();
+ let t = timer_cell.borrow();
// SAFETY: state field is valid after timer initialization.
let mut regs = unsafe { t.state.as_ref() }.regs.lock().unwrap();
t.callback(&mut regs)
@@ -181,6 +181,9 @@ fn timer_handler(timer_cell: &BqlRefCell<HPETTimer>) {
#[repr(C)]
#[derive(Debug, Default, ToMigrationState)]
pub struct HPETTimerRegisters {
+ // Only needed for migration
+ index: u8,
+
// Memory-mapped, software visible timer registers
/// Timer N Configuration and Capability Register
config: u64,
@@ -188,9 +191,35 @@ pub struct HPETTimerRegisters {
cmp: u64,
/// Timer N FSB Interrupt Route Register
fsb: u64,
+
+ // Hidden register state
+ /// comparator (extended to counter width)
+ cmp64: u64,
+ /// Last value written to comparator
+ period: u64,
+ /// timer pop will indicate wrap for one-shot 32-bit
+ /// mode. Next pop will be actual timer expiration.
+ wrap_flag: u8,
+ /// last value armed, to avoid timer storms
+ last: u64,
}
impl HPETTimerRegisters {
+ /// calculate next value of the general counter that matches the
+ /// target (either entirely, or the low 32-bit only depending on
+ /// the timer mode).
+ fn update_cmp64(&mut self, cur_tick: u64) {
+ self.cmp64 = if self.is_32bit_mod() {
+ let mut result: u64 = cur_tick.deposit(0, 32, self.cmp);
+ if result < cur_tick {
+ result += 0x100000000;
+ }
+ result
+ } else {
+ self.cmp
+ }
+ }
+
const fn is_fsb_route_enabled(&self) -> bool {
self.config & (1 << HPET_TN_CFG_FSB_ENABLE_SHIFT) != 0
}
@@ -235,17 +264,6 @@ pub struct HPETTimer {
qemu_timer: Timer,
/// timer block abstraction containing this timer
state: NonNull<HPETState>,
-
- // Hidden register state
- /// comparator (extended to counter width)
- cmp64: u64,
- /// Last value written to comparator
- period: u64,
- /// timer pop will indicate wrap for one-shot 32-bit
- /// mode. Next pop will be actual timer expiration.
- wrap_flag: u8,
- /// last value armed, to avoid timer storms
- last: u64,
}
// SAFETY: Sync is not automatically derived due to the `state` field,
@@ -260,10 +278,6 @@ fn new(index: u8, state: *const HPETState) -> HPETTimer {
// is initialized below.
qemu_timer: unsafe { Timer::new() },
state: NonNull::new(state.cast_mut()).unwrap(),
- cmp64: 0,
- period: 0,
- wrap_flag: 0,
- last: 0,
}
}
@@ -291,22 +305,6 @@ fn is_int_active(&self, regs: &HPETRegisters) -> bool {
regs.is_timer_int_active(self.index.into())
}
- /// calculate next value of the general counter that matches the
- /// target (either entirely, or the low 32-bit only depending on
- /// the timer mode).
- fn calculate_cmp64(&self, regs: &MutexGuard<HPETRegisters>, cur_tick: u64, target: u64) -> u64 {
- let tn_regs = ®s.tn_regs[self.index as usize];
- if tn_regs.is_32bit_mod() {
- let mut result: u64 = cur_tick.deposit(0, 32, target);
- if result < cur_tick {
- result += 0x100000000;
- }
- result
- } else {
- target
- }
- }
-
fn get_int_route(&self, regs: &MutexGuard<HPETRegisters>) -> usize {
if self.index <= 1 && regs.is_legacy_mode() {
// If LegacyReplacement Route bit is set, HPET specification requires
@@ -371,35 +369,34 @@ fn update_irq(&self, regs: &mut MutexGuard<HPETRegisters>, set: bool) {
self.set_irq(regs, set);
}
- fn arm_timer(&mut self, regs: &MutexGuard<HPETRegisters>, tick: u64) {
- let tn_regs = ®s.tn_regs[self.index as usize];
+ fn arm_timer(&self, tn_regs: &mut HPETTimerRegisters, tick: u64) {
let mut ns = self.get_state().get_ns(tick);
// Clamp period to reasonable min value (1 us)
- if tn_regs.is_periodic() && ns - self.last < 1000 {
- ns = self.last + 1000;
+ if tn_regs.is_periodic() && ns - tn_regs.last < 1000 {
+ ns = tn_regs.last + 1000;
}
- self.last = ns;
- self.qemu_timer.modify(self.last);
+ tn_regs.last = ns;
+ self.qemu_timer.modify(tn_regs.last);
}
- fn set_timer(&mut self, regs: &MutexGuard<HPETRegisters>) {
- let tn_regs = ®s.tn_regs[self.index as usize];
+ fn set_timer(&self, regs: &mut MutexGuard<HPETRegisters>) {
+ let tn_regs = &mut regs.tn_regs[self.index as usize];
let cur_tick: u64 = self.get_state().get_ticks();
- self.wrap_flag = 0;
- self.cmp64 = self.calculate_cmp64(regs, cur_tick, tn_regs.cmp);
+ tn_regs.wrap_flag = 0;
+ tn_regs.update_cmp64(cur_tick);
if tn_regs.is_32bit_mod() {
// HPET spec says in one-shot 32-bit mode, generate an interrupt when
// counter wraps in addition to an interrupt with comparator match.
- if !tn_regs.is_periodic() && self.cmp64 > hpet_next_wrap(cur_tick) {
- self.wrap_flag = 1;
- self.arm_timer(regs, hpet_next_wrap(cur_tick));
+ if !tn_regs.is_periodic() && tn_regs.cmp64 > hpet_next_wrap(cur_tick) {
+ tn_regs.wrap_flag = 1;
+ self.arm_timer(tn_regs, hpet_next_wrap(cur_tick));
return;
}
}
- self.arm_timer(regs, self.cmp64);
+ self.arm_timer(tn_regs, tn_regs.cmp64);
}
fn del_timer(&self, regs: &mut MutexGuard<HPETRegisters>) {
@@ -440,7 +437,7 @@ fn prepare_tn_cfg_reg_new(
/// Configuration and Capability Register
fn set_tn_cfg_reg(
- &mut self,
+ &self,
regs: &mut MutexGuard<HPETRegisters>,
shift: u32,
len: u32,
@@ -462,7 +459,7 @@ fn set_tn_cfg_reg(
let tn_regs = &mut regs.tn_regs[self.index as usize];
if tn_regs.is_32bit_mod() {
tn_regs.cmp = u64::from(tn_regs.cmp as u32); // truncate!
- self.period = u64::from(self.period as u32); // truncate!
+ tn_regs.period = u64::from(tn_regs.period as u32); // truncate!
}
if regs.is_hpet_enabled() {
@@ -472,7 +469,7 @@ fn set_tn_cfg_reg(
/// Comparator Value Register
fn set_tn_cmp_reg(
- &mut self,
+ &self,
regs: &mut MutexGuard<HPETRegisters>,
shift: u32,
len: u32,
@@ -499,7 +496,7 @@ fn set_tn_cmp_reg(
}
if tn_regs.is_periodic() {
- self.period = self.period.deposit(shift, length, value);
+ tn_regs.period = tn_regs.period.deposit(shift, length, value);
}
tn_regs.clear_valset();
@@ -520,10 +517,11 @@ fn set_tn_fsb_route_reg(
tn_regs.fsb = tn_regs.fsb.deposit(shift, len, val);
}
- fn reset(&mut self, regs: &mut MutexGuard<HPETRegisters>) {
+ fn reset(&self, regs: &mut MutexGuard<HPETRegisters>) {
self.del_timer(regs);
let tn_regs = &mut regs.tn_regs[self.index as usize];
+ tn_regs.index = self.index;
tn_regs.cmp = u64::MAX; // Comparator Match Registers reset to all 1's.
tn_regs.config = (1 << HPET_TN_CFG_PERIODIC_CAP_SHIFT) | (1 << HPET_TN_CFG_SIZE_CAP_SHIFT);
if self.get_state().has_msi_flag() {
@@ -532,29 +530,28 @@ fn reset(&mut self, regs: &mut MutexGuard<HPETRegisters>) {
// advertise availability of ioapic int
tn_regs.config |=
(u64::from(self.get_state().int_route_cap)) << HPET_TN_CFG_INT_ROUTE_CAP_SHIFT;
- self.period = 0;
- self.wrap_flag = 0;
+ tn_regs.period = 0;
+ tn_regs.wrap_flag = 0;
}
/// timer expiration callback
- fn callback(&mut self, regs: &mut MutexGuard<HPETRegisters>) {
+ fn callback(&self, regs: &mut MutexGuard<HPETRegisters>) {
let tn_regs = &mut regs.tn_regs[self.index as usize];
- let period: u64 = self.period;
let cur_tick: u64 = self.get_state().get_ticks();
- if tn_regs.is_periodic() && period != 0 {
- while hpet_time_after(cur_tick, self.cmp64) {
- self.cmp64 += period;
+ if tn_regs.is_periodic() && tn_regs.period != 0 {
+ while hpet_time_after(cur_tick, tn_regs.cmp64) {
+ tn_regs.cmp64 += tn_regs.period;
}
if tn_regs.is_32bit_mod() {
- tn_regs.cmp = u64::from(self.cmp64 as u32); // truncate!
+ tn_regs.cmp = u64::from(tn_regs.cmp64 as u32); // truncate!
} else {
- tn_regs.cmp = self.cmp64;
+ tn_regs.cmp = tn_regs.cmp64;
}
- self.arm_timer(regs, self.cmp64);
- } else if self.wrap_flag != 0 {
- self.wrap_flag = 0;
- self.arm_timer(regs, self.cmp64);
+ self.arm_timer(tn_regs, tn_regs.cmp64);
+ } else if tn_regs.wrap_flag != 0 {
+ tn_regs.wrap_flag = 0;
+ self.arm_timer(tn_regs, tn_regs.cmp64);
}
self.update_irq(regs, true);
}
@@ -571,7 +568,7 @@ fn read(&self, target: TimerRegister, regs: &MutexGuard<HPETRegisters>) -> u64 {
}
fn write(
- &mut self,
+ &self,
target: TimerRegister,
regs: &mut MutexGuard<HPETRegisters>,
value: u64,
@@ -731,7 +728,7 @@ fn set_cfg_reg(&self, regs: &mut MutexGuard<HPETRegisters>, shift: u32, len: u32
for timer in self.timers.iter().take(self.num_timers) {
// Protect timer in lockless IO case which would not lock BQL.
bql::with_guard(|| {
- let mut t = timer.borrow_mut();
+ let t = timer.borrow_mut();
let id = t.index as usize;
let tn_regs = ®s.tn_regs[id];
@@ -992,15 +989,14 @@ fn pre_save(&self) -> Result<(), migration::Infallible> {
}
fn post_load(&self, _version_id: u8) -> Result<(), migration::Infallible> {
- let regs = self.regs.lock().unwrap();
+ let mut regs = self.regs.lock().unwrap();
+ let cnt = regs.counter;
- for timer in self.timers.iter().take(self.num_timers) {
- let mut t = timer.borrow_mut();
- let cnt = regs.counter;
- let cmp = regs.tn_regs[t.index as usize].cmp;
+ for index in 0..self.num_timers {
+ let tn_regs = &mut regs.tn_regs[index];
- t.cmp64 = t.calculate_cmp64(®s, cnt, cmp);
- t.last = CLOCK_VIRTUAL.get_ns() - NANOSECONDS_PER_SECOND;
+ tn_regs.update_cmp64(cnt);
+ tn_regs.last = CLOCK_VIRTUAL.get_ns() - NANOSECONDS_PER_SECOND;
}
// Recalculate the offset between the main counter and guest time
@@ -1072,9 +1068,12 @@ impl ObjectImpl for HPETState {
.version_id(1)
.minimum_version_id(1)
.fields(vmstate_fields! {
+ vmstate_of!(HPETTimerRegistersMigration, index),
vmstate_of!(HPETTimerRegistersMigration, config),
vmstate_of!(HPETTimerRegistersMigration, cmp),
vmstate_of!(HPETTimerRegistersMigration, fsb),
+ vmstate_of!(HPETTimerRegistersMigration, period),
+ vmstate_of!(HPETTimerRegistersMigration, wrap_flag),
})
.build()
);
@@ -1085,9 +1084,6 @@ impl ObjectImpl for HPETState {
.version_id(2)
.minimum_version_id(2)
.fields(vmstate_fields! {
- vmstate_of!(HPETTimer, index),
- vmstate_of!(HPETTimer, period),
- vmstate_of!(HPETTimer, wrap_flag),
vmstate_of!(HPETTimer, qemu_timer),
})
.build();
* Re: [PATCH 10/22] rust/hpet: Abstract HPETTimerRegisters struct
2025-11-13 11:24 ` Paolo Bonzini
@ 2025-11-14 4:37 ` Zhao Liu
2025-11-15 7:54 ` Paolo Bonzini
0 siblings, 1 reply; 30+ messages in thread
From: Zhao Liu @ 2025-11-14 4:37 UTC (permalink / raw)
To: Paolo Bonzini
Cc: Manos Pitsidianakis, Marc-André Lureau, Igor Mammedov,
qemu-devel, qemu-rust, Zhao Liu
On Thu, Nov 13, 2025 at 12:24:07PM +0100, Paolo Bonzini wrote:
> Date: Thu, 13 Nov 2025 12:24:07 +0100
> From: Paolo Bonzini <pbonzini@redhat.com>
> Subject: Re: [PATCH 10/22] rust/hpet: Abstract HPETTimerRegisters struct
>
> On 11/13/25 06:19, Zhao Liu wrote:
> > Place all of timer N's registers in a HPETTimerRegisters struct.
> >
> > This allows all Timer N registers to be grouped together with the global
> > registers and managed using a single lock (BqlRefCell or Mutex) in the
> > future. It also makes it easier to apply the ToMigrationState macro.
>
> This is pretty much the crucial patch in the series and it's the only
> one that needs more work, or some fixup at the end.
>
> In particular, more fields of HPETTimer need to be moved to the
> HPETTimerRegisters. It's possible that it would be a problem to move
> the timer itself inside the mutex but, at least, the HPETTimer could be
> changed to just
>
> pub struct HPETTimer {
> timer: QemuTimer,
> state: NonNull<HPETState>,
> index: u8,
> }
Good idea!
> as in the patch included at the end (compile-tested only). Then, the
> BqlRefCell<HPETTimer> can be changed to just HPETTimer because all the
> fields handle their interior-mutable fields.
Yes, this will reduce the BQL context in lockless IO a lot. And I'll
base my work on your patch to extract HPETTimer from BqlRefCell.
> Preserving the old migration format can then be solved in two ways:
>
> 1) with a handwritten ToMigrationStateShared implementation for
> HPETTimer (and marking the tn_regs array as #[migration_state(omit)])
Yes, compared with 2), I also think this is the better choice, which
looks more common and could be a good example for other devices.
> 2) by also adding num_timers_save and the timer's expiration to
> HPETRegisters and HPETTimerRegisters, respectively.
>
> I think I prefer the former, but I haven't written the code so it's
> not my choice. :)
>
> I'm okay with doing these changes on top of these patches as well.
Thank you!
The code attached looks good. Only a minor question below:
> @@ -181,6 +181,9 @@ fn timer_handler(timer_cell: &BqlRefCell<HPETTimer>) {
> #[repr(C)]
> #[derive(Debug, Default, ToMigrationState)]
> pub struct HPETTimerRegisters {
> + // Only needed for migration
> + index: u8,
I didn't get why we need to migrate this "index" instead of
HPETTimer::index?
Anyway, if we continue to keep index in HPETTimerRegisters, we can make
it have usize type with a "#[migration_state(try_into(u8))]" property.
If we just use HPETTimer::index for migration, I think it's possible to
do the type conversion in a handwritten ToMigrationStateShared.
Thanks,
Zhao
* Re: [PATCH 10/22] rust/hpet: Abstract HPETTimerRegisters struct
2025-11-14 4:37 ` Zhao Liu
@ 2025-11-15 7:54 ` Paolo Bonzini
0 siblings, 0 replies; 30+ messages in thread
From: Paolo Bonzini @ 2025-11-15 7:54 UTC (permalink / raw)
To: Zhao Liu
Cc: Manos Pitsidianakis, Marc-André Lureau, Igor Mammedov,
qemu-devel, open list:Rust-related patc...
On Fri, 14 Nov 2025 at 05:15, Zhao Liu <zhao1.liu@intel.com> wrote:
> Yes, this will reduce BQL context in lockless IO a lot. And I'll based
> on your patch to extract HPETTimer from BqlRefCell.
>
> > Preserving the old migration format can then be solved in two ways:
> >
> > 1) with a handwritten ToMigrationStateShared implementation for
> > HPETTimer (and marking the tn_regs array as #[migration_state(omit)])
>
> Yes, compared with 2), I also this is the better choice, which looks
> more common and could be an good example for other device.
>
> > 2) by also adding num_timers_save and the timer's expiration to
> > HPETRegisters and HPETTimerRegisters, respectively.
> >
> > I think I prefer the former, but I haven't written the code so it's
> > not my choice. :)
>
Yes, it is much better.
> @@ -181,6 +181,9 @@ fn timer_handler(timer_cell: &BqlRefCell<HPETTimer>) {
> > #[repr(C)]
> > #[derive(Debug, Default, ToMigrationState)]
> > pub struct HPETTimerRegisters {
> > + // Only needed for migration
> > + index: u8,
>
> I didn't get the point why we need to migrate this "index" instead of
> HPETTimer::index?
>
Just because it's still migrating HPETTimerRegisters, but yes it's not
necessary with the custom ToMigrationStateShared, as you write below.
Paolo
> Anyway, if we continue to keep index in HPETTimerRegisters, we can make
> it have usize type with a "#[migration_state(try_into(u8))]" property.
>
> If we just HPETTimer::index for migration, I think it's possible to do
> type convertion in handwritten ToMigrationStateShared.
>
> Thanks,
> Zhao
>
>
* [PATCH 11/22] rust/hpet: Make timer register accessors as methods of HPETTimerRegisters
2025-11-13 5:19 [PATCH 00/22] rust/hpet: Move towards lockless IO, partly Zhao Liu
` (9 preceding siblings ...)
2025-11-13 5:19 ` [PATCH 10/22] rust/hpet: Abstract HPETTimerRegisters struct Zhao Liu
@ 2025-11-13 5:19 ` Zhao Liu
2025-11-13 5:19 ` [PATCH 12/22] rust/hpet: Abstract HPETRegisters struct Zhao Liu
` (10 subsequent siblings)
21 siblings, 0 replies; 30+ messages in thread
From: Zhao Liu @ 2025-11-13 5:19 UTC (permalink / raw)
To: Paolo Bonzini, Manos Pitsidianakis, Marc-André Lureau
Cc: Igor Mammedov, qemu-devel, qemu-rust, Zhao Liu
Implement helper accessors as methods of HPETTimerRegisters. Then
HPETTimerRegisters can be accessed without going through HPETTimer or
HPETState.
In subsequent refactoring, HPETTimerRegisters will be maintained at the
HPETState level. However, accessing it through HPETState requires a
lock (the BQL or a mutex), which would cause troublesome nested-lock or
reentrancy issues.
Therefore, refactor the accessors of HPETTimerRegisters to bypass
HPETTimer or HPETState.
Signed-off-by: Zhao Liu <zhao1.liu@intel.com>
---
rust/hw/timer/hpet/src/device.rs | 108 ++++++++++++++++---------------
1 file changed, 55 insertions(+), 53 deletions(-)
diff --git a/rust/hw/timer/hpet/src/device.rs b/rust/hw/timer/hpet/src/device.rs
index c7c0987aeb71..13123c257522 100644
--- a/rust/hw/timer/hpet/src/device.rs
+++ b/rust/hw/timer/hpet/src/device.rs
@@ -192,6 +192,41 @@ pub struct HPETTimerRegisters {
fsb: u64,
}
+impl HPETTimerRegisters {
+ const fn is_fsb_route_enabled(&self) -> bool {
+ self.config & (1 << HPET_TN_CFG_FSB_ENABLE_SHIFT) != 0
+ }
+
+ const fn is_periodic(&self) -> bool {
+ self.config & (1 << HPET_TN_CFG_PERIODIC_SHIFT) != 0
+ }
+
+ const fn is_int_enabled(&self) -> bool {
+ self.config & (1 << HPET_TN_CFG_INT_ENABLE_SHIFT) != 0
+ }
+
+ const fn is_32bit_mod(&self) -> bool {
+ self.config & (1 << HPET_TN_CFG_32BIT_SHIFT) != 0
+ }
+
+ const fn is_valset_enabled(&self) -> bool {
+ self.config & (1 << HPET_TN_CFG_SETVAL_SHIFT) != 0
+ }
+
+ /// True if timer interrupt is level triggered; otherwise, edge triggered.
+ const fn is_int_level_triggered(&self) -> bool {
+ self.config & (1 << HPET_TN_CFG_INT_TYPE_SHIFT) != 0
+ }
+
+ const fn clear_valset(&mut self) {
+ self.config &= !(1 << HPET_TN_CFG_SETVAL_SHIFT);
+ }
+
+ const fn get_individual_route(&self) -> usize {
+ ((self.config & HPET_TN_CFG_INT_ROUTE_MASK) >> HPET_TN_CFG_INT_ROUTE_SHIFT) as usize
+ }
+}
+
/// HPET Timer Abstraction
#[repr(C)]
#[derive(Debug)]
@@ -254,40 +289,11 @@ fn is_int_active(&self) -> bool {
self.get_state().is_timer_int_active(self.index.into())
}
- const fn is_fsb_route_enabled(&self) -> bool {
- self.regs.config & (1 << HPET_TN_CFG_FSB_ENABLE_SHIFT) != 0
- }
-
- const fn is_periodic(&self) -> bool {
- self.regs.config & (1 << HPET_TN_CFG_PERIODIC_SHIFT) != 0
- }
-
- const fn is_int_enabled(&self) -> bool {
- self.regs.config & (1 << HPET_TN_CFG_INT_ENABLE_SHIFT) != 0
- }
-
- const fn is_32bit_mod(&self) -> bool {
- self.regs.config & (1 << HPET_TN_CFG_32BIT_SHIFT) != 0
- }
-
- const fn is_valset_enabled(&self) -> bool {
- self.regs.config & (1 << HPET_TN_CFG_SETVAL_SHIFT) != 0
- }
-
- fn clear_valset(&mut self) {
- self.regs.config &= !(1 << HPET_TN_CFG_SETVAL_SHIFT);
- }
-
- /// True if timer interrupt is level triggered; otherwise, edge triggered.
- const fn is_int_level_triggered(&self) -> bool {
- self.regs.config & (1 << HPET_TN_CFG_INT_TYPE_SHIFT) != 0
- }
-
/// calculate next value of the general counter that matches the
/// target (either entirely, or the low 32-bit only depending on
/// the timer mode).
fn calculate_cmp64(&self, cur_tick: u64, target: u64) -> u64 {
- if self.is_32bit_mod() {
+ if self.regs.is_32bit_mod() {
let mut result: u64 = cur_tick.deposit(0, 32, target);
if result < cur_tick {
result += 0x100000000;
@@ -298,10 +304,6 @@ fn calculate_cmp64(&self, cur_tick: u64, target: u64) -> u64 {
}
}
- const fn get_individual_route(&self) -> usize {
- ((self.regs.config & HPET_TN_CFG_INT_ROUTE_MASK) >> HPET_TN_CFG_INT_ROUTE_SHIFT) as usize
- }
-
fn get_int_route(&self) -> usize {
if self.index <= 1 && self.get_state().is_legacy_mode() {
// If LegacyReplacement Route bit is set, HPET specification requires
@@ -323,15 +325,15 @@ fn get_int_route(&self) -> usize {
// ...
// If the LegacyReplacement Route bit is not set, the individual
// routing bits for each of the timers are used.
- self.get_individual_route()
+ self.regs.get_individual_route()
}
}
fn set_irq(&self, set: bool) {
let route = self.get_int_route();
- if set && self.is_int_enabled() && self.get_state().is_hpet_enabled() {
- if self.is_fsb_route_enabled() {
+ if set && self.regs.is_int_enabled() && self.get_state().is_hpet_enabled() {
+ if self.regs.is_fsb_route_enabled() {
// SAFETY:
// the parameters are valid.
unsafe {
@@ -343,12 +345,12 @@ fn set_irq(&self, set: bool) {
null_mut(),
);
}
- } else if self.is_int_level_triggered() {
+ } else if self.regs.is_int_level_triggered() {
self.get_state().irqs[route].raise();
} else {
self.get_state().irqs[route].pulse();
}
- } else if !self.is_fsb_route_enabled() {
+ } else if !self.regs.is_fsb_route_enabled() {
self.get_state().irqs[route].lower();
}
}
@@ -358,7 +360,7 @@ fn update_irq(&self, set: bool) {
// still operate and generate appropriate status bits, but
// will not cause an interrupt"
self.get_state()
- .update_int_status(self.index.into(), set && self.is_int_level_triggered());
+ .update_int_status(self.index.into(), set && self.regs.is_int_level_triggered());
self.set_irq(set);
}
@@ -366,7 +368,7 @@ fn arm_timer(&mut self, tick: u64) {
let mut ns = self.get_state().get_ns(tick);
// Clamp period to reasonable min value (1 us)
- if self.is_periodic() && ns - self.last < 1000 {
+ if self.regs.is_periodic() && ns - self.last < 1000 {
ns = self.last + 1000;
}
@@ -379,10 +381,10 @@ fn set_timer(&mut self) {
self.wrap_flag = 0;
self.cmp64 = self.calculate_cmp64(cur_tick, self.regs.cmp);
- if self.is_32bit_mod() {
+ if self.regs.is_32bit_mod() {
// HPET spec says in one-shot 32-bit mode, generate an interrupt when
// counter wraps in addition to an interrupt with comparator match.
- if !self.is_periodic() && self.cmp64 > hpet_next_wrap(cur_tick) {
+ if !self.regs.is_periodic() && self.cmp64 > hpet_next_wrap(cur_tick) {
self.wrap_flag = 1;
self.arm_timer(hpet_next_wrap(cur_tick));
return;
@@ -423,7 +425,7 @@ fn set_tn_cfg_reg(&mut self, shift: u32, len: u32, val: u64) {
self.update_irq(true);
}
- if self.is_32bit_mod() {
+ if self.regs.is_32bit_mod() {
self.regs.cmp = u64::from(self.regs.cmp as u32); // truncate!
self.period = u64::from(self.period as u32); // truncate!
}
@@ -439,7 +441,7 @@ fn set_tn_cmp_reg(&mut self, shift: u32, len: u32, val: u64) {
let mut value = val;
// TODO: Add trace point - trace_hpet_ram_write_tn_cmp(addr & 4)
- if self.is_32bit_mod() {
+ if self.regs.is_32bit_mod() {
// High 32-bits are zero, leave them untouched.
if shift != 0 {
// TODO: Add trace point - trace_hpet_ram_write_invalid_tn_cmp()
@@ -449,15 +451,15 @@ fn set_tn_cmp_reg(&mut self, shift: u32, len: u32, val: u64) {
value = u64::from(value as u32); // truncate!
}
- if !self.is_periodic() || self.is_valset_enabled() {
+ if !self.regs.is_periodic() || self.regs.is_valset_enabled() {
self.regs.cmp = self.regs.cmp.deposit(shift, length, value);
}
- if self.is_periodic() {
+ if self.regs.is_periodic() {
self.period = self.period.deposit(shift, length, value);
}
- self.clear_valset();
+ self.regs.clear_valset();
if self.get_state().is_hpet_enabled() {
self.set_timer();
}
@@ -488,11 +490,11 @@ fn callback(&mut self) {
let period: u64 = self.period;
let cur_tick: u64 = self.get_state().get_ticks();
- if self.is_periodic() && period != 0 {
+ if self.regs.is_periodic() && period != 0 {
while hpet_time_after(cur_tick, self.cmp64) {
self.cmp64 += period;
}
- if self.is_32bit_mod() {
+ if self.regs.is_32bit_mod() {
self.regs.cmp = u64::from(self.cmp64 as u32); // truncate!
} else {
self.regs.cmp = self.cmp64;
@@ -651,7 +653,7 @@ fn set_cfg_reg(&self, shift: u32, len: u32, val: u64) {
for timer in self.timers.iter().take(self.num_timers) {
let mut t = timer.borrow_mut();
- if t.is_int_enabled() && t.is_int_active() {
+ if t.regs.is_int_enabled() && t.is_int_active() {
t.update_irq(true);
}
t.set_timer();
@@ -810,8 +812,8 @@ fn read(&self, addr: hwaddr, size: u32) -> u64 {
// TODO: Add trace point - trace_hpet_ram_read(addr)
let HPETAddrDecode { shift, target, .. } = self.decode(addr, size);
- use GlobalRegister::*;
use DecodedRegister::*;
+ use GlobalRegister::*;
(match target {
Timer(timer, tn_target) => timer.borrow_mut().read(tn_target),
Global(CAP) => self.capability.get(), /* including HPET_PERIOD 0x004 */
@@ -837,8 +839,8 @@ fn write(&self, addr: hwaddr, value: u64, size: u32) {
let HPETAddrDecode { shift, len, target } = self.decode(addr, size);
// TODO: Add trace point - trace_hpet_ram_write(addr, value)
- use GlobalRegister::*;
use DecodedRegister::*;
+ use GlobalRegister::*;
match target {
Timer(timer, tn_target) => timer.borrow_mut().write(tn_target, value, shift, len),
Global(CAP) => {} // General Capabilities and ID Register: Read Only
--
2.34.1
^ permalink raw reply related [flat|nested] 30+ messages in thread* [PATCH 12/22] rust/hpet: Abstract HPETRegisters struct
2025-11-13 5:19 [PATCH 00/22] rust/hpet: Move towards lockless IO, partly Zhao Liu
` (10 preceding siblings ...)
2025-11-13 5:19 ` [PATCH 11/22] rust/hpet: Make timer register accessors as methods of HPETTimerRegisters Zhao Liu
@ 2025-11-13 5:19 ` Zhao Liu
2025-11-13 5:19 ` [PATCH 13/22] rust/hpet: Make global register accessors as methods of HPETRegisters Zhao Liu
` (9 subsequent siblings)
21 siblings, 0 replies; 30+ messages in thread
From: Zhao Liu @ 2025-11-13 5:19 UTC (permalink / raw)
To: Paolo Bonzini, Manos Pitsidianakis, Marc-André Lureau
Cc: Igor Mammedov, qemu-devel, qemu-rust, Zhao Liu
Place all HPET (global) timer block registers in a HPETRegisters struct,
and wrap the whole register struct with a BqlRefCell<>.
This allows elevating the BQL check from individual register accesses to
register-structure accesses, making the BQL check more coarse-grained. But
at the current step, just treat BqlRefCell like BqlCell while maintaining
fine-grained BQL protection. This approach helps introduce the
HPETRegisters struct cleanly without hitting "already borrowed" panics
around BqlRefCell.
The HPETRegisters struct makes it possible to replace BqlRefCell<> with
a Mutex<>, like the C side does.
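A minimal sketch of the access-pattern change (illustrative only; the
field names match the diff below):

    // Before: one BqlCell per register, where every get()/set() checks
    // the BQL individually.
    let cfg = self.config.get();

    // After: one BqlRefCell around the whole register block; a single
    // borrow() checks the BQL once, then fields are plain u64 accesses.
    let regs = self.regs.borrow();
    let cfg = regs.config;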
Signed-off-by: Zhao Liu <zhao1.liu@intel.com>
---
rust/hw/timer/hpet/src/device.rs | 112 +++++++++++++++++++------------
1 file changed, 68 insertions(+), 44 deletions(-)
diff --git a/rust/hw/timer/hpet/src/device.rs b/rust/hw/timer/hpet/src/device.rs
index 13123c257522..503ceee4c445 100644
--- a/rust/hw/timer/hpet/src/device.rs
+++ b/rust/hw/timer/hpet/src/device.rs
@@ -526,24 +526,29 @@ fn write(&mut self, target: TimerRegister, value: u64, shift: u32, len: u32) {
}
}
-/// HPET Event Timer Block Abstraction
#[repr(C)]
-#[derive(qom::Object, hwcore::Device)]
-pub struct HPETState {
- parent_obj: ParentField<SysBusDevice>,
- iomem: MemoryRegion,
-
+#[derive(Default)]
+pub struct HPETRegisters {
// HPET block Registers: Memory-mapped, software visible registers
/// General Capabilities and ID Register
- capability: BqlCell<u64>,
+ capability: u64,
/// General Configuration Register
- config: BqlCell<u64>,
+ config: u64,
/// General Interrupt Status Register
#[doc(alias = "isr")]
- int_status: BqlCell<u64>,
+ int_status: u64,
/// Main Counter Value Register
#[doc(alias = "hpet_counter")]
- counter: BqlCell<u64>,
+ counter: u64,
+}
+
+/// HPET Event Timer Block Abstraction
+#[repr(C)]
+#[derive(qom::Object, hwcore::Device)]
+pub struct HPETState {
+ parent_obj: ParentField<SysBusDevice>,
+ iomem: MemoryRegion,
+ regs: BqlRefCell<HPETRegisters>,
// Internal state
/// Capabilities that QEMU HPET supports.
@@ -585,15 +590,15 @@ const fn has_msi_flag(&self) -> bool {
}
fn is_legacy_mode(&self) -> bool {
- self.config.get() & (1 << HPET_CFG_LEG_RT_SHIFT) != 0
+ self.regs.borrow().config & (1 << HPET_CFG_LEG_RT_SHIFT) != 0
}
fn is_hpet_enabled(&self) -> bool {
- self.config.get() & (1 << HPET_CFG_ENABLE_SHIFT) != 0
+ self.regs.borrow().config & (1 << HPET_CFG_ENABLE_SHIFT) != 0
}
fn is_timer_int_active(&self, index: usize) -> bool {
- self.int_status.get() & (1 << index) != 0
+ self.regs.borrow().int_status & (1 << index) != 0
}
fn get_ticks(&self) -> u64 {
@@ -633,22 +638,22 @@ fn init_timers(this: &mut MaybeUninit<Self>) {
}
fn update_int_status(&self, index: u32, level: bool) {
- self.int_status
- .set(self.int_status.get().deposit(index, 1, u64::from(level)));
+ let mut regs = self.regs.borrow_mut();
+ regs.int_status = regs.int_status.deposit(index, 1, u64::from(level));
}
/// General Configuration Register
fn set_cfg_reg(&self, shift: u32, len: u32, val: u64) {
- let old_val = self.config.get();
+ let old_val = self.regs.borrow().config;
let mut new_val = old_val.deposit(shift, len, val);
new_val = hpet_fixup_reg(new_val, old_val, HPET_CFG_WRITE_MASK);
- self.config.set(new_val);
+ self.regs.borrow_mut().config = new_val;
if activating_bit(old_val, new_val, HPET_CFG_ENABLE_SHIFT) {
// Enable main counter and interrupt generation.
self.hpet_offset
- .set(ticks_to_ns(self.counter.get()) - CLOCK_VIRTUAL.get_ns());
+ .set(ticks_to_ns(self.regs.borrow().counter) - CLOCK_VIRTUAL.get_ns());
for timer in self.timers.iter().take(self.num_timers) {
let mut t = timer.borrow_mut();
@@ -660,7 +665,7 @@ fn set_cfg_reg(&self, shift: u32, len: u32, val: u64) {
}
} else if deactivating_bit(old_val, new_val, HPET_CFG_ENABLE_SHIFT) {
// Halt main counter and disable interrupt generation.
- self.counter.set(self.get_ticks());
+ self.regs.borrow_mut().counter = self.get_ticks();
for timer in self.timers.iter().take(self.num_timers) {
timer.borrow().del_timer();
@@ -683,7 +688,7 @@ fn set_cfg_reg(&self, shift: u32, len: u32, val: u64) {
/// General Interrupt Status Register: Read/Write Clear
fn set_int_status_reg(&self, shift: u32, _len: u32, val: u64) {
let new_val = val << shift;
- let cleared = new_val & self.int_status.get();
+ let cleared = new_val & self.regs.borrow().int_status;
for (index, timer) in self.timers.iter().take(self.num_timers).enumerate() {
if cleared & (1 << index) != 0 {
@@ -705,8 +710,8 @@ fn set_counter_reg(&self, shift: u32, len: u32, val: u64) {
// tick count (i.e., the previously calculated offset will
// not be changed as well).
}
- self.counter
- .set(self.counter.get().deposit(shift, len, val));
+ let mut regs = self.regs.borrow_mut();
+ regs.counter = regs.counter.deposit(shift, len, val);
}
unsafe fn init(mut this: ParentInit<Self>) {
@@ -749,14 +754,12 @@ fn realize(&self) -> util::Result<()> {
self.hpet_id.set(HPETFwConfig::assign_hpet_id()?);
// 64-bit General Capabilities and ID Register; LegacyReplacementRoute.
- self.capability.set(
- HPET_CAP_REV_ID_VALUE << HPET_CAP_REV_ID_SHIFT |
+ self.regs.borrow_mut().capability = HPET_CAP_REV_ID_VALUE << HPET_CAP_REV_ID_SHIFT |
1 << HPET_CAP_COUNT_SIZE_CAP_SHIFT |
1 << HPET_CAP_LEG_RT_CAP_SHIFT |
HPET_CAP_VENDER_ID_VALUE << HPET_CAP_VENDER_ID_SHIFT |
((self.num_timers - 1) as u64) << HPET_CAP_NUM_TIM_SHIFT | // indicate the last timer
- (HPET_CLK_PERIOD * FS_PER_NS) << HPET_CAP_CNT_CLK_PERIOD_SHIFT, // 10 ns
- );
+ (HPET_CLK_PERIOD * FS_PER_NS) << HPET_CAP_CNT_CLK_PERIOD_SHIFT; // 10 ns
self.init_gpio_in(2, HPETState::handle_legacy_irq);
self.init_gpio_out(from_ref(&self.pit_enabled));
@@ -768,17 +771,23 @@ fn reset_hold(&self, _type: ResetType) {
timer.borrow_mut().reset();
}
- self.counter.set(0);
- self.config.set(0);
+ // pit_enabled.set(true) will call irq handler and access regs
+ // again. We cannot borrow BqlRefCell twice at once. Minimize the
+ // scope of regs to ensure it will be dropped before irq callback.
+ {
+ let mut regs = self.regs.borrow_mut();
+ regs.counter = 0;
+ regs.config = 0;
+ HPETFwConfig::update_hpet_cfg(
+ self.hpet_id.get(),
+ regs.capability as u32,
+ self.mmio_addr(0).unwrap(),
+ );
+ }
+
self.pit_enabled.set(true);
self.hpet_offset.set(0);
- HPETFwConfig::update_hpet_cfg(
- self.hpet_id.get(),
- self.capability.get() as u32,
- self.mmio_addr(0).unwrap(),
- );
-
// to document that the RTC lowers its output on reset as well
self.rtc_irq_level.set(0);
}
@@ -816,16 +825,16 @@ fn read(&self, addr: hwaddr, size: u32) -> u64 {
use GlobalRegister::*;
(match target {
Timer(timer, tn_target) => timer.borrow_mut().read(tn_target),
- Global(CAP) => self.capability.get(), /* including HPET_PERIOD 0x004 */
- Global(CFG) => self.config.get(),
- Global(INT_STATUS) => self.int_status.get(),
+ Global(CAP) => self.regs.borrow().capability, /* including HPET_PERIOD 0x004 */
+ Global(CFG) => self.regs.borrow().config,
+ Global(INT_STATUS) => self.regs.borrow().int_status,
Global(COUNTER) => {
// TODO: Add trace point
// trace_hpet_ram_read_reading_counter(addr & 4, cur_tick)
if self.is_hpet_enabled() {
self.get_ticks()
} else {
- self.counter.get()
+ self.regs.borrow().counter
}
}
Unknown(_) => {
@@ -855,7 +864,7 @@ fn write(&self, addr: hwaddr, value: u64, size: u32) {
fn pre_save(&self) -> Result<(), migration::Infallible> {
if self.is_hpet_enabled() {
- self.counter.set(self.get_ticks());
+ self.regs.borrow_mut().counter = self.get_ticks();
}
/*
@@ -870,15 +879,16 @@ fn pre_save(&self) -> Result<(), migration::Infallible> {
fn post_load(&self, _version_id: u8) -> Result<(), migration::Infallible> {
for timer in self.timers.iter().take(self.num_timers) {
let mut t = timer.borrow_mut();
+ let cnt = t.get_state().regs.borrow().counter;
- t.cmp64 = t.calculate_cmp64(t.get_state().counter.get(), t.regs.cmp);
+ t.cmp64 = t.calculate_cmp64(cnt, t.regs.cmp);
t.last = CLOCK_VIRTUAL.get_ns() - NANOSECONDS_PER_SECOND;
}
// Recalculate the offset between the main counter and guest time
if !self.hpet_offset_saved {
self.hpet_offset
- .set(ticks_to_ns(self.counter.get()) - CLOCK_VIRTUAL.get_ns());
+ .set(ticks_to_ns(self.regs.borrow().counter) - CLOCK_VIRTUAL.get_ns());
}
Ok(())
@@ -969,6 +979,22 @@ impl ObjectImpl for HPETState {
const VALIDATE_TIMERS_NAME: &CStr = c"num_timers must match";
+// In fact, version_id and minimum_version_id for HPETRegisters are
+// unrelated to HPETState's version IDs. Does not affect compatibility.
+impl_vmstate_struct!(
+ HPETRegisters,
+ VMStateDescriptionBuilder::<HPETRegisters>::new()
+ .name(c"hpet/regs")
+ .version_id(1)
+ .minimum_version_id(1)
+ .fields(vmstate_fields! {
+ vmstate_of!(HPETRegisters, config),
+ vmstate_of!(HPETRegisters, int_status),
+ vmstate_of!(HPETRegisters, counter),
+ })
+ .build()
+);
+
const VMSTATE_HPET: VMStateDescription<HPETState> =
VMStateDescriptionBuilder::<HPETState>::new()
.name(c"hpet")
@@ -977,9 +1003,7 @@ impl ObjectImpl for HPETState {
.pre_save(&HPETState::pre_save)
.post_load(&HPETState::post_load)
.fields(vmstate_fields! {
- vmstate_of!(HPETState, config),
- vmstate_of!(HPETState, int_status),
- vmstate_of!(HPETState, counter),
+ vmstate_of!(HPETState, regs),
vmstate_of!(HPETState, num_timers_save),
vmstate_validate!(HPETState, VALIDATE_TIMERS_NAME, HPETState::validate_num_timers),
vmstate_of!(HPETState, timers[0 .. num_timers_save], HPETState::validate_num_timers).with_version_id(0),
--
2.34.1
^ permalink raw reply related [flat|nested] 30+ messages in thread* [PATCH 13/22] rust/hpet: Make global register accessors as methods of HPETRegisters
2025-11-13 5:19 [PATCH 00/22] rust/hpet: Move towards lockless IO, partly Zhao Liu
` (11 preceding siblings ...)
2025-11-13 5:19 ` [PATCH 12/22] rust/hpet: Abstract HPETRegisters struct Zhao Liu
@ 2025-11-13 5:19 ` Zhao Liu
2025-11-13 5:19 ` [PATCH 14/22] rust/hpet: Borrow HPETState.regs once in HPETState::post_load() Zhao Liu
` (8 subsequent siblings)
21 siblings, 0 replies; 30+ messages in thread
From: Zhao Liu @ 2025-11-13 5:19 UTC (permalink / raw)
To: Paolo Bonzini, Manos Pitsidianakis, Marc-André Lureau
Cc: Igor Mammedov, qemu-devel, qemu-rust, Zhao Liu
Implement helper accessors as methods of HPETRegisters. Then
HPETRegisters can be accessed without going through HPETState.
In subsequent refactoring, coarser-grained BQL lock protection will be
implemented. Specifically, BqlRefCell<HPETRegisters> will be borrowed
only once during MMIO accesses, and the scope of borrowed `regs` will
be extended to cover the entire MMIO access. Consequently, repeated
borrow() attempts within function calls will no longer be allowed.
Therefore, refactor the accessors of HPETRegisters to bypass HPETState,
which helps reduce borrow() calls deep in the call chain.
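For example (an illustrative sketch that matches the helpers added
below), a caller can now borrow once and reuse the guard:

    let regs = self.regs.borrow();
    if regs.is_hpet_enabled() && !regs.is_legacy_mode() {
        // no further self.regs.borrow() calls needed within this scope
    }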
Signed-off-by: Zhao Liu <zhao1.liu@intel.com>
---
rust/hw/timer/hpet/src/device.rs | 60 ++++++++++++++++++--------------
1 file changed, 34 insertions(+), 26 deletions(-)
diff --git a/rust/hw/timer/hpet/src/device.rs b/rust/hw/timer/hpet/src/device.rs
index 503ceee4c445..738ebb374fc9 100644
--- a/rust/hw/timer/hpet/src/device.rs
+++ b/rust/hw/timer/hpet/src/device.rs
@@ -286,7 +286,10 @@ fn get_state(&self) -> &HPETState {
}
fn is_int_active(&self) -> bool {
- self.get_state().is_timer_int_active(self.index.into())
+ self.get_state()
+ .regs
+ .borrow()
+ .is_timer_int_active(self.index.into())
}
/// calculate next value of the general counter that matches the
@@ -305,7 +308,7 @@ fn calculate_cmp64(&self, cur_tick: u64, target: u64) -> u64 {
}
fn get_int_route(&self) -> usize {
- if self.index <= 1 && self.get_state().is_legacy_mode() {
+ if self.index <= 1 && self.get_state().regs.borrow().is_legacy_mode() {
// If LegacyReplacement Route bit is set, HPET specification requires
// timer0 be routed to IRQ0 in NON-APIC or IRQ2 in the I/O APIC,
// timer1 be routed to IRQ8 in NON-APIC or IRQ8 in the I/O APIC.
@@ -332,7 +335,7 @@ fn get_int_route(&self) -> usize {
fn set_irq(&self, set: bool) {
let route = self.get_int_route();
- if set && self.regs.is_int_enabled() && self.get_state().is_hpet_enabled() {
+ if set && self.regs.is_int_enabled() && self.get_state().regs.borrow().is_hpet_enabled() {
if self.regs.is_fsb_route_enabled() {
// SAFETY:
// the parameters are valid.
@@ -430,7 +433,7 @@ fn set_tn_cfg_reg(&mut self, shift: u32, len: u32, val: u64) {
self.period = u64::from(self.period as u32); // truncate!
}
- if self.get_state().is_hpet_enabled() {
+ if self.get_state().regs.borrow().is_hpet_enabled() {
self.set_timer();
}
}
@@ -460,7 +463,7 @@ fn set_tn_cmp_reg(&mut self, shift: u32, len: u32, val: u64) {
}
self.regs.clear_valset();
- if self.get_state().is_hpet_enabled() {
+ if self.get_state().regs.borrow().is_hpet_enabled() {
self.set_timer();
}
}
@@ -542,6 +545,20 @@ pub struct HPETRegisters {
counter: u64,
}
+impl HPETRegisters {
+ fn is_legacy_mode(&self) -> bool {
+ self.config & (1 << HPET_CFG_LEG_RT_SHIFT) != 0
+ }
+
+ fn is_hpet_enabled(&self) -> bool {
+ self.config & (1 << HPET_CFG_ENABLE_SHIFT) != 0
+ }
+
+ fn is_timer_int_active(&self, index: usize) -> bool {
+ self.int_status & (1 << index) != 0
+ }
+}
+
/// HPET Event Timer Block Abstraction
#[repr(C)]
#[derive(qom::Object, hwcore::Device)]
@@ -589,18 +606,6 @@ const fn has_msi_flag(&self) -> bool {
self.flags & (1 << HPET_FLAG_MSI_SUPPORT_SHIFT) != 0
}
- fn is_legacy_mode(&self) -> bool {
- self.regs.borrow().config & (1 << HPET_CFG_LEG_RT_SHIFT) != 0
- }
-
- fn is_hpet_enabled(&self) -> bool {
- self.regs.borrow().config & (1 << HPET_CFG_ENABLE_SHIFT) != 0
- }
-
- fn is_timer_int_active(&self, index: usize) -> bool {
- self.regs.borrow().int_status & (1 << index) != 0
- }
-
fn get_ticks(&self) -> u64 {
ns_to_ticks(CLOCK_VIRTUAL.get_ns() + self.hpet_offset.get())
}
@@ -610,13 +615,14 @@ fn get_ns(&self, tick: u64) -> u64 {
}
fn handle_legacy_irq(&self, irq: u32, level: u32) {
+ let regs = self.regs.borrow();
if irq == HPET_LEGACY_PIT_INT {
- if !self.is_legacy_mode() {
+ if !regs.is_legacy_mode() {
self.irqs[0].set(level != 0);
}
} else {
self.rtc_irq_level.set(level);
- if !self.is_legacy_mode() {
+ if !regs.is_legacy_mode() {
self.irqs[RTC_ISA_IRQ].set(level != 0);
}
}
@@ -699,7 +705,8 @@ fn set_int_status_reg(&self, shift: u32, _len: u32, val: u64) {
/// Main Counter Value Register
fn set_counter_reg(&self, shift: u32, len: u32, val: u64) {
- if self.is_hpet_enabled() {
+ let mut regs = self.regs.borrow_mut();
+ if regs.is_hpet_enabled() {
// TODO: Add trace point -
// trace_hpet_ram_write_counter_write_while_enabled()
//
@@ -710,7 +717,6 @@ fn set_counter_reg(&self, shift: u32, len: u32, val: u64) {
// tick count (i.e., the previously calculated offset will
// not be changed as well).
}
- let mut regs = self.regs.borrow_mut();
regs.counter = regs.counter.deposit(shift, len, val);
}
@@ -829,12 +835,13 @@ fn read(&self, addr: hwaddr, size: u32) -> u64 {
Global(CFG) => self.regs.borrow().config,
Global(INT_STATUS) => self.regs.borrow().int_status,
Global(COUNTER) => {
+ let regs = self.regs.borrow();
// TODO: Add trace point
// trace_hpet_ram_read_reading_counter(addr & 4, cur_tick)
- if self.is_hpet_enabled() {
+ if regs.is_hpet_enabled() {
self.get_ticks()
} else {
- self.regs.borrow().counter
+ regs.counter
}
}
Unknown(_) => {
@@ -863,8 +870,9 @@ fn write(&self, addr: hwaddr, value: u64, size: u32) {
}
fn pre_save(&self) -> Result<(), migration::Infallible> {
- if self.is_hpet_enabled() {
- self.regs.borrow_mut().counter = self.get_ticks();
+ let mut regs = self.regs.borrow_mut();
+ if regs.is_hpet_enabled() {
+ regs.counter = self.get_ticks();
}
/*
@@ -899,7 +907,7 @@ fn is_rtc_irq_level_needed(&self) -> bool {
}
fn is_offset_needed(&self) -> bool {
- self.is_hpet_enabled() && self.hpet_offset_saved
+ self.regs.borrow().is_hpet_enabled() && self.hpet_offset_saved
}
fn validate_num_timers(&self, _version_id: u8) -> bool {
--
2.34.1
^ permalink raw reply related [flat|nested] 30+ messages in thread* [PATCH 14/22] rust/hpet: Borrow HPETState.regs once in HPETState::post_load()
2025-11-13 5:19 [PATCH 00/22] rust/hpet: Move towards lockless IO, partly Zhao Liu
` (12 preceding siblings ...)
2025-11-13 5:19 ` [PATCH 13/22] rust/hpet: Make global register accessors as methods of HPETRegisters Zhao Liu
@ 2025-11-13 5:19 ` Zhao Liu
2025-11-13 5:19 ` [PATCH 15/22] rust/hpet: Explicitly initialize complex fields in init() Zhao Liu
` (7 subsequent siblings)
21 siblings, 0 replies; 30+ messages in thread
From: Zhao Liu @ 2025-11-13 5:19 UTC (permalink / raw)
To: Paolo Bonzini, Manos Pitsidianakis, Marc-André Lureau
Cc: Igor Mammedov, qemu-devel, qemu-rust, Zhao Liu
Timers in post_load() access the same HPETState, which is the "self"
HPETState.
So there's no need to access HPETState from the child HPETTimer again and
again. Instead, just borrow HPETState.regs once at the beginning and cache
it; this saves some CPU cycles and reduces borrow() calls.
It's safe because post_load() is called under BQL protection, so there's
no other chance for anything else to modify the regs.
Signed-off-by: Zhao Liu <zhao1.liu@intel.com>
---
rust/hw/timer/hpet/src/device.rs | 6 ++++--
1 file changed, 4 insertions(+), 2 deletions(-)
diff --git a/rust/hw/timer/hpet/src/device.rs b/rust/hw/timer/hpet/src/device.rs
index 738ebb374fc9..bd673a1d0110 100644
--- a/rust/hw/timer/hpet/src/device.rs
+++ b/rust/hw/timer/hpet/src/device.rs
@@ -885,9 +885,11 @@ fn pre_save(&self) -> Result<(), migration::Infallible> {
}
fn post_load(&self, _version_id: u8) -> Result<(), migration::Infallible> {
+ let regs = self.regs.borrow();
+
for timer in self.timers.iter().take(self.num_timers) {
let mut t = timer.borrow_mut();
- let cnt = t.get_state().regs.borrow().counter;
+ let cnt = regs.counter;
t.cmp64 = t.calculate_cmp64(cnt, t.regs.cmp);
t.last = CLOCK_VIRTUAL.get_ns() - NANOSECONDS_PER_SECOND;
@@ -896,7 +898,7 @@ fn post_load(&self, _version_id: u8) -> Result<(), migration::Infallible> {
// Recalculate the offset between the main counter and guest time
if !self.hpet_offset_saved {
self.hpet_offset
- .set(ticks_to_ns(self.regs.borrow().counter) - CLOCK_VIRTUAL.get_ns());
+ .set(ticks_to_ns(regs.counter) - CLOCK_VIRTUAL.get_ns());
}
Ok(())
--
2.34.1
^ permalink raw reply related [flat|nested] 30+ messages in thread* [PATCH 15/22] rust/hpet: Explicitly initialize complex fields in init()
2025-11-13 5:19 [PATCH 00/22] rust/hpet: Move towards lockless IO, partly Zhao Liu
` (13 preceding siblings ...)
2025-11-13 5:19 ` [PATCH 14/22] rust/hpet: Borrow HPETState.regs once in HPETState::post_load() Zhao Liu
@ 2025-11-13 5:19 ` Zhao Liu
2025-11-13 5:19 ` [PATCH 16/22] rust/hpet: Pass &BqlRefCell<HPETRegisters> as argument during MMIO access Zhao Liu
` (6 subsequent siblings)
21 siblings, 0 replies; 30+ messages in thread
From: Zhao Liu @ 2025-11-13 5:19 UTC (permalink / raw)
To: Paolo Bonzini, Manos Pitsidianakis, Marc-André Lureau
Cc: Igor Mammedov, qemu-devel, qemu-rust, Zhao Liu
Explicitly initialize more fields that are complex structures.
Simple types (bool/u32/usize) can be omitted, since C has already
initialized the memory to all zeros and all-zeros is a valid
initialization for those types.
Previously such complex fields (InterruptSource/BqlCell/BqlRefCell) were
not explicitly initialized in init(), and that was fine, because simply
setting all memory to zero matches their default initialization
behavior. However, this behavior is not robust. When adding a new complex
struct or modifying the initial values of existing structs, this implicit
default can easily be broken.
Thus, do explicit initialization in HPET so it becomes a good example.
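For instance (an illustrative note on the diff below), a write such as

    uninit_field_mut!(*this, regs).write(Default::default());

is equivalent today to the all-zero memory that C provides, but it stays
correct even if the field's Default ever becomes non-zero.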
Signed-off-by: Zhao Liu <zhao1.liu@intel.com>
---
rust/hw/timer/hpet/src/device.rs | 12 ++++++++++++
1 file changed, 12 insertions(+)
diff --git a/rust/hw/timer/hpet/src/device.rs b/rust/hw/timer/hpet/src/device.rs
index bd673a1d0110..1bef94e560f6 100644
--- a/rust/hw/timer/hpet/src/device.rs
+++ b/rust/hw/timer/hpet/src/device.rs
@@ -737,6 +737,18 @@ unsafe fn init(mut this: ParentInit<Self>) {
HPET_REG_SPACE_LEN,
);
+ // Only consider members with more complex structures. C has already
+ // initialized memory to all zeros - simple types (bool/u32/usize) can
+ // rely on this without explicit initialization.
+ uninit_field_mut!(*this, regs).write(Default::default());
+ uninit_field_mut!(*this, hpet_offset).write(Default::default());
+ // Set null_mut for now and post_init() will fill it.
+ uninit_field_mut!(*this, irqs).write(Default::default());
+ uninit_field_mut!(*this, rtc_irq_level).write(Default::default());
+ uninit_field_mut!(*this, pit_enabled).write(Default::default());
+ uninit_field_mut!(*this, num_timers_save).write(Default::default());
+ uninit_field_mut!(*this, hpet_id).write(Default::default());
+
Self::init_timers(&mut this);
}
--
2.34.1
^ permalink raw reply related [flat|nested] 30+ messages in thread* [PATCH 16/22] rust/hpet: Pass &BqlRefCell<HPETRegisters> as argument during MMIO access
2025-11-13 5:19 [PATCH 00/22] rust/hpet: Move towards lockless IO, partly Zhao Liu
` (14 preceding siblings ...)
2025-11-13 5:19 ` [PATCH 15/22] rust/hpet: Explicitly initialize complex fields in init() Zhao Liu
@ 2025-11-13 5:19 ` Zhao Liu
2025-11-13 5:19 ` [PATCH 17/22] rust/hpet: Maintain HPETTimerRegisters in HPETRegisters Zhao Liu
` (5 subsequent siblings)
21 siblings, 0 replies; 30+ messages in thread
From: Zhao Liu @ 2025-11-13 5:19 UTC (permalink / raw)
To: Paolo Bonzini, Manos Pitsidianakis, Marc-André Lureau
Cc: Igor Mammedov, qemu-devel, qemu-rust, Zhao Liu
Currently, in the HPETTimer context, the global registers are accessed by
dereferencing a HPETState raw pointer stored in a NonNull<>, and then
borrowing the BqlRefCell<>.
This blocks borrowing HPETRegisters once during MMIO access, and
furthermore prevents replacing BqlRefCell<> with Mutex<>.
Therefore, do not access the global registers through NonNull<HPETState>;
instead, pass &BqlRefCell<HPETRegisters> as an argument in function
calls within the MMIO access path.
But there's one special case, the timer handler, which still needs
to access HPETRegisters through NonNull<HPETState>. It's okay for now
since this case doesn't have any repeated borrow() or lock reentrancy
issues.
Signed-off-by: Zhao Liu <zhao1.liu@intel.com>
---
rust/hw/timer/hpet/src/device.rs | 160 +++++++++++++++++++------------
1 file changed, 97 insertions(+), 63 deletions(-)
diff --git a/rust/hw/timer/hpet/src/device.rs b/rust/hw/timer/hpet/src/device.rs
index 1bef94e560f6..13e88df66d6f 100644
--- a/rust/hw/timer/hpet/src/device.rs
+++ b/rust/hw/timer/hpet/src/device.rs
@@ -177,7 +177,10 @@ const fn deactivating_bit(old: u64, new: u64, shift: usize) -> bool {
}
fn timer_handler(timer_cell: &BqlRefCell<HPETTimer>) {
- timer_cell.borrow_mut().callback()
+ let mut t = timer_cell.borrow_mut();
+ // SAFETY: state field is valid after timer initialization.
+ let hpet_regs = &mut unsafe { t.state.as_mut() }.regs;
+ t.callback(hpet_regs)
}
#[repr(C)]
@@ -285,11 +288,8 @@ fn get_state(&self) -> &HPETState {
unsafe { self.state.as_ref() }
}
- fn is_int_active(&self) -> bool {
- self.get_state()
- .regs
- .borrow()
- .is_timer_int_active(self.index.into())
+ fn is_int_active(&self, hpet_regs: &BqlRefCell<HPETRegisters>) -> bool {
+ hpet_regs.borrow().is_timer_int_active(self.index.into())
}
/// calculate next value of the general counter that matches the
@@ -307,8 +307,8 @@ fn calculate_cmp64(&self, cur_tick: u64, target: u64) -> u64 {
}
}
- fn get_int_route(&self) -> usize {
- if self.index <= 1 && self.get_state().regs.borrow().is_legacy_mode() {
+ fn get_int_route(&self, hpet_regs: &BqlRefCell<HPETRegisters>) -> usize {
+ if self.index <= 1 && hpet_regs.borrow().is_legacy_mode() {
// If LegacyReplacement Route bit is set, HPET specification requires
// timer0 be routed to IRQ0 in NON-APIC or IRQ2 in the I/O APIC,
// timer1 be routed to IRQ8 in NON-APIC or IRQ8 in the I/O APIC.
@@ -332,10 +332,10 @@ fn get_int_route(&self) -> usize {
}
}
- fn set_irq(&self, set: bool) {
- let route = self.get_int_route();
+ fn set_irq(&self, hpet_regs: &BqlRefCell<HPETRegisters>, set: bool) {
+ let route = self.get_int_route(hpet_regs);
- if set && self.regs.is_int_enabled() && self.get_state().regs.borrow().is_hpet_enabled() {
+ if set && self.regs.is_int_enabled() && hpet_regs.borrow().is_hpet_enabled() {
if self.regs.is_fsb_route_enabled() {
// SAFETY:
// the parameters are valid.
@@ -358,13 +358,17 @@ fn set_irq(&self, set: bool) {
}
}
- fn update_irq(&self, set: bool) {
+ fn update_irq(&self, hpet_regs: &BqlRefCell<HPETRegisters>, set: bool) {
+ let mut regs = hpet_regs.borrow_mut();
// If Timer N Interrupt Enable bit is 0, "the timer will
// still operate and generate appropriate status bits, but
// will not cause an interrupt"
- self.get_state()
- .update_int_status(self.index.into(), set && self.regs.is_int_level_triggered());
- self.set_irq(set);
+ regs.int_status = regs.int_status.deposit(
+ self.index.into(),
+ 1,
+ u64::from(set && self.regs.is_int_level_triggered()),
+ );
+ self.set_irq(hpet_regs, set);
}
fn arm_timer(&mut self, tick: u64) {
@@ -396,20 +400,26 @@ fn set_timer(&mut self) {
self.arm_timer(self.cmp64);
}
- fn del_timer(&self) {
+ fn del_timer(&self, hpet_regs: &BqlRefCell<HPETRegisters>) {
// Just remove the timer from the timer_list without destroying
// this timer instance.
self.qemu_timer.delete();
- if self.is_int_active() {
+ if self.is_int_active(hpet_regs) {
// For level-triggered interrupt, this leaves interrupt status
// register set but lowers irq.
- self.update_irq(true);
+ self.update_irq(hpet_regs, true);
}
}
/// Configuration and Capability Register
- fn set_tn_cfg_reg(&mut self, shift: u32, len: u32, val: u64) {
+ fn set_tn_cfg_reg(
+ &mut self,
+ hpet_regs: &BqlRefCell<HPETRegisters>,
+ shift: u32,
+ len: u32,
+ val: u64,
+ ) {
// TODO: Add trace point - trace_hpet_ram_write_tn_cfg(addr & 4)
let old_val: u64 = self.regs.config;
let mut new_val: u64 = old_val.deposit(shift, len, val);
@@ -419,13 +429,15 @@ fn set_tn_cfg_reg(&mut self, shift: u32, len: u32, val: u64) {
if deactivating_bit(old_val, new_val, HPET_TN_CFG_INT_TYPE_SHIFT) {
// Do this before changing timer.regs.config; otherwise, if
// HPET_TN_FSB is set, update_irq will not lower the qemu_irq.
- self.update_irq(false);
+ self.update_irq(hpet_regs, false);
}
self.regs.config = new_val;
- if activating_bit(old_val, new_val, HPET_TN_CFG_INT_ENABLE_SHIFT) && self.is_int_active() {
- self.update_irq(true);
+ if activating_bit(old_val, new_val, HPET_TN_CFG_INT_ENABLE_SHIFT)
+ && self.is_int_active(hpet_regs)
+ {
+ self.update_irq(hpet_regs, true);
}
if self.regs.is_32bit_mod() {
@@ -433,13 +445,19 @@ fn set_tn_cfg_reg(&mut self, shift: u32, len: u32, val: u64) {
self.period = u64::from(self.period as u32); // truncate!
}
- if self.get_state().regs.borrow().is_hpet_enabled() {
+ if hpet_regs.borrow().is_hpet_enabled() {
self.set_timer();
}
}
/// Comparator Value Register
- fn set_tn_cmp_reg(&mut self, shift: u32, len: u32, val: u64) {
+ fn set_tn_cmp_reg(
+ &mut self,
+ hpet_regs: &BqlRefCell<HPETRegisters>,
+ shift: u32,
+ len: u32,
+ val: u64,
+ ) {
let mut length = len;
let mut value = val;
@@ -463,18 +481,24 @@ fn set_tn_cmp_reg(&mut self, shift: u32, len: u32, val: u64) {
}
self.regs.clear_valset();
- if self.get_state().regs.borrow().is_hpet_enabled() {
+ if hpet_regs.borrow().is_hpet_enabled() {
self.set_timer();
}
}
/// FSB Interrupt Route Register
- fn set_tn_fsb_route_reg(&mut self, shift: u32, len: u32, val: u64) {
+ fn set_tn_fsb_route_reg(
+ &mut self,
+ _hpet_regs: &BqlRefCell<HPETRegisters>,
+ shift: u32,
+ len: u32,
+ val: u64,
+ ) {
self.regs.fsb = self.regs.fsb.deposit(shift, len, val);
}
- fn reset(&mut self) {
- self.del_timer();
+ fn reset(&mut self, hpet_regs: &BqlRefCell<HPETRegisters>) {
+ self.del_timer(hpet_regs);
self.regs.cmp = u64::MAX; // Comparator Match Registers reset to all 1's.
self.regs.config =
(1 << HPET_TN_CFG_PERIODIC_CAP_SHIFT) | (1 << HPET_TN_CFG_SIZE_CAP_SHIFT);
@@ -489,7 +513,7 @@ fn reset(&mut self) {
}
/// timer expiration callback
- fn callback(&mut self) {
+ fn callback(&mut self, hpet_regs: &BqlRefCell<HPETRegisters>) {
let period: u64 = self.period;
let cur_tick: u64 = self.get_state().get_ticks();
@@ -507,10 +531,10 @@ fn callback(&mut self) {
self.wrap_flag = 0;
self.arm_timer(self.cmp64);
}
- self.update_irq(true);
+ self.update_irq(hpet_regs, true);
}
- const fn read(&self, target: TimerRegister) -> u64 {
+ const fn read(&self, target: TimerRegister, _hpet_regs: &BqlRefCell<HPETRegisters>) -> u64 {
use TimerRegister::*;
match target {
CFG => self.regs.config, // including interrupt capabilities
@@ -519,12 +543,19 @@ const fn read(&self, target: TimerRegister) -> u64 {
}
}
- fn write(&mut self, target: TimerRegister, value: u64, shift: u32, len: u32) {
+ fn write(
+ &mut self,
+ target: TimerRegister,
+ hpet_regs: &BqlRefCell<HPETRegisters>,
+ value: u64,
+ shift: u32,
+ len: u32,
+ ) {
use TimerRegister::*;
match target {
- CFG => self.set_tn_cfg_reg(shift, len, value),
- CMP => self.set_tn_cmp_reg(shift, len, value),
- ROUTE => self.set_tn_fsb_route_reg(shift, len, value),
+ CFG => self.set_tn_cfg_reg(hpet_regs, shift, len, value),
+ CMP => self.set_tn_cmp_reg(hpet_regs, shift, len, value),
+ ROUTE => self.set_tn_fsb_route_reg(hpet_regs, shift, len, value),
}
}
}
@@ -643,38 +674,33 @@ fn init_timers(this: &mut MaybeUninit<Self>) {
}
}
- fn update_int_status(&self, index: u32, level: bool) {
- let mut regs = self.regs.borrow_mut();
- regs.int_status = regs.int_status.deposit(index, 1, u64::from(level));
- }
-
/// General Configuration Register
- fn set_cfg_reg(&self, shift: u32, len: u32, val: u64) {
- let old_val = self.regs.borrow().config;
+ fn set_cfg_reg(&self, regs: &BqlRefCell<HPETRegisters>, shift: u32, len: u32, val: u64) {
+ let old_val = regs.borrow().config;
let mut new_val = old_val.deposit(shift, len, val);
new_val = hpet_fixup_reg(new_val, old_val, HPET_CFG_WRITE_MASK);
- self.regs.borrow_mut().config = new_val;
+ regs.borrow_mut().config = new_val;
if activating_bit(old_val, new_val, HPET_CFG_ENABLE_SHIFT) {
// Enable main counter and interrupt generation.
self.hpet_offset
- .set(ticks_to_ns(self.regs.borrow().counter) - CLOCK_VIRTUAL.get_ns());
+ .set(ticks_to_ns(regs.borrow().counter) - CLOCK_VIRTUAL.get_ns());
for timer in self.timers.iter().take(self.num_timers) {
let mut t = timer.borrow_mut();
- if t.regs.is_int_enabled() && t.is_int_active() {
- t.update_irq(true);
+ if t.regs.is_int_enabled() && t.is_int_active(regs) {
+ t.update_irq(regs, true);
}
t.set_timer();
}
} else if deactivating_bit(old_val, new_val, HPET_CFG_ENABLE_SHIFT) {
// Halt main counter and disable interrupt generation.
- self.regs.borrow_mut().counter = self.get_ticks();
+ regs.borrow_mut().counter = self.get_ticks();
for timer in self.timers.iter().take(self.num_timers) {
- timer.borrow().del_timer();
+ timer.borrow().del_timer(regs);
}
}
@@ -692,20 +718,26 @@ fn set_cfg_reg(&self, shift: u32, len: u32, val: u64) {
}
/// General Interrupt Status Register: Read/Write Clear
- fn set_int_status_reg(&self, shift: u32, _len: u32, val: u64) {
+ fn set_int_status_reg(
+ &self,
+ regs: &BqlRefCell<HPETRegisters>,
+ shift: u32,
+ _len: u32,
+ val: u64,
+ ) {
let new_val = val << shift;
- let cleared = new_val & self.regs.borrow().int_status;
+ let cleared = new_val & regs.borrow().int_status;
for (index, timer) in self.timers.iter().take(self.num_timers).enumerate() {
if cleared & (1 << index) != 0 {
- timer.borrow().update_irq(false);
+ timer.borrow().update_irq(regs, false);
}
}
}
/// Main Counter Value Register
- fn set_counter_reg(&self, shift: u32, len: u32, val: u64) {
- let mut regs = self.regs.borrow_mut();
+ fn set_counter_reg(&self, regs: &BqlRefCell<HPETRegisters>, shift: u32, len: u32, val: u64) {
+ let mut regs = regs.borrow_mut();
if regs.is_hpet_enabled() {
// TODO: Add trace point -
// trace_hpet_ram_write_counter_write_while_enabled()
@@ -786,7 +818,7 @@ fn realize(&self) -> util::Result<()> {
fn reset_hold(&self, _type: ResetType) {
for timer in self.timers.iter().take(self.num_timers) {
- timer.borrow_mut().reset();
+ timer.borrow_mut().reset(&self.regs);
}
// pit_enabled.set(true) will call irq handler and access regs
@@ -838,16 +870,17 @@ fn decode(&self, mut addr: hwaddr, size: u32) -> HPETAddrDecode<'_> {
fn read(&self, addr: hwaddr, size: u32) -> u64 {
// TODO: Add trace point - trace_hpet_ram_read(addr)
let HPETAddrDecode { shift, target, .. } = self.decode(addr, size);
+ let regs = &self.regs;
use DecodedRegister::*;
use GlobalRegister::*;
(match target {
- Timer(timer, tn_target) => timer.borrow_mut().read(tn_target),
- Global(CAP) => self.regs.borrow().capability, /* including HPET_PERIOD 0x004 */
- Global(CFG) => self.regs.borrow().config,
- Global(INT_STATUS) => self.regs.borrow().int_status,
+ Timer(timer, tn_target) => timer.borrow_mut().read(tn_target, regs),
+ Global(CAP) => regs.borrow().capability, /* including HPET_PERIOD 0x004 */
+ Global(CFG) => regs.borrow().config,
+ Global(INT_STATUS) => regs.borrow().int_status,
Global(COUNTER) => {
- let regs = self.regs.borrow();
+ let regs = regs.borrow();
// TODO: Add trace point
// trace_hpet_ram_read_reading_counter(addr & 4, cur_tick)
if regs.is_hpet_enabled() {
@@ -865,16 +898,17 @@ fn read(&self, addr: hwaddr, size: u32) -> u64 {
fn write(&self, addr: hwaddr, value: u64, size: u32) {
let HPETAddrDecode { shift, len, target } = self.decode(addr, size);
+ let regs = &self.regs;
// TODO: Add trace point - trace_hpet_ram_write(addr, value)
use DecodedRegister::*;
use GlobalRegister::*;
match target {
- Timer(timer, tn_target) => timer.borrow_mut().write(tn_target, value, shift, len),
+ Timer(timer, tn_target) => timer.borrow_mut().write(tn_target, regs, value, shift, len),
Global(CAP) => {} // General Capabilities and ID Register: Read Only
- Global(CFG) => self.set_cfg_reg(shift, len, value),
- Global(INT_STATUS) => self.set_int_status_reg(shift, len, value),
- Global(COUNTER) => self.set_counter_reg(shift, len, value),
+ Global(CFG) => self.set_cfg_reg(regs, shift, len, value),
+ Global(INT_STATUS) => self.set_int_status_reg(regs, shift, len, value),
+ Global(COUNTER) => self.set_counter_reg(regs, shift, len, value),
Unknown(_) => {
// TODO: Add trace point - trace_hpet_ram_write_invalid()
}
--
2.34.1
^ permalink raw reply related [flat|nested] 30+ messages in thread* [PATCH 17/22] rust/hpet: Maintain HPETTimerRegisters in HPETRegisters
2025-11-13 5:19 [PATCH 00/22] rust/hpet: Move towards lockless IO, partly Zhao Liu
` (15 preceding siblings ...)
2025-11-13 5:19 ` [PATCH 16/22] rust/hpet: Pass &BqlRefCell<HPETRegisters> as argument during MMIO access Zhao Liu
@ 2025-11-13 5:19 ` Zhao Liu
2025-11-13 5:19 ` [PATCH 18/22] rust/hpet: Borrow BqlRefCell<HPETRegisters> at top level Zhao Liu
` (4 subsequent siblings)
21 siblings, 0 replies; 30+ messages in thread
From: Zhao Liu @ 2025-11-13 5:19 UTC (permalink / raw)
To: Paolo Bonzini, Manos Pitsidianakis, Marc-André Lureau
Cc: Igor Mammedov, qemu-devel, qemu-rust, Zhao Liu
Lockless IO requires holding a single lock during MMIO access, so it's
necessary to maintain timer N's registers (HPETTimerRegisters) together
with the global registers in one place.
Therefore, move HPETTimerRegisters from HPETTimer into HPETRegisters, and
access the timer registers through the HPETRegisters struct across the
whole HPET code.
This changes HPETTimer and HPETRegisters, and the layout of the VMState
has changed, which makes migration incompatible with previous versions.
Thus, bump the version IDs in the VMStates of HPETState and HPETTimer.
The VMState version IDs of HPETRegisters don't need to change since
it's a newly added struct and its version IDs don't affect the
compatibility of HPETState's VMState.
Signed-off-by: Zhao Liu <zhao1.liu@intel.com>
---
rust/hw/timer/hpet/src/device.rs | 167 ++++++++++++++++++-------------
1 file changed, 99 insertions(+), 68 deletions(-)
diff --git a/rust/hw/timer/hpet/src/device.rs b/rust/hw/timer/hpet/src/device.rs
index 13e88df66d6f..462a09880b61 100644
--- a/rust/hw/timer/hpet/src/device.rs
+++ b/rust/hw/timer/hpet/src/device.rs
@@ -241,7 +241,6 @@ pub struct HPETTimer {
/// timer block abstraction containing this timer
state: NonNull<HPETState>,
- regs: HPETTimerRegisters,
// Hidden register state
/// comparator (extended to counter width)
cmp64: u64,
@@ -266,7 +265,6 @@ fn new(index: u8, state: *const HPETState) -> HPETTimer {
// is initialized below.
qemu_timer: unsafe { Timer::new() },
state: NonNull::new(state.cast_mut()).unwrap(),
- regs: Default::default(),
cmp64: 0,
period: 0,
wrap_flag: 0,
@@ -295,8 +293,14 @@ fn is_int_active(&self, hpet_regs: &BqlRefCell<HPETRegisters>) -> bool {
/// calculate next value of the general counter that matches the
/// target (either entirely, or the low 32-bit only depending on
/// the timer mode).
- fn calculate_cmp64(&self, cur_tick: u64, target: u64) -> u64 {
- if self.regs.is_32bit_mod() {
+ fn calculate_cmp64(
+ &self,
+ hpet_regs: &BqlRefCell<HPETRegisters>,
+ cur_tick: u64,
+ target: u64,
+ ) -> u64 {
+ let tn_regs = &hpet_regs.borrow().tn_regs[self.index as usize];
+ if tn_regs.is_32bit_mod() {
let mut result: u64 = cur_tick.deposit(0, 32, target);
if result < cur_tick {
result += 0x100000000;
@@ -308,7 +312,8 @@ fn calculate_cmp64(&self, cur_tick: u64, target: u64) -> u64 {
}
fn get_int_route(&self, hpet_regs: &BqlRefCell<HPETRegisters>) -> usize {
- if self.index <= 1 && hpet_regs.borrow().is_legacy_mode() {
+ let regs = hpet_regs.borrow();
+ if self.index <= 1 && regs.is_legacy_mode() {
// If LegacyReplacement Route bit is set, HPET specification requires
// timer0 be routed to IRQ0 in NON-APIC or IRQ2 in the I/O APIC,
// timer1 be routed to IRQ8 in NON-APIC or IRQ8 in the I/O APIC.
@@ -328,32 +333,34 @@ fn get_int_route(&self, hpet_regs: &BqlRefCell<HPETRegisters>) -> usize {
// ...
// If the LegacyReplacement Route bit is not set, the individual
// routing bits for each of the timers are used.
- self.regs.get_individual_route()
+ regs.tn_regs[self.index as usize].get_individual_route()
}
}
fn set_irq(&self, hpet_regs: &BqlRefCell<HPETRegisters>, set: bool) {
+ let regs = hpet_regs.borrow();
+ let tn_regs = ®s.tn_regs[self.index as usize];
let route = self.get_int_route(hpet_regs);
- if set && self.regs.is_int_enabled() && hpet_regs.borrow().is_hpet_enabled() {
- if self.regs.is_fsb_route_enabled() {
+ if set && tn_regs.is_int_enabled() && regs.is_hpet_enabled() {
+ if tn_regs.is_fsb_route_enabled() {
// SAFETY:
// the parameters are valid.
unsafe {
address_space_stl_le(
addr_of_mut!(address_space_memory),
- self.regs.fsb >> 32, // Timer N FSB int addr
- self.regs.fsb as u32, // Timer N FSB int value, truncate!
+ tn_regs.fsb >> 32, // Timer N FSB int addr
+ tn_regs.fsb as u32, // Timer N FSB int value, truncate!
MEMTXATTRS_UNSPECIFIED,
null_mut(),
);
}
- } else if self.regs.is_int_level_triggered() {
+ } else if tn_regs.is_int_level_triggered() {
self.get_state().irqs[route].raise();
} else {
self.get_state().irqs[route].pulse();
}
- } else if !self.regs.is_fsb_route_enabled() {
+ } else if !tn_regs.is_fsb_route_enabled() {
self.get_state().irqs[route].lower();
}
}
@@ -366,16 +373,17 @@ fn update_irq(&self, hpet_regs: &BqlRefCell<HPETRegisters>, set: bool) {
regs.int_status = regs.int_status.deposit(
self.index.into(),
1,
- u64::from(set && self.regs.is_int_level_triggered()),
+ u64::from(set && regs.tn_regs[self.index as usize].is_int_level_triggered()),
);
self.set_irq(hpet_regs, set);
}
- fn arm_timer(&mut self, tick: u64) {
+ fn arm_timer(&mut self, hpet_regs: &BqlRefCell<HPETRegisters>, tick: u64) {
+ let tn_regs = &hpet_regs.borrow().tn_regs[self.index as usize];
let mut ns = self.get_state().get_ns(tick);
// Clamp period to reasonable min value (1 us)
- if self.regs.is_periodic() && ns - self.last < 1000 {
+ if tn_regs.is_periodic() && ns - self.last < 1000 {
ns = self.last + 1000;
}
@@ -383,21 +391,22 @@ fn arm_timer(&mut self, tick: u64) {
self.qemu_timer.modify(self.last);
}
- fn set_timer(&mut self) {
+ fn set_timer(&mut self, hpet_regs: &BqlRefCell<HPETRegisters>) {
+ let tn_regs = &hpet_regs.borrow().tn_regs[self.index as usize];
let cur_tick: u64 = self.get_state().get_ticks();
self.wrap_flag = 0;
- self.cmp64 = self.calculate_cmp64(cur_tick, self.regs.cmp);
- if self.regs.is_32bit_mod() {
+ self.cmp64 = self.calculate_cmp64(hpet_regs, cur_tick, tn_regs.cmp);
+ if tn_regs.is_32bit_mod() {
// HPET spec says in one-shot 32-bit mode, generate an interrupt when
// counter wraps in addition to an interrupt with comparator match.
- if !self.regs.is_periodic() && self.cmp64 > hpet_next_wrap(cur_tick) {
+ if !tn_regs.is_periodic() && self.cmp64 > hpet_next_wrap(cur_tick) {
self.wrap_flag = 1;
- self.arm_timer(hpet_next_wrap(cur_tick));
+ self.arm_timer(hpet_regs, hpet_next_wrap(cur_tick));
return;
}
}
- self.arm_timer(self.cmp64);
+ self.arm_timer(hpet_regs, self.cmp64);
}
fn del_timer(&self, hpet_regs: &BqlRefCell<HPETRegisters>) {
@@ -412,16 +421,16 @@ fn del_timer(&self, hpet_regs: &BqlRefCell<HPETRegisters>) {
}
}
- /// Configuration and Capability Register
- fn set_tn_cfg_reg(
- &mut self,
+ fn prepare_tn_cfg_reg_new(
+ &self,
hpet_regs: &BqlRefCell<HPETRegisters>,
shift: u32,
len: u32,
val: u64,
- ) {
+ ) -> (u64, u64) {
+ let tn_regs = &hpet_regs.borrow().tn_regs[self.index as usize];
// TODO: Add trace point - trace_hpet_ram_write_tn_cfg(addr & 4)
- let old_val: u64 = self.regs.config;
+ let old_val: u64 = tn_regs.config;
let mut new_val: u64 = old_val.deposit(shift, len, val);
new_val = hpet_fixup_reg(new_val, old_val, HPET_TN_CFG_WRITE_MASK);
@@ -432,7 +441,21 @@ fn set_tn_cfg_reg(
self.update_irq(hpet_regs, false);
}
- self.regs.config = new_val;
+ (new_val, old_val)
+ }
+
+ /// Configuration and Capability Register
+ fn set_tn_cfg_reg(
+ &mut self,
+ hpet_regs: &BqlRefCell<HPETRegisters>,
+ shift: u32,
+ len: u32,
+ val: u64,
+ ) {
+ // Factor out a prepare_tn_cfg_reg_new() to better handle immutable scope.
+ let (new_val, old_val) = self.prepare_tn_cfg_reg_new(hpet_regs, shift, len, val);
+ let tn_regs = &mut hpet_regs.borrow_mut().tn_regs[self.index as usize];
+ tn_regs.config = new_val;
if activating_bit(old_val, new_val, HPET_TN_CFG_INT_ENABLE_SHIFT)
&& self.is_int_active(hpet_regs)
@@ -440,13 +463,13 @@ fn set_tn_cfg_reg(
self.update_irq(hpet_regs, true);
}
- if self.regs.is_32bit_mod() {
- self.regs.cmp = u64::from(self.regs.cmp as u32); // truncate!
+ if tn_regs.is_32bit_mod() {
+ tn_regs.cmp = u64::from(tn_regs.cmp as u32); // truncate!
self.period = u64::from(self.period as u32); // truncate!
}
if hpet_regs.borrow().is_hpet_enabled() {
- self.set_timer();
+ self.set_timer(hpet_regs);
}
}
@@ -458,11 +481,12 @@ fn set_tn_cmp_reg(
len: u32,
val: u64,
) {
+ let tn_regs = &mut hpet_regs.borrow_mut().tn_regs[self.index as usize];
let mut length = len;
let mut value = val;
// TODO: Add trace point - trace_hpet_ram_write_tn_cmp(addr & 4)
- if self.regs.is_32bit_mod() {
+ if tn_regs.is_32bit_mod() {
// High 32-bits are zero, leave them untouched.
if shift != 0 {
// TODO: Add trace point - trace_hpet_ram_write_invalid_tn_cmp()
@@ -472,41 +496,43 @@ fn set_tn_cmp_reg(
value = u64::from(value as u32); // truncate!
}
- if !self.regs.is_periodic() || self.regs.is_valset_enabled() {
- self.regs.cmp = self.regs.cmp.deposit(shift, length, value);
+ if !tn_regs.is_periodic() || tn_regs.is_valset_enabled() {
+ tn_regs.cmp = tn_regs.cmp.deposit(shift, length, value);
}
- if self.regs.is_periodic() {
+ if tn_regs.is_periodic() {
self.period = self.period.deposit(shift, length, value);
}
- self.regs.clear_valset();
+ tn_regs.clear_valset();
if hpet_regs.borrow().is_hpet_enabled() {
- self.set_timer();
+ self.set_timer(hpet_regs);
}
}
/// FSB Interrupt Route Register
fn set_tn_fsb_route_reg(
- &mut self,
- _hpet_regs: &BqlRefCell<HPETRegisters>,
+ &self,
+ hpet_regs: &BqlRefCell<HPETRegisters>,
shift: u32,
len: u32,
val: u64,
) {
- self.regs.fsb = self.regs.fsb.deposit(shift, len, val);
+ let tn_regs = &mut hpet_regs.borrow_mut().tn_regs[self.index as usize];
+ tn_regs.fsb = tn_regs.fsb.deposit(shift, len, val);
}
fn reset(&mut self, hpet_regs: &BqlRefCell<HPETRegisters>) {
self.del_timer(hpet_regs);
- self.regs.cmp = u64::MAX; // Comparator Match Registers reset to all 1's.
- self.regs.config =
- (1 << HPET_TN_CFG_PERIODIC_CAP_SHIFT) | (1 << HPET_TN_CFG_SIZE_CAP_SHIFT);
+
+ let tn_regs = &mut hpet_regs.borrow_mut().tn_regs[self.index as usize];
+ tn_regs.cmp = u64::MAX; // Comparator Match Registers reset to all 1's.
+ tn_regs.config = (1 << HPET_TN_CFG_PERIODIC_CAP_SHIFT) | (1 << HPET_TN_CFG_SIZE_CAP_SHIFT);
if self.get_state().has_msi_flag() {
- self.regs.config |= 1 << HPET_TN_CFG_FSB_CAP_SHIFT;
+ tn_regs.config |= 1 << HPET_TN_CFG_FSB_CAP_SHIFT;
}
// advertise availability of ioapic int
- self.regs.config |=
+ tn_regs.config |=
(u64::from(self.get_state().int_route_cap)) << HPET_TN_CFG_INT_ROUTE_CAP_SHIFT;
self.period = 0;
self.wrap_flag = 0;
@@ -514,32 +540,35 @@ fn reset(&mut self, hpet_regs: &BqlRefCell<HPETRegisters>) {
/// timer expiration callback
fn callback(&mut self, hpet_regs: &BqlRefCell<HPETRegisters>) {
+ let tn_regs = &mut hpet_regs.borrow_mut().tn_regs[self.index as usize];
let period: u64 = self.period;
let cur_tick: u64 = self.get_state().get_ticks();
- if self.regs.is_periodic() && period != 0 {
+ if tn_regs.is_periodic() && period != 0 {
while hpet_time_after(cur_tick, self.cmp64) {
self.cmp64 += period;
}
- if self.regs.is_32bit_mod() {
- self.regs.cmp = u64::from(self.cmp64 as u32); // truncate!
+ if tn_regs.is_32bit_mod() {
+ tn_regs.cmp = u64::from(self.cmp64 as u32); // truncate!
} else {
- self.regs.cmp = self.cmp64;
+ tn_regs.cmp = self.cmp64;
}
- self.arm_timer(self.cmp64);
+ self.arm_timer(hpet_regs, self.cmp64);
} else if self.wrap_flag != 0 {
self.wrap_flag = 0;
- self.arm_timer(self.cmp64);
+ self.arm_timer(hpet_regs, self.cmp64);
}
self.update_irq(hpet_regs, true);
}
- const fn read(&self, target: TimerRegister, _hpet_regs: &BqlRefCell<HPETRegisters>) -> u64 {
+ fn read(&self, target: TimerRegister, hpet_regs: &BqlRefCell<HPETRegisters>) -> u64 {
+ let tn_regs = &hpet_regs.borrow().tn_regs[self.index as usize];
+
use TimerRegister::*;
match target {
- CFG => self.regs.config, // including interrupt capabilities
- CMP => self.regs.cmp, // comparator register
- ROUTE => self.regs.fsb,
+ CFG => tn_regs.config, // including interrupt capabilities
+ CMP => tn_regs.cmp, // comparator register
+ ROUTE => tn_regs.fsb,
}
}
@@ -574,6 +603,9 @@ pub struct HPETRegisters {
/// Main Counter Value Register
#[doc(alias = "hpet_counter")]
counter: u64,
+
+ /// HPET Timer N Registers
+ tn_regs: [HPETTimerRegisters; HPET_MAX_TIMERS],
}
impl HPETRegisters {
@@ -689,11 +721,13 @@ fn set_cfg_reg(&self, regs: &BqlRefCell<HPETRegisters>, shift: u32, len: u32, va
for timer in self.timers.iter().take(self.num_timers) {
let mut t = timer.borrow_mut();
+ let id = t.index as usize;
+ let tn_regs = ®s.borrow().tn_regs[id];
- if t.regs.is_int_enabled() && t.is_int_active(regs) {
+ if tn_regs.is_int_enabled() && t.is_int_active(regs) {
t.update_irq(regs, true);
}
- t.set_timer();
+ t.set_timer(regs);
}
} else if deactivating_bit(old_val, new_val, HPET_CFG_ENABLE_SHIFT) {
// Halt main counter and disable interrupt generation.
@@ -936,8 +970,9 @@ fn post_load(&self, _version_id: u8) -> Result<(), migration::Infallible> {
for timer in self.timers.iter().take(self.num_timers) {
let mut t = timer.borrow_mut();
let cnt = regs.counter;
+ let cmp = regs.tn_regs[t.index as usize].cmp;
- t.cmp64 = t.calculate_cmp64(cnt, t.regs.cmp);
+ t.cmp64 = t.calculate_cmp64(&self.regs, cnt, cmp);
t.last = CLOCK_VIRTUAL.get_ns() - NANOSECONDS_PER_SECOND;
}
@@ -1001,8 +1036,6 @@ impl ObjectImpl for HPETState {
})
.build();
-// In fact, version_id and minimum_version_id for HPETTimerRegisters are
-// unrelated to HPETTimer's version IDs. Does not affect compatibility.
impl_vmstate_struct!(
HPETTimerRegisters,
VMStateDescriptionBuilder::<HPETTimerRegisters>::new()
@@ -1020,11 +1053,10 @@ impl ObjectImpl for HPETState {
const VMSTATE_HPET_TIMER: VMStateDescription<HPETTimer> =
VMStateDescriptionBuilder::<HPETTimer>::new()
.name(c"hpet_timer")
- .version_id(1)
- .minimum_version_id(1)
+ .version_id(2)
+ .minimum_version_id(2)
.fields(vmstate_fields! {
vmstate_of!(HPETTimer, index),
- vmstate_of!(HPETTimer, regs),
vmstate_of!(HPETTimer, period),
vmstate_of!(HPETTimer, wrap_flag),
vmstate_of!(HPETTimer, qemu_timer),
@@ -1035,18 +1067,17 @@ impl ObjectImpl for HPETState {
const VALIDATE_TIMERS_NAME: &CStr = c"num_timers must match";
-// In fact, version_id and minimum_version_id for HPETRegisters are
-// unrelated to HPETState's version IDs. Does not affect compatibility.
impl_vmstate_struct!(
HPETRegisters,
VMStateDescriptionBuilder::<HPETRegisters>::new()
.name(c"hpet/regs")
- .version_id(1)
- .minimum_version_id(1)
+ .version_id(2)
+ .minimum_version_id(2)
.fields(vmstate_fields! {
vmstate_of!(HPETRegisters, config),
vmstate_of!(HPETRegisters, int_status),
vmstate_of!(HPETRegisters, counter),
+ vmstate_of!(HPETRegisters, tn_regs),
})
.build()
);
@@ -1054,8 +1085,8 @@ impl ObjectImpl for HPETState {
const VMSTATE_HPET: VMStateDescription<HPETState> =
VMStateDescriptionBuilder::<HPETState>::new()
.name(c"hpet")
- .version_id(2)
- .minimum_version_id(2)
+ .version_id(3)
+ .minimum_version_id(3)
.pre_save(&HPETState::pre_save)
.post_load(&HPETState::post_load)
.fields(vmstate_fields! {
--
2.34.1
^ permalink raw reply related [flat|nested] 30+ messages in thread* [PATCH 18/22] rust/hpet: Borrow BqlRefCell<HPETRegisters> at top level
2025-11-13 5:19 [PATCH 00/22] rust/hpet: Move towards lockless IO, partly Zhao Liu
` (16 preceding siblings ...)
2025-11-13 5:19 ` [PATCH 17/22] rust/hpet: Maintain HPETTimerRegisters in HPETRegisters Zhao Liu
@ 2025-11-13 5:19 ` Zhao Liu
2025-11-13 5:19 ` [PATCH 19/22] rust/hpet: Rename hpet_regs variables to regs Zhao Liu
` (3 subsequent siblings)
21 siblings, 0 replies; 30+ messages in thread
From: Zhao Liu @ 2025-11-13 5:19 UTC (permalink / raw)
To: Paolo Bonzini, Manos Pitsidianakis, Marc-André Lureau
Cc: Igor Mammedov, qemu-devel, qemu-rust, Zhao Liu
Lockless IO requires locking the registers during MMIO access. So it's
necessary to get (or borrow) the register data at the top level, and not
to borrow it again in child function calls.
Change the argument types from BqlRefCell<HPETRegisters> to
&HPETRegisters/&mut HPETRegisters in the child methods, and borrow the
data once at the top level.
This allows BqlRefCell<HPETRegisters> to be directly replaced with
Mutex<HPETRegisters> in subsequent steps without causing lock reentrancy
issues.
Note: pass plain references instead of BqlRef/BqlRefMut, because a
BqlRefMut cannot be re-borrowed as a BqlRef, even though BqlRef/BqlRefMut
themselves act as the "guard". Passing a reference is more direct, and
the extra bql::is_locked() check helps consolidate the safety guarantee.
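As a minimal sketch of the pattern (placeholder names - Regs,
child_read(), child_write() and mmio_write() are illustrations, not the
actual HPET code): borrow the BqlRefCell once at the top level and pass
plain references down; deref coercion lets the same top-level borrow be
reused both mutably and immutably, which BqlRefMut itself would not
allow:
    // Sketch only: placeholder type and names, assuming the bql crate's
    // BqlRefCell and is_locked() as used elsewhere in this series.
    struct Regs {
        config: u64,
    }
    fn child_read(regs: &Regs) -> u64 {
        // Nothing proves `regs` came from a BqlRefCell, so assert the BQL.
        assert!(bql::is_locked());
        regs.config
    }
    fn child_write(regs: &mut Regs, val: u64) {
        assert!(bql::is_locked());
        regs.config = val;
    }
    fn mmio_write(cell: &bql::BqlRefCell<Regs>, val: u64) {
        let regs = &mut cell.borrow_mut(); // the only borrow, at top level
        child_write(regs, val);            // coerces to &mut Regs
        let _ = child_read(regs);          // coerces to &Regs, no re-borrow
    }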
Signed-off-by: Zhao Liu <zhao1.liu@intel.com>
---
rust/hw/timer/hpet/src/device.rs | 270 ++++++++++++++++++++-----------
1 file changed, 177 insertions(+), 93 deletions(-)
diff --git a/rust/hw/timer/hpet/src/device.rs b/rust/hw/timer/hpet/src/device.rs
index 462a09880b61..f8a551fc0a78 100644
--- a/rust/hw/timer/hpet/src/device.rs
+++ b/rust/hw/timer/hpet/src/device.rs
@@ -179,7 +179,7 @@ const fn deactivating_bit(old: u64, new: u64, shift: usize) -> bool {
fn timer_handler(timer_cell: &BqlRefCell<HPETTimer>) {
let mut t = timer_cell.borrow_mut();
// SAFETY: state field is valid after timer initialization.
- let hpet_regs = &mut unsafe { t.state.as_mut() }.regs;
+ let hpet_regs = &mut unsafe { t.state.as_mut() }.regs.borrow_mut();
t.callback(hpet_regs)
}
@@ -286,20 +286,27 @@ fn get_state(&self) -> &HPETState {
unsafe { self.state.as_ref() }
}
- fn is_int_active(&self, hpet_regs: &BqlRefCell<HPETRegisters>) -> bool {
- hpet_regs.borrow().is_timer_int_active(self.index.into())
+ fn is_int_active(&self, hpet_regs: &HPETRegisters) -> bool {
+ // &HPETRegisters should be gotten from BqlRefCell<HPETRegisters>,
+ // but there's no lock guard to guarantee this. So we have to check BQL
+ // context explicitly. This check should be removed when we switch to
+ // Mutex<HPETRegisters>.
+ assert!(bql::is_locked());
+
+ hpet_regs.is_timer_int_active(self.index.into())
}
/// calculate next value of the general counter that matches the
/// target (either entirely, or the low 32-bit only depending on
/// the timer mode).
- fn calculate_cmp64(
- &self,
- hpet_regs: &BqlRefCell<HPETRegisters>,
- cur_tick: u64,
- target: u64,
- ) -> u64 {
- let tn_regs = &hpet_regs.borrow().tn_regs[self.index as usize];
+ fn calculate_cmp64(&self, hpet_regs: &HPETRegisters, cur_tick: u64, target: u64) -> u64 {
+ // &HPETRegisters should be gotten from BqlRefCell<HPETRegisters>,
+ // but there's no lock guard to guarantee this. So we have to check BQL
+ // context explicitly. This check should be removed when we switch to
+ // Mutex<HPETRegisters>.
+ assert!(bql::is_locked());
+
+ let tn_regs = &hpet_regs.tn_regs[self.index as usize];
if tn_regs.is_32bit_mod() {
let mut result: u64 = cur_tick.deposit(0, 32, target);
if result < cur_tick {
@@ -311,9 +318,14 @@ fn calculate_cmp64(
}
}
- fn get_int_route(&self, hpet_regs: &BqlRefCell<HPETRegisters>) -> usize {
- let regs = hpet_regs.borrow();
- if self.index <= 1 && regs.is_legacy_mode() {
+ fn get_int_route(&self, hpet_regs: &HPETRegisters) -> usize {
+ // &HPETRegisters should be gotten from BqlRefCell<HPETRegisters>,
+ // but there's no lock guard to guarantee this. So we have to check BQL
+ // context explicitly. This check should be removed when we switch to
+ // Mutex<HPETRegisters>.
+ assert!(bql::is_locked());
+
+ if self.index <= 1 && hpet_regs.is_legacy_mode() {
// If LegacyReplacement Route bit is set, HPET specification requires
// timer0 be routed to IRQ0 in NON-APIC or IRQ2 in the I/O APIC,
// timer1 be routed to IRQ8 in NON-APIC or IRQ8 in the I/O APIC.
@@ -333,16 +345,21 @@ fn get_int_route(&self, hpet_regs: &BqlRefCell<HPETRegisters>) -> usize {
// ...
// If the LegacyReplacement Route bit is not set, the individual
// routing bits for each of the timers are used.
- regs.tn_regs[self.index as usize].get_individual_route()
+ hpet_regs.tn_regs[self.index as usize].get_individual_route()
}
}
- fn set_irq(&self, hpet_regs: &BqlRefCell<HPETRegisters>, set: bool) {
- let regs = hpet_regs.borrow();
- let tn_regs = &regs.tn_regs[self.index as usize];
+ fn set_irq(&self, hpet_regs: &HPETRegisters, set: bool) {
+ // &HPETRegisters should be gotten from BqlRefCell<HPETRegisters>,
+ // but there's no lock guard to guarantee this. So we have to check BQL
+ // context explicitly. This check should be removed when we switch to
+ // Mutex<HPETRegisters>.
+ assert!(bql::is_locked());
+
+ let tn_regs = &hpet_regs.tn_regs[self.index as usize];
let route = self.get_int_route(hpet_regs);
- if set && tn_regs.is_int_enabled() && regs.is_hpet_enabled() {
+ if set && tn_regs.is_int_enabled() && hpet_regs.is_hpet_enabled() {
if tn_regs.is_fsb_route_enabled() {
// SAFETY:
// the parameters are valid.
@@ -365,21 +382,32 @@ fn set_irq(&self, hpet_regs: &BqlRefCell<HPETRegisters>, set: bool) {
}
}
- fn update_irq(&self, hpet_regs: &BqlRefCell<HPETRegisters>, set: bool) {
- let mut regs = hpet_regs.borrow_mut();
+ fn update_irq(&self, hpet_regs: &mut HPETRegisters, set: bool) {
+ // &mut HPETRegisters should be gotten from BqlRefCell<HPETRegisters>,
+ // but there's no lock guard to guarantee this. So we have to check BQL
+ // context explicitly. This check should be removed when we switch to
+ // Mutex<HPETRegisters>.
+ assert!(bql::is_locked());
+
// If Timer N Interrupt Enable bit is 0, "the timer will
// still operate and generate appropriate status bits, but
// will not cause an interrupt"
- regs.int_status = regs.int_status.deposit(
+ hpet_regs.int_status = hpet_regs.int_status.deposit(
self.index.into(),
1,
- u64::from(set && regs.tn_regs[self.index as usize].is_int_level_triggered()),
+ u64::from(set && hpet_regs.tn_regs[self.index as usize].is_int_level_triggered()),
);
self.set_irq(hpet_regs, set);
}
- fn arm_timer(&mut self, hpet_regs: &BqlRefCell<HPETRegisters>, tick: u64) {
- let tn_regs = &hpet_regs.borrow().tn_regs[self.index as usize];
+ fn arm_timer(&mut self, hpet_regs: &HPETRegisters, tick: u64) {
+ // &HPETRegisters should be gotten from BqlRefCell<HPETRegisters>,
+ // but there's no lock guard to guarantee this. So we have to check BQL
+ // context explicitly. This check should be removed when we switch to
+ // Mutex<HPETRegisters>.
+ assert!(bql::is_locked());
+
+ let tn_regs = &hpet_regs.tn_regs[self.index as usize];
let mut ns = self.get_state().get_ns(tick);
// Clamp period to reasonable min value (1 us)
@@ -391,8 +419,14 @@ fn arm_timer(&mut self, hpet_regs: &BqlRefCell<HPETRegisters>, tick: u64) {
self.qemu_timer.modify(self.last);
}
- fn set_timer(&mut self, hpet_regs: &BqlRefCell<HPETRegisters>) {
- let tn_regs = &hpet_regs.borrow().tn_regs[self.index as usize];
+ fn set_timer(&mut self, hpet_regs: &HPETRegisters) {
+ // &HPETRegisters should be gotten from BqlRefCell<HPETRegisters>,
+ // but there's no lock guard to guarantee this. So we have to check BQL
+ // context explicitly. This check should be removed when we switch to
+ // Mutex<HPETRegisters>.
+ assert!(bql::is_locked());
+
+ let tn_regs = &hpet_regs.tn_regs[self.index as usize];
let cur_tick: u64 = self.get_state().get_ticks();
self.wrap_flag = 0;
@@ -409,7 +443,13 @@ fn set_timer(&mut self, hpet_regs: &BqlRefCell<HPETRegisters>) {
self.arm_timer(hpet_regs, self.cmp64);
}
- fn del_timer(&self, hpet_regs: &BqlRefCell<HPETRegisters>) {
+ fn del_timer(&self, hpet_regs: &mut HPETRegisters) {
+ // &mut HPETRegisters should be gotten from BqlRefCell<HPETRegisters>,
+ // but there's no lock guard to guarantee this. So we have to check BQL
+ // context explicitly. This check should be removed when we switch to
+ // Mutex<HPETRegisters>.
+ assert!(bql::is_locked());
+
// Just remove the timer from the timer_list without destroying
// this timer instance.
self.qemu_timer.delete();
@@ -423,12 +463,18 @@ fn del_timer(&self, hpet_regs: &BqlRefCell<HPETRegisters>) {
fn prepare_tn_cfg_reg_new(
&self,
- hpet_regs: &BqlRefCell<HPETRegisters>,
+ hpet_regs: &mut HPETRegisters,
shift: u32,
len: u32,
val: u64,
) -> (u64, u64) {
- let tn_regs = &hpet_regs.borrow().tn_regs[self.index as usize];
+ // &mut HPETRegisters should be gotten from BqlRefCell<HPETRegisters>,
+ // but there's no lock guard to guarantee this. So we have to check BQL
+ // context explicitly. This check should be removed when we switch to
+ // Mutex<HPETRegisters>.
+ assert!(bql::is_locked());
+
+ let tn_regs = &hpet_regs.tn_regs[self.index as usize];
// TODO: Add trace point - trace_hpet_ram_write_tn_cfg(addr & 4)
let old_val: u64 = tn_regs.config;
let mut new_val: u64 = old_val.deposit(shift, len, val);
@@ -445,43 +491,46 @@ fn prepare_tn_cfg_reg_new(
}
/// Configuration and Capability Register
- fn set_tn_cfg_reg(
- &mut self,
- hpet_regs: &BqlRefCell<HPETRegisters>,
- shift: u32,
- len: u32,
- val: u64,
- ) {
+ fn set_tn_cfg_reg(&mut self, hpet_regs: &mut HPETRegisters, shift: u32, len: u32, val: u64) {
+ // &mut HPETRegisters should be gotten from BqlRefCell<HPETRegisters>,
+ // but there's no lock guard to guarantee this. So we have to check BQL
+ // context explicitly. This check should be removed when we switch to
+ // Mutex<HPETRegisters>.
+ assert!(bql::is_locked());
+
// Factor out a prepare_tn_cfg_reg_new() to better handle immutable scope.
let (new_val, old_val) = self.prepare_tn_cfg_reg_new(hpet_regs, shift, len, val);
- let tn_regs = &mut hpet_regs.borrow_mut().tn_regs[self.index as usize];
- tn_regs.config = new_val;
+ // After prepare_tn_cfg_reg_new(), it's safe to access int_status with an
+ // immutable reference before update_irq().
+ let tn_int_active = self.is_int_active(hpet_regs);
+ hpet_regs.tn_regs[self.index as usize].config = new_val;
- if activating_bit(old_val, new_val, HPET_TN_CFG_INT_ENABLE_SHIFT)
- && self.is_int_active(hpet_regs)
- {
+ if activating_bit(old_val, new_val, HPET_TN_CFG_INT_ENABLE_SHIFT) && tn_int_active {
self.update_irq(hpet_regs, true);
}
+ // Create the mutable reference after update_irq() to ensure that
+ // only one mutable reference exists at a time.
+ let tn_regs = &mut hpet_regs.tn_regs[self.index as usize];
if tn_regs.is_32bit_mod() {
tn_regs.cmp = u64::from(tn_regs.cmp as u32); // truncate!
self.period = u64::from(self.period as u32); // truncate!
}
- if hpet_regs.borrow().is_hpet_enabled() {
+ if hpet_regs.is_hpet_enabled() {
self.set_timer(hpet_regs);
}
}
/// Comparator Value Register
- fn set_tn_cmp_reg(
- &mut self,
- hpet_regs: &BqlRefCell<HPETRegisters>,
- shift: u32,
- len: u32,
- val: u64,
- ) {
- let tn_regs = &mut hpet_regs.borrow_mut().tn_regs[self.index as usize];
+ fn set_tn_cmp_reg(&mut self, hpet_regs: &mut HPETRegisters, shift: u32, len: u32, val: u64) {
+ // &mut HPETRegisters should be gotten from BqlRefCell<HPETRegisters>,
+ // but there's no lock guard to guarantee this. So we have to check BQL
+ // context explicitly. This check should be removed when we switch to
+ // Mutex<HPETRegisters>.
+ assert!(bql::is_locked());
+
+ let tn_regs = &mut hpet_regs.tn_regs[self.index as usize];
let mut length = len;
let mut value = val;
@@ -505,27 +554,33 @@ fn set_tn_cmp_reg(
}
tn_regs.clear_valset();
- if hpet_regs.borrow().is_hpet_enabled() {
+ if hpet_regs.is_hpet_enabled() {
self.set_timer(hpet_regs);
}
}
/// FSB Interrupt Route Register
- fn set_tn_fsb_route_reg(
- &self,
- hpet_regs: &BqlRefCell<HPETRegisters>,
- shift: u32,
- len: u32,
- val: u64,
- ) {
- let tn_regs = &mut hpet_regs.borrow_mut().tn_regs[self.index as usize];
+ fn set_tn_fsb_route_reg(&self, hpet_regs: &mut HPETRegisters, shift: u32, len: u32, val: u64) {
+ // &mut HPETRegisters should be gotten from BqlRefCell<HPETRegisters>,
+ // but there's no lock guard to guarantee this. So we have to check BQL
+ // context explicitly. This check should be removed when we switch to
+ // Mutex<HPETRegisters>.
+ assert!(bql::is_locked());
+
+ let tn_regs = &mut hpet_regs.tn_regs[self.index as usize];
tn_regs.fsb = tn_regs.fsb.deposit(shift, len, val);
}
- fn reset(&mut self, hpet_regs: &BqlRefCell<HPETRegisters>) {
+ fn reset(&mut self, hpet_regs: &mut HPETRegisters) {
+ // &mut HPETRegisters should be gotten from BqlRefCell<HPETRegisters>,
+ // but there's no lock guard to guarantee this. So we have to check BQL
+ // context explicitly. This check should be removed when we switch to
+ // Mutex<HPETRegisters>.
+ assert!(bql::is_locked());
+
self.del_timer(hpet_regs);
- let tn_regs = &mut hpet_regs.borrow_mut().tn_regs[self.index as usize];
+ let tn_regs = &mut hpet_regs.tn_regs[self.index as usize];
tn_regs.cmp = u64::MAX; // Comparator Match Registers reset to all 1's.
tn_regs.config = (1 << HPET_TN_CFG_PERIODIC_CAP_SHIFT) | (1 << HPET_TN_CFG_SIZE_CAP_SHIFT);
if self.get_state().has_msi_flag() {
@@ -539,8 +594,14 @@ fn reset(&mut self, hpet_regs: &BqlRefCell<HPETRegisters>) {
}
/// timer expiration callback
- fn callback(&mut self, hpet_regs: &BqlRefCell<HPETRegisters>) {
- let tn_regs = &mut hpet_regs.borrow_mut().tn_regs[self.index as usize];
+ fn callback(&mut self, hpet_regs: &mut HPETRegisters) {
+ // &mut HPETRegisters should be gotten from BqlRefCell<HPETRegisters>,
+ // but there's no lock guard to guarantee this. So we have to check BQL
+ // context explicitly. This check should be removed when we switch to
+ // Mutex<HPETRegisters>.
+ assert!(bql::is_locked());
+
+ let tn_regs = &mut hpet_regs.tn_regs[self.index as usize];
let period: u64 = self.period;
let cur_tick: u64 = self.get_state().get_ticks();
@@ -561,8 +622,14 @@ fn callback(&mut self, hpet_regs: &BqlRefCell<HPETRegisters>) {
self.update_irq(hpet_regs, true);
}
- fn read(&self, target: TimerRegister, hpet_regs: &BqlRefCell<HPETRegisters>) -> u64 {
- let tn_regs = &hpet_regs.borrow().tn_regs[self.index as usize];
+ fn read(&self, target: TimerRegister, hpet_regs: &HPETRegisters) -> u64 {
+ // &HPETRegisters should be gotten from BqlRefCell<HPETRegisters>,
+ // but there's no lock guard to guarantee this. So we have to check BQL
+ // context explicitly. This check should be removed when we switch to
+ // Mutex<HPETRegisters>.
+ assert!(bql::is_locked());
+
+ let tn_regs = &hpet_regs.tn_regs[self.index as usize];
use TimerRegister::*;
match target {
@@ -575,11 +642,17 @@ fn read(&self, target: TimerRegister, hpet_regs: &BqlRefCell<HPETRegisters>) ->
fn write(
&mut self,
target: TimerRegister,
- hpet_regs: &BqlRefCell<HPETRegisters>,
+ hpet_regs: &mut HPETRegisters,
value: u64,
shift: u32,
len: u32,
) {
+ // &mut HPETRegisters should be gotten from BqlRefCell<HPETRegisters>,
+ // but there's no lock guard to guarantee this. So we have to check BQL
+ // context explicitly. This check should be removed when we switch to
+ // Mutex<HPETRegisters>.
+ assert!(bql::is_locked());
+
use TimerRegister::*;
match target {
CFG => self.set_tn_cfg_reg(hpet_regs, shift, len, value),
@@ -707,22 +780,28 @@ fn init_timers(this: &mut MaybeUninit<Self>) {
}
/// General Configuration Register
- fn set_cfg_reg(&self, regs: &BqlRefCell<HPETRegisters>, shift: u32, len: u32, val: u64) {
- let old_val = regs.borrow().config;
+ fn set_cfg_reg(&self, regs: &mut HPETRegisters, shift: u32, len: u32, val: u64) {
+ // &mut HPETRegisters should be gotten from BqlRefCell<HPETRegisters>,
+ // but there's no lock guard to guarantee this. So we have to check BQL
+ // context explicitly. This check should be removed when we switch to
+ // Mutex<HPETRegisters>.
+ assert!(bql::is_locked());
+
+ let old_val = regs.config;
let mut new_val = old_val.deposit(shift, len, val);
new_val = hpet_fixup_reg(new_val, old_val, HPET_CFG_WRITE_MASK);
- regs.borrow_mut().config = new_val;
+ regs.config = new_val;
if activating_bit(old_val, new_val, HPET_CFG_ENABLE_SHIFT) {
// Enable main counter and interrupt generation.
self.hpet_offset
- .set(ticks_to_ns(regs.borrow().counter) - CLOCK_VIRTUAL.get_ns());
+ .set(ticks_to_ns(regs.counter) - CLOCK_VIRTUAL.get_ns());
for timer in self.timers.iter().take(self.num_timers) {
let mut t = timer.borrow_mut();
let id = t.index as usize;
- let tn_regs = &regs.borrow().tn_regs[id];
+ let tn_regs = &regs.tn_regs[id];
if tn_regs.is_int_enabled() && t.is_int_active(regs) {
t.update_irq(regs, true);
@@ -731,7 +810,7 @@ fn set_cfg_reg(&self, regs: &BqlRefCell<HPETRegisters>, shift: u32, len: u32, va
}
} else if deactivating_bit(old_val, new_val, HPET_CFG_ENABLE_SHIFT) {
// Halt main counter and disable interrupt generation.
- regs.borrow_mut().counter = self.get_ticks();
+ regs.counter = self.get_ticks();
for timer in self.timers.iter().take(self.num_timers) {
timer.borrow().del_timer(regs);
@@ -752,15 +831,15 @@ fn set_cfg_reg(&self, regs: &BqlRefCell<HPETRegisters>, shift: u32, len: u32, va
}
/// General Interrupt Status Register: Read/Write Clear
- fn set_int_status_reg(
- &self,
- regs: &BqlRefCell<HPETRegisters>,
- shift: u32,
- _len: u32,
- val: u64,
- ) {
+ fn set_int_status_reg(&self, regs: &mut HPETRegisters, shift: u32, _len: u32, val: u64) {
+ // &mut HPETRegisters should be gotten from BqlRefCell<HPETRegisters>,
+ // but there's no lock guard to guarantee this. So we have to check BQL
+ // context explicitly. This check should be removed when we switch to
+ // Mutex<HPETRegisters>.
+ assert!(bql::is_locked());
+
let new_val = val << shift;
- let cleared = new_val & regs.borrow().int_status;
+ let cleared = new_val & regs.int_status;
for (index, timer) in self.timers.iter().take(self.num_timers).enumerate() {
if cleared & (1 << index) != 0 {
@@ -770,8 +849,13 @@ fn set_int_status_reg(
}
/// Main Counter Value Register
- fn set_counter_reg(&self, regs: &BqlRefCell<HPETRegisters>, shift: u32, len: u32, val: u64) {
- let mut regs = regs.borrow_mut();
+ fn set_counter_reg(&self, regs: &mut HPETRegisters, shift: u32, len: u32, val: u64) {
+ // &mut HPETRegisters should be gotten from BqlRefCell<HPETRegisters>,
+ // but there's no lock guard to guarantee this. So we have to check BQL
+ // context explicitly. This check should be removed when we switch to
+ // Mutex<HPETRegisters>.
+ assert!(bql::is_locked());
+
if regs.is_hpet_enabled() {
// TODO: Add trace point -
// trace_hpet_ram_write_counter_write_while_enabled()
@@ -851,15 +935,16 @@ fn realize(&self) -> util::Result<()> {
}
fn reset_hold(&self, _type: ResetType) {
- for timer in self.timers.iter().take(self.num_timers) {
- timer.borrow_mut().reset(&self.regs);
- }
-
// pit_enabled.set(true) will call irq handler and access regs
// again. We cannot borrow BqlRefCell twice at once. Minimize the
// scope of regs to ensure it will be dropped before irq callback.
{
let mut regs = self.regs.borrow_mut();
+
+ for timer in self.timers.iter().take(self.num_timers) {
+ timer.borrow_mut().reset(&mut regs);
+ }
+
regs.counter = 0;
regs.config = 0;
HPETFwConfig::update_hpet_cfg(
@@ -904,17 +989,16 @@ fn decode(&self, mut addr: hwaddr, size: u32) -> HPETAddrDecode<'_> {
fn read(&self, addr: hwaddr, size: u32) -> u64 {
// TODO: Add trace point - trace_hpet_ram_read(addr)
let HPETAddrDecode { shift, target, .. } = self.decode(addr, size);
- let regs = &self.regs;
+ let regs = &self.regs.borrow();
use DecodedRegister::*;
use GlobalRegister::*;
(match target {
Timer(timer, tn_target) => timer.borrow_mut().read(tn_target, regs),
- Global(CAP) => regs.borrow().capability, /* including HPET_PERIOD 0x004 */
- Global(CFG) => regs.borrow().config,
- Global(INT_STATUS) => regs.borrow().int_status,
+ Global(CAP) => regs.capability, /* including HPET_PERIOD 0x004 */
+ Global(CFG) => regs.config,
+ Global(INT_STATUS) => regs.int_status,
Global(COUNTER) => {
- let regs = regs.borrow();
// TODO: Add trace point
// trace_hpet_ram_read_reading_counter(addr & 4, cur_tick)
if regs.is_hpet_enabled() {
@@ -932,7 +1016,7 @@ fn read(&self, addr: hwaddr, size: u32) -> u64 {
fn write(&self, addr: hwaddr, value: u64, size: u32) {
let HPETAddrDecode { shift, len, target } = self.decode(addr, size);
- let regs = &self.regs;
+ let regs = &mut self.regs.borrow_mut();
// TODO: Add trace point - trace_hpet_ram_write(addr, value)
use DecodedRegister::*;
@@ -972,7 +1056,7 @@ fn post_load(&self, _version_id: u8) -> Result<(), migration::Infallible> {
let cnt = regs.counter;
let cmp = regs.tn_regs[t.index as usize].cmp;
- t.cmp64 = t.calculate_cmp64(&self.regs, cnt, cmp);
+ t.cmp64 = t.calculate_cmp64(&regs, cnt, cmp);
t.last = CLOCK_VIRTUAL.get_ns() - NANOSECONDS_PER_SECOND;
}
--
2.34.1
^ permalink raw reply related [flat|nested] 30+ messages in thread* [PATCH 19/22] rust/hpet: Rename hpet_regs variables to regs
2025-11-13 5:19 [PATCH 00/22] rust/hpet: Move towards lockless IO, partly Zhao Liu
` (17 preceding siblings ...)
2025-11-13 5:19 ` [PATCH 18/22] rust/hpet: Borrow BqlRefCell<HPETRegisters> at top level Zhao Liu
@ 2025-11-13 5:19 ` Zhao Liu
2025-11-13 5:19 ` [PATCH 20/22] rust/hpet: Apply Migratable<> wrapper and ToMigrationState for HPETRegisters Zhao Liu
` (2 subsequent siblings)
21 siblings, 0 replies; 30+ messages in thread
From: Zhao Liu @ 2025-11-13 5:19 UTC (permalink / raw)
To: Paolo Bonzini, Manos Pitsidianakis, Marc-André Lureau
Cc: Igor Mammedov, qemu-devel, qemu-rust, Zhao Liu
Now both the global registers and the timer registers are placed together
within the HPETRegisters struct. The previous "hpet_regs" name was used
to distinguish the global registers, and it is no longer needed.
Rename the hpet_regs variables to regs so that the code uses a consistent
variable naming style.
Signed-off-by: Zhao Liu <zhao1.liu@intel.com>
---
rust/hw/timer/hpet/src/device.rs | 116 +++++++++++++++----------------
1 file changed, 58 insertions(+), 58 deletions(-)
diff --git a/rust/hw/timer/hpet/src/device.rs b/rust/hw/timer/hpet/src/device.rs
index f8a551fc0a78..73a1c87abd95 100644
--- a/rust/hw/timer/hpet/src/device.rs
+++ b/rust/hw/timer/hpet/src/device.rs
@@ -179,8 +179,8 @@ const fn deactivating_bit(old: u64, new: u64, shift: usize) -> bool {
fn timer_handler(timer_cell: &BqlRefCell<HPETTimer>) {
let mut t = timer_cell.borrow_mut();
// SAFETY: state field is valid after timer initialization.
- let hpet_regs = &mut unsafe { t.state.as_mut() }.regs.borrow_mut();
- t.callback(hpet_regs)
+ let regs = &mut unsafe { t.state.as_mut() }.regs.borrow_mut();
+ t.callback(regs)
}
#[repr(C)]
@@ -286,27 +286,27 @@ fn get_state(&self) -> &HPETState {
unsafe { self.state.as_ref() }
}
- fn is_int_active(&self, hpet_regs: &HPETRegisters) -> bool {
+ fn is_int_active(&self, regs: &HPETRegisters) -> bool {
// &HPETRegisters should be gotten from BqlRefCell<HPETRegisters>,
// but there's no lock guard to guarantee this. So we have to check BQL
// context explicitly. This check should be removed when we switch to
// Mutex<HPETRegisters>.
assert!(bql::is_locked());
- hpet_regs.is_timer_int_active(self.index.into())
+ regs.is_timer_int_active(self.index.into())
}
/// calculate next value of the general counter that matches the
/// target (either entirely, or the low 32-bit only depending on
/// the timer mode).
- fn calculate_cmp64(&self, hpet_regs: &HPETRegisters, cur_tick: u64, target: u64) -> u64 {
+ fn calculate_cmp64(&self, regs: &HPETRegisters, cur_tick: u64, target: u64) -> u64 {
// &HPETRegisters should be gotten from BqlRefCell<HPETRegisters>,
// but there's no lock guard to guarantee this. So we have to check BQL
// context explicitly. This check should be removed when we switch to
// Mutex<HPETRegisters>.
assert!(bql::is_locked());
- let tn_regs = &hpet_regs.tn_regs[self.index as usize];
+ let tn_regs = &regs.tn_regs[self.index as usize];
if tn_regs.is_32bit_mod() {
let mut result: u64 = cur_tick.deposit(0, 32, target);
if result < cur_tick {
@@ -318,14 +318,14 @@ fn calculate_cmp64(&self, hpet_regs: &HPETRegisters, cur_tick: u64, target: u64)
}
}
- fn get_int_route(&self, hpet_regs: &HPETRegisters) -> usize {
+ fn get_int_route(&self, regs: &HPETRegisters) -> usize {
// &HPETRegisters should be gotten from BqlRefCell<HPETRegisters>,
// but there's no lock guard to guarantee this. So we have to check BQL
// context explicitly. This check should be removed when we switch to
// Mutex<HPETRegisters>.
assert!(bql::is_locked());
- if self.index <= 1 && hpet_regs.is_legacy_mode() {
+ if self.index <= 1 && regs.is_legacy_mode() {
// If LegacyReplacement Route bit is set, HPET specification requires
// timer0 be routed to IRQ0 in NON-APIC or IRQ2 in the I/O APIC,
// timer1 be routed to IRQ8 in NON-APIC or IRQ8 in the I/O APIC.
@@ -345,21 +345,21 @@ fn get_int_route(&self, hpet_regs: &HPETRegisters) -> usize {
// ...
// If the LegacyReplacement Route bit is not set, the individual
// routing bits for each of the timers are used.
- hpet_regs.tn_regs[self.index as usize].get_individual_route()
+ regs.tn_regs[self.index as usize].get_individual_route()
}
}
- fn set_irq(&self, hpet_regs: &HPETRegisters, set: bool) {
+ fn set_irq(&self, regs: &HPETRegisters, set: bool) {
// &HPETRegisters should be gotten from BqlRefCell<HPETRegisters>,
// but there's no lock guard to guarantee this. So we have to check BQL
// context explicitly. This check should be removed when we switch to
// Mutex<HPETRegisters>.
assert!(bql::is_locked());
- let tn_regs = &hpet_regs.tn_regs[self.index as usize];
- let route = self.get_int_route(hpet_regs);
+ let tn_regs = &regs.tn_regs[self.index as usize];
+ let route = self.get_int_route(regs);
- if set && tn_regs.is_int_enabled() && hpet_regs.is_hpet_enabled() {
+ if set && tn_regs.is_int_enabled() && regs.is_hpet_enabled() {
if tn_regs.is_fsb_route_enabled() {
// SAFETY:
// the parameters are valid.
@@ -382,7 +382,7 @@ fn set_irq(&self, hpet_regs: &HPETRegisters, set: bool) {
}
}
- fn update_irq(&self, hpet_regs: &mut HPETRegisters, set: bool) {
+ fn update_irq(&self, regs: &mut HPETRegisters, set: bool) {
// &mut HPETRegisters should be gotten from BqlRefCell<HPETRegisters>,
// but there's no lock guard to guarantee this. So we have to check BQL
// context explicitly. This check should be removed when we switch to
@@ -392,22 +392,22 @@ fn update_irq(&self, hpet_regs: &mut HPETRegisters, set: bool) {
// If Timer N Interrupt Enable bit is 0, "the timer will
// still operate and generate appropriate status bits, but
// will not cause an interrupt"
- hpet_regs.int_status = hpet_regs.int_status.deposit(
+ regs.int_status = regs.int_status.deposit(
self.index.into(),
1,
- u64::from(set && hpet_regs.tn_regs[self.index as usize].is_int_level_triggered()),
+ u64::from(set && regs.tn_regs[self.index as usize].is_int_level_triggered()),
);
- self.set_irq(hpet_regs, set);
+ self.set_irq(regs, set);
}
- fn arm_timer(&mut self, hpet_regs: &HPETRegisters, tick: u64) {
+ fn arm_timer(&mut self, regs: &HPETRegisters, tick: u64) {
// &HPETRegisters should be gotten from BqlRefCell<HPETRegisters>,
// but there's no lock guard to guarantee this. So we have to check BQL
// context explicitly. This check should be removed when we switch to
// Mutex<HPETRegisters>.
assert!(bql::is_locked());
- let tn_regs = &hpet_regs.tn_regs[self.index as usize];
+ let tn_regs = &regs.tn_regs[self.index as usize];
let mut ns = self.get_state().get_ns(tick);
// Clamp period to reasonable min value (1 us)
@@ -419,31 +419,31 @@ fn arm_timer(&mut self, hpet_regs: &HPETRegisters, tick: u64) {
self.qemu_timer.modify(self.last);
}
- fn set_timer(&mut self, hpet_regs: &HPETRegisters) {
+ fn set_timer(&mut self, regs: &HPETRegisters) {
// &HPETRegisters should be gotten from BqlRefCell<HPETRegisters>,
// but there's no lock guard to guarantee this. So we have to check BQL
// context explicitly. This check should be removed when we switch to
// Mutex<HPETRegisters>.
assert!(bql::is_locked());
- let tn_regs = &hpet_regs.tn_regs[self.index as usize];
+ let tn_regs = &regs.tn_regs[self.index as usize];
let cur_tick: u64 = self.get_state().get_ticks();
self.wrap_flag = 0;
- self.cmp64 = self.calculate_cmp64(hpet_regs, cur_tick, tn_regs.cmp);
+ self.cmp64 = self.calculate_cmp64(regs, cur_tick, tn_regs.cmp);
if tn_regs.is_32bit_mod() {
// HPET spec says in one-shot 32-bit mode, generate an interrupt when
// counter wraps in addition to an interrupt with comparator match.
if !tn_regs.is_periodic() && self.cmp64 > hpet_next_wrap(cur_tick) {
self.wrap_flag = 1;
- self.arm_timer(hpet_regs, hpet_next_wrap(cur_tick));
+ self.arm_timer(regs, hpet_next_wrap(cur_tick));
return;
}
}
- self.arm_timer(hpet_regs, self.cmp64);
+ self.arm_timer(regs, self.cmp64);
}
- fn del_timer(&self, hpet_regs: &mut HPETRegisters) {
+ fn del_timer(&self, regs: &mut HPETRegisters) {
// &mut HPETRegisters should be gotten from BqlRefCell<HPETRegisters>,
// but there's no lock guard to guarantee this. So we have to check BQL
// context explicitly. This check should be removed when we switch to
@@ -454,16 +454,16 @@ fn del_timer(&self, hpet_regs: &mut HPETRegisters) {
// this timer instance.
self.qemu_timer.delete();
- if self.is_int_active(hpet_regs) {
+ if self.is_int_active(regs) {
// For level-triggered interrupt, this leaves interrupt status
// register set but lowers irq.
- self.update_irq(hpet_regs, true);
+ self.update_irq(regs, true);
}
}
fn prepare_tn_cfg_reg_new(
&self,
- hpet_regs: &mut HPETRegisters,
+ regs: &mut HPETRegisters,
shift: u32,
len: u32,
val: u64,
@@ -474,7 +474,7 @@ fn prepare_tn_cfg_reg_new(
// Mutex<HPETRegisters>.
assert!(bql::is_locked());
- let tn_regs = &hpet_regs.tn_regs[self.index as usize];
+ let tn_regs = &regs.tn_regs[self.index as usize];
// TODO: Add trace point - trace_hpet_ram_write_tn_cfg(addr & 4)
let old_val: u64 = tn_regs.config;
let mut new_val: u64 = old_val.deposit(shift, len, val);
@@ -484,14 +484,14 @@ fn prepare_tn_cfg_reg_new(
if deactivating_bit(old_val, new_val, HPET_TN_CFG_INT_TYPE_SHIFT) {
// Do this before changing timer.regs.config; otherwise, if
// HPET_TN_FSB is set, update_irq will not lower the qemu_irq.
- self.update_irq(hpet_regs, false);
+ self.update_irq(regs, false);
}
(new_val, old_val)
}
/// Configuration and Capability Register
- fn set_tn_cfg_reg(&mut self, hpet_regs: &mut HPETRegisters, shift: u32, len: u32, val: u64) {
+ fn set_tn_cfg_reg(&mut self, regs: &mut HPETRegisters, shift: u32, len: u32, val: u64) {
// &mut HPETRegisters should be gotten from BqlRefCell<HPETRegisters>,
// but there's no lock guard to guarantee this. So we have to check BQL
// context explicitly. This check should be removed when we switch to
@@ -499,38 +499,38 @@ fn set_tn_cfg_reg(&mut self, hpet_regs: &mut HPETRegisters, shift: u32, len: u32
assert!(bql::is_locked());
// Factor out a prepare_tn_cfg_reg_new() to better handle immutable scope.
- let (new_val, old_val) = self.prepare_tn_cfg_reg_new(hpet_regs, shift, len, val);
+ let (new_val, old_val) = self.prepare_tn_cfg_reg_new(regs, shift, len, val);
// After prepare_tn_cfg_reg_new(), it's safe to access int_status with an
// immutable reference before update_irq().
- let tn_int_active = self.is_int_active(hpet_regs);
- hpet_regs.tn_regs[self.index as usize].config = new_val;
+ let tn_int_active = self.is_int_active(regs);
+ regs.tn_regs[self.index as usize].config = new_val;
if activating_bit(old_val, new_val, HPET_TN_CFG_INT_ENABLE_SHIFT) && tn_int_active {
- self.update_irq(hpet_regs, true);
+ self.update_irq(regs, true);
}
// Create the mutable reference after update_irq() to ensure that
// only one mutable reference exists at a time.
- let tn_regs = &mut hpet_regs.tn_regs[self.index as usize];
+ let tn_regs = &mut regs.tn_regs[self.index as usize];
if tn_regs.is_32bit_mod() {
tn_regs.cmp = u64::from(tn_regs.cmp as u32); // truncate!
self.period = u64::from(self.period as u32); // truncate!
}
- if hpet_regs.is_hpet_enabled() {
- self.set_timer(hpet_regs);
+ if regs.is_hpet_enabled() {
+ self.set_timer(regs);
}
}
/// Comparator Value Register
- fn set_tn_cmp_reg(&mut self, hpet_regs: &mut HPETRegisters, shift: u32, len: u32, val: u64) {
+ fn set_tn_cmp_reg(&mut self, regs: &mut HPETRegisters, shift: u32, len: u32, val: u64) {
// &mut HPETRegisters should be gotten from BqlRefCell<HPETRegisters>,
// but there's no lock guard to guarantee this. So we have to check BQL
// context explicitly. This check should be removed when we switch to
// Mutex<HPETRegisters>.
assert!(bql::is_locked());
- let tn_regs = &mut hpet_regs.tn_regs[self.index as usize];
+ let tn_regs = &mut regs.tn_regs[self.index as usize];
let mut length = len;
let mut value = val;
@@ -554,33 +554,33 @@ fn set_tn_cmp_reg(&mut self, hpet_regs: &mut HPETRegisters, shift: u32, len: u32
}
tn_regs.clear_valset();
- if hpet_regs.is_hpet_enabled() {
- self.set_timer(hpet_regs);
+ if regs.is_hpet_enabled() {
+ self.set_timer(regs);
}
}
/// FSB Interrupt Route Register
- fn set_tn_fsb_route_reg(&self, hpet_regs: &mut HPETRegisters, shift: u32, len: u32, val: u64) {
+ fn set_tn_fsb_route_reg(&self, regs: &mut HPETRegisters, shift: u32, len: u32, val: u64) {
// &mut HPETRegisters should be gotten from BqlRefCell<HPETRegisters>,
// but there's no lock guard to guarantee this. So we have to check BQL
// context explicitly. This check should be removed when we switch to
// Mutex<HPETRegisters>.
assert!(bql::is_locked());
- let tn_regs = &mut hpet_regs.tn_regs[self.index as usize];
+ let tn_regs = &mut regs.tn_regs[self.index as usize];
tn_regs.fsb = tn_regs.fsb.deposit(shift, len, val);
}
- fn reset(&mut self, hpet_regs: &mut HPETRegisters) {
+ fn reset(&mut self, regs: &mut HPETRegisters) {
// &mut HPETRegisters should be gotten from BqlRefCell<HPETRegisters>,
// but there's no lock guard to guarantee this. So we have to check BQL
// context explicitly. This check should be removed when we switch to
// Mutex<HPETRegisters>.
assert!(bql::is_locked());
- self.del_timer(hpet_regs);
+ self.del_timer(regs);
- let tn_regs = &mut hpet_regs.tn_regs[self.index as usize];
+ let tn_regs = &mut regs.tn_regs[self.index as usize];
tn_regs.cmp = u64::MAX; // Comparator Match Registers reset to all 1's.
tn_regs.config = (1 << HPET_TN_CFG_PERIODIC_CAP_SHIFT) | (1 << HPET_TN_CFG_SIZE_CAP_SHIFT);
if self.get_state().has_msi_flag() {
@@ -594,14 +594,14 @@ fn reset(&mut self, hpet_regs: &mut HPETRegisters) {
}
/// timer expiration callback
- fn callback(&mut self, hpet_regs: &mut HPETRegisters) {
+ fn callback(&mut self, regs: &mut HPETRegisters) {
// &mut HPETRegisters should be gotten from BqlRefCell<HPETRegisters>,
// but there's no lock guard to guarantee this. So we have to check BQL
// context explicitly. This check should be removed when we switch to
// Mutex<HPETRegisters>.
assert!(bql::is_locked());
- let tn_regs = &mut hpet_regs.tn_regs[self.index as usize];
+ let tn_regs = &mut regs.tn_regs[self.index as usize];
let period: u64 = self.period;
let cur_tick: u64 = self.get_state().get_ticks();
@@ -614,22 +614,22 @@ fn callback(&mut self, hpet_regs: &mut HPETRegisters) {
} else {
tn_regs.cmp = self.cmp64;
}
- self.arm_timer(hpet_regs, self.cmp64);
+ self.arm_timer(regs, self.cmp64);
} else if self.wrap_flag != 0 {
self.wrap_flag = 0;
- self.arm_timer(hpet_regs, self.cmp64);
+ self.arm_timer(regs, self.cmp64);
}
- self.update_irq(hpet_regs, true);
+ self.update_irq(regs, true);
}
- fn read(&self, target: TimerRegister, hpet_regs: &HPETRegisters) -> u64 {
+ fn read(&self, target: TimerRegister, regs: &HPETRegisters) -> u64 {
// &HPETRegisters should be gotten from BqlRefCell<HPETRegisters>,
// but there's no lock guard to guarantee this. So we have to check BQL
// context explicitly. This check should be removed when we switch to
// Mutex<HPETRegisters>.
assert!(bql::is_locked());
- let tn_regs = &hpet_regs.tn_regs[self.index as usize];
+ let tn_regs = &regs.tn_regs[self.index as usize];
use TimerRegister::*;
match target {
@@ -642,7 +642,7 @@ fn read(&self, target: TimerRegister, hpet_regs: &HPETRegisters) -> u64 {
fn write(
&mut self,
target: TimerRegister,
- hpet_regs: &mut HPETRegisters,
+ regs: &mut HPETRegisters,
value: u64,
shift: u32,
len: u32,
@@ -655,9 +655,9 @@ fn write(
use TimerRegister::*;
match target {
- CFG => self.set_tn_cfg_reg(hpet_regs, shift, len, value),
- CMP => self.set_tn_cmp_reg(hpet_regs, shift, len, value),
- ROUTE => self.set_tn_fsb_route_reg(hpet_regs, shift, len, value),
+ CFG => self.set_tn_cfg_reg(regs, shift, len, value),
+ CMP => self.set_tn_cmp_reg(regs, shift, len, value),
+ ROUTE => self.set_tn_fsb_route_reg(regs, shift, len, value),
}
}
}
--
2.34.1
^ permalink raw reply related [flat|nested] 30+ messages in thread* [PATCH 20/22] rust/hpet: Apply Migratable<> wrapper and ToMigrationState for HPETRegisters
2025-11-13 5:19 [PATCH 00/22] rust/hpet: Move towards lockless IO, partly Zhao Liu
` (18 preceding siblings ...)
2025-11-13 5:19 ` [PATCH 19/22] rust/hpet: Rename hpet_regs variables to regs Zhao Liu
@ 2025-11-13 5:19 ` Zhao Liu
2025-11-13 5:19 ` [PATCH 21/22] rust/hpet: Replace BqlRefCell<HPETRegisters> with Mutex<HPETRegisters> Zhao Liu
2025-11-13 5:19 ` [PATCH 22/22] rust/hpet: Enable lockless IO Zhao Liu
21 siblings, 0 replies; 30+ messages in thread
From: Zhao Liu @ 2025-11-13 5:19 UTC (permalink / raw)
To: Paolo Bonzini, Manos Pitsidianakis, Marc-André Lureau
Cc: Igor Mammedov, qemu-devel, qemu-rust, Zhao Liu
Before using Mutex<> to protect HPETRegisters, it's necessary to apply
the Migratable<> wrapper and ToMigrationState first, since there is no
pre-defined VMState for Mutex<>.
Though the HPETRegistersMigration and HPETTimerRegistersMigration structs
are generated by the ToMigrationState macro, their VMStates still need to
be implemented by hand.
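As a rough sketch of the resulting shape (fields abbreviated from the
diff below; the exact code generated by the derive macro is an
assumption):
    // Sketch: ToMigrationState is assumed to generate a plain
    // HPETRegistersMigration mirror struct for the runtime state;
    // its VMSD still has to be written by hand against that mirror.
    #[derive(Default, ToMigrationState)]
    pub struct HPETRegisters {
        config: u64,
        int_status: u64,
        counter: u64,
        // ...
    }
    impl_vmstate_struct!(
        HPETRegistersMigration,
        VMStateDescriptionBuilder::<HPETRegistersMigration>::new()
            .name(c"hpet/regs")
            .version_id(2)
            .minimum_version_id(2)
            .fields(vmstate_fields! {
                vmstate_of!(HPETRegistersMigration, config),
                vmstate_of!(HPETRegistersMigration, int_status),
                vmstate_of!(HPETRegistersMigration, counter),
            })
            .build()
    );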
Signed-off-by: Zhao Liu <zhao1.liu@intel.com>
---
rust/hw/timer/hpet/src/device.rs | 32 +++++++++++++++++---------------
1 file changed, 17 insertions(+), 15 deletions(-)
diff --git a/rust/hw/timer/hpet/src/device.rs b/rust/hw/timer/hpet/src/device.rs
index 73a1c87abd95..42e2e8d070c3 100644
--- a/rust/hw/timer/hpet/src/device.rs
+++ b/rust/hw/timer/hpet/src/device.rs
@@ -18,7 +18,7 @@
};
use migration::{
self, impl_vmstate_struct, vmstate_fields, vmstate_of, vmstate_subsections, vmstate_validate,
- VMStateDescription, VMStateDescriptionBuilder,
+ Migratable, ToMigrationState, VMStateDescription, VMStateDescriptionBuilder,
};
use qom::{prelude::*, ObjectImpl, ParentField, ParentInit};
use system::{
@@ -184,7 +184,7 @@ fn timer_handler(timer_cell: &BqlRefCell<HPETTimer>) {
}
#[repr(C)]
-#[derive(Debug, Default)]
+#[derive(Debug, Default, ToMigrationState)]
pub struct HPETTimerRegisters {
// Memory-mapped, software visible timer registers
/// Timer N Configuration and Capability Register
@@ -663,7 +663,7 @@ fn write(
}
#[repr(C)]
-#[derive(Default)]
+#[derive(Default, ToMigrationState)]
pub struct HPETRegisters {
// HPET block Registers: Memory-mapped, software visible registers
/// General Capabilities and ID Register
@@ -701,7 +701,7 @@ fn is_timer_int_active(&self, index: usize) -> bool {
pub struct HPETState {
parent_obj: ParentField<SysBusDevice>,
iomem: MemoryRegion,
- regs: BqlRefCell<HPETRegisters>,
+ regs: Migratable<BqlRefCell<HPETRegisters>>,
// Internal state
/// Capabilities that QEMU HPET supports.
@@ -1120,16 +1120,17 @@ impl ObjectImpl for HPETState {
})
.build();
+// HPETTimerRegistersMigration is generated by ToMigrationState macro.
impl_vmstate_struct!(
- HPETTimerRegisters,
- VMStateDescriptionBuilder::<HPETTimerRegisters>::new()
+ HPETTimerRegistersMigration,
+ VMStateDescriptionBuilder::<HPETTimerRegistersMigration>::new()
.name(c"hpet_timer/regs")
.version_id(1)
.minimum_version_id(1)
.fields(vmstate_fields! {
- vmstate_of!(HPETTimerRegisters, config),
- vmstate_of!(HPETTimerRegisters, cmp),
- vmstate_of!(HPETTimerRegisters, fsb),
+ vmstate_of!(HPETTimerRegistersMigration, config),
+ vmstate_of!(HPETTimerRegistersMigration, cmp),
+ vmstate_of!(HPETTimerRegistersMigration, fsb),
})
.build()
);
@@ -1151,17 +1152,18 @@ impl ObjectImpl for HPETState {
const VALIDATE_TIMERS_NAME: &CStr = c"num_timers must match";
+// HPETRegistersMigration is generated by ToMigrationState macro.
impl_vmstate_struct!(
- HPETRegisters,
- VMStateDescriptionBuilder::<HPETRegisters>::new()
+ HPETRegistersMigration,
+ VMStateDescriptionBuilder::<HPETRegistersMigration>::new()
.name(c"hpet/regs")
.version_id(2)
.minimum_version_id(2)
.fields(vmstate_fields! {
- vmstate_of!(HPETRegisters, config),
- vmstate_of!(HPETRegisters, int_status),
- vmstate_of!(HPETRegisters, counter),
- vmstate_of!(HPETRegisters, tn_regs),
+ vmstate_of!(HPETRegistersMigration, config),
+ vmstate_of!(HPETRegistersMigration, int_status),
+ vmstate_of!(HPETRegistersMigration, counter),
+ vmstate_of!(HPETRegistersMigration, tn_regs),
})
.build()
);
--
2.34.1
^ permalink raw reply related [flat|nested] 30+ messages in thread* [PATCH 21/22] rust/hpet: Replace BqlRefCell<HPETRegisters> with Mutex<HPETRegisters>
2025-11-13 5:19 [PATCH 00/22] rust/hpet: Move towards lockless IO, partly Zhao Liu
` (19 preceding siblings ...)
2025-11-13 5:19 ` [PATCH 20/22] rust/hpet: Apply Migratable<> wrapper and ToMigrationState for HPETRegisters Zhao Liu
@ 2025-11-13 5:19 ` Zhao Liu
2025-11-13 9:31 ` Zhao Liu
2025-11-13 5:19 ` [PATCH 22/22] rust/hpet: Enable lockless IO Zhao Liu
21 siblings, 1 reply; 30+ messages in thread
From: Zhao Liu @ 2025-11-13 5:19 UTC (permalink / raw)
To: Paolo Bonzini, Manos Pitsidianakis, Marc-André Lureau
Cc: Igor Mammedov, qemu-devel, qemu-rust, Zhao Liu
Replace BqlRefCell<HPETRegisters> with Mutex<HPETRegisters> and pass
&MutexGuard<HPETRegisters>/&mut MutexGuard<HPETRegisters> as the argument
types to child method calls.
Since the MutexGuard clearly reflects that the register data is protected
by the Mutex, remove the bql::is_locked() checks.
The Mutex can then be locked at the top level, including locking once per
MMIO access. Though lockless IO hasn't been enabled for HPET yet, the
Mutex doesn't conflict with the BQL for now, so it's safe to do this
replacement.
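The resulting call shape, as a minimal sketch (placeholder function
names; set_cfg() here stands in for the child setters in the diff below):
    // Sketch: lock the Mutex once at the top level, then pass the guard
    // down by reference so no child method calls lock() again (avoiding
    // reentrant locking on the same Mutex).
    fn mmio_write(state: &HPETState, val: u64) {
        let regs = &mut state.regs.lock().unwrap(); // &mut MutexGuard<HPETRegisters>
        set_cfg(regs, val);
    }
    fn set_cfg(regs: &mut MutexGuard<HPETRegisters>, val: u64) {
        regs.config = val; // the guard derefs to HPETRegisters
    }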
Signed-off-by: Zhao Liu <zhao1.liu@intel.com>
---
rust/hw/timer/hpet/src/device.rs | 199 ++++++++++---------------------
1 file changed, 62 insertions(+), 137 deletions(-)
diff --git a/rust/hw/timer/hpet/src/device.rs b/rust/hw/timer/hpet/src/device.rs
index 42e2e8d070c3..db3e2c8fa23c 100644
--- a/rust/hw/timer/hpet/src/device.rs
+++ b/rust/hw/timer/hpet/src/device.rs
@@ -8,6 +8,7 @@
pin::Pin,
ptr::{addr_of_mut, null_mut, NonNull},
slice::from_ref,
+ sync::{Mutex, MutexGuard},
};
use bql::{BqlCell, BqlRefCell};
@@ -179,8 +180,8 @@ const fn deactivating_bit(old: u64, new: u64, shift: usize) -> bool {
fn timer_handler(timer_cell: &BqlRefCell<HPETTimer>) {
let mut t = timer_cell.borrow_mut();
// SAFETY: state field is valid after timer initialization.
- let regs = &mut unsafe { t.state.as_mut() }.regs.borrow_mut();
- t.callback(regs)
+ let mut regs = unsafe { t.state.as_ref() }.regs.lock().unwrap();
+ t.callback(&mut regs)
}
#[repr(C)]
@@ -299,13 +300,7 @@ fn is_int_active(&self, regs: &HPETRegisters) -> bool {
/// calculate next value of the general counter that matches the
/// target (either entirely, or the low 32-bit only depending on
/// the timer mode).
- fn calculate_cmp64(&self, regs: &HPETRegisters, cur_tick: u64, target: u64) -> u64 {
- // &HPETRegisters should be gotten from BqlRefCell<HPETRegisters>,
- // but there's no lock guard to guarantee this. So we have to check BQL
- // context explicitly. This check should be removed when we switch to
- // Mutex<HPETRegisters>.
- assert!(bql::is_locked());
-
+ fn calculate_cmp64(&self, regs: &MutexGuard<HPETRegisters>, cur_tick: u64, target: u64) -> u64 {
let tn_regs = &regs.tn_regs[self.index as usize];
if tn_regs.is_32bit_mod() {
let mut result: u64 = cur_tick.deposit(0, 32, target);
@@ -318,13 +313,7 @@ fn calculate_cmp64(&self, regs: &HPETRegisters, cur_tick: u64, target: u64) -> u
}
}
- fn get_int_route(&self, regs: &HPETRegisters) -> usize {
- // &HPETRegisters should be gotten from BqlRefCell<HPETRegisters>,
- // but there's no lock guard to guarantee this. So we have to check BQL
- // context explicitly. This check should be removed when we switch to
- // Mutex<HPETRegisters>.
- assert!(bql::is_locked());
-
+ fn get_int_route(&self, regs: &MutexGuard<HPETRegisters>) -> usize {
if self.index <= 1 && regs.is_legacy_mode() {
// If LegacyReplacement Route bit is set, HPET specification requires
// timer0 be routed to IRQ0 in NON-APIC or IRQ2 in the I/O APIC,
@@ -349,13 +338,7 @@ fn get_int_route(&self, regs: &HPETRegisters) -> usize {
}
}
- fn set_irq(&self, regs: &HPETRegisters, set: bool) {
- // &HPETRegisters should be gotten from BqlRefCell<HPETRegisters>,
- // but there's no lock guard to guarantee this. So we have to check BQL
- // context explicitly. This check should be removed when we switch to
- // Mutex<HPETRegisters>.
- assert!(bql::is_locked());
-
+ fn set_irq(&self, regs: &MutexGuard<HPETRegisters>, set: bool) {
let tn_regs = &regs.tn_regs[self.index as usize];
let route = self.get_int_route(regs);
@@ -382,13 +365,7 @@ fn set_irq(&self, regs: &HPETRegisters, set: bool) {
}
}
- fn update_irq(&self, regs: &mut HPETRegisters, set: bool) {
- // &mut HPETRegisters should be gotten from BqlRefCell<HPETRegisters>,
- // but there's no lock guard to guarantee this. So we have to check BQL
- // context explicitly. This check should be removed when we switch to
- // Mutex<HPETRegisters>.
- assert!(bql::is_locked());
-
+ fn update_irq(&self, regs: &mut MutexGuard<HPETRegisters>, set: bool) {
// If Timer N Interrupt Enable bit is 0, "the timer will
// still operate and generate appropriate status bits, but
// will not cause an interrupt"
@@ -400,13 +377,7 @@ fn update_irq(&self, regs: &mut HPETRegisters, set: bool) {
self.set_irq(regs, set);
}
- fn arm_timer(&mut self, regs: &HPETRegisters, tick: u64) {
- // &HPETRegisters should be gotten from BqlRefCell<HPETRegisters>,
- // but there's no lock guard to guarantee this. So we have to check BQL
- // context explicitly. This check should be removed when we switch to
- // Mutex<HPETRegisters>.
- assert!(bql::is_locked());
-
+ fn arm_timer(&mut self, regs: &MutexGuard<HPETRegisters>, tick: u64) {
let tn_regs = &regs.tn_regs[self.index as usize];
let mut ns = self.get_state().get_ns(tick);
@@ -419,13 +390,7 @@ fn arm_timer(&mut self, regs: &HPETRegisters, tick: u64) {
self.qemu_timer.modify(self.last);
}
- fn set_timer(&mut self, regs: &HPETRegisters) {
- // &HPETRegisters should be gotten from BqlRefCell<HPETRegisters>,
- // but there's no lock guard to guarantee this. So we have to check BQL
- // context explicitly. This check should be removed when we switch to
- // Mutex<HPETRegisters>.
- assert!(bql::is_locked());
-
+ fn set_timer(&mut self, regs: &MutexGuard<HPETRegisters>) {
let tn_regs = &regs.tn_regs[self.index as usize];
let cur_tick: u64 = self.get_state().get_ticks();
@@ -443,13 +408,7 @@ fn set_timer(&mut self, regs: &HPETRegisters) {
self.arm_timer(regs, self.cmp64);
}
- fn del_timer(&self, regs: &mut HPETRegisters) {
- // &mut HPETRegisters should be gotten from BqlRefCell<HPETRegisters>,
- // but there's no lock guard to guarantee this. So we have to check BQL
- // context explicitly. This check should be removed when we switch to
- // Mutex<HPETRegisters>.
- assert!(bql::is_locked());
-
+ fn del_timer(&self, regs: &mut MutexGuard<HPETRegisters>) {
// Just remove the timer from the timer_list without destroying
// this timer instance.
self.qemu_timer.delete();
@@ -463,17 +422,11 @@ fn del_timer(&self, regs: &mut HPETRegisters) {
fn prepare_tn_cfg_reg_new(
&self,
- regs: &mut HPETRegisters,
+ regs: &mut MutexGuard<HPETRegisters>,
shift: u32,
len: u32,
val: u64,
) -> (u64, u64) {
- // &mut HPETRegisters should be gotten from BqlRefCell<HPETRegisters>,
- // but there's no lock guard to guarantee this. So we have to check BQL
- // context explicitly. This check should be removed when we switch to
- // Mutex<HPETRegisters>.
- assert!(bql::is_locked());
-
let tn_regs = &regs.tn_regs[self.index as usize];
// TODO: Add trace point - trace_hpet_ram_write_tn_cfg(addr & 4)
let old_val: u64 = tn_regs.config;
@@ -491,13 +444,13 @@ fn prepare_tn_cfg_reg_new(
}
/// Configuration and Capability Register
- fn set_tn_cfg_reg(&mut self, regs: &mut HPETRegisters, shift: u32, len: u32, val: u64) {
- // &mut HPETRegisters should be gotten from BqlRefCell<HPETRegisters>,
- // but there's no lock guard to guarantee this. So we have to check BQL
- // context explicitly. This check should be removed when we switch to
- // Mutex<HPETRegisters>.
- assert!(bql::is_locked());
-
+ fn set_tn_cfg_reg(
+ &mut self,
+ regs: &mut MutexGuard<HPETRegisters>,
+ shift: u32,
+ len: u32,
+ val: u64,
+ ) {
// Factor out a prepare_tn_cfg_reg_new() to better handle immutable scope.
let (new_val, old_val) = self.prepare_tn_cfg_reg_new(regs, shift, len, val);
// After prepare_tn_cfg_reg_new(), it's safe to access int_status with an
@@ -523,13 +476,13 @@ fn set_tn_cfg_reg(&mut self, regs: &mut HPETRegisters, shift: u32, len: u32, val
}
/// Comparator Value Register
- fn set_tn_cmp_reg(&mut self, regs: &mut HPETRegisters, shift: u32, len: u32, val: u64) {
- // &mut HPETRegisters should be gotten from BqlRefCell<HPETRegisters>,
- // but there's no lock guard to guarantee this. So we have to check BQL
- // context explicitly. This check should be removed when we switch to
- // Mutex<HPETRegisters>.
- assert!(bql::is_locked());
-
+ fn set_tn_cmp_reg(
+ &mut self,
+ regs: &mut MutexGuard<HPETRegisters>,
+ shift: u32,
+ len: u32,
+ val: u64,
+ ) {
let tn_regs = &mut regs.tn_regs[self.index as usize];
let mut length = len;
let mut value = val;
@@ -560,24 +513,18 @@ fn set_tn_cmp_reg(&mut self, regs: &mut HPETRegisters, shift: u32, len: u32, val
}
/// FSB Interrupt Route Register
- fn set_tn_fsb_route_reg(&self, regs: &mut HPETRegisters, shift: u32, len: u32, val: u64) {
- // &mut HPETRegisters should be gotten from BqlRefCell<HPETRegisters>,
- // but there's no lock guard to guarantee this. So we have to check BQL
- // context explicitly. This check should be removed when we switch to
- // Mutex<HPETRegisters>.
- assert!(bql::is_locked());
-
+ fn set_tn_fsb_route_reg(
+ &self,
+ regs: &mut MutexGuard<HPETRegisters>,
+ shift: u32,
+ len: u32,
+ val: u64,
+ ) {
let tn_regs = &mut regs.tn_regs[self.index as usize];
tn_regs.fsb = tn_regs.fsb.deposit(shift, len, val);
}
- fn reset(&mut self, regs: &mut HPETRegisters) {
- // &mut HPETRegisters should be gotten from BqlRefCell<HPETRegisters>,
- // but there's no lock guard to guarantee this. So we have to check BQL
- // context explicitly. This check should be removed when we switch to
- // Mutex<HPETRegisters>.
- assert!(bql::is_locked());
-
+ fn reset(&mut self, regs: &mut MutexGuard<HPETRegisters>) {
self.del_timer(regs);
let tn_regs = &mut regs.tn_regs[self.index as usize];
@@ -594,13 +541,7 @@ fn reset(&mut self, regs: &mut HPETRegisters) {
}
/// timer expiration callback
- fn callback(&mut self, regs: &mut HPETRegisters) {
- // &mut HPETRegisters should be gotten from BqlRefCell<HPETRegisters>,
- // but there's no lock guard to guarantee this. So we have to check BQL
- // context explicitly. This check should be removed when we switch to
- // Mutex<HPETRegisters>.
- assert!(bql::is_locked());
-
+ fn callback(&mut self, regs: &mut MutexGuard<HPETRegisters>) {
let tn_regs = &mut regs.tn_regs[self.index as usize];
let period: u64 = self.period;
let cur_tick: u64 = self.get_state().get_ticks();
@@ -622,13 +563,7 @@ fn callback(&mut self, regs: &mut HPETRegisters) {
self.update_irq(regs, true);
}
- fn read(&self, target: TimerRegister, regs: &HPETRegisters) -> u64 {
- // &HPETRegisters should be gotten from BqlRefCell<HPETRegisters>,
- // but there's no lock guard to guarantee this. So we have to check BQL
- // context explicitly. This check should be removed when we switch to
- // Mutex<HPETRegisters>.
- assert!(bql::is_locked());
-
+ fn read(&self, target: TimerRegister, regs: &MutexGuard<HPETRegisters>) -> u64 {
let tn_regs = &regs.tn_regs[self.index as usize];
use TimerRegister::*;
@@ -642,17 +577,11 @@ fn read(&self, target: TimerRegister, regs: &HPETRegisters) -> u64 {
fn write(
&mut self,
target: TimerRegister,
- regs: &mut HPETRegisters,
+ regs: &mut MutexGuard<HPETRegisters>,
value: u64,
shift: u32,
len: u32,
) {
- // &mut HPETRegisters should be gotten from BqlRefCell<HPETRegisters>,
- // but there's no lock guard to guarantee this. So we have to check BQL
- // context explicitly. This check should be removed when we switch to
- // Mutex<HPETRegisters>.
- assert!(bql::is_locked());
-
use TimerRegister::*;
match target {
CFG => self.set_tn_cfg_reg(regs, shift, len, value),
@@ -701,7 +630,7 @@ fn is_timer_int_active(&self, index: usize) -> bool {
pub struct HPETState {
parent_obj: ParentField<SysBusDevice>,
iomem: MemoryRegion,
- regs: Migratable<BqlRefCell<HPETRegisters>>,
+ regs: Migratable<Mutex<HPETRegisters>>,
// Internal state
/// Capabilities that QEMU HPET supports.
@@ -751,7 +680,7 @@ fn get_ns(&self, tick: u64) -> u64 {
}
fn handle_legacy_irq(&self, irq: u32, level: u32) {
- let regs = self.regs.borrow();
+ let regs = self.regs.lock().unwrap();
if irq == HPET_LEGACY_PIT_INT {
if !regs.is_legacy_mode() {
self.irqs[0].set(level != 0);
@@ -780,13 +709,7 @@ fn init_timers(this: &mut MaybeUninit<Self>) {
}
/// General Configuration Register
- fn set_cfg_reg(&self, regs: &mut HPETRegisters, shift: u32, len: u32, val: u64) {
- // &mut HPETRegisters should be gotten from BqlRefCell<HPETRegisters>,
- // but there's no lock guard to guarantee this. So we have to check BQL
- // context explicitly. This check should be removed when we switch to
- // Mutex<HPETRegisters>.
- assert!(bql::is_locked());
-
+ fn set_cfg_reg(&self, regs: &mut MutexGuard<HPETRegisters>, shift: u32, len: u32, val: u64) {
let old_val = regs.config;
let mut new_val = old_val.deposit(shift, len, val);
@@ -831,13 +754,13 @@ fn set_cfg_reg(&self, regs: &mut HPETRegisters, shift: u32, len: u32, val: u64)
}
/// General Interrupt Status Register: Read/Write Clear
- fn set_int_status_reg(&self, regs: &mut HPETRegisters, shift: u32, _len: u32, val: u64) {
- // &mut HPETRegisters should be gotten from BqlRefCell<HPETRegisters>,
- // but there's no lock guard to guarantee this. So we have to check BQL
- // context explicitly. This check should be removed when we switch to
- // Mutex<HPETRegisters>.
- assert!(bql::is_locked());
-
+ fn set_int_status_reg(
+ &self,
+ regs: &mut MutexGuard<HPETRegisters>,
+ shift: u32,
+ _len: u32,
+ val: u64,
+ ) {
let new_val = val << shift;
let cleared = new_val & regs.int_status;
@@ -849,13 +772,13 @@ fn set_int_status_reg(&self, regs: &mut HPETRegisters, shift: u32, _len: u32, va
}
/// Main Counter Value Register
- fn set_counter_reg(&self, regs: &mut HPETRegisters, shift: u32, len: u32, val: u64) {
- // &mut HPETRegisters should be gotten from BqlRefCell<HPETRegisters>,
- // but there's no lock guard to guarantee this. So we have to check BQL
- // context explicitly. This check should be removed when we switch to
- // Mutex<HPETRegisters>.
- assert!(bql::is_locked());
-
+ fn set_counter_reg(
+ &self,
+ regs: &mut MutexGuard<HPETRegisters>,
+ shift: u32,
+ len: u32,
+ val: u64,
+ ) {
if regs.is_hpet_enabled() {
// TODO: Add trace point -
// trace_hpet_ram_write_counter_write_while_enabled()
@@ -922,7 +845,7 @@ fn realize(&self) -> util::Result<()> {
self.hpet_id.set(HPETFwConfig::assign_hpet_id()?);
// 64-bit General Capabilities and ID Register; LegacyReplacementRoute.
- self.regs.borrow_mut().capability = HPET_CAP_REV_ID_VALUE << HPET_CAP_REV_ID_SHIFT |
+ self.regs.lock().unwrap().capability = HPET_CAP_REV_ID_VALUE << HPET_CAP_REV_ID_SHIFT |
1 << HPET_CAP_COUNT_SIZE_CAP_SHIFT |
1 << HPET_CAP_LEG_RT_CAP_SHIFT |
HPET_CAP_VENDER_ID_VALUE << HPET_CAP_VENDER_ID_SHIFT |
@@ -939,7 +862,7 @@ fn reset_hold(&self, _type: ResetType) {
// again. We cannot borrow BqlRefCell twice at once. Minimize the
// scope of regs to ensure it will be dropped before irq callback.
{
- let mut regs = self.regs.borrow_mut();
+ let mut regs = self.regs.lock().unwrap();
for timer in self.timers.iter().take(self.num_timers) {
timer.borrow_mut().reset(&mut regs);
@@ -989,7 +912,7 @@ fn decode(&self, mut addr: hwaddr, size: u32) -> HPETAddrDecode<'_> {
fn read(&self, addr: hwaddr, size: u32) -> u64 {
// TODO: Add trace point - trace_hpet_ram_read(addr)
let HPETAddrDecode { shift, target, .. } = self.decode(addr, size);
- let regs = &self.regs.borrow();
+ let regs = &self.regs.lock().unwrap();
use DecodedRegister::*;
use GlobalRegister::*;
@@ -1016,7 +939,7 @@ fn read(&self, addr: hwaddr, size: u32) -> u64 {
fn write(&self, addr: hwaddr, value: u64, size: u32) {
let HPETAddrDecode { shift, len, target } = self.decode(addr, size);
- let regs = &mut self.regs.borrow_mut();
+ let regs = &mut self.regs.lock().unwrap();
// TODO: Add trace point - trace_hpet_ram_write(addr, value)
use DecodedRegister::*;
@@ -1034,9 +957,10 @@ fn write(&self, addr: hwaddr, value: u64, size: u32) {
}
fn pre_save(&self) -> Result<(), migration::Infallible> {
- let mut regs = self.regs.borrow_mut();
+ let mut regs = self.regs.lock().unwrap();
if regs.is_hpet_enabled() {
regs.counter = self.get_ticks();
+ drop(regs); // required by clippy::significant-drop-tightening
}
/*
@@ -1049,7 +973,7 @@ fn pre_save(&self) -> Result<(), migration::Infallible> {
}
fn post_load(&self, _version_id: u8) -> Result<(), migration::Infallible> {
- let regs = self.regs.borrow();
+ let regs = self.regs.lock().unwrap();
for timer in self.timers.iter().take(self.num_timers) {
let mut t = timer.borrow_mut();
@@ -1064,6 +988,7 @@ fn post_load(&self, _version_id: u8) -> Result<(), migration::Infallible> {
if !self.hpet_offset_saved {
self.hpet_offset
.set(ticks_to_ns(regs.counter) - CLOCK_VIRTUAL.get_ns());
+ drop(regs); // required by clippy::significant-drop-tightening
}
Ok(())
@@ -1074,7 +999,7 @@ fn is_rtc_irq_level_needed(&self) -> bool {
}
fn is_offset_needed(&self) -> bool {
- self.regs.borrow().is_hpet_enabled() && self.hpet_offset_saved
+ self.regs.lock().unwrap().is_hpet_enabled() && self.hpet_offset_saved
}
fn validate_num_timers(&self, _version_id: u8) -> bool {
--
2.34.1
^ permalink raw reply related [flat|nested] 30+ messages in thread* Re: [PATCH 21/22] rust/hpet: Replace BqlRefCell<HPETRegisters> with Mutex<HPETRegisters>
2025-11-13 5:19 ` [PATCH 21/22] rust/hpet: Replace BqlRefCell<HPETRegisters> with Mutex<HPETRegisters> Zhao Liu
@ 2025-11-13 9:31 ` Zhao Liu
2025-11-13 11:36 ` Zhao Liu
0 siblings, 1 reply; 30+ messages in thread
From: Zhao Liu @ 2025-11-13 9:31 UTC (permalink / raw)
To: Paolo Bonzini, Manos Pitsidianakis, Marc-André Lureau
Cc: Igor Mammedov, qemu-devel, qemu-rust
> @@ -179,8 +180,8 @@ const fn deactivating_bit(old: u64, new: u64, shift: usize) -> bool {
> fn timer_handler(timer_cell: &BqlRefCell<HPETTimer>) {
> let mut t = timer_cell.borrow_mut();
> // SAFETY: state field is valid after timer initialization.
> - let regs = &mut unsafe { t.state.as_mut() }.regs.borrow_mut();
> - t.callback(regs)
> + let mut regs = unsafe { t.state.as_ref() }.regs.lock().unwrap();
> + t.callback(&mut regs)
> }
callback()
-> arm_timer(): access timer N register
-> update_irq(): modify global register (int_status or "isr" in C code)
So the timer handler needs to take the Mutex. But this may cause a deadlock:
timer_handler -> lock BQL -> try to lock Mutex
MMIO access -> lock Mutex -> try to lock BQL
C HPET doesn't have such a deadlock issue since it doesn't take the Mutex
in its timer handler.
I think it's necessary to take the Mutex in the timer handler, since
otherwise there's no guarantee against data races... The sketch below
illustrates the inverted lock order.
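For illustration, a minimal std-only sketch of the AB-BA inversion (plain
std::sync::Mutex stands in for both the BQL and Mutex<HPETRegisters>;
whether a given run actually hangs depends on the interleaving, and
hanging is the point of the demo):

use std::sync::{Arc, Mutex};
use std::thread;

fn main() {
    let bql = Arc::new(Mutex::new(()));    // stands in for the BQL
    let regs = Arc::new(Mutex::new(0u64)); // stands in for Mutex<HPETRegisters>

    let (b, r) = (Arc::clone(&bql), Arc::clone(&regs));
    let timer = thread::spawn(move || {
        let _bql = b.lock().unwrap();  // timer handler: BQL first...
        let _regs = r.lock().unwrap(); // ...then registers
    });

    let (b, r) = (Arc::clone(&bql), Arc::clone(&regs));
    let mmio = thread::spawn(move || {
        let _regs = r.lock().unwrap(); // lockless MMIO: registers first...
        let _bql = b.lock().unwrap();  // ...then BQL -> AB-BA deadlock
    });

    timer.join().unwrap();
    mmio.join().unwrap();
}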
Thanks,
Zhao
^ permalink raw reply [flat|nested] 30+ messages in thread* Re: [PATCH 21/22] rust/hpet: Replace BqlRefCell<HPETRegisters> with Mutex<HPETRegisters>
2025-11-13 9:31 ` Zhao Liu
@ 2025-11-13 11:36 ` Zhao Liu
0 siblings, 0 replies; 30+ messages in thread
From: Zhao Liu @ 2025-11-13 11:36 UTC (permalink / raw)
To: Paolo Bonzini, Manos Pitsidianakis, Marc-André Lureau
Cc: Igor Mammedov, qemu-devel, qemu-rust, Zhao Liu
On Thu, Nov 13, 2025 at 05:31:42PM +0800, Zhao Liu wrote:
> Date: Thu, 13 Nov 2025 17:31:42 +0800
> From: Zhao Liu <zhao1.liu@intel.com>
> Subject: Re: [PATCH 21/22] rust/hpet: Replace BqlRefCell<HPETRegisters>
> with Mutex<HPETRegisters>
>
> > @@ -179,8 +180,8 @@ const fn deactivating_bit(old: u64, new: u64, shift: usize) -> bool {
> > fn timer_handler(timer_cell: &BqlRefCell<HPETTimer>) {
> > let mut t = timer_cell.borrow_mut();
> > // SAFETY: state field is valid after timer initialization.
> > - let regs = &mut unsafe { t.state.as_mut() }.regs.borrow_mut();
> > - t.callback(regs)
> > + let mut regs = unsafe { t.state.as_ref() }.regs.lock().unwrap();
> > + t.callback(&mut regs)
> > }
>
> callback()
> -> arm_timer(): access timer N register
> -> update_irq(): modify global register (int_status or "isr" in C code)
>
> So the timer handler needs to take the Mutex. But this may cause a deadlock:
>
> timer_handler -> lock BQL -> try to lock Mutex
> MMIO access -> lock Mutex -> try to lock BQL
>
> C HPET doesn't have such a deadlock issue since it doesn't take the Mutex
> in its timer handler.
>
> I think it's necessary to take the Mutex in the timer handler, since
> otherwise there's no guarantee against data races...
One possible way would be to introduce a lockless timer callback, but on
the Rust side this requires extracting the timers from BqlRefCell and
adding an extra Mutex to protect the timer state.
A simpler way is to just unlock the BQL before acquiring the Mutex in the
timer handler, which gives MMIO a chance to acquire the BQL. This also
fixes the locking order in the timer handler.
Code example:
diff --git a/rust/hw/timer/hpet/src/device.rs b/rust/hw/timer/hpet/src/device.rs
index f96dfe1ebd06..389eb9b49eb6 100644
--- a/rust/hw/timer/hpet/src/device.rs
+++ b/rust/hw/timer/hpet/src/device.rs
@@ -178,10 +178,35 @@ const fn deactivating_bit(old: u64, new: u64, shift: usize) -> bool {
}
fn timer_handler(timer_cell: &BqlRefCell<HPETTimer>) {
- let mut t = timer_cell.borrow_mut();
- // SAFETY: state field is valid after timer initialization.
- let mut regs = unsafe { t.state.as_ref() }.regs.lock().unwrap();
- t.callback(&mut regs)
+ let state_p = {
+ let t = timer_cell.borrow();
+ t.state
+ };
+
+ // Release the BQL first and acquire the Mutex instead. This avoids
+ // the deadlock, since lockless IO locks the Mutex first and then
+ // tries to acquire the BQL.
+ //
+ // SAFETY: the BQL-free context only takes the Mutex and does nothing else.
+ unsafe {
+ bql::unlock();
+ }
+
+ // SAFETY: state_p is valid and we only access the Mutex, without
+ // touching other fields. The Mutex guarantees that register access
+ // is safe while the BQL is unlocked.
+ let mut regs = unsafe { state_p.as_ref() }.regs.lock().unwrap();
+
+ // After the Mutex is locked, take the BQL again. This ensures that
+ // the timer handler and MMIO have the same locking order.
+ //
+ // SAFETY: BQL context is expected for the timer handler, and the
+ // correct locking order now eliminates the deadlock.
+ unsafe {
+ bql::lock();
+ }
+
+ timer_cell.borrow_mut().callback(&mut regs);
}
#[repr(C)]
Thanks,
Zhao
^ permalink raw reply related [flat|nested] 30+ messages in thread
* [PATCH 22/22] rust/hpet: Enable lockless IO
2025-11-13 5:19 [PATCH 00/22] rust/hpet: Move towards lockless IO, partly Zhao Liu
` (20 preceding siblings ...)
2025-11-13 5:19 ` [PATCH 21/22] rust/hpet: Replace BqlRefCell<HPETRegisters> with Mutex<HPETRegisters> Zhao Liu
@ 2025-11-13 5:19 ` Zhao Liu
2025-11-13 14:29 ` Paolo Bonzini
21 siblings, 1 reply; 30+ messages in thread
From: Zhao Liu @ 2025-11-13 5:19 UTC (permalink / raw)
To: Paolo Bonzini, Manos Pitsidianakis, Marc-André Lureau
Cc: Igor Mammedov, qemu-devel, qemu-rust, Zhao Liu
Enable lockless IO for HPET to allow BQL free MMIO access.
But BQL context is still needed for some cases during MMIO:
* IRQ handling:
As the C version of HPET does (commit d99041a20328, "hpet: guard IRQ
handling with BQL"), BQL context is needed during IRQ handling.
But Rust HPET has an extra reason: InterruptSource is placed in a
BqlCell, which explicitly requires BQL context. (This also shows
that the BQL requirement in the design of the InterruptSource binding
is reasonable.)
* BqlCell/BqlRefCell access.
Besides InterruptSource, HPETState has other BqlCell and BqlRefCell
fields: hpet_offset (BqlCell<u64>), rtc_irq_level (BqlCell<u32>) and
timers ([BqlRefCell<HPETTimer>; HPET_MAX_TIMERS]).
Their data may change at runtime, so BQL context is required.
Therefore, use BqlGuard to provide BQL context in the MMIO path.
Signed-off-by: Zhao Liu <zhao1.liu@intel.com>
---
Q: The C version of HPET doesn't guard hpet_offset, rtc_irq_level and
timers with the BQL. Should we add the guard for these fields?
---
rust/hw/timer/hpet/src/device.rs | 67 ++++++++++++++++++++------------
1 file changed, 43 insertions(+), 24 deletions(-)
diff --git a/rust/hw/timer/hpet/src/device.rs b/rust/hw/timer/hpet/src/device.rs
index db3e2c8fa23c..f96dfe1ebd06 100644
--- a/rust/hw/timer/hpet/src/device.rs
+++ b/rust/hw/timer/hpet/src/device.rs
@@ -356,12 +356,12 @@ fn set_irq(&self, regs: &MutexGuard<HPETRegisters>, set: bool) {
);
}
} else if tn_regs.is_int_level_triggered() {
- self.get_state().irqs[route].raise();
+ bql::with_guard(|| self.get_state().irqs[route].raise());
} else {
- self.get_state().irqs[route].pulse();
+ bql::with_guard(|| self.get_state().irqs[route].pulse());
}
} else if !tn_regs.is_fsb_route_enabled() {
- self.get_state().irqs[route].lower();
+ bql::with_guard(|| self.get_state().irqs[route].lower());
}
}
@@ -672,14 +672,20 @@ const fn has_msi_flag(&self) -> bool {
}
fn get_ticks(&self) -> u64 {
- ns_to_ticks(CLOCK_VIRTUAL.get_ns() + self.hpet_offset.get())
+ // Protect hpet_offset in the lockless IO case, which does not take the BQL.
+ ns_to_ticks(CLOCK_VIRTUAL.get_ns() + bql::with_guard(|| self.hpet_offset.get()))
}
fn get_ns(&self, tick: u64) -> u64 {
- ticks_to_ns(tick) - self.hpet_offset.get()
+ // Protect hpet_offset in the lockless IO case, which does not take the BQL.
+ ticks_to_ns(tick) - bql::with_guard(|| self.hpet_offset.get())
}
fn handle_legacy_irq(&self, irq: u32, level: u32) {
+ // Protect both rtc_irq_level and irqs[] in the lockless IO case,
+ // which does not take the BQL.
+ let _bql_guard = bql::BqlGuard::new();
+
let regs = self.regs.lock().unwrap();
if irq == HPET_LEGACY_PIT_INT {
if !regs.is_legacy_mode() {
@@ -718,38 +724,49 @@ fn set_cfg_reg(&self, regs: &mut MutexGuard<HPETRegisters>, shift: u32, len: u32
if activating_bit(old_val, new_val, HPET_CFG_ENABLE_SHIFT) {
// Enable main counter and interrupt generation.
- self.hpet_offset
- .set(ticks_to_ns(regs.counter) - CLOCK_VIRTUAL.get_ns());
+ // Protect hpet_offset in the lockless IO case, which does not take the BQL.
+ bql::with_guard(|| {
+ self.hpet_offset
+ .set(ticks_to_ns(regs.counter) - CLOCK_VIRTUAL.get_ns())
+ });
for timer in self.timers.iter().take(self.num_timers) {
- let mut t = timer.borrow_mut();
- let id = t.index as usize;
- let tn_regs = &regs.tn_regs[id];
-
- if tn_regs.is_int_enabled() && t.is_int_active(regs) {
- t.update_irq(regs, true);
- }
- t.set_timer(regs);
+ // Protect the timer in the lockless IO case, which does not take the BQL.
+ bql::with_guard(|| {
+ let mut t = timer.borrow_mut();
+ let id = t.index as usize;
+ let tn_regs = &regs.tn_regs[id];
+
+ if tn_regs.is_int_enabled() && t.is_int_active(regs) {
+ t.update_irq(regs, true);
+ }
+ t.set_timer(regs);
+ });
}
} else if deactivating_bit(old_val, new_val, HPET_CFG_ENABLE_SHIFT) {
// Halt main counter and disable interrupt generation.
regs.counter = self.get_ticks();
for timer in self.timers.iter().take(self.num_timers) {
- timer.borrow().del_timer(regs);
+ // Protect the timer in the lockless IO case, which does not take the BQL.
+ bql::with_guard(|| timer.borrow().del_timer(regs));
}
}
// i8254 and RTC output pins are disabled when HPET is in legacy mode
if activating_bit(old_val, new_val, HPET_CFG_LEG_RT_SHIFT) {
- self.pit_enabled.set(false);
- self.irqs[0].lower();
- self.irqs[RTC_ISA_IRQ].lower();
+ bql::with_guard(|| {
+ self.pit_enabled.set(false);
+ self.irqs[0].lower();
+ self.irqs[RTC_ISA_IRQ].lower();
+ });
} else if deactivating_bit(old_val, new_val, HPET_CFG_LEG_RT_SHIFT) {
- // TODO: Add irq binding: qemu_irq_lower(s->irqs[0])
- self.irqs[0].lower();
- self.pit_enabled.set(true);
- self.irqs[RTC_ISA_IRQ].set(self.rtc_irq_level.get() != 0);
+ bql::with_guard(|| {
+ // TODO: Add irq binding: qemu_irq_lower(s->irqs[0])
+ self.irqs[0].lower();
+ self.pit_enabled.set(true);
+ self.irqs[RTC_ISA_IRQ].set(self.rtc_irq_level.get() != 0);
+ });
}
}
@@ -766,7 +783,8 @@ fn set_int_status_reg(
for (index, timer) in self.timers.iter().take(self.num_timers).enumerate() {
if cleared & (1 << index) != 0 {
- timer.borrow().update_irq(regs, false);
+ // Protect the timer in the lockless IO case, which does not take the BQL.
+ bql::with_guard(|| timer.borrow().update_irq(regs, false));
}
}
}
@@ -827,6 +845,7 @@ unsafe fn init(mut this: ParentInit<Self>) {
fn post_init(&self) {
self.init_mmio(&self.iomem);
+ self.iomem.enable_lockless_io();
for irq in self.irqs.iter() {
self.init_irq(irq);
}
--
2.34.1
^ permalink raw reply related [flat|nested] 30+ messages in thread* Re: [PATCH 22/22] rust/hpet: Enable lockless IO
2025-11-13 5:19 ` [PATCH 22/22] rust/hpet: Enable lockless IO Zhao Liu
@ 2025-11-13 14:29 ` Paolo Bonzini
2025-11-14 6:39 ` Zhao Liu
0 siblings, 1 reply; 30+ messages in thread
From: Paolo Bonzini @ 2025-11-13 14:29 UTC (permalink / raw)
To: Zhao Liu, Manos Pitsidianakis, Marc-André Lureau
Cc: Igor Mammedov, qemu-devel, qemu-rust
On 11/13/25 06:19, Zhao Liu wrote:
> Enable lockless IO for HPET to allow BQL free MMIO access.
>
> But BQL context is still needed for some cases during MMIO:
> * IRQ handling:
>
> As the C version of HPET does (commit d99041a20328, "hpet: guard IRQ
> handling with BQL"), BQL context is needed during IRQ handling.
>
> But Rust HPET has an extra reason: InterruptSource is placed in a
> BqlCell, which explicitly requires BQL context. (This also shows
> that the BQL requirement in the design of the InterruptSource binding
> is reasonable.)
Thanks, this is helpful. It shows some complications that I honestly
hadn't thought about. So many more things to be precise about, compared
to C... but I have an idea which I'll mention below. :)
> * BqlCell/BqlRefCell access.
>
> Besides InterruptSource, HPETState has other BqlCell and BqlRefCell
> fields: hpet_offset (BqlCell<u64>), rtc_irq_level (BqlCell<u32>) and
> timers ([BqlRefCell<HPETTimer>; HPET_MAX_TIMERS]).
>
> Their data may change at runtime, so BQL context is required.
I have already mentioned HPETTimer in the other email, but I would also
move hpet_offset to HPETRegisters if possible. It doesn't seem hard.
And as an aside, I wonder if you really need to pass MutexGuard and not
&mut HPETRegisters. Once you don't have BQL dependencies, you can just
remove the assert!(bql::is_locked()) without switching to MutexGuard<>.
In the meantime, even if they are not perfect (especially with respect to
migration), I think touching patches 1-19 further is too messy, so I'll
rebase on top of Stefan's tracing patches and push them to rust-next.
Let's start from there, and maybe tomorrow I'll take a look at how to fix
migration. Migratable<HPETTimer> looks like a powerful tool for that.
Then the new problem is that we have to figure out a way to handle IRQs.
They are also messy for PL011 compared to the C version, and fixing that
will make it possible to enable lockless IO.
The crazy idea that just came to mind is a Latched<u32> that is
something like an (AtomicU32, BqlCell<u32>) tuple. Then we set the
individual bits outside the BQL and update the IRQs at the end of the
MMIO in a bql::with_guard() block. Maybe if you have some time you can
prototype that for PL011 (even without generics, you could just do
LatchedU32 for a start)? A rough sketch of the shape I mean is below.
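Very roughly, and completely untested (the names are made up, and a
plain u32 stands in for the BqlCell<u32> half, which is only ever
touched under the BQL):

use std::sync::atomic::{AtomicU32, Ordering};

pub struct LatchedU32 {
    pending: AtomicU32, // bits set lock-free, outside the BQL
    latched: u32,       // the BqlCell half; only touched under the BQL
}

impl LatchedU32 {
    pub fn set_bits(&self, bits: u32) {
        // Lock-free half: remember which bits were raised.
        self.pending.fetch_or(bits, Ordering::AcqRel);
    }

    pub fn commit(&mut self) -> u32 {
        // BQL half, called inside bql::with_guard(): fold the pending
        // bits into the latched value; the caller then updates the
        // IRQ lines from the result.
        self.latched |= self.pending.swap(0, Ordering::AcqRel);
        self.latched
    }
}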
Paolo
^ permalink raw reply [flat|nested] 30+ messages in thread* Re: [PATCH 22/22] rust/hpet: Enable lockless IO
2025-11-13 14:29 ` Paolo Bonzini
@ 2025-11-14 6:39 ` Zhao Liu
0 siblings, 0 replies; 30+ messages in thread
From: Zhao Liu @ 2025-11-14 6:39 UTC (permalink / raw)
To: Paolo Bonzini
Cc: Manos Pitsidianakis, Marc-André Lureau, Igor Mammedov,
qemu-devel, qemu-rust, Zhao Liu
> > * BqlCell/BqlRefCell access.
> >
> > Besides InterruptSource, HPETState has other BqlCell and BqlRefCell
> > fields: hpet_offset (BqlCell<u64>), rtc_irq_level (BqlCell<u32>) and
> > timers ([BqlRefCell<HPETTimer>; HPET_MAX_TIMERS]).
> >
> > Their data may change at runtime, so BQL context is required.
>
> I have already mentioned HPETTimer in the other email, but I would also move
> hpet_offset to HPETRegisters if possible. It doesn't seem hard.
Yeah, it can.
> And as an aside, I wonder if you really need to pass MutexGuard and not &mut
> HPETRegisters. Once you don't have BQL dependencies, you can just remove
> the assert!(bql::is_locked()) without switching to MutexGuard<>.
The main reason for using MutexGuard at present is to explicitly
indicate that the state is protected by a Mutex, because I was concerned
that get_mut() in the timer handler could bypass lock(). But get_mut()
depends on the unsafe code `unsafe { t.state.as_mut() }`, which always
needs careful checking and review.
So yes, we can use &mut HPETRegisters directly, for example:
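(A minimal sketch, not the actual patch: MutexGuard implements DerefMut,
so callers can simply reborrow and never see the guard type.)

let mut guard = self.regs.lock().unwrap();
// MutexGuard<HPETRegisters> derefs to HPETRegisters, so a plain
// reborrow hands out &mut HPETRegisters without exposing the lock:
let regs: &mut HPETRegisters = &mut guard;
t.callback(regs);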
> In the meantime, even if they are not perfect (especially with respect to
> migration), I think touching patches 1-19 further is too messy, so I'll
> rebase on top of Stefan's tracing patches and push them to rust-next. Let's
> start from there, and maybe tomorrow I'll take a look at how to fix
> migration. Migratable<HPETTimer> looks like a powerful tool for that.
Thank you!
> Then the new problem is that we have to figure out a way to handle IRQs.
> They are also messy for PL011 compared to the C version, and fixing that
> will make it possible to enable lockless IO.
>
> The crazy idea that just came to mind is a Latched<u32> that is something
> like an (AtomicU32, BqlCell<u32>) tuple. Then we set the individual bits
> outside the BQL and update the IRQs at the end of the MMIO in a
> bql::with_guard() block.
This is an interesting idea, and it sounds a bit like RCU
(read-copy-update)? Hmm, what does the u32 mean, the IRQ number? I
understand that the bql::with_guard() comes after the Mutex locking,
i.e., after writing the registers.
At that point, we need to know which IRQ should be operated on (this is
the u32, but we also have pit_enabled), and which operation to perform.
I'm not sure whether a tuple is enough... because there may be multiple
IRQ operations while the Mutex is held:
fn set_cfg_reg(&self, regs: &mut MutexGuard<HPETRegisters>, shift: u32, len: u32, val: u64) {
...
// i8254 and RTC output pins are disabled when HPET is in legacy mode
if activating_bit(old_val, new_val, HPET_CFG_LEG_RT_SHIFT) {
bql::with_guard(|| {
self.pit_enabled.set(false);
self.irqs[0].lower();
self.irqs[RTC_ISA_IRQ].lower();
});
} else if deactivating_bit(old_val, new_val, HPET_CFG_LEG_RT_SHIFT) {
bql::with_guard(|| {
// TODO: Add irq binding: qemu_irq_lower(s->irqs[0])
self.irqs[0].lower();
self.pit_enabled.set(true);
self.irqs[RTC_ISA_IRQ].set(self.rtc_irq_level.get() != 0);
});
}
}
So do we need a lockless queue to store IrqOps while the Mutex is held?
pub enum HPETIrqOp {
Lower(usize), // usize is index in HPETState::irqs[]
Pulse(usize),
Raise(usize),
Set(usize, bool),
PitSet(bool), // HPETState::pit_enabled
}
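The drain at the end of the MMIO could then look something like this
(just a sketch on top of the enum above; Vec stands in for whatever
lockless queue we end up picking):

fn flush_irq_ops(state: &HPETState, ops: Vec<HPETIrqOp>) {
    // Replay the ops queued while the Mutex was held, now under the BQL.
    bql::with_guard(|| {
        for op in ops {
            match op {
                HPETIrqOp::Lower(i) => state.irqs[i].lower(),
                HPETIrqOp::Pulse(i) => state.irqs[i].pulse(),
                HPETIrqOp::Raise(i) => state.irqs[i].raise(),
                HPETIrqOp::Set(i, level) => state.irqs[i].set(level),
                HPETIrqOp::PitSet(v) => state.pit_enabled.set(v),
            }
        }
    });
}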
Another point I'm considering: the IRQ ops are queued under the MMIO
Mutex, while their execution occurs under the MMIO BQL. If a timer
handler (which acquires the BQL and then the Mutex) runs between the
MMIO Mutex and MMIO BQL sections and also performs an IRQ op, this
looks like a possible "reordering" issue for the IRQ ops. Is this OK?
I guess it is, since even hardware can't guarantee that a register
operation and an IRQ operation are atomic...
Then with your idea, this could fix the deadlock I mentioned in patch 21,
and we wouldn't need the fix that unlocks the BQL in the timer handler
anymore...
BTW, shouldn't the C HPET also take the Mutex in its timer handler?
> Maybe if you have some time you can prototype that
> for PL011 (even without generics, you could just do LatchedU32 for a start)?
I guess you mean HPET? PL011 is also OK, but it hasn't reached the
lockless stage yet.
Thanks,
Zhao
^ permalink raw reply [flat|nested] 30+ messages in thread