From: Zhao Liu <zhao1.liu@intel.com>
To: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Manos Pitsidianakis" <manos.pitsidianakis@linaro.org>,
"Marc-André Lureau" <marcandre.lureau@redhat.com>,
"Igor Mammedov" <imammedo@redhat.com>,
qemu-devel@nongnu.org, qemu-rust@nongnu.org,
"Zhao Liu" <zhao1.liu@intel.com>
Subject: Re: [PATCH 22/22] rust/hpet: Enable lockless IO
Date: Fri, 14 Nov 2025 14:39:10 +0800
Message-ID: <aRbOjuDk3USq2RLK@intel.com>
In-Reply-To: <6ed6a959-fe49-4635-9051-b9bbc91dd2e4@redhat.com>
> > * BqlCell/BqlRefCell access.
> >
> > Besides InterruptSource, HPETState has other BqlCell and BqlRefCell
> > members: hpet_offset (BqlCell<u64>), rtc_irq_level (BqlCell<u32>) and
> > timers ([BqlRefCell<HPETTimer>; HPET_MAX_TIMERS]).
> >
> > Their data may change at runtime, so an atomic context is required.
>
> I have already mentioned HPETTimer in the other email, but I would also move
> hpet_offset to HPETRegisters if possible. It doesn't seem hard.
Yeah, it can.
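Something like this, I suppose (a sketch; assuming the
Mutex<HPETRegisters> from patch 21 then protects it):

pub struct HPETRegisters {
    // ... existing register fields ...
    hpet_offset: u64, // moved from HPETState's BqlCell<u64>
}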
> And as an aside, I wonder if you really need to pass MutexGuard and not &mut
> HPETRegisters. Once you don't have BQL dependencies, you can just remove
> the assert!(bql::is_locked()) without switching to MutexGuard<>.
The main reason for using MutexGuard at present is to explicitly
indicate that the data is protected by a Mutex, because I considered
that get_mut() in the timer handler could bypass lock(). But get_mut()
depends on the unsafe code `unsafe { t.state.as_mut() }`, which always
needs careful checking and review.
So yes, we can use &mut HPETRegisters directly.
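E.g., the signature could simply become (a sketch):

fn set_cfg_reg(&self, regs: &mut HPETRegisters, shift: u32, len: u32, val: u64)

with the caller locking the Mutex once at the top level and passing
&mut *guard down.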
> In the meanwhile, even if they are not perfect (especially due to
> migration), I think touching patches 1-19 further is too messy, so I'll
> rebase on top of Stefan's tracing patches and push them to rust-next. Let's
> start from there and I'll take a look tomorrow maybe on how to fix
> migration. Migratable<HPETTimer> looks like a powerful tool for that.
Thank you!
> Then the new problem is that we have to figure out a way to handle IRQs.
> They are also messy for PL011 compared to the C version, and that will make
> it possible to enable lockless IO.
>
> The crazy idea that just came to mind is a Latched<u32> that is something
> like an (AtomicU32, BqlCell<u32>) tuple. Then we set the individual bits
> outside the BQL and update IRQs at the end of the MMIO in a
> bql::with_guard() block.
This is an interesting idea, and it sounds a bit like RCU
(read-copy-update)?
Hmm, what does the u32 mean, the IRQ number? I understand the
bql::with_guard() comes after the Mutex-protected section, i.e., after
writing the registers.
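If I picture it right, maybe something like this (a rough sketch, all
names hypothetical; BqlCell is the existing binding):

use std::sync::atomic::{AtomicU32, Ordering};

pub struct LatchedU32 {
    pending: AtomicU32,    // written lockless from the MMIO path
    current: BqlCell<u32>, // the level the IRQ lines currently reflect
}

impl LatchedU32 {
    // Outside the BQL: just accumulate bits atomically.
    pub fn set_bits(&self, bits: u32) {
        self.pending.fetch_or(bits, Ordering::AcqRel);
    }

    // At the end of MMIO, inside bql::with_guard(): fold the pending
    // bits into the BQL-protected value and return the new level.
    pub fn sync(&self) -> u32 {
        let bits = self.pending.swap(0, Ordering::AcqRel);
        let new = self.current.get() | bits;
        self.current.set(new);
        new
    }
}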
At that point, we need to know which IRQ should be operated on (this
is the u32, but we also have pit_enabled), and which operation to
perform.
I'm not sure a single tuple is enough... because there may be multiple
IRQ operations while the Mutex is held:
fn set_cfg_reg(&self, regs: &mut MutexGuard<HPETRegisters>, shift: u32, len: u32, val: u64) {
    ...

    // i8254 and RTC output pins are disabled when HPET is in legacy mode
    if activating_bit(old_val, new_val, HPET_CFG_LEG_RT_SHIFT) {
        bql::with_guard(|| {
            self.pit_enabled.set(false);
            self.irqs[0].lower();
            self.irqs[RTC_ISA_IRQ].lower();
        });
    } else if deactivating_bit(old_val, new_val, HPET_CFG_LEG_RT_SHIFT) {
        bql::with_guard(|| {
            // TODO: Add irq binding: qemu_irq_lower(s->irqs[0])
            self.irqs[0].lower();
            self.pit_enabled.set(true);
            self.irqs[RTC_ISA_IRQ].set(self.rtc_irq_level.get() != 0);
        });
    }
}
So do we need a lockless queue to store the IRQ ops while the Mutex is
held?
pub enum HPETIrqOp {
    Lower(usize), // usize is index in HPETState::irqs[]
    Pulse(usize),
    Raise(usize),
    Set(usize, bool),
    PitSet(bool), // HPETState::pit_enabled
}
Another point I'm considering: the IRQ ops are recorded while the MMIO
path holds the Mutex, but they are executed later in the MMIO BQL
section. If a timer handler (which acquires the BQL and then the
Mutex) runs between the MMIO Mutex section and the MMIO BQL section,
and also performs an IRQ op, this looks like a possible "reordering"
issue for IRQ ops. Is this OK?
I guess it's OK, since even real hardware can't guarantee that a
register operation and the corresponding IRQ operation are atomic...
Then with your idea, this could also fix the deadlock I mentioned in
patch 21, and we wouldn't need the fix that unlocks the BQL in the
timer handler anymore...
BTW, shouldn't the C HPET also lock the mutex in its timer handler?
> Maybe if you have some time you can prototype that
> for PL011 (even without generics, you could just do LatchedU32 for a start)?
I guess you mean HPET? PL011 is also OK, but it hasn't reached the
lockless stage yet.
Thanks,
Zhao