* Re: [PATCH v5 2/3] implement ww_mutex abstraction for the Rust tree
[not found] ` <DAU0ELV91E2Q.35FZOII18W44J@kernel.org>
@ 2025-06-23 17:11 ` Boqun Feng
2025-06-23 23:22 ` Benno Lossin
[not found] ` <20250707163913.5ffc046d@nimda.home>
1 sibling, 1 reply; 12+ messages in thread
From: Boqun Feng @ 2025-06-23 17:11 UTC (permalink / raw)
To: Benno Lossin
Cc: Onur Özkan, linux-kernel, rust-for-linux, ojeda, alex.gaynor,
gary, a.hindborg, aliceryhl, tmgross, dakr, peterz, mingo, will,
longman, felipe_life, daniel, bjorn3_gh, simona, airlied,
dri-devel, lyude
On Mon, Jun 23, 2025 at 05:14:37PM +0200, Benno Lossin wrote:
> On Mon Jun 23, 2025 at 4:47 PM CEST, Boqun Feng wrote:
> > On Mon, Jun 23, 2025 at 03:44:58PM +0200, Benno Lossin wrote:
> >> I didn't have a concrete API in mind, but after having read the
> >> abstractions more, would this make sense?
> >>
> >> let ctx: &WwAcquireCtx = ...;
> >> let m1: &WwMutex<T> = ...;
> >> let m2: &WwMutex<Foo> = ...;
> >>
> >> let (t, foo, foo2) = ctx
> >> .begin()
> >> .lock(m1)
> >> .lock(m2)
> >> .lock_with(|(t, foo)| &*foo.other)
> >> .finish();
> >>
> >
> > Cute!
> >
> > However, each `.lock()` will need to be polymorphic over a tuple of
> > locks that are already held, right? Otherwise I don't see how
> > `.lock_with()` knows it's already held two locks. That sounds like a
> > challenge for implementation.
>
> I think it's doable if we have
>
> impl WwActiveCtx {
I think you mean *WwAcquireCtx*
> fn begin(&self) -> WwActiveCtx<'_, ()>;
> }
>
> struct WwActiveCtx<'a, Locks> {
> locks: Locks,
This probably needs to be `Result<Locks>`, because we may detect
-EDEADLK in the middle:
    let (a, c, d) = ctx.begin()
        .lock(a)
        .lock(b) // <- `b` may be locked by someone else. So we should
                 //    drop `a` and switch `locks` to an `Err(_)`.
        .lock(c) // <- this should be a no-op if `locks` is an `Err(_)`.
        .finish();
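
To illustrate just that `Err(_)` threading with a self-contained toy
(every name below is invented, there are no real ww_mutex calls, and a
plain `&str` stands in for -EDEADLK):

    struct ActiveCtx<Locks>(Result<Locks, &'static str>);

    impl<Locks> ActiveCtx<Locks> {
        fn lock<G>(
            self,
            acquire: impl FnOnce() -> Result<G, &'static str>,
        ) -> ActiveCtx<(Locks, G)> {
            // On Err(_) from `acquire`, the earlier guards in `Locks` are
            // consumed and dropped right here; once we are Err(_), later
            // `lock` calls never run their closure at all.
            ActiveCtx(self.0.and_then(|locks| acquire().map(|g| (locks, g))))
        }

        fn finish(self) -> Result<Locks, &'static str> {
            self.0
        }
    }

    fn main() {
        let r = ActiveCtx(Ok(()))
            .lock(|| Ok("a"))
            .lock(|| Err("-EDEADLK")) // earlier guard "a" is dropped here
            .lock(|| Ok("c"))         // no-op: `locks` is already Err(_)
            .finish();
        assert_eq!(r.unwrap_err(), "-EDEADLK");
    }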
> _ctx: PhantomData<&'a WwAcquireCtx>,
We can still take a reference to WwAcquireCtx here I think.
> }
>
> impl<'a, Locks> WwActiveCtx<'a, Locks>
> where
> Locks: Tuple
> {
> fn lock<'b, T>(
> self,
> lock: &'b WwMutex<T>,
> ) -> WwActiveCtx<'a, Locks::Append<WwMutexGuard<'b, T>>>;
>
> fn lock_with<'b, T>(
> self,
> get_lock: impl FnOnce(&Locks) -> &'b WwMutex<T>,
> ) -> WwActiveCtx<'a, Locks::Append<WwMutexGuard<'b, T>>>;
> // I'm not 100% sure that the lifetimes will work out...
I think we can make the following work?
    impl<'a, Locks> WwActiveCtx<'a, Locks>
    where
        Locks: Tuple
    {
        fn lock_with<T>(
            self,
            get_lock: impl FnOnce(&Locks) -> &WwMutex<T>,
        ) -> WwActiveCtx<'a, Locks::Append<WwMutexGuard<'a, T>>>;
    }

because with a `WwActiveCtx<'a, Locks>`, we can get a `&'a Locks`, which
will give us a `&'a WwMutex<T>`, and should be able to give us a
`WwMutexGuard<'a, T>`.
>
> fn finish(self) -> Locks;
> }
>
> trait Tuple {
> type Append<T>;
>
> fn append<T>(self, value: T) -> Self::Append<T>;
> }
>
`Tuple` is useful on its own, too; if you remember, we had some ideas
about using something like this to consolidate multiple `RcuOld`s so
that we can do one `synchronize_rcu()` for all of them.
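
Roughly, the idea would be something like this (a loose, self-contained
sketch; `Reclaim`, `reclaim_all()` and the free-standing
`synchronize_rcu()` below are all invented stand-ins):

    trait Reclaim {
        fn reclaim(self);
    }

    // Stand-in for the real grace-period wait.
    fn synchronize_rcu() {}

    fn reclaim_all<A: Reclaim, B: Reclaim>(olds: (A, B)) {
        synchronize_rcu(); // one grace period covers the whole set
        olds.0.reclaim();
        olds.1.reclaim();
    }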
> impl Tuple for () {
> type Append<T> = (T,);
>
> fn append<T>(self, value: T) -> Self::Append<T> {
> (value,)
> }
> }
>
> impl<T1> Tuple for (T1,) {
> type Append<T> = (T1, T);
>
> fn append<T>(self, value: T) -> Self::Append<T> {
> (self.0, value,)
> }
> }
>
> impl<T1, T2> Tuple for (T1, T2) {
> type Append<T> = (T1, T2, T);
>
> fn append<T>(self, value: T) -> Self::Append<T> {
> (self.0, self.1, value,)
> }
> }
>
> /* these can easily be generated by a macro */
>
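
For illustration, a macro along these lines could generate those impls
(an untested sketch; the `Tuple` trait is repeated so the snippet stands
alone):

    trait Tuple {
        type Append<T>;

        fn append<T>(self, value: T) -> Self::Append<T>;
    }

    macro_rules! impl_tuple {
        ($($t:ident),*) => {
            impl<$($t),*> Tuple for ($($t,)*) {
                type Append<U> = ($($t,)* U,);

                #[allow(non_snake_case)]
                fn append<U>(self, value: U) -> Self::Append<U> {
                    let ($($t,)*) = self;
                    ($($t,)* value,)
                }
            }
        };
    }

    impl_tuple!();
    impl_tuple!(T1);
    impl_tuple!(T1, T2);
    impl_tuple!(T1, T2, T3);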
> > We also need to take into consideration that the user may want to drop
> > any lock in the sequence? E.g. the user acquires a, b and c, then drops
> > b, and then acquires d. Which I think is possible for ww_mutex.
>
> Hmm what about adding this to the above idea?:
>
> impl<'a, Locks> WwActiveCtx<'a, Locks>
> where
> Locks: Tuple
> {
> fn custom<L2>(self, action: impl FnOnce(Locks) -> L2) -> WwActiveCtx<'a, L2>;
> }
>
> Then you can do:
>
> let (a, c, d) = ctx.begin()
> .lock(a)
> .lock(b)
> .lock(c)
> .custom(|(a, _, c)| (a, c))
> .lock(d)
> .finish();
>
Seems reasonable. But we still need to present this to the end user to
see how much they like it. For ww_mutex I think the major user is DRM,
so let's add them to the Cc list.
Regards,
Boqun
> >> let _: &mut T = t;
> >> let _: &mut Foo = foo;
> >> let _: &mut Foo = foo2;
>
> Ah these will actually be `WwMutexGuard<'_, ...>`, but that should be
> expected.
>
> ---
> Cheers,
> Benno
* Re: [PATCH v5 2/3] implement ww_mutex abstraction for the Rust tree
2025-06-23 17:11 ` [PATCH v5 2/3] implement ww_mutex abstraction for the Rust tree Boqun Feng
@ 2025-06-23 23:22 ` Benno Lossin
2025-06-24 5:34 ` Onur
0 siblings, 1 reply; 12+ messages in thread
From: Benno Lossin @ 2025-06-23 23:22 UTC (permalink / raw)
To: Boqun Feng
Cc: Onur Özkan, linux-kernel, rust-for-linux, ojeda, alex.gaynor,
gary, a.hindborg, aliceryhl, tmgross, dakr, peterz, mingo, will,
longman, felipe_life, daniel, bjorn3_gh, simona, airlied,
dri-devel, lyude
On Mon Jun 23, 2025 at 7:11 PM CEST, Boqun Feng wrote:
> On Mon, Jun 23, 2025 at 05:14:37PM +0200, Benno Lossin wrote:
>> On Mon Jun 23, 2025 at 4:47 PM CEST, Boqun Feng wrote:
>> > On Mon, Jun 23, 2025 at 03:44:58PM +0200, Benno Lossin wrote:
>> >> I didn't have a concrete API in mind, but after having read the
>> >> abstractions more, would this make sense?
>> >>
>> >> let ctx: &WwAcquireCtx = ...;
>> >> let m1: &WwMutex<T> = ...;
>> >> let m2: &WwMutex<Foo> = ...;
>> >>
>> >> let (t, foo, foo2) = ctx
>> >> .begin()
>> >> .lock(m1)
>> >> .lock(m2)
>> >> .lock_with(|(t, foo)| &*foo.other)
>> >> .finish();
>> >>
>> >
>> > Cute!
>> >
>> > However, each `.lock()` will need to be polymorphic over a tuple of
>> > locks that are already held, right? Otherwise I don't see how
>> > `.lock_with()` knows it's already held two locks. That sounds like a
>> > challenge for implementation.
>>
>> I think it's doable if we have
>>
>> impl WwActiveCtx {
>
> I think you mean *WwAcquireCtx*
Oh yeah.
>> fn begin(&self) -> WwActiveCtx<'_, ()>;
>> }
>>
>> struct WwActiveCtx<'a, Locks> {
>> locks: Locks,
>
> This probably needs to be `Result<Locks>`, because we may detect
> -EDEADLK in the middle:
>
> let (a, c, d) = ctx.begin()
> .lock(a)
> .lock(b) // <- `b` may be locked by someone else. So we should
> // drop `a` and switch `locks` to an `Err(_)`.
> .lock(c) // <- this should be a no-op if `locks` is an `Err(_)`.
> .finish();
Hmm, I thought that we would go for the `lock_slow_path` thing, but
maybe that's the wrong thing to do? Maybe `lock` should return a result?
I'd have to see the use-cases...
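
For comparison, a self-contained toy of the fallible-step shape (all
names invented; a plain `i32` errno stands in for the real error type):

    struct Ctx<L>(L);

    impl<L> Ctx<L> {
        fn lock<G>(
            self,
            acquire: impl FnOnce() -> Result<G, i32>,
        ) -> Result<Ctx<(L, G)>, i32> {
            let g = acquire()?; // on contention the whole chain stops here
            Ok(Ctx((self.0, g)))
        }

        fn finish(self) -> L {
            self.0
        }
    }

    fn main() -> Result<(), i32> {
        // with `?` the caller sees the error at each step:
        let ((_, _a), _b) = Ctx(()).lock(|| Ok("a"))?.lock(|| Ok("b"))?.finish();
        Ok(())
    }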
>> _ctx: PhantomData<&'a WwAcquireCtx>,
>
> We can still take a reference to WwAcquireCtx here I think.
Yeah we have to do that in order to call lock on the mutexes.
>> }
>>
>> impl<'a, Locks> WwActiveCtx<'a, Locks>
>> where
>> Locks: Tuple
>> {
>> fn lock<'b, T>(
>> self,
>> lock: &'b WwMutex<T>,
>> ) -> WwActiveCtx<'a, Locks::Append<WwMutexGuard<'b, T>>>;
>>
>> fn lock_with<'b, T>(
>> self,
>> get_lock: impl FnOnce(&Locks) -> &'b WwMutex<T>,
>> ) -> WwActiveCtx<'a, Locks::Append<WwMutexGuard<'b, T>>>;
>> // I'm not 100% sure that the lifetimes will work out...
>
> I think we can make the following work?
>
> impl<'a, Locks> WwActiveCtx<'a, Locks>
> where
> Locks: Tuple
> {
> fn lock_with<T>(
> self,
> get_lock: impl FnOnce(&Locks) -> &WwMutex<T>,
> ) -> WwActiveCtx<'a, Locks::Append<WwMutexGuard<'a, T>>>;
> }
>
> because with a `WwActiveCtx<'a, Locks>`, we can get a `&'a Locks`, which
> will give us a `&'a WwMutex<T>`, and should be able to give us a
> `WwMutexGuard<'a, T>`.
I think this is more restrictive, since this will require that the mutex
is (potentially) locked for `'a` (you can drop the guard earlier, but
the mutex itself stays borrowed for `'a`). So again, concrete use-cases
should inform our choice here.
>> fn finish(self) -> Locks;
>> }
>>
>> trait Tuple {
>> type Append<T>;
>>
>> fn append<T>(self, value: T) -> Self::Append<T>;
>> }
>>
>
> `Tuple` is useful on its own, too; if you remember, we had some ideas
> about using something like this to consolidate multiple `RcuOld`s so
> that we can do one `synchronize_rcu()` for all of them.
Yeah that's true, feel free to make a patch or good-first-issue, I won't
have time to create a series.
>> impl Tuple for () {
>> type Append<T> = (T,);
>>
>> fn append<T>(self, value: T) -> Self::Append<T> {
>> (value,)
>> }
>> }
>>
>> impl<T1> Tuple for (T1,) {
>> type Append<T> = (T1, T);
>>
>> fn append<T>(self, value: T) -> Self::Append<T> {
>> (self.0, value,)
>> }
>> }
>>
>> impl<T1, T2> Tuple for (T1, T2) {
>> type Append<T> = (T1, T2, T);
>>
>> fn append<T>(self, value: T) -> Self::Append<T> {
>> (self.0, self.1, value,)
>> }
>> }
>>
>> /* these can easily be generated by a macro */
>>
>> > We also need to take into consideration that the user may want to drop
>> > any lock in the sequence? E.g. the user acquires a, b and c, then drops
>> > b, and then acquires d. Which I think is possible for ww_mutex.
>>
>> Hmm what about adding this to the above idea?:
>>
>> impl<'a, Locks> WwActiveCtx<'a, Locks>
>> where
>> Locks: Tuple
>> {
>> fn custom<L2>(self, action: impl FnOnce(Locks) -> L2) -> WwActiveCtx<'a, L2>;
>> }
>>
>> Then you can do:
>>
>> let (a, c, d) = ctx.begin()
>> .lock(a)
>> .lock(b)
>> .lock(c)
>> .custom(|(a, _, c)| (a, c))
>> .lock(d)
>> .finish();
>>
>
> Seems reasonable. But we still need to present this to the end user to
> see how much they like it. For ww_mutex I think the major user is DRM,
> so let's add them to the Cc list.
Yeah let's see some use-cases :)
---
Cheers,
Benno
* Re: [PATCH v5 2/3] implement ww_mutex abstraction for the Rust tree
2025-06-23 23:22 ` Benno Lossin
@ 2025-06-24 5:34 ` Onur
2025-06-24 8:20 ` Benno Lossin
0 siblings, 1 reply; 12+ messages in thread
From: Onur @ 2025-06-24 5:34 UTC (permalink / raw)
To: Benno Lossin
Cc: Boqun Feng, linux-kernel, rust-for-linux, ojeda, alex.gaynor,
gary, a.hindborg, aliceryhl, tmgross, dakr, peterz, mingo, will,
longman, felipe_life, daniel, bjorn3_gh, simona, airlied,
dri-devel, lyude
On Tue, 24 Jun 2025 01:22:05 +0200
"Benno Lossin" <lossin@kernel.org> wrote:
> On Mon Jun 23, 2025 at 7:11 PM CEST, Boqun Feng wrote:
> > On Mon, Jun 23, 2025 at 05:14:37PM +0200, Benno Lossin wrote:
> >> On Mon Jun 23, 2025 at 4:47 PM CEST, Boqun Feng wrote:
> >> > On Mon, Jun 23, 2025 at 03:44:58PM +0200, Benno Lossin wrote:
> >> >> I didn't have a concrete API in mind, but after having read the
> >> >> abstractions more, would this make sense?
> >> >>
> >> >> let ctx: &WwAcquireCtx = ...;
> >> >> let m1: &WwMutex<T> = ...;
> >> >> let m2: &WwMutex<Foo> = ...;
> >> >>
> >> >> let (t, foo, foo2) = ctx
> >> >> .begin()
> >> >> .lock(m1)
> >> >> .lock(m2)
> >> >> .lock_with(|(t, foo)| &*foo.other)
> >> >> .finish();
> >> >>
> >> >
> >> > Cute!
> >> >
> >> > However, each `.lock()` will need to be polymorphic over a tuple
> >> > of locks that are already held, right? Otherwise I don't see how
> >> > `.lock_with()` knows it's already held two locks. That sounds
> >> > like a challenge for implementation.
> >>
> >> I think it's doable if we have
> >>
> >> impl WwActiveCtx {
> >
> > I think you mean *WwAcquireCtx*
>
> Oh yeah.
>
> >> fn begin(&self) -> WwActiveCtx<'_, ()>;
> >> }
> >>
> >> struct WwActiveCtx<'a, Locks> {
> >> locks: Locks,
> >
> > This probably needs to be `Result<Locks>`, because we may detect
> > -EDEADLK in the middle:
> >
> > let (a, c, d) = ctx.begin()
> >     .lock(a)
> >     .lock(b) // <- `b` may be locked by someone else. So we should
> >              //    drop `a` and switch `locks` to an `Err(_)`.
> >     .lock(c) // <- this should be a no-op if `locks` is an `Err(_)`.
> >     .finish();
>
> Hmm, I thought that we would go for the `lock_slow_path` thing, but
> maybe that's the wrong thing to do? Maybe `lock` should return a
> result? I'd have to see the use-cases...
>
> >> _ctx: PhantomData<&'a WwAcquireCtx>,
> >
> > We can still take a reference to WwAcquireCtx here I think.
>
> Yeah we have to do that in order to call lock on the mutexes.
>
> >> }
> >>
> >> impl<'a, Locks> WwActiveCtx<'a, Locks>
> >> where
> >> Locks: Tuple
> >> {
> >> fn lock<'b, T>(
> >> self,
> >> lock: &'b WwMutex<T>,
> >> ) -> WwActiveCtx<'a, Locks::Append<WwMutexGuard<'b, T>>>;
> >>
> >> fn lock_with<'b, T>(
> >> self,
> >> get_lock: impl FnOnce(&Locks) -> &'b WwMutex<T>,
> >> ) -> WwActiveCtx<'a, Locks::Append<WwMutexGuard<'b, T>>>;
> >> // I'm not 100% sure that the lifetimes will work out...
> >
> > I think we can make the following work?
> >
> > impl<'a, Locks> WwActiveCtx<'a, Locks>
> > where
> > Locks: Tuple
> > {
> > fn lock_with<T>(
> > self,
> > get_lock: impl FnOnce(&Locks) -> &WwMutex<T>,
> > ) -> WwActiveCtx<'a, Locks::Append<WwMutexGuard<'a, T>>>;
> > }
> >
> > because with a `WwActiveCtx<'a, Locks>`, we can get a `&'a Locks`,
> > which will give us a `&'a WwMutex<T>`, and should be able to give
> > us a `WwMutexGuard<'a, T>`.
>
> I think this is more restrictive, since this will require that the
> mutex is (potentially) locked for `'a` (you can drop the guard
> earlier, but the mutex itself stays borrowed for `'a`). So again,
> concrete use-cases should inform our choice here.
>
> >> fn finish(self) -> Locks;
> >> }
> >>
> >> trait Tuple {
> >> type Append<T>;
> >>
> >> fn append<T>(self, value: T) -> Self::Append<T>;
> >> }
> >>
> >
> > `Tuple` is useful on its own, too; if you remember, we had some
> > ideas about using something like this to consolidate multiple
> > `RcuOld`s so that we can do one `synchronize_rcu()` for all of them.
>
> Yeah that's true, feel free to make a patch or good-first-issue, I
> won't have time to create a series.
>
> >> impl Tuple for () {
> >> type Append<T> = (T,);
> >>
> >> fn append<T>(self, value: T) -> Self::Append<T> {
> >> (value,)
> >> }
> >> }
> >>
> >> impl<T1> Tuple for (T1,) {
> >> type Append<T> = (T1, T);
> >>
> >> fn append<T>(self, value: T) -> Self::Append<T> {
> >> (self.0, value,)
> >> }
> >> }
> >>
> >> impl<T1, T2> Tuple for (T1, T2) {
> >> type Append<T> = (T1, T2, T);
> >>
> >> fn append<T>(self, value: T) -> Self::Append<T> {
> >> (self.0, self.1, value,)
> >> }
> >> }
> >>
> >> /* these can easily be generated by a macro */
> >>
> >> > We also need to take into consideration that the user may want
> >> > to drop any lock in the sequence? E.g. the user acquires a, b and
> >> > c, then drops b, and then acquires d. Which I think is possible
> >> > for ww_mutex.
> >>
> >> Hmm what about adding this to the above idea?:
> >>
> >> impl<'a, Locks> WwActiveCtx<'a, Locks>
> >> where
> >> Locks: Tuple
> >> {
> >> fn custom<L2>(self, action: impl FnOnce(Locks) -> L2)
> >>     -> WwActiveCtx<'a, L2>;
> >> }
> >>
> >> Then you can do:
> >>
> >> let (a, c, d) = ctx.begin()
> >> .lock(a)
> >> .lock(b)
> >> .lock(c)
> >> .custom(|(a, _, c)| (a, c))
> >> .lock(d)
> >> .finish();
> >>
> >
> > Seems reasonable. But we still need to present this to the end user
> > to see how much they like it. For ww_mutex I think the major user
> > is DRM, so let's add them to the Cc list.
>
> Yeah let's see some use-cases :)
Should we handle this in the initial implementation or leave it for
follow-up patches after the core abstraction of ww_mutex has landed?
---
Regards,
Onur
* Re: [PATCH v5 2/3] implement ww_mutex abstraction for the Rust tree
2025-06-24 5:34 ` Onur
@ 2025-06-24 8:20 ` Benno Lossin
2025-06-24 12:31 ` Onur
0 siblings, 1 reply; 12+ messages in thread
From: Benno Lossin @ 2025-06-24 8:20 UTC (permalink / raw)
To: Onur
Cc: Boqun Feng, linux-kernel, rust-for-linux, ojeda, alex.gaynor,
gary, a.hindborg, aliceryhl, tmgross, dakr, peterz, mingo, will,
longman, felipe_life, daniel, bjorn3_gh, simona, airlied,
dri-devel, lyude
On Tue Jun 24, 2025 at 7:34 AM CEST, Onur wrote:
> Should we handle this in the initial implementation or leave it for
> follow-up patches after the core abstraction of ww_mutex has landed?
Since you're writing these abstractions specifically for usage in drm, I
think we should look at the intended use-cases there and then decide on
an API.
So maybe Lyude or Dave can chime in :)
If you (or someone else) have another user for this API that needs it
ASAP, then we can think about merging this and improving it later. But
if we don't have a user, then we shouldn't merge it anyway.
---
Cheers,
Benno
* Re: [PATCH v5 2/3] implement ww_mutex abstraction for the Rust tree
2025-06-24 8:20 ` Benno Lossin
@ 2025-06-24 12:31 ` Onur
2025-06-24 12:48 ` Benno Lossin
0 siblings, 1 reply; 12+ messages in thread
From: Onur @ 2025-06-24 12:31 UTC (permalink / raw)
To: Benno Lossin
Cc: Boqun Feng, linux-kernel, rust-for-linux, ojeda, alex.gaynor,
gary, a.hindborg, aliceryhl, tmgross, dakr, peterz, mingo, will,
longman, felipe_life, daniel, bjorn3_gh, simona, airlied,
dri-devel, lyude
On Tue, 24 Jun 2025 10:20:48 +0200
"Benno Lossin" <lossin@kernel.org> wrote:
> On Tue Jun 24, 2025 at 7:34 AM CEST, Onur wrote:
> > Should we handle this in the initial implementation or leave it for
> > follow-up patches after the core abstraction of ww_mutex has landed?
>
> Since you're writing these abstractions specifically for usage in
> drm, I think we should look at the intended use-cases there and then
> decide on an API.
>
> So maybe Lyude or Dave can chime in :)
>
> If you (or someone else) have another user for this API that needs it
> ASAP, then we can think about merging this and improving it later. But
> if we don't have a user, then we shouldn't merge it anyway.
I don't think this is urgent, but it might be better to land the basic
structure first and improve it gradually. I would be happy to keep
working on improvements, as I don't plan to leave it as just the
initial version.
I worked on the v5 review notes, but if we are going to consider
designing a different API, then it doesn't make much sense to send a v6
patch before finishing the design, which requires input from additional
people. It would also mean some of the ongoing review discussion would
be wasted.
---
Regards,
Onur
* Re: [PATCH v5 2/3] implement ww_mutex abstraction for the Rust tree
2025-06-24 12:31 ` Onur
@ 2025-06-24 12:48 ` Benno Lossin
0 siblings, 0 replies; 12+ messages in thread
From: Benno Lossin @ 2025-06-24 12:48 UTC (permalink / raw)
To: Onur
Cc: Boqun Feng, linux-kernel, rust-for-linux, ojeda, alex.gaynor,
gary, a.hindborg, aliceryhl, tmgross, dakr, peterz, mingo, will,
longman, felipe_life, daniel, bjorn3_gh, simona, airlied,
dri-devel, lyude
On Tue Jun 24, 2025 at 2:31 PM CEST, Onur wrote:
> On Tue, 24 Jun 2025 10:20:48 +0200
> "Benno Lossin" <lossin@kernel.org> wrote:
>
>> On Tue Jun 24, 2025 at 7:34 AM CEST, Onur wrote:
>> > Should we handle this in the initial implementation or leave it for
>> > follow-up patches after the core abstraction of ww_mutex has landed?
>>
>> Since you're writing these abstractions specifically for usage in
>> drm, I think we should look at the intended use-cases there and then
>> decide on an API.
>>
>> So maybe Lyude or Dave can chime in :)
>>
>> If you (or someone else) have another user for this API that needs it
>> ASAP, then we can think about merging this and improving it later. But
>> if we don't have a user, then we shouldn't merge it anyway.
>
> I don't think this is urgent, but it might be better to land the basic
> structure first and improve it gradually. I would be happy to keep
> working on improvements, as I don't plan to leave it as just the
> initial version.
I don't think we should land the basic API when we don't have a user
in-tree or blessed by the maintainers.
> I worked on the v5 review notes, but if we are going to consider
> designing a different API, then it doesn't make much sense to send a v6
> patch before finishing the design, which requires input from additional
> people. It would also mean some of the ongoing review discussion would
> be wasted.
I would just wait for DRM to say something :)
---
Cheers,
Benno
* Re: [PATCH v5 2/3] implement ww_mutex abstraction for the Rust tree
[not found] ` <DBRVNP4MM5KO.3IXLMXKGK4XTS@kernel.org>
@ 2025-08-02 14:15 ` Daniel Almeida
2025-08-02 20:58 ` Benno Lossin
2025-08-05 9:08 ` Onur Özkan
0 siblings, 2 replies; 12+ messages in thread
From: Daniel Almeida @ 2025-08-02 14:15 UTC (permalink / raw)
To: Benno Lossin
Cc: Onur, Boqun Feng, linux-kernel, rust-for-linux, ojeda,
alex.gaynor, gary, a.hindborg, aliceryhl, tmgross, dakr, peterz,
mingo, will, longman, felipe_life, daniel, bjorn3_gh, dri-devel
> On 2 Aug 2025, at 07:42, Benno Lossin <lossin@kernel.org> wrote:
>
> On Fri Aug 1, 2025 at 11:22 PM CEST, Daniel Almeida wrote:
>> Hi Benno,
>>
>>> On 7 Jul 2025, at 16:48, Benno Lossin <lossin@kernel.org> wrote:
>>>
>>> On Mon Jul 7, 2025 at 8:06 PM CEST, Onur wrote:
>>>> On Mon, 07 Jul 2025 17:31:10 +0200
>>>> "Benno Lossin" <lossin@kernel.org> wrote:
>>>>
>>>>> On Mon Jul 7, 2025 at 3:39 PM CEST, Onur wrote:
>>>>>> On Mon, 23 Jun 2025 17:14:37 +0200
>>>>>> "Benno Lossin" <lossin@kernel.org> wrote:
>>>>>>
> >>>>>>>> We also need to take into consideration that the user may
> >>>>>>>> want to drop any lock in the sequence? E.g. the user acquires
> >>>>>>>> a, b and c, then drops b, and then acquires d. Which I think
> >>>>>>>> is possible for ww_mutex.
>>>>>>>
>>>>>>> Hmm what about adding this to the above idea?:
>>>>>>>
>>>>>>> impl<'a, Locks> WwActiveCtx<'a, Locks>
>>>>>>> where
>>>>>>> Locks: Tuple
>>>>>>> {
> >>>>>>> fn custom<L2>(self, action: impl FnOnce(Locks) -> L2)
> >>>>>>>     -> WwActiveCtx<'a, L2>;
> >>>>>>> }
>>>>>>>
>>>>>>> Then you can do:
>>>>>>>
>>>>>>> let (a, c, d) = ctx.begin()
>>>>>>> .lock(a)
>>>>>>> .lock(b)
>>>>>>> .lock(c)
>>>>>>> .custom(|(a, _, c)| (a, c))
>>>>>>> .lock(d)
>>>>>>> .finish();
>>>>>>
>>>>>>
>>>>>> Instead of `begin` and `custom`, why not something like this:
>>>>>>
>>>>>> let (a, c, d) = ctx.init()
>>>>>> .lock(a)
>>>>>> .lock(b)
>>>>>> .lock(c)
>>>>>> .unlock(b)
>>>>>> .lock(d)
>>>>>> .finish();
>>>>>>
>>>>>> Instead of `begin`, `init` would be better naming to imply `fini`
>>>>>> on the C side, and `unlock` instead of `custom` would make the
>>>>>> intent clearer when dropping locks mid chain.
>>>
>>> Also, I'm not really fond of the name `init`, how about `enter`?
>>>
>>>>>
>>>>> I don't think that this `unlock` operation will work. How would you
>>>>> implement it?
>>>>
>>>>
>>>> We could link mutexes to locks using some unique value, so that we can
>>>> access locks by passing mutexes (though that sounds a bit odd).
>>>>
>>>> Another option would be to unlock by the index, e.g.,:
>>>>
>>>> let (a, c) = ctx.init()
>>>> .lock(a)
>>>> .lock(b)
>>>> .unlock::<1>()
>>
>> Why do we need this random unlock() here? We usually want to lock everything
>> and proceed, or otherwise back off completely so that someone else can proceed.
>
> No the `unlock` was just to show that we could interleave locking and
> unlocking.
>
>> One thing I didn’t understand with your approach: is it amenable to loops?
>> i.e.: are things like drm_exec() implementable?
>
> I don't think so, see also my reply here:
>
> https://lore.kernel.org/all/DBOPIJHY9NZ7.2CU5XP7UY7ES3@kernel.org
>
> The type-based approach with tuples doesn't handle dynamic number of
> locks.
>
This is probably the default use-case by the way.
>> /**
>> * drm_exec_until_all_locked - loop until all GEM objects are locked
>> * @exec: drm_exec object
>> *
>> * Core functionality of the drm_exec object. Loops until all GEM objects are
>> * locked and no more contention exists. At the beginning of the loop it is
>> * guaranteed that no GEM object is locked.
>> *
>> * Since labels can't be defined local to the loops body we use a jump pointer
>> * to make sure that the retry is only used from within the loops body.
>> */
>> #define drm_exec_until_all_locked(exec) \
>> __PASTE(__drm_exec_, __LINE__): \
>> for (void *__drm_exec_retry_ptr; ({ \
>> __drm_exec_retry_ptr = &&__PASTE(__drm_exec_, __LINE__);\
>> (void)__drm_exec_retry_ptr; \
>> drm_exec_cleanup(exec); \
>> });)
>
> My understanding of C preprocessor macros is not good enough to parse or
> understand this :( What is that `__PASTE` thing?
This macro is very useful, but also cursed :)
This declares a unique label before the loop, so you can jump back to it on
contention. It is usually used in conjunction with:
/**
 * drm_exec_retry_on_contention - restart the loop to grab all locks
 * @exec: drm_exec object
 *
 * Control flow helper to continue when a contention was detected and we need to
 * clean up and re-start the loop to prepare all GEM objects.
 */
#define drm_exec_retry_on_contention(exec)                        \
        do {                                                      \
                if (unlikely(drm_exec_is_contended(exec)))        \
                        goto *__drm_exec_retry_ptr;               \
        } while (0)
The termination is handled by:

/**
 * drm_exec_cleanup - cleanup when contention is detected
 * @exec: the drm_exec object to cleanup
 *
 * Cleanup the current state and return true if we should stay inside the retry
 * loop, false if there wasn't any contention detected and we can keep the
 * objects locked.
 */
bool drm_exec_cleanup(struct drm_exec *exec)
{
        if (likely(!exec->contended)) {
                ww_acquire_done(&exec->ticket);
                return false;
        }

        if (likely(exec->contended == DRM_EXEC_DUMMY)) {
                exec->contended = NULL;
                ww_acquire_init(&exec->ticket, &reservation_ww_class);
                return true;
        }

        drm_exec_unlock_all(exec);
        exec->num_objects = 0;
        return true;
}
EXPORT_SYMBOL(drm_exec_cleanup);
The third clause in the loop is empty.

For example, in amdgpu:

/**
 * reserve_bo_and_vm - reserve a BO and a VM unconditionally.
 * @mem: KFD BO structure.
 * @vm: the VM to reserve.
 * @ctx: the struct that will be used in unreserve_bo_and_vms().
 */
static int reserve_bo_and_vm(struct kgd_mem *mem,
                             struct amdgpu_vm *vm,
                             struct bo_vm_reservation_context *ctx)
{
        struct amdgpu_bo *bo = mem->bo;
        int ret;

        WARN_ON(!vm);

        ctx->n_vms = 1;
        ctx->sync = &mem->sync;
        drm_exec_init(&ctx->exec, DRM_EXEC_INTERRUPTIBLE_WAIT, 0);
        drm_exec_until_all_locked(&ctx->exec) {
                ret = amdgpu_vm_lock_pd(vm, &ctx->exec, 2);
                drm_exec_retry_on_contention(&ctx->exec);
                if (unlikely(ret))
                        goto error;

                ret = drm_exec_prepare_obj(&ctx->exec, &bo->tbo.base, 1);
                drm_exec_retry_on_contention(&ctx->exec);
                if (unlikely(ret))
                        goto error;
        }
        // <-- everything is locked at this point.

        return 0;

So, something like:

some_unique_label:
        for (void *retry_ptr;
             ({ retry_ptr = &&some_unique_label; drm_exec_cleanup(exec); });
             /* empty */) {
                /* user code here, which potentially jumps back to some_unique_label */
        }
>
>> In fact, perhaps we can copy drm_exec, basically? i.e.:
>>
>> /**
>> * struct drm_exec - Execution context
>> */
>> struct drm_exec {
>> /**
>> * @flags: Flags to control locking behavior
>> */
>> u32 flags;
>>
>> /**
>> * @ticket: WW ticket used for acquiring locks
>> */
>> struct ww_acquire_ctx ticket;
>>
>> /**
>> * @num_objects: number of objects locked
>> */
>> unsigned int num_objects;
>>
>> /**
>> * @max_objects: maximum objects in array
>> */
>> unsigned int max_objects;
>>
>> /**
>> * @objects: array of the locked objects
>> */
>> struct drm_gem_object **objects;
>>
>> /**
>> * @contended: contended GEM object we backed off for
>> */
>> struct drm_gem_object *contended;
>>
>> /**
>> * @prelocked: already locked GEM object due to contention
>> */
>> struct drm_gem_object *prelocked;
>> };
>>
>> This is GEM-specific, but we could perhaps implement the same idea by
>> tracking ww_mutexes instead of GEM objects.
>
> But this would only work for `Vec<WwMutex<T>>`, right?
I’m not sure if I understand your point here.
The list of ww_mutexes that we've managed to lock so far would be something
that we'd keep track of internally in our context. In what way is a KVec an issue?
>
>> Also, I’d appreciate if the rollback logic could be automated, which is
>> what you’re trying to do, so +1 to your idea.
>
> Good to see that it seems useful to you :)
>
> ---
> Cheers,
> Benno
Btw, I can also try to implement a proof of concept, so long as people agree that
this approach makes sense.
By the way, dri-devel seems to not be on cc? Added them now.
Full context at [0].
— Daniel
[0]: https://lore.kernel.org/rust-for-linux/DBPC27REX4N1.3JA4SSHDYXAHJ@kernel.org/T/#m17a1e3a913ead2648abdff0fc2d927c8323cb8c3
* Re: [PATCH v5 2/3] implement ww_mutex abstraction for the Rust tree
2025-08-02 14:15 ` Daniel Almeida
@ 2025-08-02 20:58 ` Benno Lossin
2025-08-05 15:18 ` Daniel Almeida
2025-08-05 9:08 ` Onur Özkan
1 sibling, 1 reply; 12+ messages in thread
From: Benno Lossin @ 2025-08-02 20:58 UTC (permalink / raw)
To: Daniel Almeida
Cc: Onur, Boqun Feng, linux-kernel, rust-for-linux, ojeda,
alex.gaynor, gary, a.hindborg, aliceryhl, tmgross, dakr, peterz,
mingo, will, longman, felipe_life, daniel, bjorn3_gh, dri-devel
On Sat Aug 2, 2025 at 4:15 PM CEST, Daniel Almeida wrote:
> On 2 Aug 2025, at 07:42, Benno Lossin <lossin@kernel.org> wrote:
>> On Fri Aug 1, 2025 at 11:22 PM CEST, Daniel Almeida wrote:
>>> One thing I didn’t understand with your approach: is it amenable to loops?
>>> i.e.: are things like drm_exec() implementable?
>>
>> I don't think so, see also my reply here:
>>
>> https://lore.kernel.org/all/DBOPIJHY9NZ7.2CU5XP7UY7ES3@kernel.org
>>
>> The type-based approach with tuples doesn't handle dynamic number of
>> locks.
>>
>
> This is probably the default use-case by the way.
That's an important detail. In that case, a type state won't be a good
idea. Unless it's also common to have a finite number of them, in which
case we should have two APIs.
>>> /**
>>> * drm_exec_until_all_locked - loop until all GEM objects are locked
>>> * @exec: drm_exec object
>>> *
>>> * Core functionality of the drm_exec object. Loops until all GEM objects are
>>> * locked and no more contention exists. At the beginning of the loop it is
>>> * guaranteed that no GEM object is locked.
>>> *
>>> * Since labels can't be defined local to the loops body we use a jump pointer
>>> * to make sure that the retry is only used from within the loops body.
>>> */
>>> #define drm_exec_until_all_locked(exec) \
>>> __PASTE(__drm_exec_, __LINE__): \
>>> for (void *__drm_exec_retry_ptr; ({ \
>>> __drm_exec_retry_ptr = &&__PASTE(__drm_exec_, __LINE__);\
>>> (void)__drm_exec_retry_ptr; \
>>> drm_exec_cleanup(exec); \
>>> });)
>>
>> My understanding of C preprocessor macros is not good enough to parse or
>> understand this :( What is that `__PASTE` thing?
>
> This macro is very useful, but also cursed :)
>
> This declares a unique label before the loop, so you can jump back to it on
> contention. It is usually used in conjunction with:
Ahh, I missed the `:` at the end of the line. Thanks for explaining!
(also Miguel in the other reply!) If you don't mind I'll ask some more
basic C questions :)
And yeah it's pretty cursed...
> /**
> * drm_exec_retry_on_contention - restart the loop to grab all locks
> * @exec: drm_exec object
> *
> * Control flow helper to continue when a contention was detected and we need to
> * clean up and re-start the loop to prepare all GEM objects.
> */
> #define drm_exec_retry_on_contention(exec) \
> do { \
> if (unlikely(drm_exec_is_contended(exec))) \
> goto *__drm_exec_retry_ptr; \
> } while (0)
The `do { ... } while(0)` is used because C doesn't have `{ ... }`
scopes? (& because you want to be able to have this be called from an if
without braces?)
> The termination is handled by:
>
> /**
> * drm_exec_cleanup - cleanup when contention is detected
> * @exec: the drm_exec object to cleanup
> *
> * Cleanup the current state and return true if we should stay inside the retry
> * loop, false if there wasn't any contention detected and we can keep the
> * objects locked.
> */
> bool drm_exec_cleanup(struct drm_exec *exec)
> {
> if (likely(!exec->contended)) {
> ww_acquire_done(&exec->ticket);
> return false;
> }
>
> if (likely(exec->contended == DRM_EXEC_DUMMY)) {
> exec->contended = NULL;
> ww_acquire_init(&exec->ticket, &reservation_ww_class);
> return true;
> }
>
> drm_exec_unlock_all(exec);
> exec->num_objects = 0;
> return true;
> }
> EXPORT_SYMBOL(drm_exec_cleanup);
>
> The third clause in the loop is empty.
>
> For example, in amdgpu:
>
> /**
> * reserve_bo_and_vm - reserve a BO and a VM unconditionally.
> * @mem: KFD BO structure.
> * @vm: the VM to reserve.
> * @ctx: the struct that will be used in unreserve_bo_and_vms().
> */
> static int reserve_bo_and_vm(struct kgd_mem *mem,
> struct amdgpu_vm *vm,
> struct bo_vm_reservation_context *ctx)
> {
> struct amdgpu_bo *bo = mem->bo;
> int ret;
>
> WARN_ON(!vm);
>
> ctx->n_vms = 1;
> ctx->sync = &mem->sync;
> drm_exec_init(&ctx->exec, DRM_EXEC_INTERRUPTIBLE_WAIT, 0);
> drm_exec_until_all_locked(&ctx->exec) {
> ret = amdgpu_vm_lock_pd(vm, &ctx->exec, 2);
> drm_exec_retry_on_contention(&ctx->exec);
> if (unlikely(ret))
> goto error;
>
> ret = drm_exec_prepare_obj(&ctx->exec, &bo->tbo.base, 1);
> drm_exec_retry_on_contention(&ctx->exec);
> if (unlikely(ret))
> goto error;
> }
> // <—— everything is locked at this point.
Which function call locks the mutexes?
> return 0;
>
>
> So, something like:
>
> some_unique_label:
> for (void *retry_ptr;
>      ({ retry_ptr = &&some_unique_label; drm_exec_cleanup(exec); });
Normally this should be a condition, or rather an expression evaluating
to bool; why is this okay? Or does C just take the value of the last
function call due to the `({})`?
Why isn't `({})` used instead of `do { ... } while(0)` above?
> /* empty */) {
> /* user code here, which potentially jumps back to some_unique_label */
> }
Thanks for the example & the macro expansion. What I gather from this is
that we'd probably want a closure that executes the code & reruns it
when contention is detected.
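
Something like this minimal control-flow sketch, perhaps (toy code with
invented names and no real locking; the closure reports whether it hit
contention and is expected to have dropped its guards before returning):

    enum Step<T> {
        Done(T),   // the closure acquired everything it needed
        Contended, // a lock attempt failed; guards were already dropped
    }

    fn until_all_locked<T>(mut body: impl FnMut() -> Step<T>) -> T {
        loop {
            match body() {
                Step::Done(v) => return v,
                // Retry from a clean slate, like drm_exec_cleanup() does
                // at the top of the C loop.
                Step::Contended => continue,
            }
        }
    }

    fn main() {
        let mut attempts = 0;
        let v = until_all_locked(|| {
            attempts += 1;
            if attempts < 3 { Step::Contended } else { Step::Done(attempts) }
        });
        assert_eq!(v, 3);
    }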
>>> In fact, perhaps we can copy drm_exec, basically? i.e.:
>>>
>>> /**
>>> * struct drm_exec - Execution context
>>> */
>>> struct drm_exec {
>>> /**
>>> * @flags: Flags to control locking behavior
>>> */
>>> u32 flags;
>>>
>>> /**
>>> * @ticket: WW ticket used for acquiring locks
>>> */
>>> struct ww_acquire_ctx ticket;
>>>
>>> /**
>>> * @num_objects: number of objects locked
>>> */
>>> unsigned int num_objects;
>>>
>>> /**
>>> * @max_objects: maximum objects in array
>>> */
>>> unsigned int max_objects;
>>>
>>> /**
>>> * @objects: array of the locked objects
>>> */
>>> struct drm_gem_object **objects;
>>>
>>> /**
>>> * @contended: contended GEM object we backed off for
>>> */
>>> struct drm_gem_object *contended;
>>>
>>> /**
>>> * @prelocked: already locked GEM object due to contention
>>> */
>>> struct drm_gem_object *prelocked;
>>> };
>>>
>>> This is GEM-specific, but we could perhaps implement the same idea by
>>> tracking ww_mutexes instead of GEM objects.
>>
>> But this would only work for `Vec<WwMutex<T>>`, right?
>
> I’m not sure if I understand your point here.
>
> The list of ww_mutexes that we've managed to lock so far would be something
> that we'd keep track of internally in our context. In what way is a KVec an issue?
I saw "array of the locked objects" and thus thought so this must only
work for an array of locks. Looking at the type a bit closer, it
actually is an array of pointers, so it does work for arbitrary data
structures storing the locks.
So essentially it would amount to storing `Vec<WwMutexGuard<'_, T>>` in
Rust IIUC. I was under the impression that we wanted to avoid that,
because it's an extra allocation.
But maybe that's actually what's done on the C side.
> Btw, I can also try to implement a proof of concept, so long as people agree that
> this approach makes sense.
I'm not sure I understand what you are proposing, so I can't give a
recommendation yet.
---
Cheers,
Benno
* Re: [PATCH v5 2/3] implement ww_mutex abstraction for the Rust tree
2025-08-02 14:15 ` Daniel Almeida
2025-08-02 20:58 ` Benno Lossin
@ 2025-08-05 9:08 ` Onur Özkan
2025-08-05 12:41 ` Daniel Almeida
1 sibling, 1 reply; 12+ messages in thread
From: Onur Özkan @ 2025-08-05 9:08 UTC (permalink / raw)
To: Daniel Almeida
Cc: Benno Lossin, Boqun Feng, linux-kernel, rust-for-linux, ojeda,
alex.gaynor, gary, a.hindborg, aliceryhl, tmgross, dakr, peterz,
mingo, will, longman, felipe_life, daniel, bjorn3_gh, dri-devel
On Sat, 2 Aug 2025 11:15:07 -0300
Daniel Almeida <daniel.almeida@collabora.com> wrote:
> Btw, I can also try to implement a proof of concept, so long as
> people agree that this approach makes sense.
It's not necessary to provide a full PoC, but a small demonstration of
the kind of ww_mutex API you would prefer would be helpful. Seeing a few
sample Rust use-cases (especially in comparison to existing C
implementations) would give me a clearer picture.
At the moment, the implementation is just a wrapper ([1]) around the C
ww_mutex with no additional functionality, mostly because we don't have
a solid consensus on the API design yet (we had some ideas about a
tuple-based approach, but it seems like that isn't going to be useful
for most ww_mutex users).
[1]: https://github.com/onur-ozkan/linux/commits/673e01a9c309c
> By the way, dri-devel seems to not be on cc? Added them now.
Thanks!
--
Regards,
Onur
* Re: [PATCH v5 2/3] implement ww_mutex abstraction for the Rust tree
2025-08-05 9:08 ` Onur Özkan
@ 2025-08-05 12:41 ` Daniel Almeida
2025-08-05 13:50 ` Onur Özkan
0 siblings, 1 reply; 12+ messages in thread
From: Daniel Almeida @ 2025-08-05 12:41 UTC (permalink / raw)
To: Onur Özkan
Cc: Benno Lossin, Boqun Feng, linux-kernel, rust-for-linux, ojeda,
alex.gaynor, gary, a.hindborg, aliceryhl, tmgross, dakr, peterz,
mingo, will, longman, felipe_life, daniel, bjorn3_gh, dri-devel
Hi Onur,
> On 5 Aug 2025, at 06:08, Onur Özkan <work@onurozkan.dev> wrote:
>
> On Sat, 2 Aug 2025 11:15:07 -0300
> Daniel Almeida <daniel.almeida@collabora.com> wrote:
>
>> Btw, I can also try to implement a proof of concept, so long as
>> people agree that this approach makes sense.
>
> It's not necessary to provide a full PoC, but a small demonstration of
> the kind of ww_mutex API you would prefer would be helpful. Seeing a few
> sample Rust use-cases (especially in comparison to existing C
> implementations) would give me a clearer picture.
>
> At the moment, the implementation is just a wrapper ([1]) around the C
> ww_mutex with no additional functionality, mostly because we don't have
> a solid consensus on the API design yet (we had some ideas about a
> tuple-based approach, but it seems like that isn't going to be useful
> for most ww_mutex users).
>
> [1]: https://github.com/onur-ozkan/linux/commits/673e01a9c309c
>
>> By the way, dri-devel seems to not be on cc? Added them now.
>
> Thanks!
>
> --
>
> Regards,
> Onur
>
This topic is on my TODO for this week.
— Daniel
* Re: [PATCH v5 2/3] implement ww_mutex abstraction for the Rust tree
2025-08-05 12:41 ` Daniel Almeida
@ 2025-08-05 13:50 ` Onur Özkan
0 siblings, 0 replies; 12+ messages in thread
From: Onur Özkan @ 2025-08-05 13:50 UTC (permalink / raw)
To: Daniel Almeida
Cc: Benno Lossin, Boqun Feng, linux-kernel, rust-for-linux, ojeda,
alex.gaynor, gary, a.hindborg, aliceryhl, tmgross, dakr, peterz,
mingo, will, longman, felipe_life, daniel, bjorn3_gh, dri-devel
On Tue, 5 Aug 2025 09:41:43 -0300
Daniel Almeida <daniel.almeida@collabora.com> wrote:
> Hi Onur,
>
> > On 5 Aug 2025, at 06:08, Onur Özkan <work@onurozkan.dev> wrote:
> >
> > On Sat, 2 Aug 2025 11:15:07 -0300
> > Daniel Almeida <daniel.almeida@collabora.com> wrote:
> >
> >> Btw, I can also try to implement a proof of concept, so long as
> >> people agree that this approach makes sense.
> >
> > It's not necessary to provide a full PoC, but a small
> > demonstration of the kind of ww_mutex API you would prefer would be
> > helpful. Seeing a few sample Rust use-cases (especially in
> > comparison to existing C implementations) would give me a clearer
> > picture.
> >
> > At the moment, the implementation is just a wrapper ([1]) around
> > the C ww_mutex with no additional functionality, mostly because we
> > don't have a solid consensus on the API design yet (we had some
> > ideas about a tuple-based approach, but it seems like that isn't
> > going to be useful for most ww_mutex users).
> >
> > [1]: https://github.com/onur-ozkan/linux/commits/673e01a9c309c
> >
> >> By the way, dri-devel seems to not be on cc? Added them now.
> >
> > Thanks!
> >
> > --
> >
> > Regards,
> > Onur
> >
>
> This topic is on my TODO for this week.
>
> — Daniel
Awesome, thank you for doing it. :)
Regards,
Onur
* Re: [PATCH v5 2/3] implement ww_mutex abstraction for the Rust tree
2025-08-02 20:58 ` Benno Lossin
@ 2025-08-05 15:18 ` Daniel Almeida
0 siblings, 0 replies; 12+ messages in thread
From: Daniel Almeida @ 2025-08-05 15:18 UTC (permalink / raw)
To: Benno Lossin
Cc: Onur, Boqun Feng, linux-kernel, rust-for-linux, ojeda,
alex.gaynor, gary, a.hindborg, aliceryhl, tmgross, dakr, peterz,
mingo, will, longman, felipe_life, daniel, bjorn3_gh, dri-devel
Hi Benno,
> On 2 Aug 2025, at 17:58, Benno Lossin <lossin@kernel.org> wrote:
>
> On Sat Aug 2, 2025 at 4:15 PM CEST, Daniel Almeida wrote:
>> On 2 Aug 2025, at 07:42, Benno Lossin <lossin@kernel.org> wrote:
>>> On Fri Aug 1, 2025 at 11:22 PM CEST, Daniel Almeida wrote:
>>>> One thing I didn’t understand with your approach: is it amenable to loops?
>>>> i.e.: are things like drm_exec() implementable?
>>>
>>> I don't think so, see also my reply here:
>>>
>>> https://lore.kernel.org/all/DBOPIJHY9NZ7.2CU5XP7UY7ES3@kernel.org
>>>
>>> The type-based approach with tuples doesn't handle dynamic number of
>>> locks.
>>>
>>
>> This is probably the default use-case by the way.
>
> That's an important detail. In that case, a type state won't be a good
> idea. Unless it's also common to have a finite number of them, in which
> case we should have two APIs.
>
>>>> /**
>>>> * drm_exec_until_all_locked - loop until all GEM objects are locked
>>>> * @exec: drm_exec object
>>>> *
>>>> * Core functionality of the drm_exec object. Loops until all GEM objects are
>>>> * locked and no more contention exists. At the beginning of the loop it is
>>>> * guaranteed that no GEM object is locked.
>>>> *
>>>> * Since labels can't be defined local to the loops body we use a jump pointer
>>>> * to make sure that the retry is only used from within the loops body.
>>>> */
>>>> #define drm_exec_until_all_locked(exec) \
>>>> __PASTE(__drm_exec_, __LINE__): \
>>>> for (void *__drm_exec_retry_ptr; ({ \
>>>> __drm_exec_retry_ptr = &&__PASTE(__drm_exec_, __LINE__);\
>>>> (void)__drm_exec_retry_ptr; \
>>>> drm_exec_cleanup(exec); \
>>>> });)
>>>
>>> My understanding of C preprocessor macros is not good enough to parse or
>>> understand this :( What is that `__PASTE` thing?
>>
>> This macro is very useful, but also cursed :)
>>
>> This declares a unique label before the loop, so you can jump back to it on
>> contention. It is usually used in conjunction with:
>
> Ahh, I missed the `:` at the end of the line. Thanks for explaining!
> (also Miguel in the other reply!) If you don't mind I'll ask some more
> basic C questions :)
>
> And yeah it's pretty cursed...
>
>> /**
>> * drm_exec_retry_on_contention - restart the loop to grab all locks
>> * @exec: drm_exec object
>> *
>> * Control flow helper to continue when a contention was detected and we need to
>> * clean up and re-start the loop to prepare all GEM objects.
>> */
>> #define drm_exec_retry_on_contention(exec) \
>> do { \
>> if (unlikely(drm_exec_is_contended(exec))) \
>> goto *__drm_exec_retry_ptr; \
>> } while (0)
>
> The `do { ... } while(0)` is used because C doesn't have `{ ... }`
> scopes? (& because you want to be able to have this be called from an if
> without braces?)
do {} while (0) makes this behave as a single statement. It is usually
used in macros to ensure that they can be correctly called from control
statements even when no braces are used, like you said. It also enforces
that a semicolon has to be placed at the end when the macro is called,
which makes it behave a bit more like a function call.

There may be other uses that I am not aware of, but it's not something
specific to "drm_exec_retry_on_contention".
>
>> The termination is handled by:
>>
>> /**
>> * drm_exec_cleanup - cleanup when contention is detected
>> * @exec: the drm_exec object to cleanup
>> *
>> * Cleanup the current state and return true if we should stay inside the retry
>> * loop, false if there wasn't any contention detected and we can keep the
>> * objects locked.
>> */
>> bool drm_exec_cleanup(struct drm_exec *exec)
>> {
>> if (likely(!exec->contended)) {
>> ww_acquire_done(&exec->ticket);
>> return false;
>> }
>>
>> if (likely(exec->contended == DRM_EXEC_DUMMY)) {
>> exec->contended = NULL;
>> ww_acquire_init(&exec->ticket, &reservation_ww_class);
>> return true;
>> }
>>
>> drm_exec_unlock_all(exec);
>> exec->num_objects = 0;
>> return true;
>> }
>> EXPORT_SYMBOL(drm_exec_cleanup);
>>
>> The third clause in the loop is empty.
>>
>> For example, in amdgpu:
>>
>> /**
>> * reserve_bo_and_vm - reserve a BO and a VM unconditionally.
>> * @mem: KFD BO structure.
>> * @vm: the VM to reserve.
>> * @ctx: the struct that will be used in unreserve_bo_and_vms().
>> */
>> static int reserve_bo_and_vm(struct kgd_mem *mem,
>> struct amdgpu_vm *vm,
>> struct bo_vm_reservation_context *ctx)
>> {
>> struct amdgpu_bo *bo = mem->bo;
>> int ret;
>>
>> WARN_ON(!vm);
>>
>> ctx->n_vms = 1;
>> ctx->sync = &mem->sync;
>> drm_exec_init(&ctx->exec, DRM_EXEC_INTERRUPTIBLE_WAIT, 0);
>> drm_exec_until_all_locked(&ctx->exec) {
>> ret = amdgpu_vm_lock_pd(vm, &ctx->exec, 2);
>> drm_exec_retry_on_contention(&ctx->exec);
>> if (unlikely(ret))
>> goto error;
>>
>> ret = drm_exec_prepare_obj(&ctx->exec, &bo->tbo.base, 1);
>> drm_exec_retry_on_contention(&ctx->exec);
>> if (unlikely(ret))
>> goto error;
>> }
>> // <—— everything is locked at this point.
>
> Which function call locks the mutexes?
The function below, which is indirectly called from amdgpu_vm_lock_pd() in
this particular example:
```
/**
 * drm_exec_lock_obj - lock a GEM object for use
 * @exec: the drm_exec object with the state
 * @obj: the GEM object to lock
 *
 * Lock a GEM object for use and grab a reference to it.
 *
 * Returns: -EDEADLK if a contention is detected, -EALREADY when object is
 * already locked (can be suppressed by setting the DRM_EXEC_IGNORE_DUPLICATES
 * flag), -ENOMEM when memory allocation failed and zero for success.
 */
int drm_exec_lock_obj(struct drm_exec *exec, struct drm_gem_object *obj)
{
        int ret;

        ret = drm_exec_lock_contended(exec);
        if (unlikely(ret))
                return ret;

        if (exec->prelocked == obj) {
                drm_gem_object_put(exec->prelocked);
                exec->prelocked = NULL;
                return 0;
        }

        if (exec->flags & DRM_EXEC_INTERRUPTIBLE_WAIT)
                ret = dma_resv_lock_interruptible(obj->resv, &exec->ticket);
        else
                ret = dma_resv_lock(obj->resv, &exec->ticket);

        if (unlikely(ret == -EDEADLK)) {
                drm_gem_object_get(obj);
                exec->contended = obj;
                return -EDEADLK;
        }

        if (unlikely(ret == -EALREADY) &&
            exec->flags & DRM_EXEC_IGNORE_DUPLICATES)
                return 0;

        if (unlikely(ret))
                return ret;

        ret = drm_exec_obj_locked(exec, obj);
        if (ret)
                goto error_unlock;

        return 0;

error_unlock:
        dma_resv_unlock(obj->resv);
        return ret;
}
EXPORT_SYMBOL(drm_exec_lock_obj);
```
And the tracking of locked objects is done at:
```
/* Track the locked object in the array */
static int drm_exec_obj_locked(struct drm_exec *exec,
                               struct drm_gem_object *obj)
{
        if (unlikely(exec->num_objects == exec->max_objects)) {
                size_t size = exec->max_objects * sizeof(void *);
                void *tmp;

                tmp = kvrealloc(exec->objects, size + PAGE_SIZE, GFP_KERNEL);
                if (!tmp)
                        return -ENOMEM;

                exec->objects = tmp;
                exec->max_objects += PAGE_SIZE / sizeof(void *);
        }

        drm_gem_object_get(obj);
        exec->objects[exec->num_objects++] = obj;

        return 0;
}
```
Note that dma_resv_lock() is:
```
static inline int dma_resv_lock(struct dma_resv *obj,
                                struct ww_acquire_ctx *ctx)
{
        return ww_mutex_lock(&obj->lock, ctx);
}
```
Again, this is GEM-specific, but the idea is to generalize it.
>
>> return 0;
>>
>>
>> So, something like:
>>
>> some_unique_label:
>> for (void *retry_ptr;
>>      ({ retry_ptr = &&some_unique_label; drm_exec_cleanup(exec); });
>
> Normally this should be a condition, or rather an expression evaluating
> to bool; why is this okay? Or does C just take the value of the last
> function call due to the `({})`?
This is described here [0]. As per the docs, it evaluates to bool (as
drm_exec_cleanup() is last, and that evaluates to bool).
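
(It's like a Rust block being an expression whose value is its last
expression; a trivial analogue:)

```
fn main() {
    let mut retries = 0;
    // Like C's ({ ... }), the block's value is its final expression:
    let keep_looping = {
        retries += 1;
        retries < 3
    };
    assert!(keep_looping);
}
```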
>
> Why isn't `({})` used instead of `do { ... } while(0)` above?
I’m not sure I understand what you’re trying to ask.
If you’re asking why ({}) is being used here, then it’s because we need
to return (i.e. evaluate to) a value, and a do {…} while(0) does not do
that.
>
>> /* empty */) {
>> /* user code here, which potentially jumps back to some_unique_label */
>> }
>
> Thanks for the example & the macro expansion. What I gather from this is
> that we'd probably want a closure that executes the code & reruns it
> when contention is detected.
Yep, I think so, too.
>
>>>> In fact, perhaps we can copy drm_exec, basically? i.e.:
>>>>
>>>> /**
>>>> * struct drm_exec - Execution context
>>>> */
>>>> struct drm_exec {
>>>> /**
>>>> * @flags: Flags to control locking behavior
>>>> */
>>>> u32 flags;
>>>>
>>>> /**
>>>> * @ticket: WW ticket used for acquiring locks
>>>> */
>>>> struct ww_acquire_ctx ticket;
>>>>
>>>> /**
>>>> * @num_objects: number of objects locked
>>>> */
>>>> unsigned int num_objects;
>>>>
>>>> /**
>>>> * @max_objects: maximum objects in array
>>>> */
>>>> unsigned int max_objects;
>>>>
>>>> /**
>>>> * @objects: array of the locked objects
>>>> */
>>>> struct drm_gem_object **objects;
>>>>
>>>> /**
>>>> * @contended: contended GEM object we backed off for
>>>> */
>>>> struct drm_gem_object *contended;
>>>>
>>>> /**
>>>> * @prelocked: already locked GEM object due to contention
>>>> */
>>>> struct drm_gem_object *prelocked;
>>>> };
>>>>
>>>> This is GEM-specific, but we could perhaps implement the same idea by
>>>> tracking ww_mutexes instead of GEM objects.
>>>
>>> But this would only work for `Vec<WwMutex<T>>`, right?
>>
>> I’m not sure if I understand your point here.
>>
>> The list of ww_mutexes that we've managed to lock so far would be something
>> that we'd keep track of internally in our context. In what way is a KVec an issue?
>
> I saw "array of the locked objects" and thus thought so this must only
> work for an array of locks. Looking at the type a bit closer, it
> actually is an array of pointers, so it does work for arbitrary data
> structures storing the locks.
>
> So essentially it would amount to storing `Vec<WwMutexGuard<'_, T>>` in
> Rust IIUC. I was under the impression that we wanted to avoid that,
> because it's an extra allocation.
It’s the price to pay for correctness IMHO.
The “exec” abstraction also allocates:
```
/* Track the locked object in the array */
static int drm_exec_obj_locked(struct drm_exec *exec,
                               struct drm_gem_object *obj)
{
        if (unlikely(exec->num_objects == exec->max_objects)) {
                size_t size = exec->max_objects * sizeof(void *);
                void *tmp;

                tmp = kvrealloc(exec->objects, size + PAGE_SIZE, GFP_KERNEL);
                if (!tmp)
                        return -ENOMEM;

                exec->objects = tmp;
                exec->max_objects += PAGE_SIZE / sizeof(void *);
        }

        drm_gem_object_get(obj);
        exec->objects[exec->num_objects++] = obj;

        return 0;
}
```
>
> But maybe that's actually what's done on the C side.
See above.
>
>> Btw, I can also try to implement a proof of concept, so long as people agree that
>> this approach makes sense.
>
> I'm not sure I understand what you are proposing, so I can't give a
> recommendation yet.
>
I am suggesting what you said above and more:
a) run a user closure where the user can indicate which ww_mutexes they want to lock
b) keep track of the objects above
c) keep track of whether a contention happened
d) roll back if a contention happened, releasing all locks
e) rerun the user closure from a clean slate after rolling back
f) run a separate user closure whenever we know that all objects have been locked.
That’s a very broad description, but I think it can work.
Note that the operations above would be implemented by a separate type, not by
the ww_mutex abstraction itself. But users should probably be using the API
above unless there’s a strong reason not to.
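
As a rough sketch of that control flow (plain std types stand in for
ww_mutex and every name is invented; the wound/wait ordering via the
acquire ticket is deliberately left out, the toy just backs off and
retries):

```
use std::sync::{Mutex, MutexGuard, TryLockError};

// (a)/(b): try to lock a dynamic set; (c)/(d): on contention drop all
// guards; (e): retry from a clean slate; (f): return only when all held.
fn lock_all<'a, T>(locks: &'a [Mutex<T>]) -> Vec<MutexGuard<'a, T>> {
    loop {
        let mut guards = Vec::with_capacity(locks.len());
        let mut contended = false;
        for m in locks {
            match m.try_lock() {
                Ok(g) => guards.push(g),
                Err(TryLockError::WouldBlock) => {
                    contended = true;
                    break;
                }
                Err(TryLockError::Poisoned(e)) => guards.push(e.into_inner()),
            }
        }
        if !contended {
            return guards; // everything is locked at this point
        }
        drop(guards); // rollback: release everything, then retry
        std::thread::yield_now();
    }
}
```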
> ---
> Cheers,
> Benno
>
— Daniel
[0]: https://gcc.gnu.org/onlinedocs/gcc/Statement-Exprs.html
end of thread, other threads:[~2025-08-05 15:19 UTC | newest]
Thread overview: 12+ messages
[not found] <20250621184454.8354-1-work@onurozkan.dev>
[not found] ` <20250621184454.8354-3-work@onurozkan.dev>
[not found] ` <DASY7BECFRCT.332X5ZHZMV2W@kernel.org>
[not found] ` <aFlQ7K_mYYbrG8Cl@Mac.home>
[not found] ` <DATYHYJVPL3L.3NLMH7PPHYU9@kernel.org>
[not found] ` <aFlpFQ4ivKw81d-y@Mac.home>
[not found] ` <DAU0ELV91E2Q.35FZOII18W44J@kernel.org>
2025-06-23 17:11 ` [PATCH v5 2/3] implement ww_mutex abstraction for the Rust tree Boqun Feng
2025-06-23 23:22 ` Benno Lossin
2025-06-24 5:34 ` Onur
2025-06-24 8:20 ` Benno Lossin
2025-06-24 12:31 ` Onur
2025-06-24 12:48 ` Benno Lossin
[not found] ` <20250707163913.5ffc046d@nimda.home>
[not found] ` <DB5XIWGZ8U36.1VB58YBJFL7OT@kernel.org>
[not found] ` <20250707210613.2fd5bb55@nimda.home>
[not found] ` <DB62ZN1LTO31.1HVWDLAWJWVM8@kernel.org>
[not found] ` <FF481535-86EF-41EB-830A-1DA2434AAEA0@collabora.com>
[not found] ` <DBRVNP4MM5KO.3IXLMXKGK4XTS@kernel.org>
2025-08-02 14:15 ` Daniel Almeida
2025-08-02 20:58 ` Benno Lossin
2025-08-05 15:18 ` Daniel Almeida
2025-08-05 9:08 ` Onur Özkan
2025-08-05 12:41 ` Daniel Almeida
2025-08-05 13:50 ` Onur Özkan