From: "Onur Özkan" <work@onurozkan.dev>
To: Daniel Almeida <daniel.almeida@collabora.com>
Cc: Benno Lossin <lossin@kernel.org>, Lyude Paul <lyude@redhat.com>,
	linux-kernel@vger.kernel.org, rust-for-linux@vger.kernel.org,
	ojeda@kernel.org, alex.gaynor@gmail.com, boqun.feng@gmail.com,
	gary@garyguo.net, a.hindborg@kernel.org, aliceryhl@google.com,
	tmgross@umich.edu, dakr@kernel.org, peterz@infradead.org,
	mingo@redhat.com, will@kernel.org, longman@redhat.com,
	felipe_life@live.com, daniel@sedlak.dev,
	bjorn3_gh@protonmail.com
Subject: Re: [PATCH v5 0/3] rust: add `ww_mutex` support
Date: Mon, 18 Aug 2025 15:56:28 +0300	[thread overview]
Message-ID: <20250818155628.1b39d511@nimda.home> (raw)
In-Reply-To: <182E916F-3B59-4721-B415-81C3CF175DA7@collabora.com>

On Thu, 14 Aug 2025 15:22:57 -0300
Daniel Almeida <daniel.almeida@collabora.com> wrote:

> 
> Hi Onur,
> 
> > On 14 Aug 2025, at 12:56, Onur <work@onurozkan.dev> wrote:
> > 
> > On Thu, 14 Aug 2025 09:38:38 -0300
> > Daniel Almeida <daniel.almeida@collabora.com> wrote:
> > 
> >> Hi Onur,
> >> 
> >>> On 14 Aug 2025, at 08:13, Onur Özkan <work@onurozkan.dev> wrote:
> >>> 
> >>> Hi all,
> >>> 
> >>> I have been brainstorming on the auto-unlocking (on dynamic number
> >>> of mutexes) idea we have been discussing for some time.
> >>> 
> >>> There is a challenge with how we handle lock guards and my current
> >>> thought is to remove direct data dereferencing from guards.
> >>> Instead, data access would only be possible through a fallible
> >>> method (e.g., `try_get`). If the guard is no longer valid, this
> >>> method would fail, preventing data access after an auto-unlock.
> >>> 
> >>> In practice, it would work like this:
> >>> 
> >>> let a_guard = ctx.lock(mutex_a)?;
> >>> let b_guard = ctx.lock(mutex_b)?;
> >>> 
> >>> // Suppose the user tries to lock `mutex_c` without aborting the
> >>> // entire function (for some reason). This means that even on
> >>> // failure, `a_guard` and `b_guard` will still be accessible.
> >>> if let Ok(c_guard) = ctx.lock(mutex_c) {
> >>>    // ...some logic
> >>> }
> >>> 
> >>> let a_data = a_guard.try_get()?;
> >>> let b_data = b_guard.try_get()?;
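
A minimal, self-contained sketch of the fallible-guard idea above (plain
Rust; the `valid` flag and the exact names are assumptions made only for
illustration, not the proposed kernel API):

```
use core::cell::Cell;

struct Guard<'a, T> {
    data: &'a T,
    // Cleared by the locking context when this mutex is auto-unlocked.
    valid: &'a Cell<bool>,
}

impl<'a, T> Guard<'a, T> {
    // Fallible access: refuses to hand out the data once the lock
    // behind this guard has been released by a rollback.
    fn try_get(&self) -> Result<&'a T, ()> {
        if self.valid.get() {
            Ok(self.data)
        } else {
            Err(())
        }
    }
}
```

With something like this, `a_guard.try_get()?` in the example above would
return an error after `mutex_a` has been rolled back, instead of handing
out a reference to data that is no longer protected by the lock.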
> >> 
> >> Can you add more code here? What is this going to look like with the
> >> two closures we’ve been discussing?
> > 
> > Didn't we say that tuple-based closures are not sufficient when
> > dealing with a dynamic number of locks (ref [1]), and that ww_mutex
> > is mostly used with dynamic locks? I thought implementing that
> > approach was not worth it (at least for now) because of that.
> > 
> > [1]:
> > https://lore.kernel.org/all/DBS8REY5E82S.3937FAHS25ANA@kernel.org
> > 
> > Regards,
> > Onur
> 
> 
> 
> I am referring to this [0]. See the discussion and itemized list at
> the end.
> 
> To recap, I am proposing a separate type that is similar to drm_exec,
> and that implements this:
> 
> ```
> a) run a user closure where the user can indicate which ww_mutexes
>    they want to lock
> b) keep track of the objects above
> c) keep track of whether a contention happened
> d) rollback if a contention happened, releasing all locks
> e) rerun the user closure from a clean slate after rolling back
> f) run a separate user closure whenever we know that all objects
>    have been locked
> ```
> 
> In other words, we need to run a closure to let the user implement a
> given locking strategy, and then one closure that runs when the user
> signals that there are no more locks to take.
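
A rough, self-contained sketch of the rollback-and-retry control flow in
the list above (plain Rust; `Context`, `LockErr` and `lock_all` are
assumptions for discussion, not the actual abstraction):

```
enum LockErr {
    Deadlock,   // stands in for EDEADLK (contention detected)
    Other(i32), // any other errno
}

struct Context {
    taken: Vec<usize>, // placeholder for the locks currently held
}

impl Context {
    fn rollback(&mut self) {
        // Item (d): roll back, releasing every lock taken so far.
        self.taken.clear();
    }

    // Items (a), (e) and (f): rerun `locking_algorithm` until it no
    // longer reports contention, rolling back between attempts, then
    // run `on_all_locks_taken` exactly once while everything is held.
    fn lock_all<L, D, R>(
        &mut self,
        mut locking_algorithm: L,
        on_all_locks_taken: D,
    ) -> Result<R, LockErr>
    where
        L: FnMut(&mut Context) -> Result<(), LockErr>,
        D: FnOnce(&mut Context) -> R,
    {
        loop {
            match locking_algorithm(self) {
                // Contention: drop all locks and rerun the user's
                // strategy from a clean slate.
                Err(LockErr::Deadlock) => self.rollback(),
                Err(other) => return Err(other),
                // Everything locked: run the second closure.
                Ok(()) => return Ok(on_all_locks_taken(self)),
            }
        }
    }
}
```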
> 
> What I said is different from what Benno suggested here:
> 
> >>>>>>    let (a, c, d) = ctx.begin()
> >>>>>>        .lock(a)
> >>>>>>        .lock(b)
> >>>>>>        .lock(c)
> >>>>>>        .custom(|(a, _, c)| (a, c))
> >>>>>>        .lock(d)
> >>>>>>        .finish();
> 
> i.e.: here is a brief example of how the API should be used by
> clients:
> 
> ```
> // The Context keeps track of which locks were successfully taken.
> let locking_algorithm = |ctx: &Context| {
>   // Client-specific code, likely some loop trying to acquire
>   // multiple locks.
>   //
>   // Note that it does not _have_ to be a loop, though. It is up to
>   // the clients to provide a suitable implementation here.
>   for (..) {
>     ctx.lock(foo); // If this succeeds, the context will add "foo"
>                    // to the list of taken locks.
>   }
>
>   // If this closure returns EDEADLK, then our abstraction must
>   // roll back and run it again.
> };
>
> // This runs when the closure above has indicated that there are no
> // more locks to take.
> let on_all_locks_taken = |ctx: &Context| {
>   // Everything is locked here; give access to the data in the guards.
> };
>
> ctx.lock_all(locking_algorithm, on_all_locks_taken)?;
> ```
> 
> Yes, this will allocate but that is fine because drm_exec allocates
> as well.
> 
> We might be able to give more control over when the allocation happens
> if the number of locks is known in advance, e.g.:
> 
> ```
> struct Context<T> {
>   taken_locks: KVec<Guard<T>>,
> }
>
> impl<T> Context<T> {
>   fn prealloc_slots(num_slots: usize, flags: ...) -> Result<Self> {
>     let taken_locks = ... // pre-allocate a KVec with `num_slots` capacity here
>     Ok(Self {
>       taken_locks,
>     })
>   }
> }
> ```
> 
> The main point is that this API is optional. It builds a lot of
> convenience on top of the Rust WWMutex abstraction, but no one is
> forced to use it.
> 
> IOW: What I said should be implementable with a dynamic number of
> locks. Please let me know if I did not explain this very well. 
> 
> [0]:
> https://lore.kernel.org/rust-for-linux/8B1FB608-7D43-4DD9-8737-DCE59ED74CCA@collabora.com/

Hi Daniel,

Thank you for pointing it out again; I must have missed your previous mail.

It seems crystal clear. I will review this mail in detail when I am
working on this patch again.

Regards,
Onur


Thread overview: 53+ messages
2025-06-21 18:44 [PATCH v5 0/3] rust: add `ww_mutex` support Onur Özkan
2025-06-21 18:44 ` [PATCH v5 1/3] rust: add C wrappers for `ww_mutex` inline functions Onur Özkan
2025-06-21 18:44 ` [PATCH v5 2/3] implement ww_mutex abstraction for the Rust tree Onur Özkan
2025-06-22  9:18   ` Benno Lossin
2025-06-23 13:04     ` Boqun Feng
2025-06-23 13:44       ` Benno Lossin
2025-06-23 14:47         ` Boqun Feng
2025-06-23 15:14           ` Benno Lossin
2025-06-23 17:11             ` Boqun Feng
2025-06-23 23:22               ` Benno Lossin
2025-06-24  5:34                 ` Onur
2025-06-24  8:20                   ` Benno Lossin
2025-06-24 12:31                     ` Onur
2025-06-24 12:48                       ` Benno Lossin
2025-07-07 13:39             ` Onur
2025-07-07 15:31               ` Benno Lossin
2025-07-07 18:06                 ` Onur
2025-07-07 19:48                   ` Benno Lossin
2025-07-08 14:21                     ` Onur
2025-08-01 21:22                     ` Daniel Almeida
2025-08-02 10:42                       ` Benno Lossin
2025-08-02 13:41                         ` Miguel Ojeda
2025-08-02 14:15                         ` Daniel Almeida
2025-08-02 20:58                           ` Benno Lossin
2025-08-05 15:18                             ` Daniel Almeida
2025-08-05  9:08                           ` Onur Özkan
2025-08-05 12:41                             ` Daniel Almeida
2025-08-05 13:50                               ` Onur Özkan
2025-06-23 11:51   ` Alice Ryhl
2025-06-23 13:26   ` Boqun Feng
2025-06-23 18:17     ` Onur
2025-06-23 21:54       ` Boqun Feng
2025-06-21 18:44 ` [PATCH v5 3/3] add KUnit coverage on Rust `ww_mutex` implementation Onur Özkan
2025-06-22  9:16 ` [PATCH v5 0/3] rust: add `ww_mutex` support Benno Lossin
2025-07-24 13:53 ` Onur Özkan
2025-07-29 17:15   ` Benno Lossin
2025-07-30 10:24     ` Onur Özkan
2025-07-30 10:55       ` Benno Lossin
2025-08-05 16:22   ` Lyude Paul
2025-08-05 17:56     ` Daniel Almeida
2025-08-06  5:57     ` Onur Özkan
2025-08-06 17:37       ` Lyude Paul
2025-08-06 19:30         ` Benno Lossin
2025-08-14 11:13           ` Onur Özkan
2025-08-14 12:38             ` Daniel Almeida
2025-08-14 15:56               ` Onur
2025-08-14 18:22                 ` Daniel Almeida
2025-08-18 12:56                   ` Onur Özkan [this message]
2025-09-01 10:05                     ` Onur Özkan
2025-09-01 12:28                       ` Daniel Almeida
2025-09-02 16:53                   ` Onur
2025-09-03  6:24                     ` Onur
2025-09-03 13:04                       ` Daniel Almeida
