From: Thomas Gleixner <tglx@linutronix.de>
To: Sebastian Andrzej Siewior <bigeasy@linutronix.de>,
linux-kernel@vger.kernel.org
Cc: Crystal Wood <swood@redhat.com>, John Keeping <john@metanate.com>,
Boqun Feng <boqun.feng@gmail.com>, Ingo Molnar <mingo@redhat.com>,
Peter Zijlstra <peterz@infradead.org>,
Waiman Long <longman@redhat.com>, Will Deacon <will@kernel.org>
Subject: Re: [PATCH] locking/rtmutex: Flush the plug before entering the slowpath.
Date: Fri, 21 Apr 2023 21:18:08 +0200 [thread overview]
Message-ID: <87sfct11u7.ffs@tglx> (raw)
In-Reply-To: <20230322162719.wYG1N0hh@linutronix.de>
On Wed, Mar 22 2023 at 17:27, Sebastian Andrzej Siewior wrote:
>> This still leaves the problem vs. io_wq_worker_sleeping() and it's
>> running() counterpart after schedule().
>
> io_wq_worker_sleeping() has a kfree() so it probably should be moved,
> too.
> io_wq_worker_running() is a OR and INC and is fine.
Why is io_wq_worker_sleeping() not cured in the same way? Just because
it did not yet result in a splat?
Why not just expose sched_submit_work()?
> --- a/kernel/locking/rwbase_rt.c
> +++ b/kernel/locking/rwbase_rt.c
> @@ -143,6 +143,14 @@ static __always_inline int rwbase_read_lock(struct rwbase_rt *rwb,
> if (rwbase_read_trylock(rwb))
> return 0;
>
> + if (state != TASK_RTLOCK_WAIT) {
Bah. That code has explicit rwbase_foo() helpers which are filled in by
rwsem and rwlock. Making this conditional on state is creative at best.
> + /*
> + * If we are going to sleep and we have plugged IO queued,
> + * make sure to submit it to avoid deadlocks.
> + */
> + blk_flush_plug(current->plug, true);
> + }
> +
> return __rwbase_read_lock(rwb, state);
> }
>
> diff --git a/kernel/locking/ww_rt_mutex.c b/kernel/locking/ww_rt_mutex.c
> index d1473c624105c..472e3622abf09 100644
> --- a/kernel/locking/ww_rt_mutex.c
> +++ b/kernel/locking/ww_rt_mutex.c
> @@ -67,6 +67,11 @@ __ww_rt_mutex_lock(struct ww_mutex *lock, struct ww_acquire_ctx *ww_ctx,
> ww_mutex_set_context_fastpath(lock, ww_ctx);
> return 0;
> }
> + /*
> + * If we are going to sleep and we have plugged IO queued, make sure to
> + * submit it to avoid deadlocks.
> + */
> + blk_flush_plug(current->plug, true);
>
> ret = rt_mutex_slowlock(&rtm->rtmutex, ww_ctx, state);
This hunk can be avoided by moving the submit work invocation to
rt_mutex_slowlock().
Thanks,
tglx
prev parent reply other threads:[~2023-04-21 19:18 UTC|newest]
Thread overview: 9+ messages / expand[flat|nested] mbox.gz Atom feed top
2023-03-22 16:27 [PATCH] locking/rtmutex: Flush the plug before entering the slowpath Sebastian Andrzej Siewior
2023-03-28 16:54 ` [PATCH] locking/rtmutex: Do the trylock-slowpath with DEBUG_RT_MUTEXES enabled Sebastian Andrzej Siewior
2023-04-21 17:58 ` Thomas Gleixner
2023-04-24 8:42 ` Sebastian Andrzej Siewior
2023-04-18 15:18 ` [PATCH] locking/rtmutex: Flush the plug before entering the slowpath Sebastian Andrzej Siewior
2023-04-18 23:43 ` Crystal Wood
2023-04-19 14:04 ` Sebastian Andrzej Siewior
2023-04-24 23:22 ` Crystal Wood
2023-04-21 19:18 ` Thomas Gleixner [this message]
Reply instructions:
You may reply publicly to this message via plain-text email
using any one of the following methods:
* Save the following mbox file, import it into your mail client,
and reply-to-all from there: mbox
Avoid top-posting and favor interleaved quoting:
https://en.wikipedia.org/wiki/Posting_style#Interleaved_style
* Reply using the --to, --cc, and --in-reply-to
switches of git-send-email(1):
git send-email \
--in-reply-to=87sfct11u7.ffs@tglx \
--to=tglx@linutronix.de \
--cc=bigeasy@linutronix.de \
--cc=boqun.feng@gmail.com \
--cc=john@metanate.com \
--cc=linux-kernel@vger.kernel.org \
--cc=longman@redhat.com \
--cc=mingo@redhat.com \
--cc=peterz@infradead.org \
--cc=swood@redhat.com \
--cc=will@kernel.org \
/path/to/YOUR_REPLY
https://kernel.org/pub/software/scm/git/docs/git-send-email.html
* If your mail client supports setting the In-Reply-To header
via mailto: links, try the mailto: link
Be sure your reply has a Subject: header at the top and a blank line
before the message body.
This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox