From: johannes@sipsolutions.net (Johannes Berg)
Subject: [PATCH RFC] nvmet-rdma: use a private workqueue for delete
Date: Tue, 23 Oct 2018 21:22:57 +0200 [thread overview]
Message-ID: <a0d857b4bd5c7da86c7496dfaddb69c7f85fefec.camel@sipsolutions.net> (raw)
In-Reply-To: <0d62caa5-28ec-593b-7537-c27c8260366a@grimberg.me> (sfid-20181023_024040_093368_2E475D06)
On Mon, 2018-10-22 at 17:40 -0700, Sagi Grimberg wrote:
> I'm not sure how I can divide them into classes. The priv is really
> an internal structure that surrounds a connection representation.
> The priv->handler_mutex wraps the handling of every event dispatched
> to the upper layer consumer.
>
> The connection destruction barriers by acquiring and releasing this
> handler_mutex, such that no events are handled once the connection
> has been destroyed.
>
> See drivers/infiniband/core/cma.c rdma_destroy_id()
>
> In our case, one of the event handlers flushes a workqueue that
> is hosting work items that essentially call rdma_destroy_id() on
> connections that are guaranteed not to be the one currently handling
> the event. So the priv is guaranteed to be a different instance.
I think the key here would be "are guaranteed not to be the one
currently handling the event". How exactly is that guaranteed?
Is there some sort of static guarantee for this?
But I guess the easiest thing to do would be to use mutex_lock_nested()
in some places here, or add something like flush_workqueue_nested().
That would let you annotate this place in the code, basically saying -
yes, I know I'm doing
	CPU0			CPU1
	mutex_lock(A)		start_work(B)
	flush_work(B)		mutex_lock(A)
but it's fine because I know, in this specific instance, that it's
really A' not A for the mutex. So if you were to do
mutex_lock_nested(A, SINGLE_DEPTH_NESTING)
around the flush_work() - since your code knows that the flush_work() is
actually a different item and guaranteed to not build a graph connecting
back to itself, that should work to tell lockdep enough to not complain
here.
If this could actually recurse, but not to itself, you'd have to bump
the level up each time - which does get difficult since you're executing
asynchronously with the work queue. Not sure off the top of my head how
I'd solve that.
johannes
Thread overview: 15+ messages
2018-09-27 18:00 [PATCH RFC] nvmet-rdma: use a private workqueue for delete Sagi Grimberg
2018-09-28 22:14 ` Bart Van Assche
2018-10-01 20:12 ` Sagi Grimberg
2018-10-02 15:02 ` Bart Van Assche
2018-10-05 7:25 ` Christoph Hellwig
[not found] ` <CAO+b5-oBVw=-wvnWk1EF=RBaZtjX6bjUG+3WABXbvzX9UTu26w@mail.gmail.com>
2018-10-19 1:08 ` Sagi Grimberg
2018-10-19 16:23 ` Bart Van Assche
2018-10-22 8:56 ` Johannes Berg
2018-10-22 21:17 ` Bart Van Assche
2018-10-23 19:18 ` Johannes Berg
2018-10-23 19:54 ` Bart Van Assche
2018-10-23 19:59 ` Johannes Berg
2018-10-23 20:00 ` Johannes Berg
2018-10-23 0:40 ` Sagi Grimberg
2018-10-23 19:22 ` Johannes Berg [this message]