From: Dominique Martinet <asmadeus@codewreck.org>
To: jiangyiwen <jiangyiwen@huawei.com>
Cc: Andrew Morton <akpm@linux-foundation.org>,
Eric Van Hensbergen <ericvh@gmail.com>,
Ron Minnich <rminnich@sandia.gov>,
Latchesar Ionkov <lucho@ionkov.net>,
Linux Kernel Mailing List <linux-kernel@vger.kernel.org>,
v9fs-developer@lists.sourceforge.net
Subject: Re: [V9fs-developer] [PATCH] net/9p: Fix a deadlock case in the virtio transport
Date: Sat, 14 Jul 2018 14:47:15 +0200
Message-ID: <20180714124715.GA16134@nautica>
In-Reply-To: <5B49DAA5.3020600@huawei.com>
jiangyiwen wrote on Sat, Jul 14, 2018:
> On 2018/7/14 17:05, Dominique Martinet wrote:
> > jiangyiwen wrote on Sat, Jul 14, 2018:
> >> When the client has multiple threads issuing I/O requests all the
> >> time and the server performs very well, the CPU may end up running
> >> in irq context for a long time, because the *while* loop keeps
> >> finding buffers in the virtqueue.
> >>
> >> So we should keep holding chan->lock across the whole loop.
> >
> > Hmm, it is generally bad practice to hold a spin lock for long.
> > In general, spin locks are meant to protect data, not code.
> >
> > I'd want some numbers to decide on this one, even if I think this
> > particular case is safe (e.g. this cannot deadlock)
> >
>
> Actually, the loop will not hold the spin lock for long, because other
> threads will not issue new requests in this case. In addition,
> virtio-blk and virtio-scsi use the same approach, so I guess they may
> have run into this problem before.
Fair enough. If you do have some numbers to share, though (throughput
and/or IOPS before/after), I'd still be really curious.
> >> chan->ring_bufs_avail = 1;
> >> - spin_unlock_irqrestore(&chan->lock, flags);
> >> /* Wakeup if anyone waiting for VirtIO ring space. */
> >> wake_up(chan->vc_wq);
> >
> > In particular, the wake up here wakes waiters that will immediately
> > try to grab the lock, and they will needlessly spin on it until this
> > thread is done.
> > If we do go this way I'd want chan->ring_bufs_avail to be set just
> > before unlocking, and the wakeup to be done just after unlocking,
> > outside the loop, iff we processed at least one iteration here.
>
> I can move the wakeup operation to after the unlocking. As I said
> above, I think this loop will not run for long.
Please do; you listed virtio_blk as doing this, and it has the same
kind of pattern: a req_done bool, and only restarting stopped queues if
something was actually processed.
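
For clarity, here is a rough sketch of the shape I have in mind for
req_done() in net/9p/trans_virtio.c (untested and written from memory;
the p9_client_cb()/len handling inside the loop is simplified, so treat
it as an illustration of the locking/wakeup structure, not the actual
patch):

static void req_done(struct virtqueue *vq)
{
        struct virtio_chan *chan = vq->vdev->priv;
        struct p9_req_t *req;
        unsigned int len;
        unsigned long flags;
        bool need_wakeup = false;

        spin_lock_irqsave(&chan->lock, flags);
        /* Drain all completed buffers while holding the lock. */
        while ((req = virtqueue_get_buf(chan->vq, &len)) != NULL) {
                if (!chan->ring_bufs_avail) {
                        chan->ring_bufs_avail = 1;
                        need_wakeup = true;
                }
                p9_client_cb(chan->client, req, REQ_STATUS_RCVD);
        }
        spin_unlock_irqrestore(&chan->lock, flags);

        /* Wake waiters for ring space once, outside the lock. */
        if (need_wakeup)
                wake_up(chan->vc_wq);
}

That way the wakeup happens at most once per interrupt, and never while
we still hold chan->lock.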
--
Dominique
Thread overview: 7+ messages
2018-07-14 8:48 [V9fs-developer] [PATCH] net/9p: Fix a deadlock case in the virtio transport jiangyiwen
2018-07-14 9:05 ` Dominique Martinet
2018-07-14 11:12 ` jiangyiwen
2018-07-14 12:47 ` Dominique Martinet [this message]
2018-07-16 1:55 ` jiangyiwen
2018-07-16 13:38 ` Dominique Martinet
2018-07-17 1:12 ` jiangyiwen