From: Jason Wang <jasowang@redhat.com>
To: Qin Chuanyu <qinchuanyu@huawei.com>,
virtualization@lists.linux-foundation.org,
netdev@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: kvm@vger.kernel.org, mst@redhat.com
Subject: Re: [PATCH V2 5/6] vhost_net: poll vhost queue after marking DMA is done
Date: Wed, 12 Feb 2014 18:06:15 +0800
Message-ID: <52FB4797.2020101@redhat.com>
In-Reply-To: <52FB24EA.3060001@huawei.com>
On 02/12/2014 03:38 PM, Qin Chuanyu wrote:
> On 2013/8/30 12:29, Jason Wang wrote:
>> We used to poll the vhost queue before marking DMA as done. This is
>> racy: if the vhost thread is woken up before the DMA is marked as
>> done, the signal can be missed. Fix this by always marking DMA as
>> done before polling the vhost queue.
>>
>> Signed-off-by: Jason Wang <jasowang@redhat.com>
>> ---
>> drivers/vhost/net.c | 9 +++++----
>> 1 files changed, 5 insertions(+), 4 deletions(-)
>>
>> diff --git a/drivers/vhost/net.c b/drivers/vhost/net.c
>> index ff60c2a..d09c17c 100644
>> --- a/drivers/vhost/net.c
>> +++ b/drivers/vhost/net.c
>> @@ -308,6 +308,11 @@ static void vhost_zerocopy_callback(struct ubuf_info *ubuf, bool success)
>> struct vhost_virtqueue *vq = ubufs->vq;
>> int cnt = atomic_read(&ubufs->kref.refcount);
>>
>> + /* set len to mark this desc buffers done DMA */
>> + vq->heads[ubuf->desc].len = success ?
>> + VHOST_DMA_DONE_LEN : VHOST_DMA_FAILED_LEN;
>> + vhost_net_ubuf_put(ubufs);
>> +
>> /*
>> * Trigger polling thread if guest stopped submitting new buffers:
>> * in this case, the refcount after decrement will eventually reach 1
>> @@ -318,10 +323,6 @@ static void vhost_zerocopy_callback(struct ubuf_info *ubuf, bool success)
>> */
>> if (cnt <= 2 || !(cnt % 16))
>> vhost_poll_queue(&vq->poll);
>> - /* set len to mark this desc buffers done DMA */
>> - vq->heads[ubuf->desc].len = success ?
>> - VHOST_DMA_DONE_LEN : VHOST_DMA_FAILED_LEN;
>> - vhost_net_ubuf_put(ubufs);
>> }
>>
>> /* Expects to be always run from workqueue - which acts as
>>
> With this change, vq loses the protection provided by ubufs->kref.
> If another thread is waiting in vhost_net_ubuf_put_and_wait(), called
> from vhost_net_release(), then after vhost_net_ubuf_put() the vq can
> be freed by vhost_net_release() at any moment, and the following
> vhost_poll_queue(&vq->poll) may dereference freed memory.
>
Good catch.
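To spell out the interleaving (layout illustrative, reconstructed from
your description):

    zerocopy callback                   vhost_net_release()
    -----------------                   -------------------
    vq->heads[i].len = ...;
    vhost_net_ubuf_put(ubufs);
                                        vhost_net_ubuf_put_and_wait()
                                            returns (last ref dropped)
                                        ...frees the device, vq included
    vhost_poll_queue(&vq->poll);        /* use after free */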
> Another problem: vhost_zerocopy_callback() is called from kfree_skb(),
> so it may run concurrently in different thread contexts.
> Whether to call vhost_poll_queue() is decided by an atomic_read() of
> ubufs->kref.refcount, so concurrent callbacks can each read a stale
> count and none of them ends up calling vhost_poll_queue(), even
> though at least one must; this breaks networking.
> We can reproduce this with 8 netperf threads in the guest
> transmitting TCP to the host.
>
> I think that if an atomic_read() is used to decide whether or not to
> call vhost_poll_queue(), at least a spinlock is needed.
Then you need another ref count to protect that spinlock? Care to send
patches?
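One untested idea instead of a spinlock: let the atomic decrement
itself pick the thread that signals. A minimal sketch, assuming
vhost_net_ubuf_put() were changed to return the post-decrement count
and vhost_net_release() did a synchronize_rcu() before freeing the
device (both are assumptions, not what the code does today):

    static void vhost_zerocopy_callback(struct ubuf_info *ubuf, bool success)
    {
            struct vhost_net_ubuf_ref *ubufs = ubuf->ctx;
            struct vhost_virtqueue *vq = ubufs->vq;
            int cnt;

            /* Pin vq against vhost_net_release(); needs the release
             * path to synchronize_rcu() before freeing. */
            rcu_read_lock_bh();

            /* set len to mark this desc buffers done DMA */
            vq->heads[ubuf->desc].len = success ?
                            VHOST_DMA_DONE_LEN : VHOST_DMA_FAILED_LEN;

            /* Assumed to return the post-decrement count: concurrent
             * callbacks then each see a distinct value, so the wakeup
             * below cannot be skipped by all of them. */
            cnt = vhost_net_ubuf_put(ubufs);

            /* <= 1 rather than <= 2 because cnt is already decremented */
            if (cnt <= 1 || !(cnt % 16))
                    vhost_poll_queue(&vq->poll);

            rcu_read_unlock_bh();
    }

This would keep the mark-before-signal ordering of this patch while
closing both holes you describe, without adding a lock.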
Thanks
Thread overview: 21+ messages
2013-08-30 4:29 [PATCH V2 0/6] vhost code cleanup and minor enhancement Jason Wang
2013-08-30 4:29 ` [PATCH V2 1/6] vhost_net: make vhost_zerocopy_signal_used() returns void Jason Wang
2013-09-02 5:51 ` Michael S. Tsirkin
2013-09-02 6:29 ` Jason Wang
2013-08-30 4:29 ` [PATCH V2 2/6] vhost_net: use vhost_add_used_and_signal_n() in vhost_zerocopy_signal_used() Jason Wang
2013-09-02 5:50 ` Michael S. Tsirkin
2013-09-02 6:28 ` Jason Wang
2013-08-30 4:29 ` [PATCH V2 3/6] vhost: switch to use vhost_add_used_n() Jason Wang
2013-08-30 4:29 ` [PATCH V2 4/6] vhost_net: determine whether or not to use zerocopy at one time Jason Wang
2013-08-30 18:35 ` Sergei Shtylyov
2013-09-02 3:15 ` Jason Wang
2013-08-30 4:29 ` [PATCH V2 5/6] vhost_net: poll vhost queue after marking DMA is done Jason Wang
2013-08-30 16:44 ` Ben Hutchings
2013-09-02 3:06 ` Jason Wang
2014-02-12 7:38 ` Qin Chuanyu
2014-02-12 10:06 ` Jason Wang [this message]
2014-02-12 16:01 ` Michael S. Tsirkin
2013-08-30 4:29 ` [PATCH V2 6/6] vhost_net: correctly limit the max pending buffers Jason Wang
2013-09-02 5:56 ` Michael S. Tsirkin
2013-09-02 6:30 ` Jason Wang
2013-09-02 8:37 ` Jason Wang