public inbox for kvm@vger.kernel.org
From: Jason Wang <jasowang@redhat.com>
To: "Michael S. Tsirkin" <mst@redhat.com>
Cc: netdev@vger.kernel.org, linux-kernel@vger.kernel.org,
	kvm@vger.kernel.org, virtualization@lists.linux-foundation.org
Subject: Re: [PATCH net 2/4] vhost_net: rework on the lock ordering for busy polling
Date: Wed, 12 Dec 2018 11:03:57 +0800
Message-ID: <aa8f36da-1489-a094-35ce-286bb3f25243@redhat.com>
In-Reply-To: <20181210230106-mutt-send-email-mst@kernel.org>


On 2018/12/11 at 12:04 PM, Michael S. Tsirkin wrote:
> On Tue, Dec 11, 2018 at 11:06:43AM +0800, Jason Wang wrote:
>> On 2018/12/11 at 9:34 AM, Michael S. Tsirkin wrote:
>>> On Mon, Dec 10, 2018 at 05:44:52PM +0800, Jason Wang wrote:
>>>> When we try to do rx busy polling in the tx path in commit 441abde4cd84
>>>> ("net: vhost: add rx busy polling in tx path"), we lock the rx vq mutex
>>>> after the tx vq mutex is held. This may lead to a deadlock, so we tried
>>>> to lock the vqs one by one in commit 78139c94dc8c ("net: vhost: lock the
>>>> vqs one by one"). With this commit, we avoid the deadlock under the
>>>> assumption that handle_rx() and handle_tx() run in the same process. But
>>>> that commit removed the protection for IOTLB updating, which requires
>>>> the mutex of each vq to be held.
>>>>
>>>> To solve this issue, the first step is to have the exact same lock
>>>> ordering for vhost_net. This is done through:
>>>>
>>>> - For handle_rx(), if busy polling is enabled, lock tx vq immediately.
>>>> - For handle_tx(), always lock rx vq before tx vq, and unlock it if
>>>>     busy polling is not enabled.
>>>> - Remove the tricky locking code in busy polling.
>>>>
>>>> With this, we have the exact same lock ordering for vhost_net, which
>>>> allows us to safely revert commit 78139c94dc8c ("net: vhost: lock the
>>>> vqs one by one") in the next patch.
>>>>
>>>> The patch adds two more atomic operations on the tx path during each
>>>> round of handle_tx(). A 1-byte TCP_RR benchmark does not notice such
>>>> overhead.
>>>>
>>>> Fixes: 78139c94dc8c ("net: vhost: lock the vqs one by one")
>>>> Cc: Tonghao Zhang <xiangxia.m.yue@gmail.com>
>>>> Signed-off-by: Jason Wang <jasowang@redhat.com>
>>>> ---
>>>>    drivers/vhost/net.c | 18 +++++++++++++++---
>>>>    1 file changed, 15 insertions(+), 3 deletions(-)
>>>>
>>>> diff --git a/drivers/vhost/net.c b/drivers/vhost/net.c
>>>> index ab11b2bee273..5f272ab4d5b4 100644
>>>> --- a/drivers/vhost/net.c
>>>> +++ b/drivers/vhost/net.c
>>>> @@ -513,7 +513,6 @@ static void vhost_net_busy_poll(struct vhost_net *net,
>>>>    	struct socket *sock;
>>>>    	struct vhost_virtqueue *vq = poll_rx ? tvq : rvq;
>>>> -	mutex_lock_nested(&vq->mutex, poll_rx ? VHOST_NET_VQ_TX: VHOST_NET_VQ_RX);
>>>>    	vhost_disable_notify(&net->dev, vq);
>>>>    	sock = rvq->private_data;
>>>> @@ -543,8 +542,6 @@ static void vhost_net_busy_poll(struct vhost_net *net,
>>>>    		vhost_net_busy_poll_try_queue(net, vq);
>>>>    	else if (!poll_rx) /* On tx here, sock has no rx data. */
>>>>    		vhost_enable_notify(&net->dev, rvq);
>>>> -
>>>> -	mutex_unlock(&vq->mutex);
>>>>    }
>>>>    static int vhost_net_tx_get_vq_desc(struct vhost_net *net,
>>>> @@ -913,10 +910,16 @@ static void handle_tx_zerocopy(struct vhost_net *net, struct socket *sock)
>>>>    static void handle_tx(struct vhost_net *net)
>>>>    {
>>>>    	struct vhost_net_virtqueue *nvq = &net->vqs[VHOST_NET_VQ_TX];
>>>> +	struct vhost_net_virtqueue *nvq_rx = &net->vqs[VHOST_NET_VQ_RX];
>>>>    	struct vhost_virtqueue *vq = &nvq->vq;
>>>> +	struct vhost_virtqueue *vq_rx = &nvq_rx->vq;
>>>>    	struct socket *sock;
>>>> +	mutex_lock_nested(&vq_rx->mutex, VHOST_NET_VQ_RX);
>>>>    	mutex_lock_nested(&vq->mutex, VHOST_NET_VQ_TX);
>>>> +	if (!vq->busyloop_timeout)
>>>> +		mutex_unlock(&vq_rx->mutex);
>>>> +
>>>>    	sock = vq->private_data;
>>>>    	if (!sock)
>>>>    		goto out;
>>>> @@ -933,6 +936,8 @@ static void handle_tx(struct vhost_net *net)
>>>>    		handle_tx_copy(net, sock);
>>>>    out:
>>>> +	if (vq->busyloop_timeout)
>>>> +		mutex_unlock(&vq_rx->mutex);
>>>>    	mutex_unlock(&vq->mutex);
>>>>    }
>>> So the rx mutex is taken on the tx path now. And the tx mutex is on the
>>> rx path ... This is just messed up. Why can't tx polling drop the rx
>>> lock before getting the tx lock and vice versa?
>>
>> Because we want to poll both the tx and rx virtqueues at the same time
>> (vhost_net_busy_poll()).
>>
>>      while (vhost_can_busy_poll(endtime)) {
>>          if (vhost_has_work(&net->dev)) {
>>              *busyloop_intr = true;
>>              break;
>>          }
>>
>>          if ((sock_has_rx_data(sock) &&
>>               !vhost_vq_avail_empty(&net->dev, rvq)) ||
>>              !vhost_vq_avail_empty(&net->dev, tvq))
>>              break;
>>
>>          cpu_relax();
>>
>>      }
>>
>>
>> And we disable kicks and notifications for better performance.
> Right, but it's all slow path - it happens when the queue is
> otherwise empty. So this is what I am saying: let's drop the locks
> we hold around this.


Is this really safe? It looks to me like it can race with SET_VRING_ADDR.
And the code does more:

- access the sock object

- access the device IOTLB

- enable and disable notification

None of the above is safe without the protection of the vq mutex.
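
For reference, a rough sketch of why the vq mutex matters here, based on
how vhost_vring_ioctl() handles VHOST_SET_VRING_ADDR (simplified; the
exact casts and validation are omitted):

    /* ioctl(VHOST_SET_VRING_ADDR) path, simplified: the ring
     * pointers are only ever updated under vq->mutex.
     */
    mutex_lock(&vq->mutex);
    vq->desc = (void __user *)(unsigned long)a.desc_user_addr;
    vq->avail = (void __user *)(unsigned long)a.avail_user_addr;
    vq->used = (void __user *)(unsigned long)a.used_user_addr;
    mutex_unlock(&vq->mutex);

    /* So a busy poller calling e.g. vhost_vq_avail_empty() without
     * holding vq->mutex can race with the update above and read
     * through a stale or half-updated vq->avail pointer.
     */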


>
>
>>> Or if we really wanted to force everything to be locked at
>>> all times, let's just use a single mutex.
>>>
>>>
>>>
>> We could, but it might require more changes, which I believe could be
>> done for -next.
>>
>>
>> Thanks
> I'd rather we kept the fine-grained locking. E.g. people are
> looking at splitting the tx and rx threads. But if that's not possible,
> let's fix it cleanly with a coarse-grained one. A mess here will
> just create more trouble later.
>

I believe we won't go back to a coarse-grained one. Looks like we can
solve this by using mutex_trylock() for the rx queue during TX, and by
not busy polling the rx queue if an IOTLB update is pending.
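
Roughly along these lines (just a sketch of the idea, not the actual V2
patch; the check for a pending IOTLB update is elided):

    static void vhost_net_busy_poll(struct vhost_net *net,
                                    struct vhost_virtqueue *rvq,
                                    struct vhost_virtqueue *tvq,
                                    bool *busyloop_intr,
                                    bool poll_rx)
    {
        struct vhost_virtqueue *vq = poll_rx ? tvq : rvq;

        /* Opportunistically take the other vq's mutex instead of
         * blocking on it. If it is contended (e.g. by a pending
         * IOTLB update), just skip busy polling for this round
         * rather than risking a tx/rx lock-order inversion.
         */
        if (!mutex_trylock(&vq->mutex))
            return;

        vhost_disable_notify(&net->dev, vq);

        /* ... the existing polling loop and notification re-enable
         * logic, unchanged from the code quoted above ...
         */

        mutex_unlock(&vq->mutex);
    }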

Let me post V2.

Thanks


Thread overview: 15+ messages
2018-12-10  9:44 [PATCH net 0/4] Fix various issue of vhost Jason Wang
2018-12-10  9:44 ` [PATCH net 1/4] vhost: make sure used idx is seen before log in vhost_add_used_n() Jason Wang
2018-12-10  9:44 ` [PATCH net 2/4] vhost_net: rework on the lock ordering for busy polling Jason Wang
2018-12-11  1:34   ` Michael S. Tsirkin
2018-12-11  3:06     ` Jason Wang
2018-12-11  4:04       ` Michael S. Tsirkin
2018-12-12  3:03         ` Jason Wang [this message]
2018-12-12  3:40           ` Michael S. Tsirkin
2018-12-10  9:44 ` [PATCH net 3/4] Revert "net: vhost: lock the vqs one by one" Jason Wang
2018-12-10  9:44 ` [PATCH net 4/4] vhost: log dirty page correctly Jason Wang
2018-12-10 15:14   ` kbuild test robot
2018-12-11  1:30     ` Michael S. Tsirkin
2018-12-19 17:29   ` kbuild test robot
2018-12-10 19:47 ` [PATCH net 0/4] Fix various issue of vhost David Miller
2018-12-11  3:01   ` Jason Wang
