From: Jason Wang <jasowang@redhat.com>
To: Greg Kurz <groug@kaod.org>, qemu-devel@nongnu.org
Cc: R Nageswara Sastry <nasastry@in.ibm.com>,
	"Michael S. Tsirkin" <mst@redhat.com>
Subject: Re: [Qemu-devel] [PATCH v2] virtio_net: flush uncompleted TX on reset
Date: Tue, 20 Mar 2018 10:34:09 +0800	[thread overview]
Message-ID: <181e938f-76b7-84b2-96f1-1f8832236e23@redhat.com> (raw)
In-Reply-To: <152120204902.1103.7114773412109402452.stgit@bahia.lan>



On 2018-03-16 20:07, Greg Kurz wrote:
> If the backend could not transmit a packet right away for some reason,
> the packet is queued for asynchronous sending. The corresponding vq
> element is tracked in the async_tx.elem field of the VirtIONetQueue,
> for later freeing when the transmission is complete.
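For context, the per-queue state being discussed looks roughly like this
(trimmed and paraphrased from include/hw/virtio/virtio-net.h, not a verbatim
copy):

    typedef struct VirtIONetQueue {
        VirtQueue *rx_vq;
        VirtQueue *tx_vq;
        QEMUTimer *tx_timer;
        QEMUBH *tx_bh;
        uint32_t tx_waiting;
        struct {
            VirtQueueElement *elem;   /* element of an in-flight async TX */
        } async_tx;
        struct VirtIONet *n;
    } VirtIONetQueue;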
>
> If a reset happens before completion, virtio_net_tx_complete() will push
> async_tx.elem back to the guest anyway, and the inuse counter of the vq
> ends up at -1. The next call to virtqueue_pop() is then likely to fail
> with "Virtqueue size exceeded".
>
> This can be reproduced easily by starting a guest with a hubport backend
> that is not connected to a functional network, e.g.,
>
>   -device virtio-net-pci,netdev=hub0 -netdev hubport,id=hub0,hubid=0
>
> and no other -netdev hubport,hubid=0 on the command line.
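For illustration, a full invocation along those lines (the machine, memory and
disk below are placeholders, not taken from the bug report) would be:

    qemu-system-x86_64 -m 1G -drive file=guest.img,format=qcow2 \
        -device virtio-net-pci,netdev=hub0 \
        -netdev hubport,id=hub0,hubid=0

With nothing else attached to hub 0, TX packets from the guest cannot be
delivered right away and end up queued, which is the situation described
above.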
>
> The appropriate fix is to ensure that such an asynchronous transmission
> cannot survive a device reset. So for all queues, we first try to send
> the packet again, and then purge it if the backend still cannot
> deliver it.
>
> Reported-by: R. Nageswara Sastry <nasastry@in.ibm.com>
> Buglink: https://github.com/open-power-host-os/qemu/issues/37
> Signed-off-by: Greg Kurz <groug@kaod.org>
> Tested-by: R. Nageswara Sastry <nasastry@in.ibm.com>
> ---
> v2: - make qemu_flush_or_purge_queued_packets() extern and use it
>      - reworded reproducer paragraph in changelog
> ---
>   hw/net/virtio-net.c |    8 ++++++++
>   include/net/net.h   |    1 +
>   net/net.c           |    1 -
>   3 files changed, 9 insertions(+), 1 deletion(-)
>
> diff --git a/hw/net/virtio-net.c b/hw/net/virtio-net.c
> index 188744e17d57..e5ed35489380 100644
> --- a/hw/net/virtio-net.c
> +++ b/hw/net/virtio-net.c
> @@ -422,6 +422,7 @@ static RxFilterInfo *virtio_net_query_rxfilter(NetClientState *nc)
>   static void virtio_net_reset(VirtIODevice *vdev)
>   {
>       VirtIONet *n = VIRTIO_NET(vdev);
> +    int i;
>   
>       /* Reset back to compatibility mode */
>       n->promisc = 1;
> @@ -445,6 +446,13 @@ static void virtio_net_reset(VirtIODevice *vdev)
>       memcpy(&n->mac[0], &n->nic->conf->macaddr, sizeof(n->mac));
>       qemu_format_nic_info_str(qemu_get_queue(n->nic), n->mac);
>       memset(n->vlans, 0, MAX_VLAN >> 3);
> +
> +    /* Flush any async TX */
> +    for (i = 0;  i < n->max_queues; i++) {
> +        NetClientState *nc = qemu_get_subqueue(n->nic, i);
> +        qemu_flush_or_purge_queued_packets(nc->peer, true);
> +        assert(!virtio_net_get_subqueue(nc)->async_tx.elem);
> +    }
>   }
>   
>   static void peer_test_vnet_hdr(VirtIONet *n)
> diff --git a/include/net/net.h b/include/net/net.h
> index a943e968a3dc..1f7341e4592b 100644
> --- a/include/net/net.h
> +++ b/include/net/net.h
> @@ -153,6 +153,7 @@ ssize_t qemu_send_packet_async(NetClientState *nc, const uint8_t *buf,
>                                  int size, NetPacketSent *sent_cb);
>   void qemu_purge_queued_packets(NetClientState *nc);
>   void qemu_flush_queued_packets(NetClientState *nc);
> +void qemu_flush_or_purge_queued_packets(NetClientState *nc, bool purge);
>   void qemu_format_nic_info_str(NetClientState *nc, uint8_t macaddr[6]);
>   bool qemu_has_ufo(NetClientState *nc);
>   bool qemu_has_vnet_hdr(NetClientState *nc);
> diff --git a/net/net.c b/net/net.c
> index 5222e450698c..29f83983e55d 100644
> --- a/net/net.c
> +++ b/net/net.c
> @@ -595,7 +595,6 @@ void qemu_purge_queued_packets(NetClientState *nc)
>       qemu_net_queue_purge(nc->peer->incoming_queue, nc);
>   }
>   
> -static
>   void qemu_flush_or_purge_queued_packets(NetClientState *nc, bool purge)
>   {
>       nc->receive_disabled = 0;
>
>
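For anyone reading this without the tree at hand, the helper the reset path
now calls behaves roughly as below. This is a simplified sketch of the
flush-then-purge logic, not the verbatim net/net.c body (in particular the
hub-specific flush is omitted):

    /* Sketch only: simplified, hub handling omitted. */
    void qemu_flush_or_purge_queued_packets(NetClientState *nc, bool purge)
    {
        nc->receive_disabled = 0;        /* let the client receive again */

        if (qemu_net_queue_flush(nc->incoming_queue)) {
            /* Queue fully drained, wake the main loop so backends repoll. */
            qemu_notify_event();
        } else if (purge) {
            /*
             * Backend still cannot deliver: drop the remaining packets so
             * nothing outlives the device reset.
             */
            qemu_net_queue_purge(nc->incoming_queue, nc->peer);
        }
    }

If I read net/queue.c correctly, purging still runs each packet's completion
callback with a length of 0, which is what lets virtio_net_tx_complete() drop
async_tx.elem and keeps the assert in the reset hunk from firing.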

Applied and queued for -stable.

Thanks
