Date: Tue, 14 Jul 2015 18:17:37 +0800
From: Fam Zheng <famz@redhat.com>
To: "Michael S. Tsirkin" <mst@redhat.com>
Cc: jasowang@redhat.com, qemu-devel@nongnu.org, stefanha@redhat.com
Subject: Re: [Qemu-devel] [PATCH for-2.4] virtio-net: Flush incoming queues when DRIVER_OK is being set
Message-ID: <20150714101737.GF27873@ad.nay.redhat.com>
In-Reply-To: <20150714130902-mutt-send-email-mst@redhat.com>
References: <1436866864-19925-1-git-send-email-famz@redhat.com>
 <20150714130902-mutt-send-email-mst@redhat.com>

On Tue, 07/14 13:09, Michael S. Tsirkin wrote:
> On Tue, Jul 14, 2015 at 05:41:04PM +0800, Fam Zheng wrote:
> > This patch fixes a network hang after "stop" then "cont" while network
> > packets keep arriving.
> > 
> > Tested both manually (tap, host pinging guest) and with Jason's qtest
> > series (plus his "[PATCH 2.4] socket: pass correct size in
> > net_socket_send()" fix).
> > 
> > As virtio_net_set_status is called both when the guest driver sets the
> > status byte and when the vm state changes, it is a good opportunity to
> > flush queued packets.
> > 
> > This is necessary because during vm stop the backend (e.g. tap) stops
> > rx processing once .can_receive returns false, until the queue is
> > explicitly flushed or purged.
> > 
> > The other interesting condition in .can_receive, virtio_queue_ready(),
> > is handled by virtio_net_handle_rx() when the guest kicks; the third
> > condition is an invalid queue index, which doesn't need flushing.
> > 
> > Signed-off-by: Fam Zheng <famz@redhat.com>
> > ---
> >  hw/net/virtio-net.c | 5 +++++
> >  1 file changed, 5 insertions(+)
> > 
> > diff --git a/hw/net/virtio-net.c b/hw/net/virtio-net.c
> > index d728233..7c178c6 100644
> > --- a/hw/net/virtio-net.c
> > +++ b/hw/net/virtio-net.c
> > @@ -162,8 +162,13 @@ static void virtio_net_set_status(struct VirtIODevice *vdev, uint8_t status)
> >      virtio_net_vhost_status(n, status);
> >  
> >      for (i = 0; i < n->max_queues; i++) {
> > +        NetClientState *ncs = qemu_get_subqueue(n->nic, i);
> >          q = &n->vqs[i];
> >  
> > +        if (status & VIRTIO_CONFIG_S_DRIVER_OK) {
> > +            qemu_flush_queued_packets(ncs);
> > +        }
> > +
> >          if ((!n->multiqueue && i != 0) || i >= n->curr_queues) {
> >              queue_status = 0;
> >          } else {
> 
> I think this should be limited to
> virtio_net_started(n, queue_status) && !n->vhost_started

Yes, that looks better.
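
Something like the following, then? Just a rough sketch to check I read you
right (not tested): the flush moves below the queue_status assignment so each
queue is checked against its own status, and it is skipped when vhost owns the
data path. The "queue_status = status" else branch is the existing code that
the hunk context above cuts off.

    for (i = 0; i < n->max_queues; i++) {
        NetClientState *ncs = qemu_get_subqueue(n->nic, i);
        q = &n->vqs[i];

        if ((!n->multiqueue && i != 0) || i >= n->curr_queues) {
            queue_status = 0;
        } else {
            queue_status = status;
        }

        /* Only flush when this queue is started and QEMU (not vhost)
         * handles the rx path. */
        if (virtio_net_started(n, queue_status) && !n->vhost_started) {
            qemu_flush_queued_packets(ncs);
        }
        /* ... rest of the loop unchanged ... */
    }

Fam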