From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Tue, 7 Jul 2015 12:19:27 +0300
From: "Michael S. Tsirkin"
Message-ID: <20150707121521-mutt-send-email-mst@redhat.com>
References: <1436232067-29144-1-git-send-email-famz@redhat.com>
 <20150707111103-mutt-send-email-mst@redhat.com>
 <20150707090909.GB28682@ad.nay.redhat.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20150707090909.GB28682@ad.nay.redhat.com>
Subject: Re: [Qemu-devel] [PATCH v2] net: Flush queued packets when guest resumes
To: Fam Zheng
Cc: jcmvbkbc@gmail.com, Jason Wang, qemu-devel@nongnu.org, Stefan Hajnoczi

On Tue, Jul 07, 2015 at 05:09:09PM +0800, Fam Zheng wrote:
> On Tue, 07/07 11:13, Michael S. Tsirkin wrote:
> > On Tue, Jul 07, 2015 at 09:21:07AM +0800, Fam Zheng wrote:
> > > Since commit 6e99c63 "net/socket: Drop net_socket_can_send" and friends,
> > > net queues need to be explicitly flushed after qemu_can_send_packet()
> > > returns false, because the netdev side will disable the polling of fd.
> > >
> > > This fixes the case of "cont" after "stop" (or migration).
> > >
> > > Signed-off-by: Fam Zheng
> >
> > Note virtio has its own handler which must be used to
> > flush packets - this one might run too early or too late.
>
> Which handler do you mean? I don't think virtio-net handles resume now. (If it
> does, we probably should drop it together with this change, since it's needed
> by all NICs.)
>
> Fam

virtio_vmstate_change

It's all far from trivial. I suspect this whack-a-mole approach of
spreading purges here and there will only create more bugs.

Why would we ever need to process network packets when the VM is not
running? I don't see any point to it.

How about we simply stop the job that processes network packets on vm
stop and restart it on vm start?

> > >
> > > ---
> > >
> > > v2: Unify with VM stop handler. (Stefan)
> > > ---
> > >  net/net.c | 19 ++++++++++++-------
> > >  1 file changed, 12 insertions(+), 7 deletions(-)
> > >
> > > diff --git a/net/net.c b/net/net.c
> > > index 6ff7fec..28a5597 100644
> > > --- a/net/net.c
> > > +++ b/net/net.c
> > > @@ -1257,14 +1257,19 @@ void qmp_set_link(const char *name, bool up, Error **errp)
> > >  static void net_vm_change_state_handler(void *opaque, int running,
> > >                                          RunState state)
> > >  {
> > > -    /* Complete all queued packets, to guarantee we don't modify
> > > -     * state later when VM is not running.
> > > -     */
> > > -    if (!running) {
> > > -        NetClientState *nc;
> > > -        NetClientState *tmp;
> > > +    NetClientState *nc;
> > > +    NetClientState *tmp;
> > >
> > > -        QTAILQ_FOREACH_SAFE(nc, &net_clients, next, tmp) {
> > > +    QTAILQ_FOREACH_SAFE(nc, &net_clients, next, tmp) {
> > > +        if (running) {
> > > +            /* Flush queued packets and wake up backends. */
> > > +            if (nc->peer && qemu_can_send_packet(nc)) {
> > > +                qemu_flush_queued_packets(nc->peer);
> > > +            }
> > > +        } else {
> > > +            /* Complete all queued packets, to guarantee we don't modify
> > > +             * state later when VM is not running.
> > > +             */
> > >              qemu_flush_or_purge_queued_packets(nc, true);
> > >          }
> > >      }
> > > --
> > > 2.4.3
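
For reference, both the patch above and the alternative floated in this mail
hinge on the same hook: the VM change state callback registered with
qemu_add_vm_change_state_handler(). What follows is only an illustrative
sketch of the "stop the backend job on vm stop, restart it on vm start" idea,
not code from this thread or from QEMU; net_backend_pause() and
net_backend_resume() are hypothetical helpers assumed to disable and re-enable
a backend's fd polling.

/*
 * Illustrative sketch only (hypothetical helpers, not actual QEMU code).
 * Instead of flushing or purging queues at every state transition, stop
 * the backends' packet processing while the VM is stopped and restart it
 * on resume, draining whatever queued up in the meantime.
 *
 * Assumed to live in net/net.c, next to the static net_clients list.
 */
#include "net/net.h"         /* NetClientState, qemu_flush_queued_packets() */
#include "sysemu/sysemu.h"   /* qemu_add_vm_change_state_handler(), RunState */

static void net_vm_state_sketch(void *opaque, int running, RunState state)
{
    NetClientState *nc;
    NetClientState *tmp;

    QTAILQ_FOREACH_SAFE(nc, &net_clients, next, tmp) {
        if (running) {
            /* Restart backend processing, then drain anything that was
             * queued before the VM was stopped. */
            net_backend_resume(nc);                    /* hypothetical */
            if (nc->peer && qemu_can_send_packet(nc)) {
                qemu_flush_queued_packets(nc->peer);
            }
        } else {
            /* Stop the backend so no packets are processed (or queued)
             * while the VM is not running. */
            net_backend_pause(nc);                     /* hypothetical */
        }
    }
}

/* Registered once at startup, e.g. from net_init_clients():
 *     qemu_add_vm_change_state_handler(net_vm_state_sketch, NULL);
 */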