Date: Tue, 7 Jul 2015 17:03:58 +0800
From: Fam Zheng
Message-ID: <20150707090358.GA28682@ad.nay.redhat.com>
References: <1436232067-29144-1-git-send-email-famz@redhat.com> <559B834B.3010807@redhat.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <559B834B.3010807@redhat.com>
Subject: Re: [Qemu-devel] [PATCH v2] net: Flush queued packets when guest resumes
To: Jason Wang
Cc: jcmvbkbc@gmail.com, qemu-devel@nongnu.org, Stefan Hajnoczi, mst@redhat.com

On Tue, 07/07 15:44, Jason Wang wrote:
>
>
> On 07/07/2015 09:21 AM, Fam Zheng wrote:
> > Since commit 6e99c63 "net/socket: Drop net_socket_can_send" and friends,
> > net queues need to be explicitly flushed after qemu_can_send_packet()
> > returns false, because the netdev side will disable the polling of fd.
> >
> > This fixes the case of "cont" after "stop" (or migration).
> >
> > Signed-off-by: Fam Zheng
> >
> > ---
> >
> > v2: Unify with VM stop handler. (Stefan)
> > ---
> >  net/net.c | 19 ++++++++++++-------
> >  1 file changed, 12 insertions(+), 7 deletions(-)
> >
> > diff --git a/net/net.c b/net/net.c
> > index 6ff7fec..28a5597 100644
> > --- a/net/net.c
> > +++ b/net/net.c
> > @@ -1257,14 +1257,19 @@ void qmp_set_link(const char *name, bool up, Error **errp)
> >  static void net_vm_change_state_handler(void *opaque, int running,
> >                                          RunState state)
> >  {
> > -    /* Complete all queued packets, to guarantee we don't modify
> > -     * state later when VM is not running.
> > -     */
> > -    if (!running) {
> > -        NetClientState *nc;
> > -        NetClientState *tmp;
> > +    NetClientState *nc;
> > +    NetClientState *tmp;
> >
> > -        QTAILQ_FOREACH_SAFE(nc, &net_clients, next, tmp) {
> > +    QTAILQ_FOREACH_SAFE(nc, &net_clients, next, tmp) {
> > +        if (running) {
> > +            /* Flush queued packets and wake up backends. */
> > +            if (nc->peer && qemu_can_send_packet(nc)) {
> > +                qemu_flush_queued_packets(nc->peer);
> > +            }
> > +        } else {
> > +            /* Complete all queued packets, to guarantee we don't modify
> > +             * state later when VM is not running.
> > +             */
> >              qemu_flush_or_purge_queued_packets(nc, true);
> >          }
>
> Looks like qemu_can_send_packet() checks both nc->peer and runstate. So
> probably, we can simplify this to:
>
>     if (qemu_can_send_packet(nc))
>         qemu_flush_queued_packets(nc->peer);
>     else
>         qemu_flush_or_purge_queued_packets(nc, true);
>
> > }
>

qemu_can_send_packet returns 1 if !nc->peer, so this doesn't work.

Fam