From: Jason Wang
Date: Wed, 08 Jul 2015 17:40:54 +0800
Subject: Re: [Qemu-devel] [PATCH v2] net: Flush queued packets when guest resumes
To: Fam Zheng
Cc: jcmvbkbc@gmail.com, qemu-devel@nongnu.org, Stefan Hajnoczi, mst@redhat.com
Message-ID: <559CF026.4090703@redhat.com>
In-Reply-To: <20150707090358.GA28682@ad.nay.redhat.com>
References: <1436232067-29144-1-git-send-email-famz@redhat.com> <559B834B.3010807@redhat.com> <20150707090358.GA28682@ad.nay.redhat.com>

On 07/07/2015 05:03 PM, Fam Zheng wrote:
> On Tue, 07/07 15:44, Jason Wang wrote:
>>
>> On 07/07/2015 09:21 AM, Fam Zheng wrote:
>>> Since commit 6e99c63 "net/socket: Drop net_socket_can_send" and friends,
>>> net queues need to be explicitly flushed after qemu_can_send_packet()
>>> returns false, because the netdev side will disable the polling of fd.
>>>
>>> This fixes the case of "cont" after "stop" (or migration).
>>>
>>> Signed-off-by: Fam Zheng
>>>
>>> ---
>>>
>>> v2: Unify with VM stop handler. (Stefan)
>>> ---
>>>  net/net.c | 19 ++++++++++++-------
>>>  1 file changed, 12 insertions(+), 7 deletions(-)
>>>
>>> diff --git a/net/net.c b/net/net.c
>>> index 6ff7fec..28a5597 100644
>>> --- a/net/net.c
>>> +++ b/net/net.c
>>> @@ -1257,14 +1257,19 @@ void qmp_set_link(const char *name, bool up, Error **errp)
>>>  static void net_vm_change_state_handler(void *opaque, int running,
>>>                                          RunState state)
>>>  {
>>> -    /* Complete all queued packets, to guarantee we don't modify
>>> -     * state later when VM is not running.
>>> -     */
>>> -    if (!running) {
>>> -        NetClientState *nc;
>>> -        NetClientState *tmp;
>>> +    NetClientState *nc;
>>> +    NetClientState *tmp;
>>>
>>> -        QTAILQ_FOREACH_SAFE(nc, &net_clients, next, tmp) {
>>> +    QTAILQ_FOREACH_SAFE(nc, &net_clients, next, tmp) {
>>> +        if (running) {
>>> +            /* Flush queued packets and wake up backends. */
>>> +            if (nc->peer && qemu_can_send_packet(nc)) {
>>> +                qemu_flush_queued_packets(nc->peer);
>>> +            }
>>> +        } else {
>>> +            /* Complete all queued packets, to guarantee we don't modify
>>> +             * state later when VM is not running.
>>> +             */
>>>              qemu_flush_or_purge_queued_packets(nc, true);
>>>          }
>>
>> Looks like qemu_can_send_packet() checks both nc->peer and runstate. So
>> probably, we can simplify this to:
>>
>> if (qemu_can_send_packet(nc))
>>     qemu_flush_queued_packets(nc->peer);
>> else
>>     qemu_flush_or_purge_queued_packets(nc, true);
>>
>>>      }
>
> qemu_can_send_packet returns 1 if !nc->peer, so this doesn't work.
>
> Fam

Yes, I was wrong.

Btw, instead of depending on the vm change state handler (which seems racy
with other state change handlers), can we do this in places like vm_start()
and vm_stop(), the way we drain and flush the block queue during vm stop?
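
A rough sketch of what that could look like -- net_flush_all_queued_packets()
is a made-up name here, it just hoists the flush loop from the "running"
branch of the v2 patch into a helper that vm_start() could call directly:

    /* Hypothetical helper (name invented for illustration): flush every
     * peer's queue once the VM is running again.  Same loop as the
     * "running" branch of the v2 patch, just callable from vm_start()
     * instead of the vm change state handler.
     */
    static void net_flush_all_queued_packets(void)
    {
        NetClientState *nc;
        NetClientState *tmp;

        QTAILQ_FOREACH_SAFE(nc, &net_clients, next, tmp) {
            if (nc->peer && qemu_can_send_packet(nc)) {
                qemu_flush_queued_packets(nc->peer);
            }
        }
    }

vm_stop() could keep using qemu_flush_or_purge_queued_packets(nc, true) for
the purge side, as the handler does today.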
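
For completeness, the check Fam is referring to boils down to roughly this
(a simplified sketch of the qemu_can_send_packet() logic; the peer's
receive_disabled / can_receive checks are left out):

    int qemu_can_send_packet(NetClientState *sender)
    {
        if (!runstate_is_running()) {
            return 0;            /* never send while the VM is stopped */
        }
        if (!sender->peer) {
            return 1;            /* no peer, so nothing can refuse the packet */
        }
        /* ... peer receive_disabled / can_receive checks elided ... */
        return 1;
    }

So with a NULL peer it returns 1, and the simplified version above would end
up calling qemu_flush_queued_packets(NULL).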