From mboxrd@z Thu Jan 1 00:00:00 1970
Received: from eggs.gnu.org ([2001:4830:134:3::10]:55742)
	by lists.gnu.org with esmtp (Exim 4.71) (envelope-from )
	id 1XHoJg-0001jX-2X for qemu-devel@nongnu.org;
	Thu, 14 Aug 2014 02:15:30 -0400
Received: from Debian-exim by eggs.gnu.org with spam-scanned (Exim 4.71)
	(envelope-from ) id 1XHoJU-0006rq-4C for qemu-devel@nongnu.org;
	Thu, 14 Aug 2014 02:15:16 -0400
Received: from szxga01-in.huawei.com ([119.145.14.64]:11592)
	by eggs.gnu.org with esmtp (Exim 4.71) (envelope-from )
	id 1XHoJT-0006ly-9U for qemu-devel@nongnu.org;
	Thu, 14 Aug 2014 02:15:04 -0400
From: zhanghailiang <zhang.zhanghailiang@huawei.com>
Date: Thu, 14 Aug 2014 14:13:57 +0800
Message-ID: <1407996838-10212-3-git-send-email-zhang.zhanghailiang@huawei.com>
In-Reply-To: <1407996838-10212-1-git-send-email-zhang.zhanghailiang@huawei.com>
References: <1407996838-10212-1-git-send-email-zhang.zhanghailiang@huawei.com>
MIME-Version: 1.0
Content-Type: text/plain
Subject: [Qemu-devel] [PATCH 2/3] net: Flush queues when runstate changes back to running
To: qemu-devel@nongnu.org
Cc: peter.maydell@linaro.org, stefanha@redhat.com, mst@redhat.com,
	luonengjun@huawei.com, peter.huangpeng@huawei.com, aliguori@amazon.com,
	akong@redhat.com, zhanghailiang <zhang.zhanghailiang@huawei.com>

When the runstate changes back to running, we need to flush the queues
to get packets flowing again.

Here we implement this in the net layer:
(1) Add a member 'VMChangeStateEntry *vmstate' to struct NICState,
    which will listen for VM runstate changes.
(2) Register a handler function for VM state changes. When the VM
    changes back to running, we flush all queues in the callback
    function.

Signed-off-by: zhanghailiang <zhang.zhanghailiang@huawei.com>
---
 include/net/net.h |  1 +
 net/net.c         | 26 ++++++++++++++++++++++++++
 2 files changed, 27 insertions(+)

diff --git a/include/net/net.h b/include/net/net.h
index 312f728..a294277 100644
--- a/include/net/net.h
+++ b/include/net/net.h
@@ -97,6 +97,7 @@ typedef struct NICState {
     NICConf *conf;
     void *opaque;
     bool peer_deleted;
+    VMChangeStateEntry *vmstate;
 } NICState;
 
 NetClientState *qemu_find_netdev(const char *id);
diff --git a/net/net.c b/net/net.c
index 5bb2821..506e58f 100644
--- a/net/net.c
+++ b/net/net.c
@@ -242,6 +242,29 @@ NetClientState *qemu_new_net_client(NetClientInfo *info,
     return nc;
 }
 
+static void nic_vmstate_change_handler(void *opaque,
+                                       int running,
+                                       RunState state)
+{
+    NICState *nic = opaque;
+    NetClientState *nc;
+    int i, queues;
+
+    if (!running) {
+        return;
+    }
+
+    queues = MAX(1, nic->conf->peers.queues);
+    for (i = 0; i < queues; i++) {
+        nc = &nic->ncs[i];
+        if (nc->receive_disabled
+            || (nc->info->can_receive && !nc->info->can_receive(nc))) {
+            continue;
+        }
+        qemu_flush_queued_packets(nc);
+    }
+}
+
 NICState *qemu_new_nic(NetClientInfo *info,
                        NICConf *conf,
                        const char *model,
@@ -259,6 +282,8 @@ NICState *qemu_new_nic(NetClientInfo *info,
     nic->ncs = (void *)nic + info->size;
     nic->conf = conf;
     nic->opaque = opaque;
+    nic->vmstate = qemu_add_vm_change_state_handler(nic_vmstate_change_handler,
+                                                    nic);
 
     for (i = 0; i < queues; i++) {
         qemu_net_client_setup(&nic->ncs[i], info, peers[i], model, name,
@@ -379,6 +404,7 @@ void qemu_del_nic(NICState *nic)
         qemu_free_net_client(nc);
     }
 
+    qemu_del_vm_change_state_handler(nic->vmstate);
     g_free(nic);
 }
 
-- 
1.7.12.4
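
P.S. for reviewers less familiar with the VM change state API this patch
uses: below is a minimal, self-contained C sketch of the
register/notify/unregister pattern involved. It is an illustration only,
not QEMU's implementation. The callback shape and the add/del entry
points mirror qemu_add_vm_change_state_handler() /
qemu_del_vm_change_state_handler() as they appear in the diff above;
the linked-list bookkeeping, the RunState values, and nic_handler are
simplified assumptions made up for the sketch.

#include <stdio.h>
#include <stdlib.h>

typedef enum { RUN_STATE_PAUSED, RUN_STATE_RUNNING } RunState;

/* Same callback shape the patch's nic_vmstate_change_handler() has. */
typedef void VMChangeStateHandler(void *opaque, int running, RunState state);

typedef struct VMChangeStateEntry {
    VMChangeStateHandler *cb;
    void *opaque;
    struct VMChangeStateEntry *next;
} VMChangeStateEntry;

static VMChangeStateEntry *handlers;

/* Simplified stand-in for qemu_add_vm_change_state_handler():
 * remember the callback and its opaque pointer, return the entry. */
static VMChangeStateEntry *add_vm_change_state_handler(VMChangeStateHandler *cb,
                                                       void *opaque)
{
    VMChangeStateEntry *e = malloc(sizeof(*e));
    e->cb = cb;
    e->opaque = opaque;
    e->next = handlers;
    handlers = e;
    return e;
}

/* Simplified stand-in for qemu_del_vm_change_state_handler():
 * unlink and free the entry so it is never called again. */
static void del_vm_change_state_handler(VMChangeStateEntry *entry)
{
    VMChangeStateEntry **p;

    for (p = &handlers; *p; p = &(*p)->next) {
        if (*p == entry) {
            *p = entry->next;
            free(entry);
            return;
        }
    }
}

/* The core would call this on every runstate transition. */
static void vm_state_notify(int running, RunState state)
{
    VMChangeStateEntry *e;

    for (e = handlers; e; e = e->next) {
        e->cb(e->opaque, running, state);
    }
}

/* Toy handler in the same shape as nic_vmstate_change_handler(). */
static void nic_handler(void *opaque, int running, RunState state)
{
    if (!running) {
        return;                 /* only react when the VM resumes */
    }
    printf("%s: flush queued packets\n", (const char *)opaque);
}

int main(void)
{
    VMChangeStateEntry *e;

    e = add_vm_change_state_handler(nic_handler, "nic0");
    vm_state_notify(0, RUN_STATE_PAUSED);   /* handler ignores this */
    vm_state_notify(1, RUN_STATE_RUNNING);  /* handler "flushes" */
    del_vm_change_state_handler(e);         /* as qemu_del_nic() does */
    return 0;
}

Registering in qemu_new_nic() and unregistering in qemu_del_nic(), as the
patch does, ties the handler's lifetime to the NIC, so the callback can
never run against a freed NICState.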