From: Jason Wang
Date: Fri, 22 Aug 2014 15:40:15 +0800
Subject: Re: [Qemu-devel] [PATCH V2] net: Fix dealing with packets when runstate changes
To: zhanghailiang, qemu-devel@nongnu.org
Cc: peter.maydell@linaro.org, luonengjun@huawei.com, peter.huangpeng@huawei.com, stefanha@redhat.com, mst@redhat.com
Message-ID: <53F6F3DF.2020304@redhat.com>
In-Reply-To: <1408624778-5168-1-git-send-email-zhang.zhanghailiang@huawei.com>

On 08/21/2014 08:39 PM, zhanghailiang wrote:
> For all NICs (except virtio-net) emulated by qemu,
> such as e1000, rtl8139, pcnet and ne2k_pci,
> qemu can still receive packets when the VM is not running.
> If this happens in *migration's* last PAUSE VM stage,
> the new dirty RAM related to the packets will be missed,
> and this will lead to a serious network fault in the VM.
>
> To avoid this, we forbid receiving packets in generic net code when
> the VM is not running. Also, when the runstate changes back to running,
> we definitely need to flush queues to get packets flowing again.

Hi:

Can you describe what will happen if you don't flush the queues after the
vm is started? Btw, the notifier dependency and the impact on vhost should
be mentioned here as well.

>
> Here we implement this in the net layer:
> (1) Judge the vm runstate in qemu_can_send_packet
> (2) Add a member 'VMChangeStateEntry *vmstate' to struct NICState,
>     which will listen for VM runstate changes.
> (3) Register a handler function for VM state changes.
>     When the vm changes back to running, we flush all queues in the
>     callback function.
> (4) Remove the vm state check from virtio_net_can_receive
>
> Signed-off-by: zhanghailiang
> ---
> v2:
> - remove the superfluous check of nc->received_disabled
> ---
>  hw/net/virtio-net.c |  4 ----
>  include/net/net.h   |  2 ++
>  net/net.c           | 31 +++++++++++++++++++++++++++++++
>  3 files changed, 33 insertions(+), 4 deletions(-)
>
> diff --git a/hw/net/virtio-net.c b/hw/net/virtio-net.c
> index 268eff9..287d762 100644
> --- a/hw/net/virtio-net.c
> +++ b/hw/net/virtio-net.c
> @@ -839,10 +839,6 @@ static int virtio_net_can_receive(NetClientState *nc)
>      VirtIODevice *vdev = VIRTIO_DEVICE(n);
>      VirtIONetQueue *q = virtio_net_get_subqueue(nc);
>
> -    if (!vdev->vm_running) {
> -        return 0;
> -    }
> -
>      if (nc->queue_index >= n->curr_queues) {
>          return 0;
>      }
> diff --git a/include/net/net.h b/include/net/net.h
> index ed594f9..a294277 100644
> --- a/include/net/net.h
> +++ b/include/net/net.h
> @@ -8,6 +8,7 @@
>  #include "net/queue.h"
>  #include "migration/vmstate.h"
>  #include "qapi-types.h"
> +#include "sysemu/sysemu.h"
>
>  #define MAX_QUEUE_NUM 1024
>
> @@ -96,6 +97,7 @@ typedef struct NICState {
>      NICConf *conf;
>      void *opaque;
>      bool peer_deleted;
> +    VMChangeStateEntry *vmstate;
>  } NICState;
>
>  NetClientState *qemu_find_netdev(const char *id);
> diff --git a/net/net.c b/net/net.c
> index 6d930ea..113a37b 100644
> --- a/net/net.c
> +++ b/net/net.c
> @@ -242,6 +242,28 @@ NetClientState *qemu_new_net_client(NetClientInfo *info,
>      return nc;
>  }
>
> +static void nic_vmstate_change_handler(void *opaque,
> +                                       int running,
> +                                       RunState state)
> +{
> +    NICState *nic = opaque;
> +    NetClientState *nc;
> +    int i, queues;
> +
> +    if (!running) {
> +        return;
> +    }
> +
> +    queues = MAX(1, nic->conf->peers.queues);
> +    for (i = 0; i < queues; i++) {
> +        nc = &nic->ncs[i];
> +        if (nc->info->can_receive && !nc->info->can_receive(nc)) {
> +            continue;
> +        }

Why not just use qemu_can_send_packet() here? (A rough sketch follows at
the end of this mail.)

> +        qemu_flush_queued_packets(nc);
> +    }
> +}
> +
>  NICState *qemu_new_nic(NetClientInfo *info,
>                         NICConf *conf,
>                         const char *model,
> @@ -259,6 +281,8 @@ NICState *qemu_new_nic(NetClientInfo *info,
>      nic->ncs = (void *)nic + info->size;
>      nic->conf = conf;
>      nic->opaque = opaque;
> +    nic->vmstate = qemu_add_vm_change_state_handler(nic_vmstate_change_handler,
> +                                                    nic);
>
>      for (i = 0; i < queues; i++) {
>          qemu_net_client_setup(&nic->ncs[i], info, peers[i], model, name,
> @@ -379,6 +403,7 @@ void qemu_del_nic(NICState *nic)
>          qemu_free_net_client(nc);
>      }
>
> +    qemu_del_vm_change_state_handler(nic->vmstate);
>      g_free(nic);
>  }
>
> @@ -452,6 +477,12 @@ void qemu_set_vnet_hdr_len(NetClientState *nc, int len)
>
>  int qemu_can_send_packet(NetClientState *sender)
>  {
> +    int vmstat = runstate_is_running();
> +
> +    if (!vmstat) {
> +        return 0;
> +    }

I think you want "vmstart" here? (A small alternative suggestion at the end
of this mail.)

> +
>      if (!sender->peer) {
>          return 1;
>      }
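
To make the qemu_can_send_packet() comment above concrete, here is a rough,
untested sketch of how the handler could look if it reused the generic
helper. It assumes the runstate check stays inside qemu_can_send_packet()
as in this patch, so the handler itself only has to flush:

static void nic_vmstate_change_handler(void *opaque,
                                       int running,
                                       RunState state)
{
    NICState *nic = opaque;
    int i, queues;

    if (!running) {
        return;
    }

    queues = MAX(1, nic->conf->peers.queues);
    for (i = 0; i < queues; i++) {
        NetClientState *nc = &nic->ncs[i];

        /* let the generic helper decide instead of open-coding can_receive */
        if (!qemu_can_send_packet(nc)) {
            continue;
        }
        qemu_flush_queued_packets(nc);
    }
}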
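
And regarding the "vmstat" local in qemu_can_send_packet(): whatever name it
ends up with, it could probably just be dropped so the new check reads
directly as (again only a sketch, the rest of the function unchanged):

    /* no packets can be delivered while the vm is stopped */
    if (!runstate_is_running()) {
        return 0;
    }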