From: Jason Wang <jasowang@redhat.com>
To: zhanghailiang <zhang.zhanghailiang@huawei.com>
Cc: peter.maydell@linaro.org, mst@redhat.com, luonengjun@huawei.com,
peter.huangpeng@huawei.com, qemu-devel@nongnu.org,
stefanha@redhat.com
Subject: Re: [Qemu-devel] [PATCH] net: Forbid dealing with packets when VM is not running
Date: Mon, 18 Aug 2014 17:14:10 +0800 [thread overview]
Message-ID: <53F1C3E2.8080708@redhat.com> (raw)
In-Reply-To: <53F1BA2A.8020008@huawei.com>
On 08/18/2014 04:32 PM, zhanghailiang wrote:
> On 2014/8/18 14:55, Jason Wang wrote:
>> On 08/18/2014 12:46 PM, zhanghailiang wrote:
>>> For all NICs (except virtio-net) emulated by qemu,
>>> such as e1000, rtl8139, pcnet and ne2k_pci,
>>> qemu can still receive packets when the VM is not running.
>>> If this happens in the last PAUSE-VM stage of *migration*,
>>> the new dirty RAM related to the packets will be missed,
>>> and this will lead to a serious network fault in the VM.
>>>
>>> To avoid this, we forbid receiving packets in generic net code when
>>> VM is not running. Also, when the runstate changes back to running,
>>> we definitely need to flush queues to get packets flowing again.
>>
>> You probably need a better title since it does not cover this change.
>>>
>
> Hmm, you are right, I will modify it, thanks. :)
>
>>> Here we implement this in the net layer:
>>> (1) Check the VM runstate in qemu_can_send_packet
>>> (2) Add a member 'VMChangeStateEntry *vmstate' to struct NICState,
>>> which will listen for VM runstate changes.
>>> (3) Register a handler function for VM state changes.
>>> When the VM changes back to running, we flush all queues in the
>>> callback function.
>>> (4) Remove the VM state check in virtio_net_can_receive
>>>
>>> Signed-off-by: zhanghailiang <zhang.zhanghailiang@huawei.com>
>>> ---
>>>  hw/net/virtio-net.c |  4 ----
>>>  include/net/net.h   |  2 ++
>>>  net/net.c           | 32 ++++++++++++++++++++++++++++++++
>>> 3 files changed, 34 insertions(+), 4 deletions(-)
>>>
>>> diff --git a/hw/net/virtio-net.c b/hw/net/virtio-net.c
>>> index 268eff9..287d762 100644
>>> --- a/hw/net/virtio-net.c
>>> +++ b/hw/net/virtio-net.c
>>> @@ -839,10 +839,6 @@ static int virtio_net_can_receive(NetClientState *nc)
>>>      VirtIODevice *vdev = VIRTIO_DEVICE(n);
>>>      VirtIONetQueue *q = virtio_net_get_subqueue(nc);
>>>
>>> -    if (!vdev->vm_running) {
>>> -        return 0;
>>> -    }
>>> -
>>>      if (nc->queue_index >= n->curr_queues) {
>>>          return 0;
>>>      }
>>> diff --git a/include/net/net.h b/include/net/net.h
>>> index ed594f9..a294277 100644
>>> --- a/include/net/net.h
>>> +++ b/include/net/net.h
>>> @@ -8,6 +8,7 @@
>>>  #include "net/queue.h"
>>>  #include "migration/vmstate.h"
>>>  #include "qapi-types.h"
>>> +#include "sysemu/sysemu.h"
>>>
>>> #define MAX_QUEUE_NUM 1024
>>>
>>> @@ -96,6 +97,7 @@ typedef struct NICState {
>>>      NICConf *conf;
>>>      void *opaque;
>>>      bool peer_deleted;
>>> +    VMChangeStateEntry *vmstate;
>>>  } NICState;
>>>
>>>  NetClientState *qemu_find_netdev(const char *id);
>>> diff --git a/net/net.c b/net/net.c
>>> index 6d930ea..21f0d48 100644
>>> --- a/net/net.c
>>> +++ b/net/net.c
>>> @@ -242,6 +242,29 @@ NetClientState *qemu_new_net_client(NetClientInfo *info,
>>>      return nc;
>>>  }
>>>
>>> +static void nic_vmstate_change_handler(void *opaque,
>>> +                                       int running,
>>> +                                       RunState state)
>>> +{
>>> +    NICState *nic = opaque;
>>> +    NetClientState *nc;
>>> +    int i, queues;
>>> +
>>> +    if (!running) {
>>> +        return;
>>> +    }
>>> +
>>> +    queues = MAX(1, nic->conf->peers.queues);
>>> +    for (i = 0; i < queues; i++) {
>>> +        nc = &nic->ncs[i];
>>> +        if (nc->receive_disabled
>>> +            || (nc->info->can_receive && !nc->info->can_receive(nc))) {
>>> +            continue;
>>> +        }
>>> +        qemu_flush_queued_packets(nc);
>>
>> How about simply purging the receive queue during stop? If that is OK,
>> there's no need to introduce an extra vmstate change handler.
>>
>
> I don't know whether it is OK to purge the received packets; it was
> suggested by Stefan Hajnoczi, and I am waiting for his opinion. :)
>
> I think we still need the extra vmstate change handler. Without the
> change handler, we don't know when the VM stops, and hence when to
> call qemu_purge_queued_packets.
>
Or you can do it in do_vm_stop().
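Something like this untested sketch; note that qemu_purge_all_queues()
is a made-up helper here, it does not exist in the tree:

    /* net/net.c: drop whatever is still queued for every net client */
    void qemu_purge_all_queues(void)
    {
        NetClientState *nc;

        QTAILQ_FOREACH(nc, &net_clients, next) {
            qemu_purge_queued_packets(nc);
        }
    }

do_vm_stop() could then call it right after vm_state_notify(0, state),
so no stale packets are left to be processed when the VM resumes.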
>>> +    }
>>> +}
>>> +
>>>  NICState *qemu_new_nic(NetClientInfo *info,
>>>                         NICConf *conf,
>>>                         const char *model,
>>> @@ -259,6 +282,8 @@ NICState *qemu_new_nic(NetClientInfo *info,
>>>      nic->ncs = (void *)nic + info->size;
>>>      nic->conf = conf;
>>>      nic->opaque = opaque;
>>> +    nic->vmstate = qemu_add_vm_change_state_handler(nic_vmstate_change_handler,
>>> +                                                    nic);
>>>
>>
>> Does this depend on other vm state change handlers being called first? I
>> mean virtio has its own vmstate change handler, which seems to be
>> called after this one. Is this an issue?
>>
>
> Yes, it is. The VM state check in virtio-net is unnecessary;
> actually it would prevent the flushing, which is why we
> do step 4, "Remove the VM state check in virtio_net_can_receive".
How about the other handlers (especially the kvm/xen specific ones)? If
not, vm_start() looks like a safer place, since all handlers will have
been called before it.
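Roughly like this (just a sketch against the current vl.c, event
emission omitted; qemu_flush_all_queues() is a hypothetical helper):

    void vm_start(void)
    {
        if (!runstate_is_running()) {
            cpu_enable_ticks();
            runstate_set(RUN_STATE_RUNNING);
            vm_state_notify(1, RUN_STATE_RUNNING);
            /* every vmstate change handler (virtio, vhost, kvm/xen)
             * has already run at this point, so flushing here cannot
             * race with a backend that is not fully started yet */
            qemu_flush_all_queues();
        }
    }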
>
> Besides, I think it is OK to do the common things in the vmstate change
> handler of the generic net layer and the private things in each device's
> own vmstate change handler. :)
This is true only if there's no dependency. Virtio has a generic vmstate
change handler, and a subtle side effect of your patch is that even with
vhost enabled, qemu will still process packets during vm start, because
qemu_flush_queued_packets() can run before vhost_net is started (the
virtio vmstate change handler is called after this one). So we should
probably only do the purging, which eliminates that processing during
vm start.
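I.e. the purge-only variant of your handler would look something like
this (untested sketch):

    static void nic_vmstate_change_handler(void *opaque,
                                           int running,
                                           RunState state)
    {
        NICState *nic = opaque;
        int i, queues;

        /* Act only on stop: drop the queued packets so there is
         * nothing stale to process when the VM starts again. */
        if (running) {
            return;
        }

        queues = MAX(1, nic->conf->peers.queues);
        for (i = 0; i < queues; i++) {
            qemu_purge_queued_packets(&nic->ncs[i]);
        }
    }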
>
>>>      for (i = 0; i < queues; i++) {
>>>          qemu_net_client_setup(&nic->ncs[i], info, peers[i], model, name,
>>> @@ -379,6 +404,7 @@ void qemu_del_nic(NICState *nic)
>>>          qemu_free_net_client(nc);
>>>      }
>>>
>>> +    qemu_del_vm_change_state_handler(nic->vmstate);
>>>      g_free(nic);
>>>  }
>>>
>>> @@ -452,6 +478,12 @@ void qemu_set_vnet_hdr_len(NetClientState *nc, int len)
>>>
>>>  int qemu_can_send_packet(NetClientState *sender)
>>>  {
>>> +    int vmstat = runstate_is_running();
>>> +
>>> +    if (!vmstat) {
>>> +        return 0;
>>> +    }
>>> +
>>>      if (!sender->peer) {
>>>          return 1;
>>>      }
Thread overview: 12+ messages
2014-08-18 4:46 [Qemu-devel] [PATCH] net: Forbid dealing with packets when VM is not running zhanghailiang
2014-08-18 6:55 ` Jason Wang
2014-08-18 8:32 ` zhanghailiang
2014-08-18 9:14 ` Jason Wang [this message]
2014-08-20 1:59 ` zhanghailiang
2014-08-19 12:29 ` Stefan Hajnoczi
2014-08-20 2:19 ` zhanghailiang
2014-08-20 3:17 ` Jason Wang
2014-08-22 10:08 ` Stefan Hajnoczi
2014-08-18 12:27 ` Dr. David Alan Gilbert
2014-08-19 6:46 ` zhanghailiang
2014-08-19 8:48 ` Dr. David Alan Gilbert