qemu-devel.nongnu.org archive mirror
* [Qemu-devel] [PATCH] net: Forbid dealing with packets when VM is not running
@ 2014-08-18  4:46 zhanghailiang
  2014-08-18  6:55 ` Jason Wang
  2014-08-18 12:27 ` Dr. David Alan Gilbert
  0 siblings, 2 replies; 12+ messages in thread
From: zhanghailiang @ 2014-08-18  4:46 UTC (permalink / raw)
  To: qemu-devel
  Cc: peter.maydell, zhanghailiang, mst, luonengjun, peter.huangpeng,
	stefanha

For all NICs (except virtio-net) emulated by qemu, such as e1000,
rtl8139, pcnet and ne2k_pci, qemu can still receive packets while the
VM is not running. If this happens during migration's final PAUSE-VM
stage, the new RAM dirtied by those packets will be missed, and this
leads to a serious network fault in the VM.

To avoid this, we forbid receiving packets in generic net code when
VM is not running. Also, when the runstate changes back to running,
we definitely need to flush queues to get packets flowing again.

Here we implement this in the net layer:
(1) Check the VM runstate in qemu_can_send_packet.
(2) Add a member 'VMChangeStateEntry *vmstate' to struct NICState,
    which listens for VM runstate changes.
(3) Register a handler function for VM state changes.
    When the VM changes back to running, we flush all queues in the callback function.
(4) Remove the VM state check in virtio_net_can_receive.

Signed-off-by: zhanghailiang <zhang.zhanghailiang@huawei.com>
---
 hw/net/virtio-net.c |  4 ----
 include/net/net.h   |  2 ++
 net/net.c           | 32 ++++++++++++++++++++++++++++++++
 3 files changed, 34 insertions(+), 4 deletions(-)

diff --git a/hw/net/virtio-net.c b/hw/net/virtio-net.c
index 268eff9..287d762 100644
--- a/hw/net/virtio-net.c
+++ b/hw/net/virtio-net.c
@@ -839,10 +839,6 @@ static int virtio_net_can_receive(NetClientState *nc)
     VirtIODevice *vdev = VIRTIO_DEVICE(n);
     VirtIONetQueue *q = virtio_net_get_subqueue(nc);
 
-    if (!vdev->vm_running) {
-        return 0;
-    }
-
     if (nc->queue_index >= n->curr_queues) {
         return 0;
     }
diff --git a/include/net/net.h b/include/net/net.h
index ed594f9..a294277 100644
--- a/include/net/net.h
+++ b/include/net/net.h
@@ -8,6 +8,7 @@
 #include "net/queue.h"
 #include "migration/vmstate.h"
 #include "qapi-types.h"
+#include "sysemu/sysemu.h"
 
 #define MAX_QUEUE_NUM 1024
 
@@ -96,6 +97,7 @@ typedef struct NICState {
     NICConf *conf;
     void *opaque;
     bool peer_deleted;
+    VMChangeStateEntry *vmstate;
 } NICState;
 
 NetClientState *qemu_find_netdev(const char *id);
diff --git a/net/net.c b/net/net.c
index 6d930ea..21f0d48 100644
--- a/net/net.c
+++ b/net/net.c
@@ -242,6 +242,29 @@ NetClientState *qemu_new_net_client(NetClientInfo *info,
     return nc;
 }
 
+static void nic_vmstate_change_handler(void *opaque,
+                                       int running,
+                                       RunState state)
+{
+    NICState *nic = opaque;
+    NetClientState *nc;
+    int i, queues;
+
+    if (!running) {
+        return;
+    }
+
+    queues = MAX(1, nic->conf->peers.queues);
+    for (i = 0; i < queues; i++) {
+        nc = &nic->ncs[i];
+        if (nc->receive_disabled
+            || (nc->info->can_receive && !nc->info->can_receive(nc))) {
+            continue;
+        }
+        qemu_flush_queued_packets(nc);
+    }
+}
+
 NICState *qemu_new_nic(NetClientInfo *info,
                        NICConf *conf,
                        const char *model,
@@ -259,6 +282,8 @@ NICState *qemu_new_nic(NetClientInfo *info,
     nic->ncs = (void *)nic + info->size;
     nic->conf = conf;
     nic->opaque = opaque;
+    nic->vmstate = qemu_add_vm_change_state_handler(nic_vmstate_change_handler,
+                                                    nic);
 
     for (i = 0; i < queues; i++) {
         qemu_net_client_setup(&nic->ncs[i], info, peers[i], model, name,
@@ -379,6 +404,7 @@ void qemu_del_nic(NICState *nic)
         qemu_free_net_client(nc);
     }
 
+    qemu_del_vm_change_state_handler(nic->vmstate);
     g_free(nic);
 }
 
@@ -452,6 +478,12 @@ void qemu_set_vnet_hdr_len(NetClientState *nc, int len)
 
 int qemu_can_send_packet(NetClientState *sender)
 {
+    int vmstat = runstate_is_running();
+
+    if (!vmstat) {
+        return 0;
+    }
+
     if (!sender->peer) {
         return 1;
     }
-- 
1.7.12.4


* Re: [Qemu-devel] [PATCH] net: Forbid dealing with packets when VM is not running
  2014-08-18  4:46 [Qemu-devel] [PATCH] net: Forbid dealing with packets when VM is not running zhanghailiang
@ 2014-08-18  6:55 ` Jason Wang
  2014-08-18  8:32   ` zhanghailiang
  2014-08-18 12:27 ` Dr. David Alan Gilbert
  1 sibling, 1 reply; 12+ messages in thread
From: Jason Wang @ 2014-08-18  6:55 UTC (permalink / raw)
  To: zhanghailiang, qemu-devel
  Cc: peter.maydell, luonengjun, peter.huangpeng, stefanha, mst

On 08/18/2014 12:46 PM, zhanghailiang wrote:
> For all NICs(except virtio-net) emulated by qemu,
> Such as e1000, rtl8139, pcnet and ne2k_pci,
> Qemu can still receive packets when VM is not running.
> If this happened in *migration's* last PAUSE VM stage,
> The new dirty RAM related to the packets will be missed,
> And this will lead serious network fault in VM.
>
> To avoid this, we forbid receiving packets in generic net code when
> VM is not running. Also, when the runstate changes back to running,
> we definitely need to flush queues to get packets flowing again.

You probably need a better title since it does not cover this change.
>
> Here we implement this in the net layer:
> (1) Judge the vm runstate in qemu_can_send_packet
> (2) Add a member 'VMChangeStateEntry *vmstate' to struct NICState,
> Which will listen for VM runstate changes.
> (3) Register a handler function for VMstate change.
> When vm changes back to running, we flush all queues in the callback function.
> (4) Remove checking vm state in virtio_net_can_receive
>
> Signed-off-by: zhanghailiang <zhang.zhanghailiang@huawei.com>
> ---
>  hw/net/virtio-net.c |  4 ----
>  include/net/net.h   |  2 ++
>  net/net.c           | 32 ++++++++++++++++++++++++++++++++
>  3 files changed, 34 insertions(+), 4 deletions(-)
>
> diff --git a/hw/net/virtio-net.c b/hw/net/virtio-net.c
> index 268eff9..287d762 100644
> --- a/hw/net/virtio-net.c
> +++ b/hw/net/virtio-net.c
> @@ -839,10 +839,6 @@ static int virtio_net_can_receive(NetClientState *nc)
>      VirtIODevice *vdev = VIRTIO_DEVICE(n);
>      VirtIONetQueue *q = virtio_net_get_subqueue(nc);
>  
> -    if (!vdev->vm_running) {
> -        return 0;
> -    }
> -
>      if (nc->queue_index >= n->curr_queues) {
>          return 0;
>      }
> diff --git a/include/net/net.h b/include/net/net.h
> index ed594f9..a294277 100644
> --- a/include/net/net.h
> +++ b/include/net/net.h
> @@ -8,6 +8,7 @@
>  #include "net/queue.h"
>  #include "migration/vmstate.h"
>  #include "qapi-types.h"
> +#include "sysemu/sysemu.h"
>  
>  #define MAX_QUEUE_NUM 1024
>  
> @@ -96,6 +97,7 @@ typedef struct NICState {
>      NICConf *conf;
>      void *opaque;
>      bool peer_deleted;
> +    VMChangeStateEntry *vmstate;
>  } NICState;
>  
>  NetClientState *qemu_find_netdev(const char *id);
> diff --git a/net/net.c b/net/net.c
> index 6d930ea..21f0d48 100644
> --- a/net/net.c
> +++ b/net/net.c
> @@ -242,6 +242,29 @@ NetClientState *qemu_new_net_client(NetClientInfo *info,
>      return nc;
>  }
>  
> +static void nic_vmstate_change_handler(void *opaque,
> +                                       int running,
> +                                       RunState state)
> +{
> +    NICState *nic = opaque;
> +    NetClientState *nc;
> +    int i, queues;
> +
> +    if (!running) {
> +        return;
> +    }
> +
> +    queues = MAX(1, nic->conf->peers.queues);
> +    for (i = 0; i < queues; i++) {
> +        nc = &nic->ncs[i];
> +        if (nc->receive_disabled
> +            || (nc->info->can_receive && !nc->info->can_receive(nc))) {
> +            continue;
> +        }
> +        qemu_flush_queued_packets(nc);

How about simply purging the receive queue during stop? If that's OK, there's
no need to introduce an extra vmstate change handler.
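Roughly something like the following, as an untested sketch. It leans on
net/queue.c's qemu_net_queue_purge() and each client's incoming_queue (from
memory), and the helper name here is made up:

    /* sketch: drop whatever the backend has queued for this NIC while stopped */
    static void nic_purge_rx_queues(NICState *nic)
    {
        int i, queues = MAX(1, nic->conf->peers.queues);

        for (i = 0; i < queues; i++) {
            NetClientState *nc = &nic->ncs[i];

            if (nc->peer) {
                /* packets from the peer (e.g. a tap backend) wait in
                 * nc->incoming_queue until the NIC can receive them */
                qemu_net_queue_purge(nc->incoming_queue, nc->peer);
            }
        }
    }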

> +    }
> +}
> +
>  NICState *qemu_new_nic(NetClientInfo *info,
>                         NICConf *conf,
>                         const char *model,
> @@ -259,6 +282,8 @@ NICState *qemu_new_nic(NetClientInfo *info,
>      nic->ncs = (void *)nic + info->size;
>      nic->conf = conf;
>      nic->opaque = opaque;
> +    nic->vmstate = qemu_add_vm_change_state_handler(nic_vmstate_change_handler,
> +                                                    nic);
>  

Does this depend on other vm state change handlers being called first? I
mean, virtio has its own vmstate change handler, which seems to be called
after this one. Is this an issue?

>      for (i = 0; i < queues; i++) {
>          qemu_net_client_setup(&nic->ncs[i], info, peers[i], model, name,
> @@ -379,6 +404,7 @@ void qemu_del_nic(NICState *nic)
>          qemu_free_net_client(nc);
>      }
>  
> +    qemu_del_vm_change_state_handler(nic->vmstate);
>      g_free(nic);
>  }
>  
> @@ -452,6 +478,12 @@ void qemu_set_vnet_hdr_len(NetClientState *nc, int len)
>  
>  int qemu_can_send_packet(NetClientState *sender)
>  {
> +    int vmstat = runstate_is_running();
> +
> +    if (!vmstat) {
> +        return 0;
> +    }
> +
>      if (!sender->peer) {
>          return 1;
>      }


* Re: [Qemu-devel] [PATCH] net: Forbid dealing with packets when VM is not running
  2014-08-18  6:55 ` Jason Wang
@ 2014-08-18  8:32   ` zhanghailiang
  2014-08-18  9:14     ` Jason Wang
  2014-08-19 12:29     ` Stefan Hajnoczi
  0 siblings, 2 replies; 12+ messages in thread
From: zhanghailiang @ 2014-08-18  8:32 UTC (permalink / raw)
  To: Jason Wang
  Cc: peter.maydell, mst, luonengjun, peter.huangpeng, qemu-devel,
	stefanha

On 2014/8/18 14:55, Jason Wang wrote:
> On 08/18/2014 12:46 PM, zhanghailiang wrote:
>> For all NICs(except virtio-net) emulated by qemu,
>> Such as e1000, rtl8139, pcnet and ne2k_pci,
>> Qemu can still receive packets when VM is not running.
>> If this happened in *migration's* last PAUSE VM stage,
>> The new dirty RAM related to the packets will be missed,
>> And this will lead serious network fault in VM.
>>
>> To avoid this, we forbid receiving packets in generic net code when
>> VM is not running. Also, when the runstate changes back to running,
>> we definitely need to flush queues to get packets flowing again.
>
> You probably need a better title since it does not cover this change.
>>

Hmm, you are right, I will modify it, thanks. :)

>> Here we implement this in the net layer:
>> (1) Judge the vm runstate in qemu_can_send_packet
>> (2) Add a member 'VMChangeStateEntry *vmstate' to struct NICState,
>> Which will listen for VM runstate changes.
>> (3) Register a handler function for VMstate change.
>> When vm changes back to running, we flush all queues in the callback function.
>> (4) Remove checking vm state in virtio_net_can_receive
>>
>> Signed-off-by: zhanghailiang<zhang.zhanghailiang@huawei.com>
>> ---
>>   hw/net/virtio-net.c |  4 ----
>>   include/net/net.h   |  2 ++
>>   net/net.c           | 32 ++++++++++++++++++++++++++++++++
>>   3 files changed, 34 insertions(+), 4 deletions(-)
>>
>> diff --git a/hw/net/virtio-net.c b/hw/net/virtio-net.c
>> index 268eff9..287d762 100644
>> --- a/hw/net/virtio-net.c
>> +++ b/hw/net/virtio-net.c
>> @@ -839,10 +839,6 @@ static int virtio_net_can_receive(NetClientState *nc)
>>       VirtIODevice *vdev = VIRTIO_DEVICE(n);
>>       VirtIONetQueue *q = virtio_net_get_subqueue(nc);
>>
>> -    if (!vdev->vm_running) {
>> -        return 0;
>> -    }
>> -
>>       if (nc->queue_index>= n->curr_queues) {
>>           return 0;
>>       }
>> diff --git a/include/net/net.h b/include/net/net.h
>> index ed594f9..a294277 100644
>> --- a/include/net/net.h
>> +++ b/include/net/net.h
>> @@ -8,6 +8,7 @@
>>   #include "net/queue.h"
>>   #include "migration/vmstate.h"
>>   #include "qapi-types.h"
>> +#include "sysemu/sysemu.h"
>>
>>   #define MAX_QUEUE_NUM 1024
>>
>> @@ -96,6 +97,7 @@ typedef struct NICState {
>>       NICConf *conf;
>>       void *opaque;
>>       bool peer_deleted;
>> +    VMChangeStateEntry *vmstate;
>>   } NICState;
>>
>>   NetClientState *qemu_find_netdev(const char *id);
>> diff --git a/net/net.c b/net/net.c
>> index 6d930ea..21f0d48 100644
>> --- a/net/net.c
>> +++ b/net/net.c
>> @@ -242,6 +242,29 @@ NetClientState *qemu_new_net_client(NetClientInfo *info,
>>       return nc;
>>   }
>>
>> +static void nic_vmstate_change_handler(void *opaque,
>> +                                       int running,
>> +                                       RunState state)
>> +{
>> +    NICState *nic = opaque;
>> +    NetClientState *nc;
>> +    int i, queues;
>> +
>> +    if (!running) {
>> +        return;
>> +    }
>> +
>> +    queues = MAX(1, nic->conf->peers.queues);
>> +    for (i = 0; i<  queues; i++) {
>> +        nc =&nic->ncs[i];
>> +        if (nc->receive_disabled
>> +            || (nc->info->can_receive&&  !nc->info->can_receive(nc))) {
>> +            continue;
>> +        }
>> +        qemu_flush_queued_packets(nc);
>
> How about simply purge the receive queue during stop? If ok, there's no
> need to introduce extra vmstate change handler.
>

I don't know whether it is OK to purge the received packets; it was
suggested by Stefan Hajnoczi, and I am waiting for his opinion. :)

I think we still need the extra vmstate change handler. Without it, we
don't know when the VM is about to stop, i.e. when to call
qemu_purge_queued_packets.

>> +    }
>> +}
>> +
>>   NICState *qemu_new_nic(NetClientInfo *info,
>>                          NICConf *conf,
>>                          const char *model,
>> @@ -259,6 +282,8 @@ NICState *qemu_new_nic(NetClientInfo *info,
>>       nic->ncs = (void *)nic + info->size;
>>       nic->conf = conf;
>>       nic->opaque = opaque;
>> +    nic->vmstate = qemu_add_vm_change_state_handler(nic_vmstate_change_handler,
>> +                                                    nic);
>>
>
> Does this depend on other vm state change handler to be called first? I
> mean virtio has its own vmstate_change handler and which seems to be
> called after this. Is this an issue?
>

Yes, it does. The VM state check in virtio-net is unnecessary; in fact it
would prevent the flushing, which is why we do step 4, "Remove checking vm
state in virtio_net_can_receive".

Besides, I think it is OK to do the common things in the generic net
layer's vmstate change handler and the device-specific things in the
devices' own vmstate change handlers. :)

>>       for (i = 0; i<  queues; i++) {
>>           qemu_net_client_setup(&nic->ncs[i], info, peers[i], model, name,
>> @@ -379,6 +404,7 @@ void qemu_del_nic(NICState *nic)
>>           qemu_free_net_client(nc);
>>       }
>>
>> +    qemu_del_vm_change_state_handler(nic->vmstate);
>>       g_free(nic);
>>   }
>>
>> @@ -452,6 +478,12 @@ void qemu_set_vnet_hdr_len(NetClientState *nc, int len)
>>
>>   int qemu_can_send_packet(NetClientState *sender)
>>   {
>> +    int vmstat = runstate_is_running();
>> +
>> +    if (!vmstat) {
>> +        return 0;
>> +    }
>> +
>>       if (!sender->peer) {
>>           return 1;
>>       }
>
>
> .
>


* Re: [Qemu-devel] [PATCH] net: Forbid dealing with packets when VM is not running
  2014-08-18  8:32   ` zhanghailiang
@ 2014-08-18  9:14     ` Jason Wang
  2014-08-20  1:59       ` zhanghailiang
  2014-08-19 12:29     ` Stefan Hajnoczi
  1 sibling, 1 reply; 12+ messages in thread
From: Jason Wang @ 2014-08-18  9:14 UTC (permalink / raw)
  To: zhanghailiang
  Cc: peter.maydell, mst, luonengjun, peter.huangpeng, qemu-devel,
	stefanha

On 08/18/2014 04:32 PM, zhanghailiang wrote:
> On 2014/8/18 14:55, Jason Wang wrote:
>> On 08/18/2014 12:46 PM, zhanghailiang wrote:
>>> For all NICs(except virtio-net) emulated by qemu,
>>> Such as e1000, rtl8139, pcnet and ne2k_pci,
>>> Qemu can still receive packets when VM is not running.
>>> If this happened in *migration's* last PAUSE VM stage,
>>> The new dirty RAM related to the packets will be missed,
>>> And this will lead serious network fault in VM.
>>>
>>> To avoid this, we forbid receiving packets in generic net code when
>>> VM is not running. Also, when the runstate changes back to running,
>>> we definitely need to flush queues to get packets flowing again.
>>
>> You probably need a better title since it does not cover this change.
>>>
>
> Hmm, you are right, i will modify it, thanks.:)
>
>>> Here we implement this in the net layer:
>>> (1) Judge the vm runstate in qemu_can_send_packet
>>> (2) Add a member 'VMChangeStateEntry *vmstate' to struct NICState,
>>> Which will listen for VM runstate changes.
>>> (3) Register a handler function for VMstate change.
>>> When vm changes back to running, we flush all queues in the callback
>>> function.
>>> (4) Remove checking vm state in virtio_net_can_receive
>>>
>>> Signed-off-by: zhanghailiang<zhang.zhanghailiang@huawei.com>
>>> ---
>>>   hw/net/virtio-net.c |  4 ----
>>>   include/net/net.h   |  2 ++
>>>   net/net.c           | 32 ++++++++++++++++++++++++++++++++
>>>   3 files changed, 34 insertions(+), 4 deletions(-)
>>>
>>> diff --git a/hw/net/virtio-net.c b/hw/net/virtio-net.c
>>> index 268eff9..287d762 100644
>>> --- a/hw/net/virtio-net.c
>>> +++ b/hw/net/virtio-net.c
>>> @@ -839,10 +839,6 @@ static int
>>> virtio_net_can_receive(NetClientState *nc)
>>>       VirtIODevice *vdev = VIRTIO_DEVICE(n);
>>>       VirtIONetQueue *q = virtio_net_get_subqueue(nc);
>>>
>>> -    if (!vdev->vm_running) {
>>> -        return 0;
>>> -    }
>>> -
>>>       if (nc->queue_index>= n->curr_queues) {
>>>           return 0;
>>>       }
>>> diff --git a/include/net/net.h b/include/net/net.h
>>> index ed594f9..a294277 100644
>>> --- a/include/net/net.h
>>> +++ b/include/net/net.h
>>> @@ -8,6 +8,7 @@
>>>   #include "net/queue.h"
>>>   #include "migration/vmstate.h"
>>>   #include "qapi-types.h"
>>> +#include "sysemu/sysemu.h"
>>>
>>>   #define MAX_QUEUE_NUM 1024
>>>
>>> @@ -96,6 +97,7 @@ typedef struct NICState {
>>>       NICConf *conf;
>>>       void *opaque;
>>>       bool peer_deleted;
>>> +    VMChangeStateEntry *vmstate;
>>>   } NICState;
>>>
>>>   NetClientState *qemu_find_netdev(const char *id);
>>> diff --git a/net/net.c b/net/net.c
>>> index 6d930ea..21f0d48 100644
>>> --- a/net/net.c
>>> +++ b/net/net.c
>>> @@ -242,6 +242,29 @@ NetClientState
>>> *qemu_new_net_client(NetClientInfo *info,
>>>       return nc;
>>>   }
>>>
>>> +static void nic_vmstate_change_handler(void *opaque,
>>> +                                       int running,
>>> +                                       RunState state)
>>> +{
>>> +    NICState *nic = opaque;
>>> +    NetClientState *nc;
>>> +    int i, queues;
>>> +
>>> +    if (!running) {
>>> +        return;
>>> +    }
>>> +
>>> +    queues = MAX(1, nic->conf->peers.queues);
>>> +    for (i = 0; i<  queues; i++) {
>>> +        nc =&nic->ncs[i];
>>> +        if (nc->receive_disabled
>>> +            || (nc->info->can_receive&& 
>>> !nc->info->can_receive(nc))) {
>>> +            continue;
>>> +        }
>>> +        qemu_flush_queued_packets(nc);
>>
>> How about simply purge the receive queue during stop? If ok, there's no
>> need to introduce extra vmstate change handler.
>>
>
> I don't know whether it is OK to purge the receive packages, it was
> suggested by Stefan Hajnoczi, and i am waiting for his opinion .:)
>
> I think we still need the extra vmstate change handler, Without the
> change handler, we don't know if the VM will go to stop and the time
> when to call qemu_purge_queued_packets.
>

Or you can do it in do_vm_stop().
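For example (a sketch of the idea only, not the real cpus.c code;
qemu_purge_all_nic_rx_queues() is a made-up helper that would live in
net/net.c):

    static int do_vm_stop(RunState state)
    {
        if (runstate_is_running()) {
            pause_all_vcpus();
            runstate_set(state);
            vm_state_notify(0, state);
            qemu_purge_all_nic_rx_queues(); /* drop pending rx packets here */
        }
        return bdrv_flush_all();
    }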
>>> +    }
>>> +}
>>> +
>>>   NICState *qemu_new_nic(NetClientInfo *info,
>>>                          NICConf *conf,
>>>                          const char *model,
>>> @@ -259,6 +282,8 @@ NICState *qemu_new_nic(NetClientInfo *info,
>>>       nic->ncs = (void *)nic + info->size;
>>>       nic->conf = conf;
>>>       nic->opaque = opaque;
>>> +    nic->vmstate =
>>> qemu_add_vm_change_state_handler(nic_vmstate_change_handler,
>>> +                                                    nic);
>>>
>>
>> Does this depend on other vm state change handler to be called first? I
>> mean virtio has its own vmstate_change handler and which seems to be
>> called after this. Is this an issue?
>>
>
> Yes, it is. The check vm state in virtio-net is unnecessary,
> Actually it will prevent the flushing process, this is why we
> do step 4 "Remove checking vm state in virtio_net_can_receive".

How about other handlers (especially the kvm/xen specific ones)? If not,
vm_start() looks like a safer place, since all handlers have been called
before that point.
>
> Besides, i think it is OK to do common things in vmstate_change handler
> of generic net layer and do private things in their own vmstate_change
> handlers. :)

This is true only if there's no dependency. Virtio has a generic vmstate
change handler, and a subtle side effect of your patch is that, even if
vhost is enabled, qemu will still process packets during vm start, since
qemu_flush_queued_packets() can run before vhost_net is started (the virtio
vmstate change handler is called afterwards). So probably we only need to
do the purging, which would eliminate that processing during vm start.
>
>>>       for (i = 0; i<  queues; i++) {
>>>           qemu_net_client_setup(&nic->ncs[i], info, peers[i], model,
>>> name,
>>> @@ -379,6 +404,7 @@ void qemu_del_nic(NICState *nic)
>>>           qemu_free_net_client(nc);
>>>       }
>>>
>>> +    qemu_del_vm_change_state_handler(nic->vmstate);
>>>       g_free(nic);
>>>   }
>>>
>>> @@ -452,6 +478,12 @@ void qemu_set_vnet_hdr_len(NetClientState *nc,
>>> int len)
>>>
>>>   int qemu_can_send_packet(NetClientState *sender)
>>>   {
>>> +    int vmstat = runstate_is_running();
>>> +
>>> +    if (!vmstat) {
>>> +        return 0;
>>> +    }
>>> +
>>>       if (!sender->peer) {
>>>           return 1;
>>>       }
>>
>>
>> .
>>
>
>


* Re: [Qemu-devel] [PATCH] net: Forbid dealing with packets when VM is not running
  2014-08-18  4:46 [Qemu-devel] [PATCH] net: Forbid dealing with packets when VM is not running zhanghailiang
  2014-08-18  6:55 ` Jason Wang
@ 2014-08-18 12:27 ` Dr. David Alan Gilbert
  2014-08-19  6:46   ` zhanghailiang
  1 sibling, 1 reply; 12+ messages in thread
From: Dr. David Alan Gilbert @ 2014-08-18 12:27 UTC (permalink / raw)
  To: zhanghailiang
  Cc: peter.maydell, mst, luonengjun, peter.huangpeng, qemu-devel,
	stefanha

* zhanghailiang (zhang.zhanghailiang@huawei.com) wrote:
> For all NICs(except virtio-net) emulated by qemu,
> Such as e1000, rtl8139, pcnet and ne2k_pci,
> Qemu can still receive packets when VM is not running.
> If this happened in *migration's* last PAUSE VM stage,
> The new dirty RAM related to the packets will be missed,
> And this will lead serious network fault in VM.

I'd like to understand more about when exactly this happens.
migration.c:migration_thread, in its last step, takes the iothread lock, puts
the runstate into RUN_STATE_FINISH_MIGRATE and only then calls qemu_savevm_state_complete
before unlocking the iothread.
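That is, roughly this sequence (simplified from migration.c of that era,
not the exact code):

    /* tail of migration_thread(), simplified */
    qemu_mutex_lock_iothread();
    ret = vm_stop_force_state(RUN_STATE_FINISH_MIGRATE);
    if (ret >= 0) {
        qemu_savevm_state_complete(s->file);
    }
    qemu_mutex_unlock_iothread();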

If that's true, how does this problem happen - can the net code
still receive packets with the iothread lock taken?
qemu_savevm_state_complete does a call to the RAM code, so I think this should get
any RAM changes that happened before the lock was taken.

I'm gently worried about threading stuff as well - is all this net code
running off fd handlers on the main thread or are there separate threads?

Dave

> To avoid this, we forbid receiving packets in generic net code when
> VM is not running. Also, when the runstate changes back to running,
> we definitely need to flush queues to get packets flowing again.
> 
> Here we implement this in the net layer:
> (1) Judge the vm runstate in qemu_can_send_packet
> (2) Add a member 'VMChangeStateEntry *vmstate' to struct NICState,
> Which will listen for VM runstate changes.
> (3) Register a handler function for VMstate change.
> When vm changes back to running, we flush all queues in the callback function.
> (4) Remove checking vm state in virtio_net_can_receive
> 
> Signed-off-by: zhanghailiang <zhang.zhanghailiang@huawei.com>
> ---
>  hw/net/virtio-net.c |  4 ----
>  include/net/net.h   |  2 ++
>  net/net.c           | 32 ++++++++++++++++++++++++++++++++
>  3 files changed, 34 insertions(+), 4 deletions(-)
> 
> diff --git a/hw/net/virtio-net.c b/hw/net/virtio-net.c
> index 268eff9..287d762 100644
> --- a/hw/net/virtio-net.c
> +++ b/hw/net/virtio-net.c
> @@ -839,10 +839,6 @@ static int virtio_net_can_receive(NetClientState *nc)
>      VirtIODevice *vdev = VIRTIO_DEVICE(n);
>      VirtIONetQueue *q = virtio_net_get_subqueue(nc);
>  
> -    if (!vdev->vm_running) {
> -        return 0;
> -    }
> -
>      if (nc->queue_index >= n->curr_queues) {
>          return 0;
>      }
> diff --git a/include/net/net.h b/include/net/net.h
> index ed594f9..a294277 100644
> --- a/include/net/net.h
> +++ b/include/net/net.h
> @@ -8,6 +8,7 @@
>  #include "net/queue.h"
>  #include "migration/vmstate.h"
>  #include "qapi-types.h"
> +#include "sysemu/sysemu.h"
>  
>  #define MAX_QUEUE_NUM 1024
>  
> @@ -96,6 +97,7 @@ typedef struct NICState {
>      NICConf *conf;
>      void *opaque;
>      bool peer_deleted;
> +    VMChangeStateEntry *vmstate;
>  } NICState;
>  
>  NetClientState *qemu_find_netdev(const char *id);
> diff --git a/net/net.c b/net/net.c
> index 6d930ea..21f0d48 100644
> --- a/net/net.c
> +++ b/net/net.c
> @@ -242,6 +242,29 @@ NetClientState *qemu_new_net_client(NetClientInfo *info,
>      return nc;
>  }
>  
> +static void nic_vmstate_change_handler(void *opaque,
> +                                       int running,
> +                                       RunState state)
> +{
> +    NICState *nic = opaque;
> +    NetClientState *nc;
> +    int i, queues;
> +
> +    if (!running) {
> +        return;
> +    }
> +
> +    queues = MAX(1, nic->conf->peers.queues);
> +    for (i = 0; i < queues; i++) {
> +        nc = &nic->ncs[i];
> +        if (nc->receive_disabled
> +            || (nc->info->can_receive && !nc->info->can_receive(nc))) {
> +            continue;
> +        }
> +        qemu_flush_queued_packets(nc);
> +    }
> +}
> +
>  NICState *qemu_new_nic(NetClientInfo *info,
>                         NICConf *conf,
>                         const char *model,
> @@ -259,6 +282,8 @@ NICState *qemu_new_nic(NetClientInfo *info,
>      nic->ncs = (void *)nic + info->size;
>      nic->conf = conf;
>      nic->opaque = opaque;
> +    nic->vmstate = qemu_add_vm_change_state_handler(nic_vmstate_change_handler,
> +                                                    nic);
>  
>      for (i = 0; i < queues; i++) {
>          qemu_net_client_setup(&nic->ncs[i], info, peers[i], model, name,
> @@ -379,6 +404,7 @@ void qemu_del_nic(NICState *nic)
>          qemu_free_net_client(nc);
>      }
>  
> +    qemu_del_vm_change_state_handler(nic->vmstate);
>      g_free(nic);
>  }
>  
> @@ -452,6 +478,12 @@ void qemu_set_vnet_hdr_len(NetClientState *nc, int len)
>  
>  int qemu_can_send_packet(NetClientState *sender)
>  {
> +    int vmstat = runstate_is_running();
> +
> +    if (!vmstat) {
> +        return 0;
> +    }
> +
>      if (!sender->peer) {
>          return 1;
>      }
> -- 
> 1.7.12.4
> 
> 
> 
--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK


* Re: [Qemu-devel] [PATCH] net: Forbid dealing with packets when VM is not running
  2014-08-18 12:27 ` Dr. David Alan Gilbert
@ 2014-08-19  6:46   ` zhanghailiang
  2014-08-19  8:48     ` Dr. David Alan Gilbert
  0 siblings, 1 reply; 12+ messages in thread
From: zhanghailiang @ 2014-08-19  6:46 UTC (permalink / raw)
  To: Dr. David Alan Gilbert
  Cc: peter.maydell, mst, luonengjun, peter.huangpeng, qemu-devel,
	stefanha

On 2014/8/18 20:27, Dr. David Alan Gilbert wrote:
> * zhanghailiang (zhang.zhanghailiang@huawei.com) wrote:
>> For all NICs(except virtio-net) emulated by qemu,
>> Such as e1000, rtl8139, pcnet and ne2k_pci,
>> Qemu can still receive packets when VM is not running.
>> If this happened in *migration's* last PAUSE VM stage,
>> The new dirty RAM related to the packets will be missed,
>> And this will lead serious network fault in VM.
>
> I'd like to understand more about when exactly this happens.
> migration.c:migration_thread  in the last step, takes the iothread, puts
> the runstate into RUN_STATE_FINISH_MIGRATE and only then calls qemu_savevm_state_complete
> before unlocking the iothread.
>

Hmm, sorry, the description above may not be exact.

Actually, the packets are received after the migration thread unlocks the
iothread lock (when the VM is stopped), but before the end of the
migration (to be exact, before the socket fd used to send data to the
destination is closed).

I think that before the socket fd is closed, we cannot be sure that all of
the VM's RAM has been copied into the network card's buffers or sent to
the destination, so we still have a chance to modify its content. (If I
am wrong, please let me know, thanks. ;) )

If this situation happens, the incoming packets dirty parts of the RAM
that is still being sent, and some newly dirtied RAM pages are missed
entirely. As a result, the memory contents on the destination are not
consistent, and this leads to a network fault in the VM.

Here is a test I have made:
(1) Download the latest qemu.
(2) Extend the time window between qemu_savevm_state_complete and
migrate_fd_cleanup (where qemu_fclose(s->file) will be called), with a
patch like the following (this increases the chance of receiving packets
inside that window):
diff --git a/migration.c b/migration.c
index 8d675b3..597abf9 100644
--- a/migration.c
+++ b/migration.c
@@ -614,7 +614,7 @@ static void *migration_thread(void *opaque)
                      qemu_savevm_state_complete(s->file);
                  }
                  qemu_mutex_unlock_iothread();
-
+                sleep(2); /* extend the time window between stop vm and end migration */
                  if (ret < 0) {
                      migrate_set_state(s, MIG_STATE_ACTIVE, MIG_STATE_ERROR);
                      break;
(3) Start the VM (suse11sp1) and run 'ping xxx -i 0.1' inside it.
(4) Migrate the VM to another host.

The *ping* command in the VM will then very likely fail with
'Destination Host Unreachable', and the NIC in the VM will stay
unavailable unless you run 'service network restart'. (Without step 2,
you may have to migrate many times between the two hosts before this
happens.)

Thanks,
zhanghailiang

> If that's true, how does this problem happen - can the net code
> still receive packets with the iothread lock taken?
> qemu_savevm_state_complete does a call to the RAM code, so I think this should get
> any RAM changes that happened before the lock was taken.
>
> I'm gently worried about threading stuff as well - is all this net code
> running off fd handlers on the main thread or are there separate threads?
>
> Dave
>
>> To avoid this, we forbid receiving packets in generic net code when
>> VM is not running. Also, when the runstate changes back to running,
>> we definitely need to flush queues to get packets flowing again.
>>
>> Here we implement this in the net layer:
>> (1) Judge the vm runstate in qemu_can_send_packet
>> (2) Add a member 'VMChangeStateEntry *vmstate' to struct NICState,
>> Which will listen for VM runstate changes.
>> (3) Register a handler function for VMstate change.
>> When vm changes back to running, we flush all queues in the callback function.
>> (4) Remove checking vm state in virtio_net_can_receive
>>
>> Signed-off-by: zhanghailiang<zhang.zhanghailiang@huawei.com>
>> ---
>>   hw/net/virtio-net.c |  4 ----
>>   include/net/net.h   |  2 ++
>>   net/net.c           | 32 ++++++++++++++++++++++++++++++++
>>   3 files changed, 34 insertions(+), 4 deletions(-)
>>
>> diff --git a/hw/net/virtio-net.c b/hw/net/virtio-net.c
>> index 268eff9..287d762 100644
>> --- a/hw/net/virtio-net.c
>> +++ b/hw/net/virtio-net.c
>> @@ -839,10 +839,6 @@ static int virtio_net_can_receive(NetClientState *nc)
>>       VirtIODevice *vdev = VIRTIO_DEVICE(n);
>>       VirtIONetQueue *q = virtio_net_get_subqueue(nc);
>>
>> -    if (!vdev->vm_running) {
>> -        return 0;
>> -    }
>> -
>>       if (nc->queue_index>= n->curr_queues) {
>>           return 0;
>>       }
>> diff --git a/include/net/net.h b/include/net/net.h
>> index ed594f9..a294277 100644
>> --- a/include/net/net.h
>> +++ b/include/net/net.h
>> @@ -8,6 +8,7 @@
>>   #include "net/queue.h"
>>   #include "migration/vmstate.h"
>>   #include "qapi-types.h"
>> +#include "sysemu/sysemu.h"
>>
>>   #define MAX_QUEUE_NUM 1024
>>
>> @@ -96,6 +97,7 @@ typedef struct NICState {
>>       NICConf *conf;
>>       void *opaque;
>>       bool peer_deleted;
>> +    VMChangeStateEntry *vmstate;
>>   } NICState;
>>
>>   NetClientState *qemu_find_netdev(const char *id);
>> diff --git a/net/net.c b/net/net.c
>> index 6d930ea..21f0d48 100644
>> --- a/net/net.c
>> +++ b/net/net.c
>> @@ -242,6 +242,29 @@ NetClientState *qemu_new_net_client(NetClientInfo *info,
>>       return nc;
>>   }
>>
>> +static void nic_vmstate_change_handler(void *opaque,
>> +                                       int running,
>> +                                       RunState state)
>> +{
>> +    NICState *nic = opaque;
>> +    NetClientState *nc;
>> +    int i, queues;
>> +
>> +    if (!running) {
>> +        return;
>> +    }
>> +
>> +    queues = MAX(1, nic->conf->peers.queues);
>> +    for (i = 0; i<  queues; i++) {
>> +        nc =&nic->ncs[i];
>> +        if (nc->receive_disabled
>> +            || (nc->info->can_receive&&  !nc->info->can_receive(nc))) {
>> +            continue;
>> +        }
>> +        qemu_flush_queued_packets(nc);
>> +    }
>> +}
>> +
>>   NICState *qemu_new_nic(NetClientInfo *info,
>>                          NICConf *conf,
>>                          const char *model,
>> @@ -259,6 +282,8 @@ NICState *qemu_new_nic(NetClientInfo *info,
>>       nic->ncs = (void *)nic + info->size;
>>       nic->conf = conf;
>>       nic->opaque = opaque;
>> +    nic->vmstate = qemu_add_vm_change_state_handler(nic_vmstate_change_handler,
>> +                                                    nic);
>>
>>       for (i = 0; i<  queues; i++) {
>>           qemu_net_client_setup(&nic->ncs[i], info, peers[i], model, name,
>> @@ -379,6 +404,7 @@ void qemu_del_nic(NICState *nic)
>>           qemu_free_net_client(nc);
>>       }
>>
>> +    qemu_del_vm_change_state_handler(nic->vmstate);
>>       g_free(nic);
>>   }
>>
>> @@ -452,6 +478,12 @@ void qemu_set_vnet_hdr_len(NetClientState *nc, int len)
>>
>>   int qemu_can_send_packet(NetClientState *sender)
>>   {
>> +    int vmstat = runstate_is_running();
>> +
>> +    if (!vmstat) {
>> +        return 0;
>> +    }
>> +
>>       if (!sender->peer) {
>>           return 1;
>>       }
>> --
>> 1.7.12.4
>>
>>
>>
> --
> Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK
>
> .
>


* Re: [Qemu-devel] [PATCH] net: Forbid dealing with packets when VM is not running
  2014-08-19  6:46   ` zhanghailiang
@ 2014-08-19  8:48     ` Dr. David Alan Gilbert
  0 siblings, 0 replies; 12+ messages in thread
From: Dr. David Alan Gilbert @ 2014-08-19  8:48 UTC (permalink / raw)
  To: zhanghailiang
  Cc: peter.maydell, mst, quintela, luonengjun, qemu-devel,
	peter.huangpeng, stefanha

* zhanghailiang (zhang.zhanghailiang@huawei.com) wrote:
> On 2014/8/18 20:27, Dr. David Alan Gilbert wrote:
> >* zhanghailiang (zhang.zhanghailiang@huawei.com) wrote:
> >>For all NICs(except virtio-net) emulated by qemu,
> >>Such as e1000, rtl8139, pcnet and ne2k_pci,
> >>Qemu can still receive packets when VM is not running.
> >>If this happened in *migration's* last PAUSE VM stage,
> >>The new dirty RAM related to the packets will be missed,
> >>And this will lead serious network fault in VM.
> >
> >I'd like to understand more about when exactly this happens.
> >migration.c:migration_thread  in the last step, takes the iothread, puts
> >the runstate into RUN_STATE_FINISH_MIGRATE and only then calls qemu_savevm_state_complete
> >before unlocking the iothread.
> >
> 
> Hmm, Sorry, the description above may not be exact.
> 
> Actually, the action of receiving packets action happens after the
> migration thread unlock the iothread-lock(when VM is stopped), but
> before the end of the migration(to be exact, before close the socket
> fd, which is used to send data to destination).
> 
> I think before close the socket fd, we can not assure all the RAM of the
> VM has been copied to the memory of network card or has been send to
> the destination, we still have the chance to modify its content. (If i
> am wrong, please let me know, Thanks. ;) )
> 
> If the above situation happens, it will dirty parts of the RAM which
> will be send and parts of new dirty RAM page may be missed. As a result,
> The contents of memory in the destination is not integral, And this
> will lead network fault in VM.

Interesting; the only reason I can think of that this would happen is that
arch_init.c:ram_save_page uses qemu_put_buffer_async to send most of the RAM
pages, and that doesn't guarantee the write happens until later.

This fix will probably help other migration code as well; postcopy
runs the migration for a lot longer after stopping the CPUs, and I am seeing
something that might be due to this very occasionally.

Dave

> Here i have made a test:
> (1) Download the new qemu.
> (2) Extend the time window between qemu_savevm_state_complete and
> migrate_fd_cleanup(where qemu_fclose(s->file) will be called), patch
> like(this will extend the chances of receiving packets between the time
> window):
> diff --git a/migration.c b/migration.c
> index 8d675b3..597abf9 100644
> --- a/migration.c
> +++ b/migration.c
> @@ -614,7 +614,7 @@ static void *migration_thread(void *opaque)
>                      qemu_savevm_state_complete(s->file);
>                  }
>                  qemu_mutex_unlock_iothread();
> -
> +                sleep(2);/*extend the time window between stop vm and end
> migration*/
>                  if (ret < 0) {
>                      migrate_set_state(s, MIG_STATE_ACTIVE,
> MIG_STATE_ERROR);
>                      break;
> (3) Start VM (suse11sp1) and in VM ping xxx -i 0.1
> (4) Migrate the VM to another host.
> 
> And the *PING* command in VM will very likely fail with message:
> 'Destination HOST Unreachable', the NIC in VM will stay unavailable
> unless you run 'service network restart'.(without step 2, you may have
> to migration lots of times between two hosts before that happens).
> 
> Thanks,
> zhanghailiang
> 
> >If that's true, how does this problem happen - can the net code
> >still receive packets with the iothread lock taken?
> >qemu_savevm_state_complete does a call to the RAM code, so I think this should get
> >any RAM changes that happened before the lock was taken.
> >
> >I'm gently worried about threading stuff as well - is all this net code
> >running off fd handlers on the main thread or are there separate threads?
> >
> >Dave
> >
> >>To avoid this, we forbid receiving packets in generic net code when
> >>VM is not running. Also, when the runstate changes back to running,
> >>we definitely need to flush queues to get packets flowing again.
> >>
> >>Here we implement this in the net layer:
> >>(1) Judge the vm runstate in qemu_can_send_packet
> >>(2) Add a member 'VMChangeStateEntry *vmstate' to struct NICState,
> >>Which will listen for VM runstate changes.
> >>(3) Register a handler function for VMstate change.
> >>When vm changes back to running, we flush all queues in the callback function.
> >>(4) Remove checking vm state in virtio_net_can_receive
> >>
> >>Signed-off-by: zhanghailiang<zhang.zhanghailiang@huawei.com>
> >>---
> >>  hw/net/virtio-net.c |  4 ----
> >>  include/net/net.h   |  2 ++
> >>  net/net.c           | 32 ++++++++++++++++++++++++++++++++
> >>  3 files changed, 34 insertions(+), 4 deletions(-)
> >>
> >>diff --git a/hw/net/virtio-net.c b/hw/net/virtio-net.c
> >>index 268eff9..287d762 100644
> >>--- a/hw/net/virtio-net.c
> >>+++ b/hw/net/virtio-net.c
> >>@@ -839,10 +839,6 @@ static int virtio_net_can_receive(NetClientState *nc)
> >>      VirtIODevice *vdev = VIRTIO_DEVICE(n);
> >>      VirtIONetQueue *q = virtio_net_get_subqueue(nc);
> >>
> >>-    if (!vdev->vm_running) {
> >>-        return 0;
> >>-    }
> >>-
> >>      if (nc->queue_index>= n->curr_queues) {
> >>          return 0;
> >>      }
> >>diff --git a/include/net/net.h b/include/net/net.h
> >>index ed594f9..a294277 100644
> >>--- a/include/net/net.h
> >>+++ b/include/net/net.h
> >>@@ -8,6 +8,7 @@
> >>  #include "net/queue.h"
> >>  #include "migration/vmstate.h"
> >>  #include "qapi-types.h"
> >>+#include "sysemu/sysemu.h"
> >>
> >>  #define MAX_QUEUE_NUM 1024
> >>
> >>@@ -96,6 +97,7 @@ typedef struct NICState {
> >>      NICConf *conf;
> >>      void *opaque;
> >>      bool peer_deleted;
> >>+    VMChangeStateEntry *vmstate;
> >>  } NICState;
> >>
> >>  NetClientState *qemu_find_netdev(const char *id);
> >>diff --git a/net/net.c b/net/net.c
> >>index 6d930ea..21f0d48 100644
> >>--- a/net/net.c
> >>+++ b/net/net.c
> >>@@ -242,6 +242,29 @@ NetClientState *qemu_new_net_client(NetClientInfo *info,
> >>      return nc;
> >>  }
> >>
> >>+static void nic_vmstate_change_handler(void *opaque,
> >>+                                       int running,
> >>+                                       RunState state)
> >>+{
> >>+    NICState *nic = opaque;
> >>+    NetClientState *nc;
> >>+    int i, queues;
> >>+
> >>+    if (!running) {
> >>+        return;
> >>+    }
> >>+
> >>+    queues = MAX(1, nic->conf->peers.queues);
> >>+    for (i = 0; i<  queues; i++) {
> >>+        nc =&nic->ncs[i];
> >>+        if (nc->receive_disabled
> >>+            || (nc->info->can_receive&&  !nc->info->can_receive(nc))) {
> >>+            continue;
> >>+        }
> >>+        qemu_flush_queued_packets(nc);
> >>+    }
> >>+}
> >>+
> >>  NICState *qemu_new_nic(NetClientInfo *info,
> >>                         NICConf *conf,
> >>                         const char *model,
> >>@@ -259,6 +282,8 @@ NICState *qemu_new_nic(NetClientInfo *info,
> >>      nic->ncs = (void *)nic + info->size;
> >>      nic->conf = conf;
> >>      nic->opaque = opaque;
> >>+    nic->vmstate = qemu_add_vm_change_state_handler(nic_vmstate_change_handler,
> >>+                                                    nic);
> >>
> >>      for (i = 0; i<  queues; i++) {
> >>          qemu_net_client_setup(&nic->ncs[i], info, peers[i], model, name,
> >>@@ -379,6 +404,7 @@ void qemu_del_nic(NICState *nic)
> >>          qemu_free_net_client(nc);
> >>      }
> >>
> >>+    qemu_del_vm_change_state_handler(nic->vmstate);
> >>      g_free(nic);
> >>  }
> >>
> >>@@ -452,6 +478,12 @@ void qemu_set_vnet_hdr_len(NetClientState *nc, int len)
> >>
> >>  int qemu_can_send_packet(NetClientState *sender)
> >>  {
> >>+    int vmstat = runstate_is_running();
> >>+
> >>+    if (!vmstat) {
> >>+        return 0;
> >>+    }
> >>+
> >>      if (!sender->peer) {
> >>          return 1;
> >>      }
> >>--
> >>1.7.12.4
> >>
> >>
> >>
> >--
> >Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK
> >
> >.
> >
> 
> 
> 
--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK


* Re: [Qemu-devel] [PATCH] net: Forbid dealing with packets when VM is not running
  2014-08-18  8:32   ` zhanghailiang
  2014-08-18  9:14     ` Jason Wang
@ 2014-08-19 12:29     ` Stefan Hajnoczi
  2014-08-20  2:19       ` zhanghailiang
  2014-08-20  3:17       ` Jason Wang
  1 sibling, 2 replies; 12+ messages in thread
From: Stefan Hajnoczi @ 2014-08-19 12:29 UTC (permalink / raw)
  To: zhanghailiang
  Cc: peter.maydell, mst, Jason Wang, luonengjun, peter.huangpeng,
	qemu-devel


On Mon, Aug 18, 2014 at 04:32:42PM +0800, zhanghailiang wrote:
> On 2014/8/18 14:55, Jason Wang wrote:
> >On 08/18/2014 12:46 PM, zhanghailiang wrote:
> >>diff --git a/net/net.c b/net/net.c
> >>index 6d930ea..21f0d48 100644
> >>--- a/net/net.c
> >>+++ b/net/net.c
> >>@@ -242,6 +242,29 @@ NetClientState *qemu_new_net_client(NetClientInfo *info,
> >>      return nc;
> >>  }
> >>
> >>+static void nic_vmstate_change_handler(void *opaque,
> >>+                                       int running,
> >>+                                       RunState state)
> >>+{
> >>+    NICState *nic = opaque;
> >>+    NetClientState *nc;
> >>+    int i, queues;
> >>+
> >>+    if (!running) {
> >>+        return;
> >>+    }
> >>+
> >>+    queues = MAX(1, nic->conf->peers.queues);
> >>+    for (i = 0; i<  queues; i++) {
> >>+        nc =&nic->ncs[i];
> >>+        if (nc->receive_disabled
> >>+            || (nc->info->can_receive&&  !nc->info->can_receive(nc))) {
> >>+            continue;
> >>+        }
> >>+        qemu_flush_queued_packets(nc);
> >
> >How about simply purge the receive queue during stop? If ok, there's no
> >need to introduce extra vmstate change handler.
> >
> 
> I don't know whether it is OK to purge the receive packages, it was
> suggested by Stefan Hajnoczi, and i am waiting for his opinion .:)
> 
> I think we still need the extra vmstate change handler, Without the
> change handler, we don't know if the VM will go to stop and the time
> when to call qemu_purge_queued_packets.

qemu_flush_queued_packets() sets nc->receive_disabled = 0.  This may be
needed to get packets flowing again if ->receive() previously returned 0.

Purging the queue does not clear nc->receive_disabled, so it is not
enough.
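Paraphrasing the two helpers in net/net.c from memory (not verbatim), the
difference is:

    void qemu_flush_queued_packets(NetClientState *nc)
    {
        nc->receive_disabled = 0;                  /* re-enable delivery to nc */
        qemu_net_queue_flush(nc->incoming_queue);  /* and retry what was queued */
    }

    void qemu_purge_queued_packets(NetClientState *nc)
    {
        if (nc->peer) {
            /* only drops packets nc queued at its peer;
             * receive_disabled is left untouched */
            qemu_net_queue_purge(nc->peer->incoming_queue, nc);
        }
    }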



* Re: [Qemu-devel] [PATCH] net: Forbid dealing with packets when VM is not running
  2014-08-18  9:14     ` Jason Wang
@ 2014-08-20  1:59       ` zhanghailiang
  0 siblings, 0 replies; 12+ messages in thread
From: zhanghailiang @ 2014-08-20  1:59 UTC (permalink / raw)
  To: Jason Wang
  Cc: peter.maydell, mst, luonengjun, peter.huangpeng, qemu-devel,
	stefanha

On 2014/8/18 17:14, Jason Wang wrote:
> On 08/18/2014 04:32 PM, zhanghailiang wrote:
>> On 2014/8/18 14:55, Jason Wang wrote:
>>> On 08/18/2014 12:46 PM, zhanghailiang wrote:
>>>> For all NICs(except virtio-net) emulated by qemu,
>>>> Such as e1000, rtl8139, pcnet and ne2k_pci,
>>>> Qemu can still receive packets when VM is not running.
>>>> If this happened in *migration's* last PAUSE VM stage,
>>>> The new dirty RAM related to the packets will be missed,
>>>> And this will lead serious network fault in VM.
>>>>
>>>> To avoid this, we forbid receiving packets in generic net code when
>>>> VM is not running. Also, when the runstate changes back to running,
>>>> we definitely need to flush queues to get packets flowing again.
>>>
>>> You probably need a better title since it does not cover this change.
>>>>
>>
>> Hmm, you are right, i will modify it, thanks.:)
>>
>>>> Here we implement this in the net layer:
>>>> (1) Judge the vm runstate in qemu_can_send_packet
>>>> (2) Add a member 'VMChangeStateEntry *vmstate' to struct NICState,
>>>> Which will listen for VM runstate changes.
>>>> (3) Register a handler function for VMstate change.
>>>> When vm changes back to running, we flush all queues in the callback
>>>> function.
>>>> (4) Remove checking vm state in virtio_net_can_receive
>>>>
>>>> Signed-off-by: zhanghailiang<zhang.zhanghailiang@huawei.com>
>>>> ---
>>>>    hw/net/virtio-net.c |  4 ----
>>>>    include/net/net.h   |  2 ++
>>>>    net/net.c           | 32 ++++++++++++++++++++++++++++++++
>>>>    3 files changed, 34 insertions(+), 4 deletions(-)
>>>>
>>>> diff --git a/hw/net/virtio-net.c b/hw/net/virtio-net.c
>>>> index 268eff9..287d762 100644
>>>> --- a/hw/net/virtio-net.c
>>>> +++ b/hw/net/virtio-net.c
>>>> @@ -839,10 +839,6 @@ static int
>>>> virtio_net_can_receive(NetClientState *nc)
>>>>        VirtIODevice *vdev = VIRTIO_DEVICE(n);
>>>>        VirtIONetQueue *q = virtio_net_get_subqueue(nc);
>>>>
>>>> -    if (!vdev->vm_running) {
>>>> -        return 0;
>>>> -    }
>>>> -
>>>>        if (nc->queue_index>= n->curr_queues) {
>>>>            return 0;
>>>>        }
>>>> diff --git a/include/net/net.h b/include/net/net.h
>>>> index ed594f9..a294277 100644
>>>> --- a/include/net/net.h
>>>> +++ b/include/net/net.h
>>>> @@ -8,6 +8,7 @@
>>>>    #include "net/queue.h"
>>>>    #include "migration/vmstate.h"
>>>>    #include "qapi-types.h"
>>>> +#include "sysemu/sysemu.h"
>>>>
>>>>    #define MAX_QUEUE_NUM 1024
>>>>
>>>> @@ -96,6 +97,7 @@ typedef struct NICState {
>>>>        NICConf *conf;
>>>>        void *opaque;
>>>>        bool peer_deleted;
>>>> +    VMChangeStateEntry *vmstate;
>>>>    } NICState;
>>>>
>>>>    NetClientState *qemu_find_netdev(const char *id);
>>>> diff --git a/net/net.c b/net/net.c
>>>> index 6d930ea..21f0d48 100644
>>>> --- a/net/net.c
>>>> +++ b/net/net.c
>>>> @@ -242,6 +242,29 @@ NetClientState
>>>> *qemu_new_net_client(NetClientInfo *info,
>>>>        return nc;
>>>>    }
>>>>
>>>> +static void nic_vmstate_change_handler(void *opaque,
>>>> +                                       int running,
>>>> +                                       RunState state)
>>>> +{
>>>> +    NICState *nic = opaque;
>>>> +    NetClientState *nc;
>>>> +    int i, queues;
>>>> +
>>>> +    if (!running) {
>>>> +        return;
>>>> +    }
>>>> +
>>>> +    queues = MAX(1, nic->conf->peers.queues);
>>>> +    for (i = 0; i<   queues; i++) {
>>>> +        nc =&nic->ncs[i];
>>>> +        if (nc->receive_disabled
>>>> +            || (nc->info->can_receive&&
>>>> !nc->info->can_receive(nc))) {
>>>> +            continue;
>>>> +        }
>>>> +        qemu_flush_queued_packets(nc);
>>>
>>> How about simply purge the receive queue during stop? If ok, there's no
>>> need to introduce extra vmstate change handler.
>>>
>>
>> I don't know whether it is OK to purge the receive packages, it was
>> suggested by Stefan Hajnoczi, and i am waiting for his opinion .:)
>>
>> I think we still need the extra vmstate change handler, Without the
>> change handler, we don't know if the VM will go to stop and the time
>> when to call qemu_purge_queued_packets.
>>
>
> Or you can do it in do_vm_stop().

Actually, the callback function is called from do_vm_stop indirectly:
do_vm_stop ---> vm_state_notify ---> e->cb(e->opaque, running, state)
And I think using the callbacks is more graceful. :)
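That is, roughly (simplified from vl.c, from memory):

    void vm_state_notify(int running, RunState state)
    {
        VMChangeStateEntry *e;

        QLIST_FOREACH(e, &vm_change_state_head, entries) {
            e->cb(e->opaque, running, state); /* includes nic_vmstate_change_handler() */
        }
    }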

>>>> +    }
>>>> +}
>>>> +
>>>>    NICState *qemu_new_nic(NetClientInfo *info,
>>>>                           NICConf *conf,
>>>>                           const char *model,
>>>> @@ -259,6 +282,8 @@ NICState *qemu_new_nic(NetClientInfo *info,
>>>>        nic->ncs = (void *)nic + info->size;
>>>>        nic->conf = conf;
>>>>        nic->opaque = opaque;
>>>> +    nic->vmstate =
>>>> qemu_add_vm_change_state_handler(nic_vmstate_change_handler,
>>>> +                                                    nic);
>>>>
>>>
>>> Does this depend on other vm state change handler to be called first? I
>>> mean virtio has its own vmstate_change handler and which seems to be
>>> called after this. Is this an issue?
>>>
>>
>> Yes, it is. The check vm state in virtio-net is unnecessary,
>> Actually it will prevent the flushing process, this is why we
>> do step 4 "Remove checking vm state in virtio_net_can_receive".
>
> How about other handlers (especially kvm/xen specific ones)? If not,
> looks like vm_start() is a more safer place since all handlers were
> called before.
>>
>> Besides, i think it is OK to do common things in vmstate_change handler
>> of generic net layer and do private things in their own vmstate_change
>> handlers. :)
>
> This is true only if there's no dependency. Virtio has a generic vmstate
> change handler, a subtle change of your patch is even if vhost is
> enabled, during vm start qemu will still process packets since you can
> qemu_flush_queued_packets() before vhost_net is started (since virtio
> vmstate change handler is called after). So probably we need only do
> purging which can eliminate the processing during vm start.

Hmm, I will check whether this patch has side effects for vhost_net. ;)

Thanks
zhanghailiang

>>
>>>>        for (i = 0; i<   queues; i++) {
>>>>            qemu_net_client_setup(&nic->ncs[i], info, peers[i], model,
>>>> name,
>>>> @@ -379,6 +404,7 @@ void qemu_del_nic(NICState *nic)
>>>>            qemu_free_net_client(nc);
>>>>        }
>>>>
>>>> +    qemu_del_vm_change_state_handler(nic->vmstate);
>>>>        g_free(nic);
>>>>    }
>>>>
>>>> @@ -452,6 +478,12 @@ void qemu_set_vnet_hdr_len(NetClientState *nc,
>>>> int len)
>>>>
>>>>    int qemu_can_send_packet(NetClientState *sender)
>>>>    {
>>>> +    int vmstat = runstate_is_running();
>>>> +
>>>> +    if (!vmstat) {
>>>> +        return 0;
>>>> +    }
>>>> +
>>>>        if (!sender->peer) {
>>>>            return 1;
>>>>        }
>>>
>>>
>>> .
>>>
>>
>>
>
>
> .
>


* Re: [Qemu-devel] [PATCH] net: Forbid dealing with packets when VM is not running
  2014-08-19 12:29     ` Stefan Hajnoczi
@ 2014-08-20  2:19       ` zhanghailiang
  2014-08-20  3:17       ` Jason Wang
  1 sibling, 0 replies; 12+ messages in thread
From: zhanghailiang @ 2014-08-20  2:19 UTC (permalink / raw)
  To: Stefan Hajnoczi
  Cc: peter.maydell, mst, Jason Wang, luonengjun, peter.huangpeng,
	qemu-devel

On 2014/8/19 20:29, Stefan Hajnoczi wrote:
> On Mon, Aug 18, 2014 at 04:32:42PM +0800, zhanghailiang wrote:
>> On 2014/8/18 14:55, Jason Wang wrote:
>>> On 08/18/2014 12:46 PM, zhanghailiang wrote:
>>>> diff --git a/net/net.c b/net/net.c
>>>> index 6d930ea..21f0d48 100644
>>>> --- a/net/net.c
>>>> +++ b/net/net.c
>>>> @@ -242,6 +242,29 @@ NetClientState *qemu_new_net_client(NetClientInfo *info,
>>>>       return nc;
>>>>   }
>>>>
>>>> +static void nic_vmstate_change_handler(void *opaque,
>>>> +                                       int running,
>>>> +                                       RunState state)
>>>> +{
>>>> +    NICState *nic = opaque;
>>>> +    NetClientState *nc;
>>>> +    int i, queues;
>>>> +
>>>> +    if (!running) {
>>>> +        return;
>>>> +    }
>>>> +
>>>> +    queues = MAX(1, nic->conf->peers.queues);
>>>> +    for (i = 0; i<   queues; i++) {
>>>> +        nc =&nic->ncs[i];
>>>> +        if (nc->receive_disabled
>>>> +            || (nc->info->can_receive&&   !nc->info->can_receive(nc))) {
>>>> +            continue;
>>>> +        }
>>>> +        qemu_flush_queued_packets(nc);
>>>
>>> How about simply purge the receive queue during stop? If ok, there's no
>>> need to introduce extra vmstate change handler.
>>>
>>
>> I don't know whether it is OK to purge the receive packages, it was
>> suggested by Stefan Hajnoczi, and i am waiting for his opinion .:)
>>
>> I think we still need the extra vmstate change handler, Without the
>> change handler, we don't know if the VM will go to stop and the time
>> when to call qemu_purge_queued_packets.
>
> qemu_flush_queued_packets() sets nc->received_disabled = 0.  This may be
> needed to get packets flowing again if ->receive() previously returned 0.
>
> Purging the queue does not clear nc->received_disabled so it is not
> enough.

So this is the reason why we need to flush after the VM runstate comes
back to running again. :)

Oh, in the above patch, we don't need to check nc->receive_disabled
before qemu_flush_queued_packets(nc), but we still need to check
nc->info->can_receive(nc), is that right?

Before sending another patch, I will check whether this patch has
side effects when we use vhost_net.

Thanks,
zhanghailiang


* Re: [Qemu-devel] [PATCH] net: Forbid dealing with packets when VM is not running
  2014-08-19 12:29     ` Stefan Hajnoczi
  2014-08-20  2:19       ` zhanghailiang
@ 2014-08-20  3:17       ` Jason Wang
  2014-08-22 10:08         ` Stefan Hajnoczi
  1 sibling, 1 reply; 12+ messages in thread
From: Jason Wang @ 2014-08-20  3:17 UTC (permalink / raw)
  To: Stefan Hajnoczi, zhanghailiang
  Cc: qemu-devel, peter.maydell, luonengjun, peter.huangpeng, mst

On 08/19/2014 08:29 PM, Stefan Hajnoczi wrote:
> On Mon, Aug 18, 2014 at 04:32:42PM +0800, zhanghailiang wrote:
>> On 2014/8/18 14:55, Jason Wang wrote:
>>> On 08/18/2014 12:46 PM, zhanghailiang wrote:
>>>> diff --git a/net/net.c b/net/net.c
>>>> index 6d930ea..21f0d48 100644
>>>> --- a/net/net.c
>>>> +++ b/net/net.c
>>>> @@ -242,6 +242,29 @@ NetClientState *qemu_new_net_client(NetClientInfo *info,
>>>>      return nc;
>>>>  }
>>>>
>>>> +static void nic_vmstate_change_handler(void *opaque,
>>>> +                                       int running,
>>>> +                                       RunState state)
>>>> +{
>>>> +    NICState *nic = opaque;
>>>> +    NetClientState *nc;
>>>> +    int i, queues;
>>>> +
>>>> +    if (!running) {
>>>> +        return;
>>>> +    }
>>>> +
>>>> +    queues = MAX(1, nic->conf->peers.queues);
>>>> +    for (i = 0; i<  queues; i++) {
>>>> +        nc =&nic->ncs[i];
>>>> +        if (nc->receive_disabled
>>>> +            || (nc->info->can_receive&&  !nc->info->can_receive(nc))) {
>>>> +            continue;
>>>> +        }
>>>> +        qemu_flush_queued_packets(nc);
>>> How about simply purge the receive queue during stop? If ok, there's no
>>> need to introduce extra vmstate change handler.
>>>
>> I don't know whether it is OK to purge the receive packages, it was
>> suggested by Stefan Hajnoczi, and i am waiting for his opinion .:)
>>
>> I think we still need the extra vmstate change handler, Without the
>> change handler, we don't know if the VM will go to stop and the time
>> when to call qemu_purge_queued_packets.
> qemu_flush_queued_packets() sets nc->received_disabled = 0.  This may be
> needed to get packets flowing again if ->receive() previously returned 0.
>
> Purging the queue does not clear nc->received_disabled so it is not
> enough.

I'm confused.

virtio_net_receive() only returns 0 when it does not have enough rx
buffers. In this case, it just waits for the guest to refill and kick
again. Its rx kick handler will then call qemu_flush_queued_packets() to
clear nc->receive_disabled. The same goes for usbnet and others.

If nc->receive_disabled is 1, it means there is no available rx buffer; we
need to wait for the guest to do the processing and refilling. So why do
we need to clear it after the vm has started?
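The path I mean, simplified from hw/net/virtio-net.c (not the exact code):

    /* guest refilled the rx ring and kicked: retry delivery; the flush
     * also clears nc->receive_disabled */
    static void virtio_net_handle_rx(VirtIODevice *vdev, VirtQueue *vq)
    {
        VirtIONet *n = VIRTIO_NET(vdev);
        int queue_index = vq2q(virtio_get_queue_index(vq));

        qemu_flush_queued_packets(qemu_get_subqueue(n->nic, queue_index));
    }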


* Re: [Qemu-devel] [PATCH] net: Forbid dealing with packets when VM is not running
  2014-08-20  3:17       ` Jason Wang
@ 2014-08-22 10:08         ` Stefan Hajnoczi
  0 siblings, 0 replies; 12+ messages in thread
From: Stefan Hajnoczi @ 2014-08-22 10:08 UTC (permalink / raw)
  To: Jason Wang
  Cc: peter.maydell, zhanghailiang, mst, luonengjun, qemu-devel,
	peter.huangpeng


On Wed, Aug 20, 2014 at 11:17:56AM +0800, Jason Wang wrote:
> On 08/19/2014 08:29 PM, Stefan Hajnoczi wrote:
> > On Mon, Aug 18, 2014 at 04:32:42PM +0800, zhanghailiang wrote:
> >> On 2014/8/18 14:55, Jason Wang wrote:
> >>> On 08/18/2014 12:46 PM, zhanghailiang wrote:
> >>>> diff --git a/net/net.c b/net/net.c
> >>>> index 6d930ea..21f0d48 100644
> >>>> --- a/net/net.c
> >>>> +++ b/net/net.c
> >>>> @@ -242,6 +242,29 @@ NetClientState *qemu_new_net_client(NetClientInfo *info,
> >>>>      return nc;
> >>>>  }
> >>>>
> >>>> +static void nic_vmstate_change_handler(void *opaque,
> >>>> +                                       int running,
> >>>> +                                       RunState state)
> >>>> +{
> >>>> +    NICState *nic = opaque;
> >>>> +    NetClientState *nc;
> >>>> +    int i, queues;
> >>>> +
> >>>> +    if (!running) {
> >>>> +        return;
> >>>> +    }
> >>>> +
> >>>> +    queues = MAX(1, nic->conf->peers.queues);
> >>>> +    for (i = 0; i<  queues; i++) {
> >>>> +        nc =&nic->ncs[i];
> >>>> +        if (nc->receive_disabled
> >>>> +            || (nc->info->can_receive&&  !nc->info->can_receive(nc))) {
> >>>> +            continue;
> >>>> +        }
> >>>> +        qemu_flush_queued_packets(nc);
> >>> How about simply purge the receive queue during stop? If ok, there's no
> >>> need to introduce extra vmstate change handler.
> >>>
> >> I don't know whether it is OK to purge the receive packages, it was
> >> suggested by Stefan Hajnoczi, and i am waiting for his opinion .:)
> >>
> >> I think we still need the extra vmstate change handler, Without the
> >> change handler, we don't know if the VM will go to stop and the time
> >> when to call qemu_purge_queued_packets.
> > qemu_flush_queued_packets() sets nc->received_disabled = 0.  This may be
> > needed to get packets flowing again if ->receive() previously returned 0.
> >
> > Purging the queue does not clear nc->received_disabled so it is not
> > enough.
> 
> Confused.
> 
> virtio_net_receive() only returns 0 when it does not have enough rx
> buffers. In this case, it just wait for the guest to refill and kick
> again. Its rx kick handler will call qemu_flush_queued_packets() to
> clear nc->received_disabled. So does usbnet and others.
> 
> If nic_received_disabled is 1, it means the no available rx buffer. We
> need wait guest to do the processing and refilling. Then why need clear
> it after vm was started?

I took a look at the other emulated NICs; they don't return 0 from
->receive().

I think you are right, we don't need to worry about flushing.

Stefan


