* Re: [Qemu-devel] [PATCH 2/3] net: Flush queues when runstate changes back to running
2014-08-14 6:13 ` [Qemu-devel] [PATCH 2/3] net: Flush queues when runstate changes back to running zhanghailiang
@ 2014-08-14 7:12 ` Gonglei (Arei)
2014-08-14 8:24 ` zhanghailiang
2014-08-14 10:05 ` Michael S. Tsirkin
2014-08-14 10:09 ` Michael S. Tsirkin
2 siblings, 1 reply; 10+ messages in thread
From: Gonglei (Arei) @ 2014-08-14 7:12 UTC (permalink / raw)
To: Zhanghailiang, qemu-devel@nongnu.org
Cc: peter.maydell@linaro.org, stefanha@redhat.com, mst@redhat.com,
Luonengjun, Huangpeng (Peter), aliguori@amazon.com,
akong@redhat.com
Hi,
> Subject: [Qemu-devel] [PATCH 2/3] net: Flush queues when runstate changes
> back to running
>
> When the runstate changes back to running, we definitely need to flush
> queues to get packets flowing again.
>
> Here we implement this in the net layer:
> (1) add a member 'VMChangeStateEntry *vmstate' to struct NICState,
> which will listen for VM runstate changes.
Will this change block migration between different QEMU versions?
> (2) Register a handler function for VM runstate changes.
> When the VM changes back to running, we flush all queues in the callback function.
>
> Signed-off-by: zhanghailiang <zhang.zhanghailiang@huawei.com>
> ---
> include/net/net.h | 1 +
> net/net.c | 26 ++++++++++++++++++++++++++
> 2 files changed, 27 insertions(+)
>
> diff --git a/include/net/net.h b/include/net/net.h
> index 312f728..a294277 100644
> --- a/include/net/net.h
> +++ b/include/net/net.h
> @@ -97,6 +97,7 @@ typedef struct NICState {
> NICConf *conf;
> void *opaque;
> bool peer_deleted;
> + VMChangeStateEntry *vmstate;
> } NICState;
>
> NetClientState *qemu_find_netdev(const char *id);
> diff --git a/net/net.c b/net/net.c
> index 5bb2821..506e58f 100644
> --- a/net/net.c
> +++ b/net/net.c
> @@ -242,6 +242,29 @@ NetClientState *qemu_new_net_client(NetClientInfo
> *info,
> return nc;
> }
>
> +static void nic_vmstate_change_handler(void *opaque,
> + int running,
> + RunState state)
> +{
> + NICState *nic = opaque;
> + NetClientState *nc;
> + int i, queues;
> +
> + if (!running) {
> + return;
> + }
> +
> + queues = MAX(1, nic->conf->peers.queues);
^
A superfluous space.
Best regards,
-Gonglei
^ permalink raw reply [flat|nested] 10+ messages in thread
* Re: [Qemu-devel] [PATCH 2/3] net: Flush queues when runstate changes back to running
2014-08-14 7:12 ` Gonglei (Arei)
@ 2014-08-14 8:24 ` zhanghailiang
0 siblings, 0 replies; 10+ messages in thread
From: zhanghailiang @ 2014-08-14 8:24 UTC (permalink / raw)
To: Gonglei (Arei)
Cc: peter.maydell@linaro.org, aliguori@amazon.com, mst@redhat.com,
Luonengjun, Huangpeng (Peter), qemu-devel@nongnu.org,
stefanha@redhat.com, akong@redhat.com
On 2014/8/14 15:12, Gonglei (Arei) wrote:
> Hi,
>
>> Subject: [Qemu-devel] [PATCH 2/3] net: Flush queues when runstate changes
>> back to running
>>
>> When the runstate changes back to running, we definitely need to flush
>> queues to get packets flowing again.
>>
>> Here we implement this in the net layer:
>> (1) add a member 'VMChangeStateEntry *vmstate' to struct NICState,
>> which will listen for VM runstate changes.
>
> Will this change block migration between different QEMU versions?
>
No. I have tested migration between qemu-1.5.1 and a new QEMU with
these patches merged; everything seems OK.
>> (2) Register a handler function for VM runstate changes.
>> When the VM changes back to running, we flush all queues in the callback function.
>>
>> Signed-off-by: zhanghailiang <zhang.zhanghailiang@huawei.com>
>> ---
>> include/net/net.h | 1 +
>> net/net.c | 26 ++++++++++++++++++++++++++
>> 2 files changed, 27 insertions(+)
>>
>> diff --git a/include/net/net.h b/include/net/net.h
>> index 312f728..a294277 100644
>> --- a/include/net/net.h
>> +++ b/include/net/net.h
>> @@ -97,6 +97,7 @@ typedef struct NICState {
>> NICConf *conf;
>> void *opaque;
>> bool peer_deleted;
>> + VMChangeStateEntry *vmstate;
>> } NICState;
>>
>> NetClientState *qemu_find_netdev(const char *id);
>> diff --git a/net/net.c b/net/net.c
>> index 5bb2821..506e58f 100644
>> --- a/net/net.c
>> +++ b/net/net.c
>> @@ -242,6 +242,29 @@ NetClientState *qemu_new_net_client(NetClientInfo
>> *info,
>> return nc;
>> }
>>
>> +static void nic_vmstate_change_handler(void *opaque,
>> + int running,
>> + RunState state)
>> +{
>> + NICState *nic = opaque;
>> + NetClientState *nc;
>> + int i, queues;
>> +
>> + if (!running) {
>> + return;
>> + }
>> +
>> + queues = MAX(1, nic->conf->peers.queues);
> ^
> A superfluous space.
>
Yes, good catch, I will fix this. Thanks.
* Re: [Qemu-devel] [PATCH 2/3] net: Flush queues when runstate changes back to running
2014-08-14 6:13 ` [Qemu-devel] [PATCH 2/3] net: Flush queues when runstate changes back to running zhanghailiang
2014-08-14 7:12 ` Gonglei (Arei)
@ 2014-08-14 10:05 ` Michael S. Tsirkin
2014-08-18 0:45 ` zhanghailiang
2014-08-14 10:09 ` Michael S. Tsirkin
2 siblings, 1 reply; 10+ messages in thread
From: Michael S. Tsirkin @ 2014-08-14 10:05 UTC (permalink / raw)
To: zhanghailiang
Cc: peter.maydell, aliguori, luonengjun, peter.huangpeng, qemu-devel,
stefanha, akong
On Thu, Aug 14, 2014 at 02:13:57PM +0800, zhanghailiang wrote:
> When the runstate changes back to running, we definitely need to flush
> queues to get packets flowing again.
>
> Here we implement this in the net layer:
> (1) add a member 'VMChangeStateEntry *vmstate' to struct NICState,
> which will listen for VM runstate changes.
> (2) Register a handler function for VM runstate changes.
> When the VM changes back to running, we flush all queues in the callback function.
OK, but squash this together with patch 1; otherwise things are
broken after patch 1 alone, which breaks git bisect.
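As a concrete sketch of the squash workflow being requested here (a toy repository with hypothetical file names and commit messages, not the actual series), the second patch can be folded into the first non-interactively, so the history never contains an intermediate commit that is broken on its own:

```shell
#!/bin/sh
# Toy repo: fold "patch 2" into "patch 1" so no intermediate commit
# breaks the build (this is what keeps git bisect usable).
set -e
dir=$(mktemp -d)
cd "$dir"
git init -q
git -c user.email=t@example.com -c user.name=t commit -q --allow-empty -m "base"
echo one > f && git add f
git -c user.email=t@example.com -c user.name=t commit -q -m "patch 1"
echo two >> f && git add f
git -c user.email=t@example.com -c user.name=t commit -q -m "patch 2"
# Equivalent of marking the second commit "squash" in an interactive
# rebase: rewind two commits, keep the combined changes staged, and
# commit them as one.
git reset --soft HEAD~2
git -c user.email=t@example.com -c user.name=t commit -q -m "patches 1+2 combined"
git rev-list --count HEAD    # 2 commits remain: "base" plus the combined one
```

The same effect is usually achieved with `git rebase -i` and changing `pick` to `squash` (or `fixup`) on the line of the commit to fold in.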
> Signed-off-by: zhanghailiang <zhang.zhanghailiang@huawei.com>
> ---
> include/net/net.h | 1 +
> net/net.c | 26 ++++++++++++++++++++++++++
> 2 files changed, 27 insertions(+)
>
> diff --git a/include/net/net.h b/include/net/net.h
> index 312f728..a294277 100644
> --- a/include/net/net.h
> +++ b/include/net/net.h
> @@ -97,6 +97,7 @@ typedef struct NICState {
> NICConf *conf;
> void *opaque;
> bool peer_deleted;
> + VMChangeStateEntry *vmstate;
> } NICState;
>
> NetClientState *qemu_find_netdev(const char *id);
> diff --git a/net/net.c b/net/net.c
> index 5bb2821..506e58f 100644
> --- a/net/net.c
> +++ b/net/net.c
> @@ -242,6 +242,29 @@ NetClientState *qemu_new_net_client(NetClientInfo *info,
> return nc;
> }
>
> +static void nic_vmstate_change_handler(void *opaque,
> + int running,
> + RunState state)
> +{
> + NICState *nic = opaque;
> + NetClientState *nc;
> + int i, queues;
> +
> + if (!running) {
> + return;
> + }
> +
> + queues = MAX(1, nic->conf->peers.queues);
> + for (i = 0; i < queues; i++) {
> + nc = &nic->ncs[i];
> + if (nc->receive_disabled
> + || (nc->info->can_receive && !nc->info->can_receive(nc))) {
> + continue;
> + }
> + qemu_flush_queued_packets(nc);
> + }
> +}
> +
> NICState *qemu_new_nic(NetClientInfo *info,
> NICConf *conf,
> const char *model,
> @@ -259,6 +282,8 @@ NICState *qemu_new_nic(NetClientInfo *info,
> nic->ncs = (void *)nic + info->size;
> nic->conf = conf;
> nic->opaque = opaque;
> + nic->vmstate = qemu_add_vm_change_state_handler(nic_vmstate_change_handler,
> + nic);
>
> for (i = 0; i < queues; i++) {
> qemu_net_client_setup(&nic->ncs[i], info, peers[i], model, name,
> @@ -379,6 +404,7 @@ void qemu_del_nic(NICState *nic)
> qemu_free_net_client(nc);
> }
>
> + qemu_del_vm_change_state_handler(nic->vmstate);
> g_free(nic);
> }
>
> --
> 1.7.12.4
>
* Re: [Qemu-devel] [PATCH 2/3] net: Flush queues when runstate changes back to running
2014-08-14 10:05 ` Michael S. Tsirkin
@ 2014-08-18 0:45 ` zhanghailiang
0 siblings, 0 replies; 10+ messages in thread
From: zhanghailiang @ 2014-08-18 0:45 UTC (permalink / raw)
To: Michael S. Tsirkin
Cc: peter.maydell, aliguori, luonengjun, peter.huangpeng, qemu-devel,
stefanha, akong
On 2014/8/14 18:05, Michael S. Tsirkin wrote:
> On Thu, Aug 14, 2014 at 02:13:57PM +0800, zhanghailiang wrote:
>> When the runstate changes back to running, we definitely need to flush
>> queues to get packets flowing again.
>>
>> Here we implement this in the net layer:
>> (1) add a member 'VMChangeStateEntry *vmstate' to struct NICState,
>> which will listen for VM runstate changes.
>> (2) Register a handler function for VM runstate changes.
>> When the VM changes back to running, we flush all queues in the callback function.
>
> OK, but squash this together with patch 1; otherwise things are
> broken after patch 1 alone, which breaks git bisect.
>
Hmm, I will put them together into one patch. Thanks for your review.
>> Signed-off-by: zhanghailiang <zhang.zhanghailiang@huawei.com>
>> ---
>> include/net/net.h | 1 +
>> net/net.c | 26 ++++++++++++++++++++++++++
>> 2 files changed, 27 insertions(+)
>>
>> diff --git a/include/net/net.h b/include/net/net.h
>> index 312f728..a294277 100644
>> --- a/include/net/net.h
>> +++ b/include/net/net.h
>> @@ -97,6 +97,7 @@ typedef struct NICState {
>> NICConf *conf;
>> void *opaque;
>> bool peer_deleted;
>> + VMChangeStateEntry *vmstate;
>> } NICState;
>>
>> NetClientState *qemu_find_netdev(const char *id);
>> diff --git a/net/net.c b/net/net.c
>> index 5bb2821..506e58f 100644
>> --- a/net/net.c
>> +++ b/net/net.c
>> @@ -242,6 +242,29 @@ NetClientState *qemu_new_net_client(NetClientInfo *info,
>> return nc;
>> }
>>
>> +static void nic_vmstate_change_handler(void *opaque,
>> + int running,
>> + RunState state)
>> +{
>> + NICState *nic = opaque;
>> + NetClientState *nc;
>> + int i, queues;
>> +
>> + if (!running) {
>> + return;
>> + }
>> +
>> + queues = MAX(1, nic->conf->peers.queues);
>> + for (i = 0; i < queues; i++) {
>> + nc = &nic->ncs[i];
>> + if (nc->receive_disabled
>> + || (nc->info->can_receive && !nc->info->can_receive(nc))) {
>> + continue;
>> + }
>> + qemu_flush_queued_packets(nc);
>> + }
>> +}
>> +
>> NICState *qemu_new_nic(NetClientInfo *info,
>> NICConf *conf,
>> const char *model,
>> @@ -259,6 +282,8 @@ NICState *qemu_new_nic(NetClientInfo *info,
>> nic->ncs = (void *)nic + info->size;
>> nic->conf = conf;
>> nic->opaque = opaque;
>> + nic->vmstate = qemu_add_vm_change_state_handler(nic_vmstate_change_handler,
>> + nic);
>>
>> for (i = 0; i < queues; i++) {
>> qemu_net_client_setup(&nic->ncs[i], info, peers[i], model, name,
>> @@ -379,6 +404,7 @@ void qemu_del_nic(NICState *nic)
>> qemu_free_net_client(nc);
>> }
>>
>> + qemu_del_vm_change_state_handler(nic->vmstate);
>> g_free(nic);
>> }
>>
>> --
>> 1.7.12.4
>>
>
> .
>
* Re: [Qemu-devel] [PATCH 2/3] net: Flush queues when runstate changes back to running
2014-08-14 6:13 ` [Qemu-devel] [PATCH 2/3] net: Flush queues when runstate changes back to running zhanghailiang
2014-08-14 7:12 ` Gonglei (Arei)
2014-08-14 10:05 ` Michael S. Tsirkin
@ 2014-08-14 10:09 ` Michael S. Tsirkin
2014-08-18 0:46 ` zhanghailiang
2 siblings, 1 reply; 10+ messages in thread
From: Michael S. Tsirkin @ 2014-08-14 10:09 UTC (permalink / raw)
To: zhanghailiang
Cc: peter.maydell, aliguori, luonengjun, peter.huangpeng, qemu-devel,
stefanha, akong
On Thu, Aug 14, 2014 at 02:13:57PM +0800, zhanghailiang wrote:
> When the runstate changes back to running, we definitely need to flush
> queues to get packets flowing again.
>
> Here we implement this in the net layer:
> (1) add a member 'VMChangeStateEntry *vmstate' to struct NICState,
> which will listen for VM runstate changes.
> (2) Register a handler function for VM runstate changes.
> When the VM changes back to running, we flush all queues in the callback function.
>
> Signed-off-by: zhanghailiang <zhang.zhanghailiang@huawei.com>
Hmm, looks like the virtio patch will need to be squashed as well?
> ---
> include/net/net.h | 1 +
> net/net.c | 26 ++++++++++++++++++++++++++
> 2 files changed, 27 insertions(+)
>
> diff --git a/include/net/net.h b/include/net/net.h
> index 312f728..a294277 100644
> --- a/include/net/net.h
> +++ b/include/net/net.h
> @@ -97,6 +97,7 @@ typedef struct NICState {
> NICConf *conf;
> void *opaque;
> bool peer_deleted;
> + VMChangeStateEntry *vmstate;
> } NICState;
>
> NetClientState *qemu_find_netdev(const char *id);
> diff --git a/net/net.c b/net/net.c
> index 5bb2821..506e58f 100644
> --- a/net/net.c
> +++ b/net/net.c
> @@ -242,6 +242,29 @@ NetClientState *qemu_new_net_client(NetClientInfo *info,
> return nc;
> }
>
> +static void nic_vmstate_change_handler(void *opaque,
> + int running,
> + RunState state)
> +{
> + NICState *nic = opaque;
> + NetClientState *nc;
> + int i, queues;
> +
> + if (!running) {
> + return;
> + }
> +
> + queues = MAX(1, nic->conf->peers.queues);
> + for (i = 0; i < queues; i++) {
> + nc = &nic->ncs[i];
> + if (nc->receive_disabled
> + || (nc->info->can_receive && !nc->info->can_receive(nc))) {
> + continue;
> + }
> + qemu_flush_queued_packets(nc);
> + }
> +}
> +
> NICState *qemu_new_nic(NetClientInfo *info,
> NICConf *conf,
> const char *model,
> @@ -259,6 +282,8 @@ NICState *qemu_new_nic(NetClientInfo *info,
> nic->ncs = (void *)nic + info->size;
> nic->conf = conf;
> nic->opaque = opaque;
> + nic->vmstate = qemu_add_vm_change_state_handler(nic_vmstate_change_handler,
> + nic);
>
> for (i = 0; i < queues; i++) {
> qemu_net_client_setup(&nic->ncs[i], info, peers[i], model, name,
> @@ -379,6 +404,7 @@ void qemu_del_nic(NICState *nic)
> qemu_free_net_client(nc);
> }
>
> + qemu_del_vm_change_state_handler(nic->vmstate);
> g_free(nic);
> }
>
> --
> 1.7.12.4
>
* Re: [Qemu-devel] [PATCH 2/3] net: Flush queues when runstate changes back to running
2014-08-14 10:09 ` Michael S. Tsirkin
@ 2014-08-18 0:46 ` zhanghailiang
0 siblings, 0 replies; 10+ messages in thread
From: zhanghailiang @ 2014-08-18 0:46 UTC (permalink / raw)
To: Michael S. Tsirkin
Cc: peter.maydell, aliguori, luonengjun, peter.huangpeng, qemu-devel,
stefanha, akong
On 2014/8/14 18:09, Michael S. Tsirkin wrote:
> On Thu, Aug 14, 2014 at 02:13:57PM +0800, zhanghailiang wrote:
>> When the runstate changes back to running, we definitely need to flush
>> queues to get packets flowing again.
>>
>> Here we implement this in the net layer:
>> (1) add a member 'VMChangeStateEntry *vmstate' to struct NICState,
>> Which will listen for VM runstate changes.
>> (2) Register a handler function for VMstate change.
>> When vm changes back to running, we flush all queues in the callback function.
>>
>> Signed-off-by: zhanghailiang <zhang.zhanghailiang@huawei.com>
>
> Hmm, looks like the virtio patch will need to be squashed as well?
>
OK, thanks.
>> ---
>> include/net/net.h | 1 +
>> net/net.c | 26 ++++++++++++++++++++++++++
>> 2 files changed, 27 insertions(+)
>>
>> diff --git a/include/net/net.h b/include/net/net.h
>> index 312f728..a294277 100644
>> --- a/include/net/net.h
>> +++ b/include/net/net.h
>> @@ -97,6 +97,7 @@ typedef struct NICState {
>> NICConf *conf;
>> void *opaque;
>> bool peer_deleted;
>> + VMChangeStateEntry *vmstate;
>> } NICState;
>>
>> NetClientState *qemu_find_netdev(const char *id);
>> diff --git a/net/net.c b/net/net.c
>> index 5bb2821..506e58f 100644
>> --- a/net/net.c
>> +++ b/net/net.c
>> @@ -242,6 +242,29 @@ NetClientState *qemu_new_net_client(NetClientInfo *info,
>> return nc;
>> }
>>
>> +static void nic_vmstate_change_handler(void *opaque,
>> + int running,
>> + RunState state)
>> +{
>> + NICState *nic = opaque;
>> + NetClientState *nc;
>> + int i, queues;
>> +
>> + if (!running) {
>> + return;
>> + }
>> +
>> + queues = MAX(1, nic->conf->peers.queues);
>> + for (i = 0; i < queues; i++) {
>> + nc = &nic->ncs[i];
>> + if (nc->receive_disabled
>> + || (nc->info->can_receive && !nc->info->can_receive(nc))) {
>> + continue;
>> + }
>> + qemu_flush_queued_packets(nc);
>> + }
>> +}
>> +
>> NICState *qemu_new_nic(NetClientInfo *info,
>> NICConf *conf,
>> const char *model,
>> @@ -259,6 +282,8 @@ NICState *qemu_new_nic(NetClientInfo *info,
>> nic->ncs = (void *)nic + info->size;
>> nic->conf = conf;
>> nic->opaque = opaque;
>> + nic->vmstate = qemu_add_vm_change_state_handler(nic_vmstate_change_handler,
>> + nic);
>>
>> for (i = 0; i < queues; i++) {
>> qemu_net_client_setup(&nic->ncs[i], info, peers[i], model, name,
>> @@ -379,6 +404,7 @@ void qemu_del_nic(NICState *nic)
>> qemu_free_net_client(nc);
>> }
>>
>> + qemu_del_vm_change_state_handler(nic->vmstate);
>> g_free(nic);
>> }
>>
>> --
>> 1.7.12.4
>>
>
> .
>