* [PATCH net V2] vhost: net: switch to use data copy if pending DMAs exceed the limit
@ 2014-03-07 5:28 Jason Wang
2014-03-07 21:39 ` David Miller
` (2 more replies)
0 siblings, 3 replies; 9+ messages in thread
From: Jason Wang @ 2014-03-07 5:28 UTC (permalink / raw)
To: mst, kvm, virtio-dev, virtualization, netdev, linux-kernel; +Cc: Qin Chuanyu
We used to stop handling tx when the number of pending DMAs exceeded
VHOST_MAX_PEND. This was done to reduce the memory occupation of both
host and guest. But it was too aggressive in some cases, since any
delay or blocking of a single packet may delay or block the whole guest
transmission. Consider the following setup:
+-----+      +-----+
| VM1 |      | VM2 |
+--+--+      +--+--+
   |            |
+--+--+      +--+--+
| tap0|      | tap1|
+--+--+      +--+--+
   |            |
pfifo_fast  htb(10Mbit/s)
   |            |
+--+------------+----+
|       bridge       |
+--+-----------------+
   |
pfifo_fast
   |
+-----+
| eth0|(100Mbit/s)
+-----+
- start two VMs and connect them to a bridge
- add a physical NIC (100Mbit/s) to that bridge
- set up htb on tap1 and limit its throughput to 10Mbit/s
- run two netperf instances at the same time: one from VM1 to VM2, the
  other from VM1 to an external host through eth0
- the results show that not only was the VM1-to-VM2 traffic throttled,
  but the VM1-to-external-host traffic through eth0 was throttled as well
This is because the delay added by htb may delay the completion of DMAs
and cause the pending DMAs for tap0 to exceed the limit
(VHOST_MAX_PEND). In that case vhost stops handling tx requests until
htb sends some packets. The problem is that all packet transmission is
blocked, even for packets that do not go to VM2.
We can solve this issue by relaxing the limit a little bit: switching to
data copy instead of stopping tx when the number of pending DMAs exceeds
half of the vq size. This is safe because:
- the number of pending DMAs is still limited (to half of the vq size)
- out-of-order completion during the mode switch makes sure that most of
  the tx buffers are freed in time in the guest
So even if about 50% of packets are delayed in the zero-copy case, vhost
can continue the transmission through data copy.
Test result:
Before this patch:
VM1 to VM2 throughput is 9.3Mbit/s
VM1 to External throughput is 40Mbit/s
CPU utilization is 7%
After this patch:
VM1 to VM2 throughput is 9.3Mbit/s
VM1 to External throughput is 93Mbit/s
CPU utilization is 16%
A complete performance test on 40GbE shows no obvious change in either
throughput or CPU utilization with this patch.
This patch only solves the issue for unlimited sndbuf. We still need a
solution for limited sndbuf.
Cc: Michael S. Tsirkin <mst@redhat.com>
Cc: Qin Chuanyu <qinchuanyu@huawei.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
---
Changes from V1:
- Remove VHOST_MAX_PEND and switch to use half of the vq size as the limit
- Add cpu utilization in commit log
---
drivers/vhost/net.c | 19 +++++++------------
1 file changed, 7 insertions(+), 12 deletions(-)
diff --git a/drivers/vhost/net.c b/drivers/vhost/net.c
index a0fa5de..2925e9a 100644
--- a/drivers/vhost/net.c
+++ b/drivers/vhost/net.c
@@ -38,8 +38,6 @@ MODULE_PARM_DESC(experimental_zcopytx, "Enable Zero Copy TX;"
  * Using this limit prevents one virtqueue from starving others. */
 #define VHOST_NET_WEIGHT 0x80000
 
-/* MAX number of TX used buffers for outstanding zerocopy */
-#define VHOST_MAX_PEND 128
 #define VHOST_GOODCOPY_LEN 256
 
 /*
@@ -345,7 +343,7 @@ static void handle_tx(struct vhost_net *net)
         .msg_flags = MSG_DONTWAIT,
     };
     size_t len, total_len = 0;
-    int err;
+    int err, num_pends;
     size_t hdr_size;
     struct socket *sock;
     struct vhost_net_ubuf_ref *uninitialized_var(ubufs);
@@ -366,13 +364,6 @@ static void handle_tx(struct vhost_net *net)
         if (zcopy)
             vhost_zerocopy_signal_used(net, vq);
 
-        /* If more outstanding DMAs, queue the work.
-         * Handle upend_idx wrap around
-         */
-        if (unlikely((nvq->upend_idx + vq->num - VHOST_MAX_PEND)
-                 % UIO_MAXIOV == nvq->done_idx))
-            break;
-
         head = vhost_get_vq_desc(&net->dev, vq, vq->iov,
                      ARRAY_SIZE(vq->iov),
                      &out, &in,
@@ -405,9 +396,13 @@ static void handle_tx(struct vhost_net *net)
             break;
         }
 
+        num_pends = likely(nvq->upend_idx >= nvq->done_idx) ?
+                (nvq->upend_idx - nvq->done_idx) :
+                (nvq->upend_idx + UIO_MAXIOV -
+                 nvq->done_idx);
+
         zcopy_used = zcopy && len >= VHOST_GOODCOPY_LEN
-                 && (nvq->upend_idx + 1) % UIO_MAXIOV !=
-                nvq->done_idx
+                 && num_pends <= vq->num >> 1
                  && vhost_net_tx_select_zcopy(net);
 
         /* use msg_control to pass vhost zerocopy ubuf info to skb */
--
1.8.3.2
* Re: [PATCH net V2] vhost: net: switch to use data copy if pending DMAs exceed the limit
2014-03-07 5:28 [PATCH net V2] vhost: net: switch to use data copy if pending DMAs exceed the limit Jason Wang
@ 2014-03-07 21:39 ` David Miller
2014-03-10 5:15 ` Jason Wang
2014-03-10 2:52 ` Qin Chuanyu
2014-03-10 8:03 ` Michael S. Tsirkin
2 siblings, 1 reply; 9+ messages in thread
From: David Miller @ 2014-03-07 21:39 UTC (permalink / raw)
To: jasowang
Cc: virtio-dev, kvm, mst, netdev, linux-kernel, virtualization,
qinchuanyu
From: Jason Wang <jasowang@redhat.com>
Date: Fri, 7 Mar 2014 13:28:27 +0800
> This is because the delay added by htb may delay the completion of
> DMAs and cause the pending DMAs for tap0 to exceed the limit
> (VHOST_MAX_PEND). In that case vhost stops handling tx requests until
> htb sends some packets. The problem is that all packet transmission is
> blocked, even for packets that do not go to VM2.
Isn't this essentially head of line blocking?
> [...]
I'd like some vhost experts reviewing this before I apply it.
* Re: [PATCH net V2] vhost: net: switch to use data copy if pending DMAs exceed the limit
2014-03-07 5:28 [PATCH net V2] vhost: net: switch to use data copy if pending DMAs exceed the limit Jason Wang
2014-03-07 21:39 ` David Miller
@ 2014-03-10 2:52 ` Qin Chuanyu
2014-03-10 8:03 ` Michael S. Tsirkin
2 siblings, 0 replies; 9+ messages in thread
From: Qin Chuanyu @ 2014-03-10 2:52 UTC (permalink / raw)
To: Jason Wang, mst, kvm, virtio-dev, virtualization, netdev,
linux-kernel
On 2014/3/7 13:28, Jason Wang wrote:
> [...]
Reviewed-by: Qin chuanyu <qinchuanyu@huawei.com>
* Re: [PATCH net V2] vhost: net: switch to use data copy if pending DMAs exceed the limit
2014-03-07 21:39 ` David Miller
@ 2014-03-10 5:15 ` Jason Wang
0 siblings, 0 replies; 9+ messages in thread
From: Jason Wang @ 2014-03-10 5:15 UTC (permalink / raw)
To: David Miller
Cc: virtio-dev, kvm, mst, netdev, linux-kernel, virtualization,
qinchuanyu
On 03/08/2014 05:39 AM, David Miller wrote:
> From: Jason Wang <jasowang@redhat.com>
> Date: Fri, 7 Mar 2014 13:28:27 +0800
>
>> This is because the delay added by htb may delay the completion of
>> DMAs and cause the pending DMAs for tap0 to exceed the limit
>> (VHOST_MAX_PEND). In that case vhost stops handling tx requests until
>> htb sends some packets. The problem is that all packet transmission is
>> blocked, even for packets that do not go to VM2.
> Isn't this essentially head of line blocking?
Yes it is.
>> [...]
> I'd like some vhost experts reviewing this before I apply it.
Sure.
* Re: [PATCH net V2] vhost: net: switch to use data copy if pending DMAs exceed the limit
2014-03-07 5:28 [PATCH net V2] vhost: net: switch to use data copy if pending DMAs exceed the limit Jason Wang
2014-03-07 21:39 ` David Miller
2014-03-10 2:52 ` Qin Chuanyu
@ 2014-03-10 8:03 ` Michael S. Tsirkin
2014-03-13 7:28 ` Jason Wang
2 siblings, 1 reply; 9+ messages in thread
From: Michael S. Tsirkin @ 2014-03-10 8:03 UTC (permalink / raw)
To: Jason Wang
Cc: kvm, virtio-dev, virtualization, netdev, linux-kernel,
Qin Chuanyu
On Fri, Mar 07, 2014 at 01:28:27PM +0800, Jason Wang wrote:
> [...]
I thought hard about this.
Here's what worries me: if there are still head-of-line
blocking issues lurking in the stack, they will still
hurt guests such as Windows which rely on timely
completion of buffers, but this patch makes it
that much harder to reproduce the problems with
Linux guests which don't.
And this will make it even harder to figure out
whether zero copy is actually active when diagnosing
high CPU utilization cases.
So I think this is a good trick, but let's make
this path conditional on a new debugging module parameter:
how about head_of_line_blocking with default off?
This way if we suspect packets are delayed forever
somewhere, we can enable that and see guest networking block.
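As a rough illustration only (the parameter name comes from the
suggestion above; the declaration and its wiring into handle_tx() are
hypothetical and not part of this patch), such a knob could be declared
like this:

```c
/* Hypothetical sketch of the suggested debugging knob for
 * drivers/vhost/net.c. Default off: vhost-net falls back to data copy
 * instead of stalling tx; turning it on would restore the old blocking
 * behaviour so that stuck completions become visible in the guest. */
static bool head_of_line_blocking;
module_param(head_of_line_blocking, bool, 0444);
MODULE_PARM_DESC(head_of_line_blocking,
		 "Stop handling tx when pending zerocopy DMAs exceed the limit (debug)");
```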
Additionally, I think we should add a way to count zero copy
and non zero copy packets.
I see two ways to implement this: add tracepoints in vhost-net
or add counters in tun accessible with ethtool.
This can be a patch on top and does not have to block
this one though.
> [...]
* Re: [PATCH net V2] vhost: net: switch to use data copy if pending DMAs exceed the limit
2014-03-10 8:03 ` Michael S. Tsirkin
@ 2014-03-13 7:28 ` Jason Wang
2014-03-17 6:43 ` Ronen Hod
0 siblings, 1 reply; 9+ messages in thread
From: Jason Wang @ 2014-03-13 7:28 UTC (permalink / raw)
To: Michael S. Tsirkin
Cc: virtio-dev, kvm, netdev, linux-kernel, virtualization,
Qin Chuanyu
On 03/10/2014 04:03 PM, Michael S. Tsirkin wrote:
> On Fri, Mar 07, 2014 at 01:28:27PM +0800, Jason Wang wrote:
>> > [...]
> I thought hard about this.
> Here's what worries me: if there are still head-of-line
> blocking issues lurking in the stack, they will still
> hurt guests such as Windows which rely on timely
> completion of buffers, but this patch makes it
> that much harder to reproduce the problems with
> Linux guests which don't.
> And this will make it even harder to figure out
> whether zero copy is actually active when diagnosing
> high CPU utilization cases.
Yes.
>
>
> So I think this is a good trick, but let's make
> this path conditional on a new debugging module parameter:
> how about head_of_line_blocking with default off?
Sure. But head-of-line blocking is only partially solved by this patch,
since we only support in-order completion of zerocopy packets. Maybe we
need to consider switching to out-of-order completion even for zerocopy
skbs?
> This way if we suspect packets are delayed forever
> somewhere, we can enable that and see guest networking block.
>
> Additionally, I think we should add a way to count zero copy
> and non zero copy packets.
> I see two ways to implement this: add tracepoints in vhost-net
> or add counters in tun accessible with ethtool.
> This can be a patch on top and does not have to block
> this one though.
>
Yes, I posted an RFC about 2 years ago, see
https://lkml.org/lkml/2012/4/9/478, which only traces generic vhost
behaviour. I can refresh it and add some -net specific tracepoints.
* Re: [PATCH net V2] vhost: net: switch to use data copy if pending DMAs exceed the limit
2014-03-13 7:28 ` Jason Wang
@ 2014-03-17 6:43 ` Ronen Hod
2014-03-17 17:20 ` Yan Vugenfirer
2014-03-17 17:21 ` Yan Vugenfirer
0 siblings, 2 replies; 9+ messages in thread
From: Ronen Hod @ 2014-03-17 6:43 UTC (permalink / raw)
To: Jason Wang, Michael S. Tsirkin, Yan Vugenfirer, Dmitry Fleytman
Cc: virtio-dev, kvm, netdev, linux-kernel, virtualization,
Qin Chuanyu
On 03/13/2014 09:28 AM, Jason Wang wrote:
> On 03/10/2014 04:03 PM, Michael S. Tsirkin wrote:
>> On Fri, Mar 07, 2014 at 01:28:27PM +0800, Jason Wang wrote:
>>>> [...]
>> [...]
>> So I think this is a good trick, but let's make
>> this path conditional on a new debugging module parameter:
>> how about head_of_line_blocking with default off?
> Sure. But head-of-line blocking is only partially solved by this
> patch, since we only support in-order completion of zerocopy packets.
> Maybe we need to consider switching to out-of-order completion even
> for zerocopy skbs?
Yan, Dima,
I remember that there is an issue with out-of-order packets and WHQL.
Ronen.
* Re: [PATCH net V2] vhost: net: switch to use data copy if pending DMAs exceed the limit
2014-03-17 6:43 ` Ronen Hod
@ 2014-03-17 17:20 ` Yan Vugenfirer
2014-03-17 17:21 ` Yan Vugenfirer
1 sibling, 0 replies; 9+ messages in thread
From: Yan Vugenfirer @ 2014-03-17 17:20 UTC (permalink / raw)
To: Ronen Hod
Cc: virtio-dev, kvm, Michael S. Tsirkin, netdev, linux-kernel,
virtualization, Qin Chuanyu, Dmitry Fleytman
On Mar 17, 2014, at 8:43 AM, Ronen Hod <rhod@redhat.com> wrote:
> On 03/13/2014 09:28 AM, Jason Wang wrote:
>> On 03/10/2014 04:03 PM, Michael S. Tsirkin wrote:
>>> On Fri, Mar 07, 2014 at 01:28:27PM +0800, Jason Wang wrote:
>>>>> We used to stop the handling of tx when the number of pending DMAs
>>>>> exceeds VHOST_MAX_PEND. This is used to reduce the memory occupation
>>>>> of both host and guest. But it was too aggressive in some cases, since
>>>>> any delay or blocking of a single packet may delay or block the guest
>>>>> transmission. Consider the following setup:
>>>>>
>>>>> +-----+ +-----+
>>>>> | VM1 | | VM2 |
>>>>> +--+--+ +--+--+
>>>>> | |
>>>>> +--+--+ +--+--+
>>>>> | tap0| | tap1|
>>>>> +--+--+ +--+--+
>>>>> | |
>>>>> pfifo_fast htb(10Mbit/s)
>>>>> | |
>>>>> +--+--------------+---+
>>>>> | bridge |
>>>>> +--+------------------+
>>>>> |
>>>>> pfifo_fast
>>>>> |
>>>>> +-----+
>>>>> | eth0|(100Mbit/s)
>>>>> +-----+
>>>>>
>>>>> - start two VMs and connect them to a bridge
>>>>> - add an physical card (100Mbit/s) to that bridge
>>>>> - setup htb on tap1 and limit its throughput to 10Mbit/s
>>>>> - run two netperfs in the same time, one is from VM1 to VM2. Another is
>>>>> from VM1 to an external host through eth0.
>>>>> - result shows that not only the VM1 to VM2 traffic were throttled but
>>>>> also the VM1 to external host through eth0 is also throttled somehow.
>>>>>
>>>>> This is because the delay added by htb may delay the completion
>>>>> of DMAs and cause the pending DMAs for tap0 to exceed the limit
>>>>> (VHOST_MAX_PEND). In this case vhost stops handling tx requests
>>>>> until htb sends some packets. The problem is that all packet
>>>>> transmission is blocked, even for packets that do not go to VM2.
>>>>>
>>>>> We can solve this issue by relaxing the limit a little bit:
>>>>> switching to data copy instead of stopping tx when the number of
>>>>> pending DMAs exceeds half of the vq size. This is safe because:
>>>>>
>>>>> - The number of pending DMAs is still limited (half of the vq size)
>>>>> - Out of order completion during the mode switch ensures that most
>>>>> of the tx buffers are freed in time in the guest.
>>>>>
>>>>> So even if about 50% of packets are delayed in the zero-copy case,
>>>>> vhost can continue transmitting through data copy.
>>>>>
>>>>> Test result:
>>>>>
>>>>> Before this patch:
>>>>> VM1 to VM2 throughput is 9.3Mbit/s
>>>>> VM1 to External throughput is 40Mbit/s
>>>>> CPU utilization is 7%
>>>>>
>>>>> After this patch:
>>>>> VM1 to VM2 throughput is 9.3Mbit/s
>>>>> VM1 to External throughput is 93Mbit/s
>>>>> CPU utilization is 16%
>>>>>
>>>>> A complete performance test on 40GbE shows no obvious change in
>>>>> either throughput or cpu utilization with this patch.
>>>>>
>>>>> The patch only solves this issue for unlimited sndbuf. We still
>>>>> need a solution for limited sndbuf.
>>>>>
>>>>> Cc: Michael S. Tsirkin <mst@redhat.com>
>>>>> Cc: Qin Chuanyu <qinchuanyu@huawei.com>
>>>>> Signed-off-by: Jason Wang <jasowang@redhat.com>
>>> I thought hard about this.
>>> Here's what worries me: if there are still head of line
>>> blocking issues lurking in the stack, they will still
>>> hurt guests such as Windows which rely on timely
>>> completion of buffers, but it makes it
>>> that much harder to reproduce the problems with
>>> Linux guests, which don't.
>>> And this will also make it harder to figure out
>>> whether zero copy is actually active when diagnosing
>>> high cpu utilization cases.
>> Yes.
>>>
>>> So I think this is a good trick, but let's make
>>> this path conditional on a new debugging module parameter:
>>> how about head_of_line_blocking with default off?
>> Sure. But head of line blocking is only partially solved by this
>> patch, since we only support in-order completion of zerocopy packets.
>> Maybe we need to consider switching to out of order completion even
>> for zerocopy skbs?
>
> Yan, Dima,
>
> I remember that there is an issue with out-of-order packets and WHQL.
The test considers out-of-order packet reception a failure.
Yan.
>
> Ronen.
>
>>> This way if we suspect packets are delayed forever
>>> somewhere, we can enable that and see guest networking block.
>>>
>>> Additionally, I think we should add a way to count zero copy
>>> and non zero copy packets.
>>> I see two ways to implement this: add tracepoints in vhost-net
>>> or add counters in tun accessible with ethtool.
>>> This can be a patch on top and does not have to block
>>> this one though.
>>>
>> Yes, I posted an RFC about 2 years ago, see
>> https://lkml.org/lkml/2012/4/9/478, which only traces generic vhost
>> behaviours. I can refresh it and add some -net specific tracepoints.
* Re: [PATCH net V2] vhost: net: switch to use data copy if pending DMAs exceed the limit
2014-03-17 6:43 ` Ronen Hod
2014-03-17 17:20 ` Yan Vugenfirer
@ 2014-03-17 17:21 ` Yan Vugenfirer
1 sibling, 0 replies; 9+ messages in thread
From: Yan Vugenfirer @ 2014-03-17 17:21 UTC (permalink / raw)
To: Ronen Hod
Cc: Jason Wang, Michael S. Tsirkin, Dmitry Fleytman, kvm, virtio-dev,
virtualization, netdev, linux-kernel, Qin Chuanyu
On Mar 17, 2014, at 8:43 AM, Ronen Hod <rhod@redhat.com> wrote:
> Yan, Dima,
>
> I remember that there is an issue with out-of-order packets and WHQL.
The test considers out-of-order packet reception a failure.
Yan.
Thread overview: 9+ messages
2014-03-07 5:28 [PATCH net V2] vhost: net: switch to use data copy if pending DMAs exceed the limit Jason Wang
2014-03-07 21:39 ` David Miller
2014-03-10 5:15 ` Jason Wang
2014-03-10 2:52 ` Qin Chuanyu
2014-03-10 8:03 ` Michael S. Tsirkin
2014-03-13 7:28 ` Jason Wang
2014-03-17 6:43 ` Ronen Hod
2014-03-17 17:20 ` Yan Vugenfirer
2014-03-17 17:21 ` Yan Vugenfirer