* Re: [PATCH net v2 2/2] vhost_net: fix high cpu load when sendmsg fails [not found] ` <6b4c5fff8705dc4b5b6a25a45c50f36349350c73.1608065644.git.wangyunjian@huawei.com> @ 2020-12-16 9:23 ` Michael S. Tsirkin 2020-12-17 3:19 ` Jason Wang 2020-12-21 23:07 ` Willem de Bruijn 1 sibling, 1 reply; 9+ messages in thread From: Michael S. Tsirkin @ 2020-12-16 9:23 UTC (permalink / raw) To: wangyunjian Cc: willemdebruijn.kernel, netdev, jerry.lilijun, virtualization, xudingke, brian.huangbin, chenchanghu On Wed, Dec 16, 2020 at 04:20:37PM +0800, wangyunjian wrote: > From: Yunjian Wang <wangyunjian@huawei.com> > > Currently we break the loop and wake up the vhost_worker when > sendmsg fails. When the worker wakes up again, we'll meet the > same error. This will cause high CPU load. To fix this issue, > we can skip this description by ignoring the error. When we > exceeds sndbuf, the return value of sendmsg is -EAGAIN. In > the case we don't skip the description and don't drop packet. Question: with this patch, what happens if sendmsg is interrupted by a signal? > > Signed-off-by: Yunjian Wang <wangyunjian@huawei.com> > --- > drivers/vhost/net.c | 21 +++++++++------------ > 1 file changed, 9 insertions(+), 12 deletions(-) > > diff --git a/drivers/vhost/net.c b/drivers/vhost/net.c > index c8784dfafdd7..3d33f3183abe 100644 > --- a/drivers/vhost/net.c > +++ b/drivers/vhost/net.c > @@ -827,16 +827,13 @@ static void handle_tx_copy(struct vhost_net *net, struct socket *sock) > msg.msg_flags &= ~MSG_MORE; > } > > - /* TODO: Check specific error and bomb out unless ENOBUFS? 
*/ > err = sock->ops->sendmsg(sock, &msg, len); > - if (unlikely(err < 0)) { > + if (unlikely(err == -EAGAIN)) { > vhost_discard_vq_desc(vq, 1); > vhost_net_enable_vq(net, vq); > break; > - } > - if (err != len) > - pr_debug("Truncated TX packet: len %d != %zd\n", > - err, len); > + } else if (unlikely(err != len)) > + vq_err(vq, "Fail to sending packets err : %d, len : %zd\n", err, len); > done: > vq->heads[nvq->done_idx].id = cpu_to_vhost32(vq, head); > vq->heads[nvq->done_idx].len = 0; > @@ -922,7 +919,6 @@ static void handle_tx_zerocopy(struct vhost_net *net, struct socket *sock) > msg.msg_flags &= ~MSG_MORE; > } > > - /* TODO: Check specific error and bomb out unless ENOBUFS? */ > err = sock->ops->sendmsg(sock, &msg, len); > if (unlikely(err < 0)) { > if (zcopy_used) { > @@ -931,13 +927,14 @@ static void handle_tx_zerocopy(struct vhost_net *net, struct socket *sock) > nvq->upend_idx = ((unsigned)nvq->upend_idx - 1) > % UIO_MAXIOV; > } > - vhost_discard_vq_desc(vq, 1); > - vhost_net_enable_vq(net, vq); > - break; > + if (err == -EAGAIN) { > + vhost_discard_vq_desc(vq, 1); > + vhost_net_enable_vq(net, vq); > + break; > + } > } > if (err != len) > - pr_debug("Truncated TX packet: " > - " len %d != %zd\n", err, len); > + vq_err(vq, "Fail to sending packets err : %d, len : %zd\n", err, len); I'd rather make the pr_debug -> vq_err a separate change, with proper commit log describing motivation. > if (!zcopy_used) > vhost_add_used_and_signal(&net->dev, vq, head, 0); > else > -- > 2.23.0 _______________________________________________ Virtualization mailing list Virtualization@lists.linux-foundation.org https://lists.linuxfoundation.org/mailman/listinfo/virtualization ^ permalink raw reply [flat|nested] 9+ messages in thread
* Re: [PATCH net v2 2/2] vhost_net: fix high cpu load when sendmsg fails 2020-12-16 9:23 ` [PATCH net v2 2/2] vhost_net: fix high cpu load when sendmsg fails Michael S. Tsirkin @ 2020-12-17 3:19 ` Jason Wang 0 siblings, 0 replies; 9+ messages in thread From: Jason Wang @ 2020-12-17 3:19 UTC (permalink / raw) To: Michael S. Tsirkin, wangyunjian Cc: willemdebruijn.kernel, netdev, jerry.lilijun, virtualization, chenchanghu, brian.huangbin, xudingke On 2020/12/16 5:23 PM, Michael S. Tsirkin wrote: > On Wed, Dec 16, 2020 at 04:20:37PM +0800, wangyunjian wrote: >> From: Yunjian Wang<wangyunjian@huawei.com> >> >> Currently we break the loop and wake up the vhost_worker when >> sendmsg fails. When the worker wakes up again, we'll meet the >> same error. This will cause high CPU load. To fix this issue, >> we can skip this description by ignoring the error. When we >> exceeds sndbuf, the return value of sendmsg is -EAGAIN. In >> the case we don't skip the description and don't drop packet. > Question: with this patch, what happens if sendmsg is interrupted by a signal? Since we use MSG_DONTWAIT, we don't need to care about signals I think. Thanks
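Jason's point about MSG_DONTWAIT can be illustrated outside the kernel. The sketch below is a hypothetical userspace demo, not the vhost code: with a non-blocking send, a full send path makes send(2) fail immediately with EAGAIN rather than sleep, so there is no blocking wait for a signal to interrupt. The socketpair setup and the helper name are invented for illustration.

```c
/* Hypothetical userspace demo, not vhost code: MSG_DONTWAIT turns a
 * would-block condition into an immediate EAGAIN instead of a sleep,
 * so the caller is never parked where a signal could interrupt it. */
#include <errno.h>
#include <sys/socket.h>

/* Send datagrams that nobody reads until the first failure;
 * returns the errno of that failure (expected: EAGAIN). */
static int fill_until_error(void)
{
	int sv[2];
	char buf[4096] = { 0 };

	if (socketpair(AF_UNIX, SOCK_DGRAM, 0, sv) < 0)
		return -1;
	while (send(sv[0], buf, sizeof(buf), MSG_DONTWAIT) >= 0)
		;	/* sv[1] is never read, so its queue fills up */
	return errno;
}
```

This mirrors the commit message's claim that exceeding sndbuf yields -EAGAIN from sendmsg.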
* Re: [PATCH net v2 2/2] vhost_net: fix high cpu load when sendmsg fails [not found] ` <6b4c5fff8705dc4b5b6a25a45c50f36349350c73.1608065644.git.wangyunjian@huawei.com> 2020-12-16 9:23 ` [PATCH net v2 2/2] vhost_net: fix high cpu load when sendmsg fails Michael S. Tsirkin @ 2020-12-21 23:07 ` Willem de Bruijn 2020-12-22 4:41 ` Jason Wang 1 sibling, 1 reply; 9+ messages in thread From: Willem de Bruijn @ 2020-12-21 23:07 UTC (permalink / raw) To: wangyunjian Cc: Michael S. Tsirkin, Network Development, Lilijun (Jerry), virtualization, xudingke, huangbin (J), chenchanghu On Wed, Dec 16, 2020 at 3:20 AM wangyunjian <wangyunjian@huawei.com> wrote: > > From: Yunjian Wang <wangyunjian@huawei.com> > > Currently we break the loop and wake up the vhost_worker when > sendmsg fails. When the worker wakes up again, we'll meet the > same error. The patch is based on the assumption that such error cases always return EAGAIN. Can it not also be ENOMEM, such as from tun_build_skb? > This will cause high CPU load. To fix this issue, > we can skip this description by ignoring the error. When we > exceeds sndbuf, the return value of sendmsg is -EAGAIN. In > the case we don't skip the description and don't drop packet. the -> that here and above: description -> descriptor Perhaps slightly revise to more explicitly state that 1. in the case of persistent failure (i.e., bad packet), the driver drops the packet 2. in the case of transient failure (e.g,. 
memory pressure) the driver schedules the worker to try again later > Signed-off-by: Yunjian Wang <wangyunjian@huawei.com> > --- > drivers/vhost/net.c | 21 +++++++++------------ > 1 file changed, 9 insertions(+), 12 deletions(-) > > diff --git a/drivers/vhost/net.c b/drivers/vhost/net.c > index c8784dfafdd7..3d33f3183abe 100644 > --- a/drivers/vhost/net.c > +++ b/drivers/vhost/net.c > @@ -827,16 +827,13 @@ static void handle_tx_copy(struct vhost_net *net, struct socket *sock) > msg.msg_flags &= ~MSG_MORE; > } > > - /* TODO: Check specific error and bomb out unless ENOBUFS? */ > err = sock->ops->sendmsg(sock, &msg, len); > - if (unlikely(err < 0)) { > + if (unlikely(err == -EAGAIN)) { > vhost_discard_vq_desc(vq, 1); > vhost_net_enable_vq(net, vq); > break; > - } > - if (err != len) > - pr_debug("Truncated TX packet: len %d != %zd\n", > - err, len); > + } else if (unlikely(err != len)) > + vq_err(vq, "Fail to sending packets err : %d, len : %zd\n", err, len); sending -> send Even though vq_err is a wrapper around pr_debug, I agree with Michael that such a change should be a separate patch to net-next, does not belong in a fix. More importantly, the error message is now the same for persistent errors and for truncated packets. But on truncation the packet was sent, so that is not entirely correct. > done: > vq->heads[nvq->done_idx].id = cpu_to_vhost32(vq, head); > vq->heads[nvq->done_idx].len = 0; > @@ -922,7 +919,6 @@ static void handle_tx_zerocopy(struct vhost_net *net, struct socket *sock) > msg.msg_flags &= ~MSG_MORE; > } > > - /* TODO: Check specific error and bomb out unless ENOBUFS? 
*/ > err = sock->ops->sendmsg(sock, &msg, len); > if (unlikely(err < 0)) { > if (zcopy_used) { > @@ -931,13 +927,14 @@ static void handle_tx_zerocopy(struct vhost_net *net, struct socket *sock) > nvq->upend_idx = ((unsigned)nvq->upend_idx - 1) > % UIO_MAXIOV; > } > - vhost_discard_vq_desc(vq, 1); > - vhost_net_enable_vq(net, vq); > - break; > + if (err == -EAGAIN) { > + vhost_discard_vq_desc(vq, 1); > + vhost_net_enable_vq(net, vq); > + break; > + } > } > if (err != len) > - pr_debug("Truncated TX packet: " > - " len %d != %zd\n", err, len); > + vq_err(vq, "Fail to sending packets err : %d, len : %zd\n", err, len); > if (!zcopy_used) > vhost_add_used_and_signal(&net->dev, vq, head, 0); > else > -- > 2.23.0 > _______________________________________________ Virtualization mailing list Virtualization@lists.linux-foundation.org https://lists.linuxfoundation.org/mailman/listinfo/virtualization ^ permalink raw reply [flat|nested] 9+ messages in thread
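Willem's two-case taxonomy can be sketched as a small decision helper. This is a hypothetical userspace rendering of the policy, not the actual patch: the enum, the function name, and the inclusion of ENOMEM in the transient set are assumptions drawn from the discussion above.

```c
/* Hedged sketch of the suggested policy: transient errors requeue
 * the descriptor for a later retry; any other failure drops the
 * packet so a bad descriptor cannot wedge the tx queue. Names are
 * invented; whether ENOMEM counts as transient is Willem's open
 * question, not something the v2 patch decides. */
#include <errno.h>

enum tx_action {
	TX_RETRY_LATER,	/* requeue descriptor, re-enable vq, break */
	TX_DROP,	/* consume descriptor, log, move on */
	TX_SENT,	/* success */
};

static enum tx_action classify_send_result(long err)
{
	if (err == -EAGAIN || err == -ENOMEM)
		return TX_RETRY_LATER;
	if (err < 0)
		return TX_DROP;
	return TX_SENT;
}
```

On this reading, the v2 patch implements only the EAGAIN arm of the transient case; Willem's question is whether ENOMEM (e.g. from tun_build_skb) belongs there too.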
* Re: [PATCH net v2 2/2] vhost_net: fix high cpu load when sendmsg fails 2020-12-21 23:07 ` Willem de Bruijn @ 2020-12-22 4:41 ` Jason Wang 2020-12-22 14:24 ` Willem de Bruijn 0 siblings, 1 reply; 9+ messages in thread From: Jason Wang @ 2020-12-22 4:41 UTC (permalink / raw) To: Willem de Bruijn, wangyunjian Cc: Michael S. Tsirkin, Network Development, Lilijun (Jerry), virtualization, chenchanghu, huangbin (J), xudingke On 2020/12/22 上午7:07, Willem de Bruijn wrote: > On Wed, Dec 16, 2020 at 3:20 AM wangyunjian<wangyunjian@huawei.com> wrote: >> From: Yunjian Wang<wangyunjian@huawei.com> >> >> Currently we break the loop and wake up the vhost_worker when >> sendmsg fails. When the worker wakes up again, we'll meet the >> same error. > The patch is based on the assumption that such error cases always > return EAGAIN. Can it not also be ENOMEM, such as from tun_build_skb? > >> This will cause high CPU load. To fix this issue, >> we can skip this description by ignoring the error. When we >> exceeds sndbuf, the return value of sendmsg is -EAGAIN. In >> the case we don't skip the description and don't drop packet. > the -> that > > here and above: description -> descriptor > > Perhaps slightly revise to more explicitly state that > > 1. in the case of persistent failure (i.e., bad packet), the driver > drops the packet > 2. in the case of transient failure (e.g,. memory pressure) the driver > schedules the worker to try again later If we want to go with this way, we need a better time to wakeup the worker. Otherwise it just produces more stress on the cpu that is what this patch tries to avoid. Thanks > > _______________________________________________ Virtualization mailing list Virtualization@lists.linux-foundation.org https://lists.linuxfoundation.org/mailman/listinfo/virtualization ^ permalink raw reply [flat|nested] 9+ messages in thread
* Re: [PATCH net v2 2/2] vhost_net: fix high cpu load when sendmsg fails 2020-12-22 4:41 ` Jason Wang @ 2020-12-22 14:24 ` Willem de Bruijn 2020-12-23 2:53 ` Jason Wang 0 siblings, 1 reply; 9+ messages in thread From: Willem de Bruijn @ 2020-12-22 14:24 UTC (permalink / raw) To: Jason Wang Cc: Michael S. Tsirkin, Network Development, wangyunjian, Lilijun (Jerry), virtualization, xudingke, huangbin (J), chenchanghu On Mon, Dec 21, 2020 at 11:41 PM Jason Wang <jasowang@redhat.com> wrote: > > > On 2020/12/22 上午7:07, Willem de Bruijn wrote: > > On Wed, Dec 16, 2020 at 3:20 AM wangyunjian<wangyunjian@huawei.com> wrote: > >> From: Yunjian Wang<wangyunjian@huawei.com> > >> > >> Currently we break the loop and wake up the vhost_worker when > >> sendmsg fails. When the worker wakes up again, we'll meet the > >> same error. > > The patch is based on the assumption that such error cases always > > return EAGAIN. Can it not also be ENOMEM, such as from tun_build_skb? > > > >> This will cause high CPU load. To fix this issue, > >> we can skip this description by ignoring the error. When we > >> exceeds sndbuf, the return value of sendmsg is -EAGAIN. In > >> the case we don't skip the description and don't drop packet. > > the -> that > > > > here and above: description -> descriptor > > > > Perhaps slightly revise to more explicitly state that > > > > 1. in the case of persistent failure (i.e., bad packet), the driver > > drops the packet > > 2. in the case of transient failure (e.g,. memory pressure) the driver > > schedules the worker to try again later > > > If we want to go with this way, we need a better time to wakeup the > worker. Otherwise it just produces more stress on the cpu that is what > this patch tries to avoid. Perhaps I misunderstood the purpose of the patch: is it to drop everything, regardless of transient or persistent failure, until the ring runs out of descriptors? I can understand both a blocking and drop strategy during memory pressure. 
But partial drop strategy until exceeding ring capacity seems like a peculiar hybrid?
* Re: [PATCH net v2 2/2] vhost_net: fix high cpu load when sendmsg fails 2020-12-22 14:24 ` Willem de Bruijn @ 2020-12-23 2:53 ` Jason Wang [not found] ` <34EFBCA9F01B0748BEB6B629CE643AE60DB8E046@DGGEMM533-MBX.china.huawei.com> 0 siblings, 1 reply; 9+ messages in thread From: Jason Wang @ 2020-12-23 2:53 UTC (permalink / raw) To: Willem de Bruijn Cc: Michael S. Tsirkin, Network Development, wangyunjian, Lilijun (Jerry), virtualization, xudingke, huangbin (J), chenchanghu On 2020/12/22 下午10:24, Willem de Bruijn wrote: > On Mon, Dec 21, 2020 at 11:41 PM Jason Wang <jasowang@redhat.com> wrote: >> >> On 2020/12/22 上午7:07, Willem de Bruijn wrote: >>> On Wed, Dec 16, 2020 at 3:20 AM wangyunjian<wangyunjian@huawei.com> wrote: >>>> From: Yunjian Wang<wangyunjian@huawei.com> >>>> >>>> Currently we break the loop and wake up the vhost_worker when >>>> sendmsg fails. When the worker wakes up again, we'll meet the >>>> same error. >>> The patch is based on the assumption that such error cases always >>> return EAGAIN. Can it not also be ENOMEM, such as from tun_build_skb? >>> >>>> This will cause high CPU load. To fix this issue, >>>> we can skip this description by ignoring the error. When we >>>> exceeds sndbuf, the return value of sendmsg is -EAGAIN. In >>>> the case we don't skip the description and don't drop packet. >>> the -> that >>> >>> here and above: description -> descriptor >>> >>> Perhaps slightly revise to more explicitly state that >>> >>> 1. in the case of persistent failure (i.e., bad packet), the driver >>> drops the packet >>> 2. in the case of transient failure (e.g,. memory pressure) the driver >>> schedules the worker to try again later >> >> If we want to go with this way, we need a better time to wakeup the >> worker. Otherwise it just produces more stress on the cpu that is what >> this patch tries to avoid. 
> Perhaps I misunderstood the purpose of the patch: is it to drop > everything, regardless of transient or persistent failure, until the > ring runs out of descriptors? My understanding is that the main motivation is to avoid high cpu utilization when sendmsg() fails due to a guest reason (e.g bad packet). > > I can understand both a blocking and drop strategy during memory > pressure. But partial drop strategy until exceeding ring capacity > seems like a peculiar hybrid? Yes. So I wonder if we want to do better when we are under memory pressure. E.g can we let the socket wake us up instead of rescheduling the workers here? At least in this case we know some memory might be freed? Thanks
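Jason's suggestion, letting the socket wake the worker instead of blindly rescheduling, maps onto ordinary socket writability notification. Below is a hypothetical userspace analogue using poll(2); the real vhost worker would hook into the socket's wakeup machinery rather than call poll, and the helper name is invented.

```c
/* Userspace analogue of "let the socket wake us up": after filling
 * the send path to the point of EAGAIN, wait for POLLOUT, which
 * fires once the peer has drained data (i.e. memory was freed),
 * instead of retrying in a busy loop. Illustrative only. */
#include <poll.h>
#include <sys/socket.h>

/* Fill sv[0]'s send path, drain it at sv[1], then wait for POLLOUT.
 * Returns 1 if the socket became writable again. */
static int wait_for_writable_demo(void)
{
	int sv[2];
	char buf[4096] = { 0 };
	struct pollfd pfd;

	if (socketpair(AF_UNIX, SOCK_STREAM, 0, sv) < 0)
		return 0;
	while (send(sv[0], buf, sizeof(buf), MSG_DONTWAIT) >= 0)
		;				/* fill until -EAGAIN */
	while (recv(sv[1], buf, sizeof(buf), MSG_DONTWAIT) > 0)
		;				/* peer drains everything */
	pfd.fd = sv[0];
	pfd.events = POLLOUT;
	return poll(&pfd, 1, 1000) == 1 && (pfd.revents & POLLOUT);
}
```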
* Re: [PATCH net v2 2/2] vhost_net: fix high cpu load when sendmsg fails [not found] ` <34EFBCA9F01B0748BEB6B629CE643AE60DB8E046@DGGEMM533-MBX.china.huawei.com> @ 2020-12-23 13:48 ` Willem de Bruijn 0 siblings, 0 replies; 9+ messages in thread From: Willem de Bruijn @ 2020-12-23 13:48 UTC (permalink / raw) To: wangyunjian Cc: Michael S. Tsirkin, Network Development, Lilijun (Jerry), virtualization@lists.linux-foundation.org, xudingke, huangbin (J), chenchanghu On Wed, Dec 23, 2020 at 8:21 AM wangyunjian <wangyunjian@huawei.com> wrote: > > > -----Original Message----- > > From: Jason Wang [mailto:jasowang@redhat.com] > > Sent: Wednesday, December 23, 2020 10:54 AM > > To: Willem de Bruijn <willemdebruijn.kernel@gmail.com> > > Cc: wangyunjian <wangyunjian@huawei.com>; Network Development > > <netdev@vger.kernel.org>; Michael S. Tsirkin <mst@redhat.com>; > > virtualization@lists.linux-foundation.org; Lilijun (Jerry) > > <jerry.lilijun@huawei.com>; chenchanghu <chenchanghu@huawei.com>; > > xudingke <xudingke@huawei.com>; huangbin (J) > > <brian.huangbin@huawei.com> > > Subject: Re: [PATCH net v2 2/2] vhost_net: fix high cpu load when sendmsg fails > > > > > > On 2020/12/22 下午10:24, Willem de Bruijn wrote: > > > On Mon, Dec 21, 2020 at 11:41 PM Jason Wang <jasowang@redhat.com> > > wrote: > > >> > > >> On 2020/12/22 上午7:07, Willem de Bruijn wrote: > > >>> On Wed, Dec 16, 2020 at 3:20 AM wangyunjian<wangyunjian@huawei.com> > > wrote: > > >>>> From: Yunjian Wang<wangyunjian@huawei.com> > > >>>> > > >>>> Currently we break the loop and wake up the vhost_worker when > > >>>> sendmsg fails. When the worker wakes up again, we'll meet the same > > >>>> error. > > >>> The patch is based on the assumption that such error cases always > > >>> return EAGAIN. Can it not also be ENOMEM, such as from tun_build_skb? > > >>> > > >>>> This will cause high CPU load. To fix this issue, we can skip this > > >>>> description by ignoring the error. 
When we exceeds sndbuf, the > > >>>> return value of sendmsg is -EAGAIN. In the case we don't skip the > > >>>> description and don't drop packet. > > >>> the -> that > > >>> > > >>> here and above: description -> descriptor > > >>> > > >>> Perhaps slightly revise to more explicitly state that > > >>> > > >>> 1. in the case of persistent failure (i.e., bad packet), the driver > > >>> drops the packet 2. in the case of transient failure (e.g,. memory > > >>> pressure) the driver schedules the worker to try again later > > >> > > >> If we want to go with this way, we need a better time to wakeup the > > >> worker. Otherwise it just produces more stress on the cpu that is > > >> what this patch tries to avoid. > > > Perhaps I misunderstood the purpose of the patch: is it to drop > > > everything, regardless of transient or persistent failure, until the > > > ring runs out of descriptors? > > > > > > My understanding is that the main motivation is to avoid high cpu utilization > > when sendmsg() fail due to guest reason (e.g bad packet). > > > > My main motivation is to avoid the tx queue stuck. > > Should I describe it like this: > Currently the driver don't drop a packet which can't be send by tun > (e.g bad packet). In this case, the driver will always process the > same packet lead to the tx queue stuck. > > To fix this issue: > 1. in the case of persistent failure (e.g bad packet), the driver can skip > this descriptior by ignoring the error. > 2. in the case of transient failure (e.g -EAGAIN and -ENOMEM), the driver > schedules the worker to try again. That sounds good to me, thanks. > Thanks > > > > > > > > > I can understand both a blocking and drop strategy during memory > > > pressure. But partial drop strategy until exceeding ring capacity > > > seems like a peculiar hybrid? > > > > > > Yes. So I wonder if we want to be do better when we are in the memory > > pressure. E.g can we let socket wake up us instead of rescheduling the > > workers here? 
At least in this case we know some memory might be freed? I don't know whether a blocking or drop strategy is the better choice. Either way, it probably deserves to be handled separately.
* Re: [PATCH net v2 1/2] vhost_net: fix ubuf refcount incorrectly when sendmsg fails [not found] ` <62db7d3d2af50f379ec28452921b3261af33db0b.1608065644.git.wangyunjian@huawei.com> @ 2020-12-16 14:17 ` Willem de Bruijn 2020-12-16 20:56 ` Michael S. Tsirkin 1 sibling, 0 replies; 9+ messages in thread From: Willem de Bruijn @ 2020-12-16 14:17 UTC (permalink / raw) To: wangyunjian Cc: Willem de Bruijn, Michael S. Tsirkin, Network Development, Lilijun (Jerry), virtualization, xudingke, huangbin (J), chenchanghu On Wed, Dec 16, 2020 at 3:26 AM wangyunjian <wangyunjian@huawei.com> wrote: > > From: Yunjian Wang <wangyunjian@huawei.com> > > Currently the vhost_zerocopy_callback() maybe be called to decrease > the refcount when sendmsg fails in tun. The error handling in vhost > handle_tx_zerocopy() will try to decrease the same refcount again. > This is wrong. To fix this issue, we only call vhost_net_ubuf_put() > when vq->heads[nvq->desc].len == VHOST_DMA_IN_PROGRESS. > > Fixes: 0690899b4d45 ("tun: experimental zero copy tx support") > > Signed-off-by: Yunjian Wang <wangyunjian@huawei.com> Acked-by: Willem de Bruijn <willemb@google.com> for next time: it's not customary to have an empty line between Fixes and Signed-off-by
* Re: [PATCH net v2 1/2] vhost_net: fix ubuf refcount incorrectly when sendmsg fails [not found] ` <62db7d3d2af50f379ec28452921b3261af33db0b.1608065644.git.wangyunjian@huawei.com> 2020-12-16 14:17 ` [PATCH net v2 1/2] vhost_net: fix ubuf refcount incorrectly " Willem de Bruijn @ 2020-12-16 20:56 ` Michael S. Tsirkin 1 sibling, 0 replies; 9+ messages in thread From: Michael S. Tsirkin @ 2020-12-16 20:56 UTC (permalink / raw) To: wangyunjian Cc: willemdebruijn.kernel, netdev, jerry.lilijun, virtualization, xudingke, brian.huangbin, chenchanghu On Wed, Dec 16, 2020 at 04:20:20PM +0800, wangyunjian wrote: > From: Yunjian Wang <wangyunjian@huawei.com> > > Currently the vhost_zerocopy_callback() maybe be called to decrease > the refcount when sendmsg fails in tun. The error handling in vhost > handle_tx_zerocopy() will try to decrease the same refcount again. > This is wrong. To fix this issue, we only call vhost_net_ubuf_put() > when vq->heads[nvq->desc].len == VHOST_DMA_IN_PROGRESS. > > Fixes: 0690899b4d45 ("tun: experimental zero copy tx support") > > Signed-off-by: Yunjian Wang <wangyunjian@huawei.com> Acked-by: Michael S. 
Tsirkin <mst@redhat.com> > --- > drivers/vhost/net.c | 6 +++--- > 1 file changed, 3 insertions(+), 3 deletions(-) > > diff --git a/drivers/vhost/net.c b/drivers/vhost/net.c > index 531a00d703cd..c8784dfafdd7 100644 > --- a/drivers/vhost/net.c > +++ b/drivers/vhost/net.c > @@ -863,6 +863,7 @@ static void handle_tx_zerocopy(struct vhost_net *net, struct socket *sock) > size_t len, total_len = 0; > int err; > struct vhost_net_ubuf_ref *ubufs; > + struct ubuf_info *ubuf; > bool zcopy_used; > int sent_pkts = 0; > > @@ -895,9 +896,7 @@ static void handle_tx_zerocopy(struct vhost_net *net, struct socket *sock) > > /* use msg_control to pass vhost zerocopy ubuf info to skb */ > if (zcopy_used) { > - struct ubuf_info *ubuf; > ubuf = nvq->ubuf_info + nvq->upend_idx; > - > vq->heads[nvq->upend_idx].id = cpu_to_vhost32(vq, head); > vq->heads[nvq->upend_idx].len = VHOST_DMA_IN_PROGRESS; > ubuf->callback = vhost_zerocopy_callback; > @@ -927,7 +926,8 @@ static void handle_tx_zerocopy(struct vhost_net *net, struct socket *sock) > err = sock->ops->sendmsg(sock, &msg, len); > if (unlikely(err < 0)) { > if (zcopy_used) { > - vhost_net_ubuf_put(ubufs); > + if (vq->heads[ubuf->desc].len == VHOST_DMA_IN_PROGRESS) > + vhost_net_ubuf_put(ubufs); > nvq->upend_idx = ((unsigned)nvq->upend_idx - 1) > % UIO_MAXIOV; > } > -- > 2.23.0 _______________________________________________ Virtualization mailing list Virtualization@lists.linux-foundation.org https://lists.linuxfoundation.org/mailman/listinfo/virtualization ^ permalink raw reply [flat|nested] 9+ messages in thread
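The guard added by patch 1/2, dropping the ubuf reference only while the descriptor is still marked in flight, boils down to a put-at-most-once pattern. The sketch below is a simplified userspace stand-in: the struct, the constants, and the helper are invented and only mirror the shape of the vhost fix, in which the zerocopy callback rewrites the descriptor's len field when it completes.

```c
/* Simplified stand-in for the patch-1/2 idea: the reference must be
 * dropped at most once per descriptor, so the put is gated on the
 * in-progress marker that a completion callback would have cleared.
 * All names here are invented for illustration. */
#define DMA_IN_PROGRESS	((unsigned)-2)	/* stand-in for VHOST_DMA_IN_PROGRESS */
#define DMA_DONE	0U

struct ubuf_state {
	unsigned len;	/* DMA_IN_PROGRESS until someone completes it */
	int refcount;	/* outstanding references on the zerocopy buffer */
};

/* Drop the reference only if no callback already did: clearing the
 * marker here makes a second call a no-op. */
static void put_once(struct ubuf_state *u)
{
	if (u->len == DMA_IN_PROGRESS) {
		u->len = DMA_DONE;
		u->refcount--;
	}
}
```

Calling put_once twice, as the error path plus the callback could before the fix, now releases the reference only once.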