* Re: [PATCH net-next v3 7/8] virtio-net: poll tx call xsk zerocopy xmit

From: Jason Wang
Date: 2021-04-07 9:16 UTC
In-Reply-To: <1617786614.454336-5-xuanzhuo@linux.alibaba.com>
To: Xuan Zhuo
Cc: Song Liu, Martin KaFai Lau, Jesper Dangaard Brouer, Daniel Borkmann,
    Michael S. Tsirkin, Yonghong Song, John Fastabend, Alexei Starovoitov,
    Andrii Nakryiko, netdev, Björn Töpel, Dust Li, Jonathan Lemon, KP Singh,
    Jakub Kicinski, bpf, virtualization, David S. Miller, Magnus Karlsson

On 2021/4/7 5:10 PM, Xuan Zhuo wrote:
> On Tue, 6 Apr 2021 15:03:29 +0800, Jason Wang <jasowang@redhat.com> wrote:
>> On 2021/3/31 3:11 PM, Xuan Zhuo wrote:
>>> poll tx calls virtnet_xsk_run(), so the data in the xsk tx queue will be
>>> continuously consumed by napi.
>>>
>>> Signed-off-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
>>> Reviewed-by: Dust Li <dust.li@linux.alibaba.com>
>>
>> I think we need to squash this into patch 4; it looks more like a bug fix
>> to me.
>>
>>
>>> ---
>>>  drivers/net/virtio_net.c | 20 +++++++++++++++++---
>>>  1 file changed, 17 insertions(+), 3 deletions(-)
>>>
>>> diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
>>> index d7e95f55478d..fac7d0020013 100644
>>> --- a/drivers/net/virtio_net.c
>>> +++ b/drivers/net/virtio_net.c
>>> @@ -264,6 +264,9 @@ struct padded_vnet_hdr {
>>>  	char padding[4];
>>>  };
>>>
>>> +static int virtnet_xsk_run(struct send_queue *sq, struct xsk_buff_pool *pool,
>>> +			    int budget, bool in_napi);
>>> +
>>>  static bool is_xdp_frame(void *ptr)
>>>  {
>>>  	return (unsigned long)ptr & VIRTIO_XDP_FLAG;
>>> @@ -1553,7 +1556,9 @@ static int virtnet_poll_tx(struct napi_struct *napi, int budget)
>>>  	struct send_queue *sq = container_of(napi, struct send_queue, napi);
>>>  	struct virtnet_info *vi = sq->vq->vdev->priv;
>>>  	unsigned int index = vq2txq(sq->vq);
>>> +	struct xsk_buff_pool *pool;
>>>  	struct netdev_queue *txq;
>>> +	int work = 0;
>>>
>>>  	if (unlikely(is_xdp_raw_buffer_queue(vi, index))) {
>>>  		/* We don't need to enable cb for XDP */
>>> @@ -1563,15 +1568,24 @@ static int virtnet_poll_tx(struct napi_struct *napi, int budget)
>>>
>>>  	txq = netdev_get_tx_queue(vi->dev, index);
>>>  	__netif_tx_lock(txq, raw_smp_processor_id());
>>> -	free_old_xmit_skbs(sq, true);
>>> +	rcu_read_lock();
>>> +	pool = rcu_dereference(sq->xsk.pool);
>>> +	if (pool) {
>>> +		work = virtnet_xsk_run(sq, pool, budget, true);
>>> +		rcu_read_unlock();
>>> +	} else {
>>> +		rcu_read_unlock();
>>> +		free_old_xmit_skbs(sq, true);
>>> +	}
>>>  	__netif_tx_unlock(txq);
>>>
>>> -	virtqueue_napi_complete(napi, sq->vq, 0);
>>> +	if (work < budget)
>>> +		virtqueue_napi_complete(napi, sq->vq, 0);
>>>
>>>  	if (sq->vq->num_free >= 2 + MAX_SKB_FRAGS)
>>>  		netif_tx_wake_queue(txq);
>>>
>>> -	return 0;
>>> +	return work;
>>
>> Need a separate patch to "fix" the budget returned by poll_tx here.
> I will merge #5, #7 and #8 into #4, which is indeed more reasonable, but maybe
> the resulting patch will be too big.
>
> But I don't understand what you are talking about here: what is the separate
> patch, when this is squashed into patch 4?


So you modify the behaviour of the NAPI tx poll to return the amount of work
done (previously 0 was always returned). Do we need to do that for the
non-XSK path as well (which seems to be the behaviour of other NIC drivers)?
If yes, this part should be a separate patch to be more bisect friendly.

Thanks


>
>> Thanks
>>
>>>  }
>>>
>>>  static int xmit_skb(struct send_queue *sq, struct sk_buff *skb)
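
For reference, a minimal sketch of the separate non-XSK change being suggested
here — having the skb completion path also report its work to NAPI — could
look roughly like this. It assumes free_old_xmit_skbs() is extended to return
the number of packets it reclaimed, which the code quoted above does not do,
so that return value and the min() clamp are illustrative rather than taken
from the series:

/* Illustrative sketch only: assumes free_old_xmit_skbs() is changed to
 * return the number of packets it reclaimed.
 */
static int virtnet_poll_tx(struct napi_struct *napi, int budget)
{
	struct send_queue *sq = container_of(napi, struct send_queue, napi);
	struct virtnet_info *vi = sq->vq->vdev->priv;
	unsigned int index = vq2txq(sq->vq);
	struct netdev_queue *txq;
	int work;

	if (unlikely(is_xdp_raw_buffer_queue(vi, index))) {
		/* We don't need to enable cb for XDP */
		napi_complete_done(napi, 0);
		return 0;
	}

	txq = netdev_get_tx_queue(vi->dev, index);
	__netif_tx_lock(txq, raw_smp_processor_id());
	work = free_old_xmit_skbs(sq, true);	/* hypothetical return value */
	__netif_tx_unlock(txq);

	/* Usual NAPI convention: only complete when less than a full
	 * budget was consumed.
	 */
	if (work < budget)
		virtqueue_napi_complete(napi, sq->vq, 0);

	if (sq->vq->num_free >= 2 + MAX_SKB_FRAGS)
		netif_tx_wake_queue(txq);

	return min(work, budget);
}

Returning the real work count (capped at budget) lets the NAPI core apply the
same accounting it uses for other drivers, which is also what makes the change
worth splitting into its own, separately bisectable, patch.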
* Re: [PATCH net-next v3 7/8] virtio-net: poll tx call xsk zerocopy xmit

From: Jason Wang
Date: 2021-04-06 7:03 UTC
In-Reply-To: <20210331071139.15473-8-xuanzhuo@linux.alibaba.com>
To: Xuan Zhuo, netdev
Cc: Song Liu, Martin KaFai Lau, Jesper Dangaard Brouer, Daniel Borkmann,
    Michael S. Tsirkin, Yonghong Song, John Fastabend, Alexei Starovoitov,
    Andrii Nakryiko, Björn Töpel, Dust Li, Jonathan Lemon, KP Singh,
    Jakub Kicinski, bpf, virtualization, David S. Miller, Magnus Karlsson

On 2021/3/31 3:11 PM, Xuan Zhuo wrote:
> poll tx calls virtnet_xsk_run(), so the data in the xsk tx queue will be
> continuously consumed by napi.
>
> Signed-off-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
> Reviewed-by: Dust Li <dust.li@linux.alibaba.com>


I think we need to squash this into patch 4; it looks more like a bug fix
to me.


> ---
>  drivers/net/virtio_net.c | 20 +++++++++++++++++---
>  1 file changed, 17 insertions(+), 3 deletions(-)
>
> diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
> index d7e95f55478d..fac7d0020013 100644
> --- a/drivers/net/virtio_net.c
> +++ b/drivers/net/virtio_net.c
> @@ -264,6 +264,9 @@ struct padded_vnet_hdr {
>  	char padding[4];
>  };
>
> +static int virtnet_xsk_run(struct send_queue *sq, struct xsk_buff_pool *pool,
> +			    int budget, bool in_napi);
> +
>  static bool is_xdp_frame(void *ptr)
>  {
>  	return (unsigned long)ptr & VIRTIO_XDP_FLAG;
> @@ -1553,7 +1556,9 @@ static int virtnet_poll_tx(struct napi_struct *napi, int budget)
>  	struct send_queue *sq = container_of(napi, struct send_queue, napi);
>  	struct virtnet_info *vi = sq->vq->vdev->priv;
>  	unsigned int index = vq2txq(sq->vq);
> +	struct xsk_buff_pool *pool;
>  	struct netdev_queue *txq;
> +	int work = 0;
>
>  	if (unlikely(is_xdp_raw_buffer_queue(vi, index))) {
>  		/* We don't need to enable cb for XDP */
> @@ -1563,15 +1568,24 @@ static int virtnet_poll_tx(struct napi_struct *napi, int budget)
>
>  	txq = netdev_get_tx_queue(vi->dev, index);
>  	__netif_tx_lock(txq, raw_smp_processor_id());
> -	free_old_xmit_skbs(sq, true);
> +	rcu_read_lock();
> +	pool = rcu_dereference(sq->xsk.pool);
> +	if (pool) {
> +		work = virtnet_xsk_run(sq, pool, budget, true);
> +		rcu_read_unlock();
> +	} else {
> +		rcu_read_unlock();
> +		free_old_xmit_skbs(sq, true);
> +	}
>  	__netif_tx_unlock(txq);
>
> -	virtqueue_napi_complete(napi, sq->vq, 0);
> +	if (work < budget)
> +		virtqueue_napi_complete(napi, sq->vq, 0);
>
>  	if (sq->vq->num_free >= 2 + MAX_SKB_FRAGS)
>  		netif_tx_wake_queue(txq);
>
> -	return 0;
> +	return work;


Need a separate patch to "fix" the budget returned by poll_tx here.

Thanks


>  }
>
>  static int xmit_skb(struct send_queue *sq, struct sk_buff *skb)
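
As context for the rcu_dereference() in the hunk above: the tx poll path
assumes sq->xsk.pool is published and torn down with the usual RCU pattern.
A rough sketch of that pattern follows; the real setup and teardown live in
the earlier xsk bind/unbind patches of this series, so the function names
below are illustrative, not the series' actual helpers:

/* Rough sketch of the RCU publish/unpublish pattern poll_tx relies on;
 * the function names are illustrative, not from the series itself.
 */
static int virtnet_xsk_pool_enable(struct send_queue *sq,
				   struct xsk_buff_pool *pool)
{
	/* Make the pool visible to the tx NAPI poll path. */
	rcu_assign_pointer(sq->xsk.pool, pool);
	return 0;
}

static void virtnet_xsk_pool_disable(struct send_queue *sq)
{
	/* Hide the pool, then wait until every poll_tx reader that may
	 * have picked it up under rcu_read_lock() has finished.
	 */
	rcu_assign_pointer(sq->xsk.pool, NULL);
	synchronize_rcu();
}

With something like that in place, the if (pool) / else split in poll_tx is
safe: a reader either sees the pool and drains the xsk tx queue under NAPI,
or falls back to freeing completed skbs.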