From: Jesper Dangaard Brouer <brouer@redhat.com>
To: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
Cc: Jesper Dangaard Brouer <hawk@kernel.org>,
Daniel Borkmann <daniel@iogearbox.net>,
"Michael S. Tsirkin" <mst@redhat.com>,
netdev@vger.kernel.org, John Fastabend <john.fastabend@gmail.com>,
Alexei Starovoitov <ast@kernel.org>,
virtualization@lists.linux-foundation.org,
Marek Majtyka <alardam@gmail.com>,
brouer@redhat.com, Jakub Kicinski <kuba@kernel.org>,
bpf@vger.kernel.org, "David S. Miller" <davem@davemloft.net>
Subject: Re: [PATCH v2 net-next] virtio-net: support XDP_TX when not more queues
Date: Thu, 25 Feb 2021 18:01:20 +0100
Message-ID: <20210225180120.09e8845a@carbon>
In-Reply-To: <1614241349-77324-1-git-send-email-xuanzhuo@linux.alibaba.com>
On Thu, 25 Feb 2021 16:22:29 +0800
Xuan Zhuo <xuanzhuo@linux.alibaba.com> wrote:
> The number of queues implemented by many virtio backends is limited,
> especially on machines with a large number of CPUs. In this case, it
> is often impossible to allocate a separate queue for XDP_TX.
>
> This patch allows XDP_TX to run by reusing an existing SQ with
> __netif_tx_lock() held when there are not enough queues.
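
(For readers skimming the diff below, a rough sketch of what sharing an
SQ under the TX lock could look like. This is not the exact patch code;
the helper names virtnet_xdp_get_sq()/virtnet_xdp_put_sq() and the
queue-selection policy are only illustrative.)

	/* Illustrative sketch: pick a send queue for XDP_TX.  When no
	 * dedicated per-CPU XDP queue could be allocated, fall back to one
	 * of the regular send queues and take the per-queue TX lock to
	 * serialize against the normal transmit path.
	 */
	static struct send_queue *virtnet_xdp_get_sq(struct virtnet_info *vi)
	{
		unsigned int qp = smp_processor_id() % vi->curr_queue_pairs;
		struct netdev_queue *txq = netdev_get_tx_queue(vi->dev, qp);

		if (!vi->xdp_queue_pairs)	/* shared with normal xmit path */
			__netif_tx_lock(txq, smp_processor_id());

		return &vi->sq[qp];
	}

	static void virtnet_xdp_put_sq(struct virtnet_info *vi,
				       struct send_queue *sq)
	{
		unsigned int qp = sq - vi->sq;

		if (!vi->xdp_queue_pairs)	/* unlock only the shared case */
			__netif_tx_unlock(netdev_get_tx_queue(vi->dev, qp));
	}
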
>
> Signed-off-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
> Reviewed-by: Dust Li <dust.li@linux.alibaba.com>
> ---
> drivers/net/virtio_net.c | 48 ++++++++++++++++++++++++++++++++++++------------
> 1 file changed, 36 insertions(+), 12 deletions(-)
>
> diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
[...]
> @@ -2416,12 +2441,8 @@ static int virtnet_xdp_set(struct net_device *dev, struct bpf_prog *prog,
> xdp_qp = nr_cpu_ids;
>
> /* XDP requires extra queues for XDP_TX */
> - if (curr_qp + xdp_qp > vi->max_queue_pairs) {
> - NL_SET_ERR_MSG_MOD(extack, "Too few free TX rings available");
> - netdev_warn(dev, "request %i queues but max is %i\n",
> - curr_qp + xdp_qp, vi->max_queue_pairs);
> - return -ENOMEM;
> - }
> + if (curr_qp + xdp_qp > vi->max_queue_pairs)
> + xdp_qp = 0;
I think we should keep a netdev_warn message, but as a warning (not an
error) that XDP_TX and XDP_REDIRECT will be slower on this device
because too few free TX rings are available.
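Something along these lines (untested sketch, exact wording up to you):

	/* Sketch: warn instead of failing, then fall back to sharing SQs */
	if (curr_qp + xdp_qp > vi->max_queue_pairs) {
		netdev_warn(dev,
			    "XDP requests %i queues but max is %i, XDP_TX and XDP_REDIRECT will share TX queues (slower)\n",
			    curr_qp + xdp_qp, vi->max_queue_pairs);
		xdp_qp = 0;
	}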
In the future, we can add an XDP features flag indicating that this
device is operating in a slower "locked" Tx mode.
>
> old_prog = rtnl_dereference(vi->rq[0].xdp_prog);
> if (!prog && !old_prog)
--
Best regards,
Jesper Dangaard Brouer
MSc.CS, Principal Kernel Engineer at Red Hat
LinkedIn: http://www.linkedin.com/in/brouer