From: Jason Wang <jasowang@redhat.com>
To: "Michael S. Tsirkin" <mst@redhat.com>
Cc: David Ahern <dsahern@kernel.org>,
netdev@vger.kernel.org, davem@davemloft.net, kuba@kernel.org,
David Ahern <dahern@digitalocean.com>
Subject: Re: [PATCH RFC net-next] virtio_net: Relax queue requirement for using XDP
Date: Wed, 26 Feb 2020 02:28:10 -0500 (EST) [thread overview]
Message-ID: <449099311.10687151.1582702090890.JavaMail.zimbra@redhat.com> (raw)
In-Reply-To: <20200226014333-mutt-send-email-mst@kernel.org>
----- Original Message -----
> On Wed, Feb 26, 2020 at 11:00:40AM +0800, Jason Wang wrote:
> >
> > On 2020/2/26 8:57 AM, David Ahern wrote:
> > > From: David Ahern <dahern@digitalocean.com>
> > >
> > > virtio_net currently requires extra queues to install an XDP program,
> > > with the rule being twice as many queues as vcpus. From a host
> > > perspective this means the VM needs to have 2*vcpus vhost threads
> > > for each guest NIC for which XDP is to be allowed. For example, a
> > > 16 vcpu VM with 2 tap devices needs 64 vhost threads.
> > >
> > > The extra queues are only needed in case an XDP program wants to
> > > return XDP_TX. XDP_PASS, XDP_DROP and XDP_REDIRECT do not need
> > > additional queues. Relax the queue requirement and allow XDP
> > > functionality based on resources. If an XDP program is loaded and
> > > there are insufficient queues, then return a warning to the user
> > > and if a program returns XDP_TX just drop the packet. This allows
> > > the use of the rest of the XDP functionality to work without
> > > putting an unreasonable burden on the host.
> > >
> > > Cc: Jason Wang <jasowang@redhat.com>
> > > Cc: Michael S. Tsirkin <mst@redhat.com>
> > > Signed-off-by: David Ahern <dahern@digitalocean.com>
> > > ---
> > > drivers/net/virtio_net.c | 14 ++++++++++----
> > > 1 file changed, 10 insertions(+), 4 deletions(-)
> > >
> > > diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
> > > index 2fe7a3188282..2f4c5b2e674d 100644
> > > --- a/drivers/net/virtio_net.c
> > > +++ b/drivers/net/virtio_net.c
> > > @@ -190,6 +190,8 @@ struct virtnet_info {
> > > /* # of XDP queue pairs currently used by the driver */
> > > u16 xdp_queue_pairs;
> > > + bool can_do_xdp_tx;
> > > +
> > > /* I like... big packets and I cannot lie! */
> > > bool big_packets;
> > > @@ -697,6 +699,8 @@ static struct sk_buff *receive_small(struct net_device *dev,
> > > len = xdp.data_end - xdp.data;
> > > break;
> > > case XDP_TX:
> > > + if (!vi->can_do_xdp_tx)
> > > + goto err_xdp;
> >
> >
> > I wonder if using spinlock to synchronize XDP_TX is better than dropping
> > here?
> >
> > Thanks
>
> I think it's less a problem with locking, and more a problem
> with queue being potentially full and XDP being unable to
> transmit.
I'm not sure we need to care about this. Even XDP_TX with a dedicated
queue can hit a full queue, and generic XDP already works this way.
>
> From that POV just sharing the queue would already be better than just
> an unconditional drop, however I think this is not what XDP users came
> to expect. So at this point, partitioning the queue might be reasonable.
> When XDP attaches we could block until queue is mostly empty.
This means XDP_TX has a higher priority, which I'm not sure is good.
> However,
> how exactly to partition the queue remains open.
It would not be easy unless we have support from the virtio layer.
> Maybe it's reasonable
> to limit number of RX buffers to achieve balance.
>
If I understand this correctly, this can only help throttle
XDP_TX, but we may also have XDP_REDIRECT ...
So considering that either dropping or sharing is much better than not
enabling XDP at all, we may start from there.
Thanks