From: Joe Damato <jdamato@fastly.com>
To: "Jason Wang" <jasowang@redhat.com>,
netdev@vger.kernel.org, mkarsten@uwaterloo.ca,
gerhard@engleder-embedded.com, xuanzhuo@linux.alibaba.com,
kuba@kernel.org, "Michael S. Tsirkin" <mst@redhat.com>,
"Eugenio Pérez" <eperezma@redhat.com>,
"Andrew Lunn" <andrew+netdev@lunn.ch>,
"David S. Miller" <davem@davemloft.net>,
"Eric Dumazet" <edumazet@google.com>,
"Paolo Abeni" <pabeni@redhat.com>,
"open list:VIRTIO CORE AND NET DRIVERS"
<virtualization@lists.linux.dev>,
"open list" <linux-kernel@vger.kernel.org>
Subject: Re: [PATCH net-next v4 3/4] virtio-net: Map NAPIs to queues
Date: Wed, 26 Feb 2025 13:08:49 -0500
Message-ID: <Z79YseiGrzYxoyvr@LQ3V64L9R2>
In-Reply-To: <Z79XXQjp9Dz7OYYQ@LQ3V64L9R2>
On Wed, Feb 26, 2025 at 01:03:09PM -0500, Joe Damato wrote:
> On Wed, Feb 26, 2025 at 01:48:50PM +0800, Jason Wang wrote:
> > On Tue, Feb 25, 2025 at 10:05 AM Joe Damato <jdamato@fastly.com> wrote:
> > >
> > > Use netif_queue_set_napi to map NAPIs to queue IDs so that the mapping
> > > can be accessed by user apps, taking care to hold RTNL as needed.
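(Side note for anyone following along: once the mapping is set,
userspace can read it via the netdev netlink family, e.g. with the
ynl CLI -- the exact script path varies by tree:

    $ ./tools/net/ynl/cli.py \
          --spec Documentation/netlink/specs/netdev.yaml \
          --dump queue-get

which dumps each queue along with the napi-id it is mapped to.)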
> >
> > I may miss something but I wonder whether letting the caller hold the
> > lock is better.
>
> Hmm...
>
> Double-checking all the paths again, here's what I see:
> - refill_work: delayed work where RTNL is not held, so it has to
>   be taken; this change seems right?
>
> - virtnet_disable_queue_pair, called from virtnet_open and
>   virtnet_close. When called via NDO these are safe because RTNL
>   is held, but they can also be called from power management,
>   where RTNL is not held and must be taken.
>
> - virtnet_enable_queue_pair, called from virtnet_open: safe when
>   used via NDO, but RTNL must be taken when used via power
>   management.
>
> - virtnet_rx_pause, called in both kinds of paths as you
>   mentioned: one where RTNL is already held and one where it
>   isn't.
Sorry, I missed a few more paths:
- virtnet_rx_resume
- virtnet_tx_pause and virtnet_tx_resume
which are similar to the path you mentioned (virtnet_rx_pause): RTNL
is held in one of their two call paths but not the other. While
auditing these I leaned on the quick sanity check sketched below.
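To be clear, that check is purely a debug aid on my side, not
something I'd fold into the patch -- it just asserts the expectation
in the helper:

    static void virtnet_queue_set_napi(struct net_device *dev,
                                       struct napi_struct *napi,
                                       enum netdev_queue_type q_type,
                                       int qidx, bool need_rtnl)
    {
            if (need_rtnl)
                    rtnl_lock();
            else
                    /* caller claims RTNL is already held; warn if not */
                    ASSERT_RTNL();

            netif_queue_set_napi(dev, qidx, q_type, napi);

            if (need_rtnl)
                    rtnl_unlock();
    }

Any path that passes need_rtnl = false without actually holding the
lock then triggers a warning.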
Let me know if I missed any paths, and which way you would prefer to
fix this.
I think both options below are possible and I have no strong
preference.
>
> I think there are a couple ways to fix this:
>
> 1. Edit this patch to remove the virtnet_queue_set_napi helper,
> and call netif_queue_set_napi from the napi_enable and
> napi_disable helpers directly. Modify code calling into these
> paths to hold rtnl (or not) as described above.
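Concretely, option 1 would make the NAPI helpers look roughly like
this (just a sketch to show the shape; callers such as refill_work
would then take RTNL around the call themselves):

    static void virtnet_napi_enable(struct receive_queue *rq)
    {
            struct virtnet_info *vi = rq->vq->vdev->priv;
            int qidx = vq2rxq(rq->vq);

            virtnet_napi_do_enable(rq->vq, &rq->napi);
            /* every caller is expected to hold RTNL at this point */
            netif_queue_set_napi(vi->dev, qidx, NETDEV_QUEUE_TYPE_RX,
                                 &rq->napi);
    }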
>
> 2. Modify virtnet_enable_queue_pair, virtnet_disable_queue_pair,
>    and virtnet_rx_pause to take a "bool need_rtnl" function
>    argument and pass it through.
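And option 2 would instead thread the flag through, e.g. at the
virtnet_rx_pause level (again just a sketch, existing body elided):

    static void virtnet_rx_pause(struct virtnet_info *vi,
                                 struct receive_queue *rq,
                                 bool need_rtnl)
    {
            bool running = netif_running(vi->dev);

            if (running) {
                    /* need_rtnl is true only on paths (e.g. power
                     * management) that don't already hold RTNL
                     */
                    virtnet_napi_disable(rq, need_rtnl);
                    /* ... rest of the existing pause logic ... */
            }
    }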
>
> I'm not sure which is cleaner and I do not have a preference.
>
> Can you let me know which you prefer? I am happy to implement either
> one for the next revision.
>
> [...]
>
> > > ---
> > > drivers/net/virtio_net.c | 73 ++++++++++++++++++++++++++++------------
> > > 1 file changed, 52 insertions(+), 21 deletions(-)
> > >
> > > diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
> > > index e578885c1093..13bb4a563073 100644
> > > --- a/drivers/net/virtio_net.c
> > > +++ b/drivers/net/virtio_net.c
> > > @@ -2807,6 +2807,20 @@ static void skb_recv_done(struct virtqueue *rvq)
> > > virtqueue_napi_schedule(&rq->napi, rvq);
> > > }
> > >
> > > +static void virtnet_queue_set_napi(struct net_device *dev,
> > > + struct napi_struct *napi,
> > > + enum netdev_queue_type q_type, int qidx,
> > > + bool need_rtnl)
> > > +{
> > > + if (need_rtnl)
> > > + rtnl_lock();
> > > +
> > > + netif_queue_set_napi(dev, qidx, q_type, napi);
> > > +
> > > + if (need_rtnl)
> > > + rtnl_unlock();
> > > +}
> > > +
> > > static void virtnet_napi_do_enable(struct virtqueue *vq,
> > > struct napi_struct *napi)
> > > {
> > > @@ -2821,15 +2835,21 @@ static void virtnet_napi_do_enable(struct virtqueue *vq,
> > > local_bh_enable();
> > > }
> > >
> > > -static void virtnet_napi_enable(struct receive_queue *rq)
> > > +static void virtnet_napi_enable(struct receive_queue *rq, bool need_rtnl)
> > > {
> > > + struct virtnet_info *vi = rq->vq->vdev->priv;
> > > + int qidx = vq2rxq(rq->vq);
> > > +
> > > virtnet_napi_do_enable(rq->vq, &rq->napi);
> > > + virtnet_queue_set_napi(vi->dev, &rq->napi,
> > > + NETDEV_QUEUE_TYPE_RX, qidx, need_rtnl);
> > > }
> > >
> > > -static void virtnet_napi_tx_enable(struct send_queue *sq)
> > > +static void virtnet_napi_tx_enable(struct send_queue *sq, bool need_rtnl)
> > > {
> > > struct virtnet_info *vi = sq->vq->vdev->priv;
> > > struct napi_struct *napi = &sq->napi;
> > > + int qidx = vq2txq(sq->vq);
> > >
> > > if (!napi->weight)
> > > return;
> > > @@ -2843,20 +2863,31 @@ static void virtnet_napi_tx_enable(struct send_queue *sq)
> > > }
> > >
> > > virtnet_napi_do_enable(sq->vq, napi);
> > > + virtnet_queue_set_napi(vi->dev, napi, NETDEV_QUEUE_TYPE_TX, qidx,
> > > + need_rtnl);
> > > }
> > >
> > > -static void virtnet_napi_tx_disable(struct send_queue *sq)
> > > +static void virtnet_napi_tx_disable(struct send_queue *sq, bool need_rtnl)
> > > {
> > > + struct virtnet_info *vi = sq->vq->vdev->priv;
> > > struct napi_struct *napi = &sq->napi;
> > > + int qidx = vq2txq(sq->vq);
> > >
> > > - if (napi->weight)
> > > + if (napi->weight) {
> > > + virtnet_queue_set_napi(vi->dev, NULL, NETDEV_QUEUE_TYPE_TX,
> > > + qidx, need_rtnl);
> > > napi_disable(napi);
> > > + }
> > > }
> > >
> > > -static void virtnet_napi_disable(struct receive_queue *rq)
> > > +static void virtnet_napi_disable(struct receive_queue *rq, bool need_rtnl)
> > > {
> > > + struct virtnet_info *vi = rq->vq->vdev->priv;
> > > struct napi_struct *napi = &rq->napi;
> > > + int qidx = vq2rxq(rq->vq);
> > >
> > > + virtnet_queue_set_napi(vi->dev, NULL, NETDEV_QUEUE_TYPE_RX, qidx,
> > > + need_rtnl);
> > > napi_disable(napi);
> > > }
> > >
> > > @@ -2870,9 +2901,9 @@ static void refill_work(struct work_struct *work)
> > > for (i = 0; i < vi->curr_queue_pairs; i++) {
> > > struct receive_queue *rq = &vi->rq[i];
> > >
> > > - virtnet_napi_disable(rq);
> > > + virtnet_napi_disable(rq, true);
> > > still_empty = !try_fill_recv(vi, rq, GFP_KERNEL);
> > > - virtnet_napi_enable(rq);
> > > + virtnet_napi_enable(rq, true);
> > >
> > > /* In theory, this can happen: if we don't get any buffers in
> > > * we will *never* try to fill again.
> > > @@ -3069,8 +3100,8 @@ static int virtnet_poll(struct napi_struct *napi, int budget)
> > >
> > > static void virtnet_disable_queue_pair(struct virtnet_info *vi, int qp_index)
> > > {
> > > - virtnet_napi_tx_disable(&vi->sq[qp_index]);
> > > - virtnet_napi_disable(&vi->rq[qp_index]);
> > > + virtnet_napi_tx_disable(&vi->sq[qp_index], false);
> > > + virtnet_napi_disable(&vi->rq[qp_index], false);
> > > xdp_rxq_info_unreg(&vi->rq[qp_index].xdp_rxq);
> > > }
> > >
> > > @@ -3089,8 +3120,8 @@ static int virtnet_enable_queue_pair(struct virtnet_info *vi, int qp_index)
> > > if (err < 0)
> > > goto err_xdp_reg_mem_model;
> > >
> > > - virtnet_napi_enable(&vi->rq[qp_index]);
> > > - virtnet_napi_tx_enable(&vi->sq[qp_index]);
> > > + virtnet_napi_enable(&vi->rq[qp_index], false);
> > > + virtnet_napi_tx_enable(&vi->sq[qp_index], false);
> > >
> > > return 0;
> > >
> > > @@ -3342,7 +3373,7 @@ static void virtnet_rx_pause(struct virtnet_info *vi, struct receive_queue *rq)
> > > bool running = netif_running(vi->dev);
> > >
> > > if (running) {
> > > - virtnet_napi_disable(rq);
> > > + virtnet_napi_disable(rq, true);
> >
> > During the resize, the rtnl lock has been held on the ethtool path
> >
> > rtnl_lock();
> > rc = __dev_ethtool(net, ifr, useraddr, ethcmd, state);
> > rtnl_unlock();
> >
> > virtnet_rx_resize()
> > virtnet_rx_pause()
> >
> > and in the case of XSK binding, I see ASSERT_RTNL in xp_assign_dev() at least.
>
> Thanks for catching this. I re-read all the paths and I think I've
> outlined a few other issues above.
>
> Please let me know which of the proposed methods above you'd like me
> to implement to get this merged.
>
> Thanks.
>
> ---
> pw-bot: cr