From: Joe Damato <jdamato@fastly.com>
To: "Jakub Kicinski" <kuba@kernel.org>,
netdev@vger.kernel.org, mkarsten@uwaterloo.ca,
gerhard@engleder-embedded.com, jasowang@redhat.com,
xuanzhuo@linux.alibaba.com, mst@redhat.com, leiyang@redhat.com,
"Eugenio Pérez" <eperezma@redhat.com>,
"Andrew Lunn" <andrew+netdev@lunn.ch>,
"David S. Miller" <davem@davemloft.net>,
"Eric Dumazet" <edumazet@google.com>,
"Paolo Abeni" <pabeni@redhat.com>,
"open list:VIRTIO CORE AND NET DRIVERS"
<virtualization@lists.linux.dev>,
"open list" <linux-kernel@vger.kernel.org>
Subject: Re: [PATCH net-next v5 3/4] virtio-net: Map NAPIs to queues
Date: Mon, 3 Mar 2025 13:33:10 -0500
Message-ID: <Z8X15hxz8t-vXpPU@LQ3V64L9R2>
In-Reply-To: <Z8XgGrToAD7Bak-I@LQ3V64L9R2>
On Mon, Mar 03, 2025 at 12:00:10PM -0500, Joe Damato wrote:
> On Mon, Mar 03, 2025 at 11:46:10AM -0500, Joe Damato wrote:
> > On Fri, Feb 28, 2025 at 06:27:59PM -0800, Jakub Kicinski wrote:
> > > On Thu, 27 Feb 2025 18:50:13 +0000 Joe Damato wrote:
> > > > @@ -2870,9 +2883,15 @@ static void refill_work(struct work_struct *work)
> > > >  	for (i = 0; i < vi->curr_queue_pairs; i++) {
> > > >  		struct receive_queue *rq = &vi->rq[i];
> > > >  
> > > > +		rtnl_lock();
> > > >  		virtnet_napi_disable(rq);
> > > > +		rtnl_unlock();
> > > > +
> > > >  		still_empty = !try_fill_recv(vi, rq, GFP_KERNEL);
> > > > +
> > > > +		rtnl_lock();
> > > >  		virtnet_napi_enable(rq);
> > > > +		rtnl_unlock();
> > >
> > > Looks to me like refill_work is cancelled _sync while holding rtnl_lock
> > > from the close path. I think this could deadlock?
> >
> > Good catch, thank you!
> >
> > It looks like this is also the case in the failure path of
> > virtnet_open.
> >
> > Jason: do you have any suggestions?
> >
> > It looks like in both open and close disable_delayed_refill is
> > called first, before the cancel_delayed_work_sync.
> >
> > Would something like this solve the problem?
> >
> > diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
> > index 76dcd65ec0f2..457115300f05 100644
> > --- a/drivers/net/virtio_net.c
> > +++ b/drivers/net/virtio_net.c
> > @@ -2880,6 +2880,13 @@ static void refill_work(struct work_struct *work)
> >  	bool still_empty;
> >  	int i;
> >  
> > +	spin_lock(&vi->refill_lock);
> > +	if (!vi->refill_enabled) {
> > +		spin_unlock(&vi->refill_lock);
> > +		return;
> > +	}
> > +	spin_unlock(&vi->refill_lock);
> > +
> >  	for (i = 0; i < vi->curr_queue_pairs; i++) {
> >  		struct receive_queue *rq = &vi->rq[i];
> >  
>
> Err, I suppose this also doesn't work because:
>
>    CPU0                           CPU1
>    rtnl_lock                      (before CPU0 calls disable_delayed_refill)
>    virtnet_close                  refill_work
>                                     rtnl_lock()
>    cancel_sync <= deadlock
>
> Need to give this a bit more thought.
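To spell the race out as code: a minimal, abbreviated sketch of the
two paths involved (based on the functions named in this thread, not
verbatim virtio_net.c):

	/* CPU0: close path. ndo_stop runs with RTNL already held. */
	static int virtnet_close(struct net_device *dev)
	{
		struct virtnet_info *vi = netdev_priv(dev);

		disable_delayed_refill(vi);
		/* Waits for any in-flight refill_work to finish,
		 * while still holding RTNL ...
		 */
		cancel_delayed_work_sync(&vi->refill);
		/* remaining teardown elided */
		return 0;
	}

	/* CPU1: refill_work, as modified by this patch. If the work
	 * started before CPU0 called disable_delayed_refill(), it is
	 * already past any refill_enabled check and blocks here
	 * forever, since CPU0 won't release RTNL until cancel_sync
	 * returns.
	 */
	rtnl_lock();
	virtnet_napi_disable(rq);
	rtnl_unlock();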
How about we don't use the API at all from refill_work?

Patch 4 adds persistent NAPI config state, and refill_work isn't a
queue resize, so maybe we don't need to call netif_queue_set_napi at
all: the NAPI IDs are persisted in the NAPI config state, and
refill_work shouldn't change the queue-to-NAPI mapping.

In that case, we could go back to what refill_work was doing before
and avoid the problem entirely.

What do you think?
diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
index 76dcd65ec0f2..d6c8fe670005 100644
--- a/drivers/net/virtio_net.c
+++ b/drivers/net/virtio_net.c
@@ -2883,15 +2883,9 @@ static void refill_work(struct work_struct *work)
 	for (i = 0; i < vi->curr_queue_pairs; i++) {
 		struct receive_queue *rq = &vi->rq[i];
 
-		rtnl_lock();
-		virtnet_napi_disable(rq);
-		rtnl_unlock();
-
+		napi_disable(&rq->napi);
 		still_empty = !try_fill_recv(vi, rq, GFP_KERNEL);
-
-		rtnl_lock();
-		virtnet_napi_enable(rq);
-		rtnl_unlock();
+		virtnet_napi_do_enable(rq->vq, &rq->napi);
 
 		/* In theory, this can happen: if we don't get any buffers in
 		 * we will *never* try to fill again.
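With that change applied on top of this series, refill_work would read
roughly as follows. This is a sketch reconstructed from the diff above
plus the existing still_empty handling; the HZ/2 retry interval
follows the current upstream code:

	static void refill_work(struct work_struct *work)
	{
		struct virtnet_info *vi =
			container_of(work, struct virtnet_info, refill.work);
		bool still_empty;
		int i;

		for (i = 0; i < vi->curr_queue_pairs; i++) {
			struct receive_queue *rq = &vi->rq[i];

			/* No RTNL and no netif_queue_set_napi() here:
			 * the queue<->NAPI mapping established at open
			 * time is left untouched.
			 */
			napi_disable(&rq->napi);
			still_empty = !try_fill_recv(vi, rq, GFP_KERNEL);
			virtnet_napi_do_enable(rq->vq, &rq->napi);

			/* In theory, this can happen: if we don't get
			 * any buffers in we will *never* try to fill
			 * again.
			 */
			if (still_empty)
				schedule_delayed_work(&vi->refill, HZ / 2);
		}
	}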