From: "Michael S. Tsirkin" <mst@redhat.com>
To: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
Cc: Wenliang Wang <wangwenliang.1995@bytedance.com>,
virtualization@lists.linux-foundation.org,
netdev@vger.kernel.org, linux-kernel@vger.kernel.org,
jasowang@redhat.com, davem@davemloft.net, edumazet@google.com,
kuba@kernel.org, pabeni@redhat.com
Subject: Re: [PATCH] virtio_net: suppress cpu stall when free_unused_bufs
Date: Thu, 27 Apr 2023 04:12:44 -0400 [thread overview]
Message-ID: <20230427041206-mutt-send-email-mst@kernel.org> (raw)
In-Reply-To: <1682579624.5395834-1-xuanzhuo@linux.alibaba.com>
On Thu, Apr 27, 2023 at 03:13:44PM +0800, Xuan Zhuo wrote:
> On Thu, 27 Apr 2023 15:02:26 +0800, Wenliang Wang <wangwenliang.1995@bytedance.com> wrote:
> >
> >
> > On 4/27/23 2:20 PM, Xuan Zhuo wrote:
> > > On Thu, 27 Apr 2023 12:34:33 +0800, Wenliang Wang <wangwenliang.1995@bytedance.com> wrote:
> > >> For the multi-queue, large rx-ring-size use case, the following error
> > >
> > > Could you give us a concrete number as an example?
> >
> > 128 queues with a 16K ring size is typical, i.e. roughly two million rx buffers to free in one pass.
> >
> > >
> > >> occurred in free_unused_bufs():
> > >> rcu: INFO: rcu_sched self-detected stall on CPU.
> > >>
> > >> Signed-off-by: Wenliang Wang <wangwenliang.1995@bytedance.com>
> > >> ---
> > >> drivers/net/virtio_net.c | 1 +
> > >> 1 file changed, 1 insertion(+)
> > >>
> > >> diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
> > >> index ea1bd4bb326d..21d8382fd2c7 100644
> > >> --- a/drivers/net/virtio_net.c
> > >> +++ b/drivers/net/virtio_net.c
> > >> @@ -3565,6 +3565,7 @@ static void free_unused_bufs(struct virtnet_info *vi)
> > >> struct virtqueue *vq = vi->rq[i].vq;
> > >> while ((buf = virtqueue_detach_unused_buf(vq)) != NULL)
> > >> virtnet_rq_free_unused_buf(vq, buf);
> > >> + schedule();
> > >
> > > Just for rq?
> > >
> > > Do we need to do the same thing for sq?
> > Rq buffers are pre-allocated, so it takes seconds to free the unused rq buffers.
> >
> > There are far fewer unused sq buffers, so doing the same for sq is optional.
>
> Got it.
>
> I think we should look for an approach that also works well with fewer queues or
> smaller rings. Calling schedule() directly may not be a good way.
>
> Thanks.
Why isn't it a good way?
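
For the sake of discussion, a rough, untested sketch of what that could look
like is below. It assumes the existing virtnet_sq_free_unused_buf() /
virtnet_rq_free_unused_buf() helpers and relies on cond_resched() being
essentially free when nothing else needs the CPU, so small configurations are
not penalized:

static void free_unused_bufs(struct virtnet_info *vi)
{
	void *buf;
	int i;

	for (i = 0; i < vi->max_queue_pairs; i++) {
		struct virtqueue *vq = vi->sq[i].vq;

		while ((buf = virtqueue_detach_unused_buf(vq)) != NULL)
			virtnet_sq_free_unused_buf(vq, buf);
		/* Yield once per tx queue so huge rings cannot stall this CPU. */
		cond_resched();
	}

	for (i = 0; i < vi->max_queue_pairs; i++) {
		struct virtqueue *vq = vi->rq[i].vq;

		while ((buf = virtqueue_detach_unused_buf(vq)) != NULL)
			virtnet_rq_free_unused_buf(vq, buf);
		/* Likewise for the (much larger) pre-allocated rx rings. */
		cond_resched();
	}
}
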
>
> >
> > >
> > > Thanks.
> > >
> > >
> > >> }
> > >> }
> > >>
> > >> --
> > >> 2.20.1
> > >>
Thread overview: 13+ messages
2023-04-27 4:34 [PATCH] virtio_net: suppress cpu stall when free_unused_bufs Wenliang Wang
2023-04-27 6:20 ` Xuan Zhuo
2023-04-27 7:02 ` Wenliang Wang
2023-04-27 7:13 ` Xuan Zhuo
2023-04-27 8:12 ` Michael S. Tsirkin [this message]
2023-04-27 8:13 ` Xuan Zhuo
2023-04-27 8:23 ` Michael S. Tsirkin
2023-04-27 8:49 ` Wenliang Wang
2023-04-27 8:51 ` Xuan Zhuo
2023-04-27 10:46 ` [PATCH v2] " Wenliang Wang
2023-04-28 1:09 ` Michael S. Tsirkin
2023-04-27 10:45 ` [PATCH] " Qi Zheng
2023-04-28 13:56 ` Willem de Bruijn