* Re: [PATCH] virtio_net: suppress cpu stall when free_unused_bufs
[not found] <20230427043433.2594960-1-wangwenliang.1995@bytedance.com>
@ 2023-04-27 6:20 ` Xuan Zhuo
[not found] ` <252ee222-f918-426e-68ef-b3710a60662e@bytedance.com>
0 siblings, 1 reply; 8+ messages in thread
From: Xuan Zhuo @ 2023-04-27 6:20 UTC (permalink / raw)
To: Wenliang Wang
Cc: pabeni, mst, netdev, linux-kernel, virtualization, edumazet, kuba,
Wenliang Wang, davem
On Thu, 27 Apr 2023 12:34:33 +0800, Wenliang Wang <wangwenliang.1995@bytedance.com> wrote:
> For multi-queue and large rx-ring-size use case, the following error
Could you give one number as an example?
> occurred when free_unused_bufs:
> rcu: INFO: rcu_sched self-detected stall on CPU.
>
> Signed-off-by: Wenliang Wang <wangwenliang.1995@bytedance.com>
> ---
> drivers/net/virtio_net.c | 1 +
> 1 file changed, 1 insertion(+)
>
> diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
> index ea1bd4bb326d..21d8382fd2c7 100644
> --- a/drivers/net/virtio_net.c
> +++ b/drivers/net/virtio_net.c
> @@ -3565,6 +3565,7 @@ static void free_unused_bufs(struct virtnet_info *vi)
> struct virtqueue *vq = vi->rq[i].vq;
> while ((buf = virtqueue_detach_unused_buf(vq)) != NULL)
> virtnet_rq_free_unused_buf(vq, buf);
> + schedule();
Just for rq?
Do we need to do the same thing for sq?
Thanks.
> }
> }
>
> --
> 2.20.1
>
_______________________________________________
Virtualization mailing list
Virtualization@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/virtualization
^ permalink raw reply [flat|nested] 8+ messages in thread
* Re: [PATCH] virtio_net: suppress cpu stall when free_unused_bufs
[not found] ` <252ee222-f918-426e-68ef-b3710a60662e@bytedance.com>
@ 2023-04-27 7:13 ` Xuan Zhuo
2023-04-27 8:12 ` Michael S. Tsirkin
0 siblings, 1 reply; 8+ messages in thread
From: Xuan Zhuo @ 2023-04-27 7:13 UTC (permalink / raw)
To: Wenliang Wang
Cc: mst, netdev, linux-kernel, virtualization, edumazet, kuba, pabeni,
davem
On Thu, 27 Apr 2023 15:02:26 +0800, Wenliang Wang <wangwenliang.1995@bytedance.com> wrote:
>
>
> On 4/27/23 2:20 PM, Xuan Zhuo wrote:
> > On Thu, 27 Apr 2023 12:34:33 +0800, Wenliang Wang <wangwenliang.1995@bytedance.com> wrote:
> >> For multi-queue and large rx-ring-size use case, the following error
> >
> > Could you give one number as an example?
>
> 128 queues and 16K queue_size is typical.
>
> >
> >> occurred when free_unused_bufs:
> >> rcu: INFO: rcu_sched self-detected stall on CPU.
> >>
> >> Signed-off-by: Wenliang Wang <wangwenliang.1995@bytedance.com>
> >> ---
> >> drivers/net/virtio_net.c | 1 +
> >> 1 file changed, 1 insertion(+)
> >>
> >> diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
> >> index ea1bd4bb326d..21d8382fd2c7 100644
> >> --- a/drivers/net/virtio_net.c
> >> +++ b/drivers/net/virtio_net.c
> >> @@ -3565,6 +3565,7 @@ static void free_unused_bufs(struct virtnet_info *vi)
> >> struct virtqueue *vq = vi->rq[i].vq;
> >> while ((buf = virtqueue_detach_unused_buf(vq)) != NULL)
> >> virtnet_rq_free_unused_buf(vq, buf);
> >> + schedule();
> >
> > Just for rq?
> >
> > Do we need to do the same thing for sq?
> Rq buffers are pre-allocated; it takes seconds to free the unused rq buffers.
>
> Sq unused buffers are much fewer, so doing the same for sq is optional.
I see.
I think we should look for a way that is compatible with fewer queues or
smaller rings. Calling schedule() directly may not be a good way.
Thanks.
>
> >
> > Thanks.
> >
> >
> >> }
> >> }
> >>
> >> --
> >> 2.20.1
> >>
* Re: [PATCH] virtio_net: suppress cpu stall when free_unused_bufs
2023-04-27 7:13 ` Xuan Zhuo
@ 2023-04-27 8:12 ` Michael S. Tsirkin
2023-04-27 8:13 ` Xuan Zhuo
0 siblings, 1 reply; 8+ messages in thread
From: Michael S. Tsirkin @ 2023-04-27 8:12 UTC (permalink / raw)
To: Xuan Zhuo
Cc: pabeni, netdev, linux-kernel, virtualization, edumazet, kuba,
Wenliang Wang, davem
On Thu, Apr 27, 2023 at 03:13:44PM +0800, Xuan Zhuo wrote:
> On Thu, 27 Apr 2023 15:02:26 +0800, Wenliang Wang <wangwenliang.1995@bytedance.com> wrote:
> >
> >
> > On 4/27/23 2:20 PM, Xuan Zhuo wrote:
> > > On Thu, 27 Apr 2023 12:34:33 +0800, Wenliang Wang <wangwenliang.1995@bytedance.com> wrote:
> > >> For multi-queue and large rx-ring-size use case, the following error
> > >
> > > Could you give one number as an example?
> >
> > 128 queues and 16K queue_size is typical.
> >
> > >
> > >> occurred when free_unused_bufs:
> > >> rcu: INFO: rcu_sched self-detected stall on CPU.
> > >>
> > >> Signed-off-by: Wenliang Wang <wangwenliang.1995@bytedance.com>
> > >> ---
> > >> drivers/net/virtio_net.c | 1 +
> > >> 1 file changed, 1 insertion(+)
> > >>
> > >> diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
> > >> index ea1bd4bb326d..21d8382fd2c7 100644
> > >> --- a/drivers/net/virtio_net.c
> > >> +++ b/drivers/net/virtio_net.c
> > >> @@ -3565,6 +3565,7 @@ static void free_unused_bufs(struct virtnet_info *vi)
> > >> struct virtqueue *vq = vi->rq[i].vq;
> > >> while ((buf = virtqueue_detach_unused_buf(vq)) != NULL)
> > >> virtnet_rq_free_unused_buf(vq, buf);
> > >> + schedule();
> > >
> > > Just for rq?
> > >
> > > Do we need to do the same thing for sq?
> > Rq buffers are pre-allocated; it takes seconds to free the unused rq buffers.
> >
> > Sq unused buffers are much fewer, so doing the same for sq is optional.
>
> I see.
>
> I think we should look for a way that is compatible with fewer queues or
> smaller rings. Calling schedule() directly may not be a good way.
>
> Thanks.
Why isn't it a good way?
>
> >
> > >
> > > Thanks.
> > >
> > >
> > >> }
> > >> }
> > >>
> > >> --
> > >> 2.20.1
> > >>
* Re: [PATCH] virtio_net: suppress cpu stall when free_unused_bufs
2023-04-27 8:12 ` Michael S. Tsirkin
@ 2023-04-27 8:13 ` Xuan Zhuo
2023-04-27 8:23 ` Michael S. Tsirkin
0 siblings, 1 reply; 8+ messages in thread
From: Xuan Zhuo @ 2023-04-27 8:13 UTC (permalink / raw)
To: Michael S. Tsirkin
Cc: pabeni, netdev, linux-kernel, virtualization, edumazet, kuba,
Wenliang Wang, davem
On Thu, 27 Apr 2023 04:12:44 -0400, "Michael S. Tsirkin" <mst@redhat.com> wrote:
> On Thu, Apr 27, 2023 at 03:13:44PM +0800, Xuan Zhuo wrote:
> > On Thu, 27 Apr 2023 15:02:26 +0800, Wenliang Wang <wangwenliang.1995@bytedance.com> wrote:
> > >
> > >
> > > On 4/27/23 2:20 PM, Xuan Zhuo wrote:
> > > > On Thu, 27 Apr 2023 12:34:33 +0800, Wenliang Wang <wangwenliang.1995@bytedance.com> wrote:
> > > >> For multi-queue and large rx-ring-size use case, the following error
> > > >
> > > > Could you give one number as an example?
> > >
> > > 128 queues and 16K queue_size is typical.
> > >
> > > >
> > > >> occurred when free_unused_bufs:
> > > >> rcu: INFO: rcu_sched self-detected stall on CPU.
> > > >>
> > > >> Signed-off-by: Wenliang Wang <wangwenliang.1995@bytedance.com>
> > > >> ---
> > > >> drivers/net/virtio_net.c | 1 +
> > > >> 1 file changed, 1 insertion(+)
> > > >>
> > > >> diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
> > > >> index ea1bd4bb326d..21d8382fd2c7 100644
> > > >> --- a/drivers/net/virtio_net.c
> > > >> +++ b/drivers/net/virtio_net.c
> > > >> @@ -3565,6 +3565,7 @@ static void free_unused_bufs(struct virtnet_info *vi)
> > > >> struct virtqueue *vq = vi->rq[i].vq;
> > > >> while ((buf = virtqueue_detach_unused_buf(vq)) != NULL)
> > > >> virtnet_rq_free_unused_buf(vq, buf);
> > > >> + schedule();
> > > >
> > > > Just for rq?
> > > >
> > > > Do we need to do the same thing for sq?
> > > Rq buffers are pre-allocated; it takes seconds to free the unused rq buffers.
> > >
> > > Sq unused buffers are much fewer, so doing the same for sq is optional.
> >
> > I see.
> >
> > I think we should look for a way that is compatible with fewer queues or
> > smaller rings. Calling schedule() directly may not be a good way.
> >
> > Thanks.
>
> Why isn't it a good way?
For a small ring, I don't think it is a good way: we may only deal with one
buf and then call schedule().
We can call schedule() after processing a certain number of buffers,
or check need_resched() first.
Thanks.
>
> >
> > >
> > > >
> > > > Thanks.
> > > >
> > > >
> > > >> }
> > > >> }
> > > >>
> > > >> --
> > > >> 2.20.1
> > > >>
>
* Re: [PATCH] virtio_net: suppress cpu stall when free_unused_bufs
2023-04-27 8:13 ` Xuan Zhuo
@ 2023-04-27 8:23 ` Michael S. Tsirkin
[not found] ` <c2f6512e-cef6-04d5-8457-0408f12ca7a9@bytedance.com>
[not found] ` <32eb2826-6322-2f3e-9c48-7fd9afc33615@bytedance.com>
0 siblings, 2 replies; 8+ messages in thread
From: Michael S. Tsirkin @ 2023-04-27 8:23 UTC (permalink / raw)
To: Xuan Zhuo
Cc: pabeni, netdev, linux-kernel, virtualization, edumazet, kuba,
Wenliang Wang, davem
On Thu, Apr 27, 2023 at 04:13:45PM +0800, Xuan Zhuo wrote:
> On Thu, 27 Apr 2023 04:12:44 -0400, "Michael S. Tsirkin" <mst@redhat.com> wrote:
> > On Thu, Apr 27, 2023 at 03:13:44PM +0800, Xuan Zhuo wrote:
> > > On Thu, 27 Apr 2023 15:02:26 +0800, Wenliang Wang <wangwenliang.1995@bytedance.com> wrote:
> > > >
> > > >
> > > > On 4/27/23 2:20 PM, Xuan Zhuo wrote:
> > > > > On Thu, 27 Apr 2023 12:34:33 +0800, Wenliang Wang <wangwenliang.1995@bytedance.com> wrote:
> > > > >> For multi-queue and large rx-ring-size use case, the following error
> > > > >
> > > > > Could you give one number as an example?
> > > >
> > > > 128 queues and 16K queue_size is typical.
> > > >
> > > > >
> > > > >> occurred when free_unused_bufs:
> > > > >> rcu: INFO: rcu_sched self-detected stall on CPU.
> > > > >>
> > > > >> Signed-off-by: Wenliang Wang <wangwenliang.1995@bytedance.com>
> > > > >> ---
> > > > >> drivers/net/virtio_net.c | 1 +
> > > > >> 1 file changed, 1 insertion(+)
> > > > >>
> > > > >> diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
> > > > >> index ea1bd4bb326d..21d8382fd2c7 100644
> > > > >> --- a/drivers/net/virtio_net.c
> > > > >> +++ b/drivers/net/virtio_net.c
> > > > >> @@ -3565,6 +3565,7 @@ static void free_unused_bufs(struct virtnet_info *vi)
> > > > >> struct virtqueue *vq = vi->rq[i].vq;
> > > > >> while ((buf = virtqueue_detach_unused_buf(vq)) != NULL)
> > > > >> virtnet_rq_free_unused_buf(vq, buf);
> > > > >> + schedule();
> > > > >
> > > > > Just for rq?
> > > > >
> > > > > Do we need to do the same thing for sq?
> > > > Rq buffers are pre-allocated; it takes seconds to free the unused rq buffers.
> > > >
> > > > Sq unused buffers are much fewer, so doing the same for sq is optional.
> > >
> > > I see.
> > >
> > > I think we should look for a way that is compatible with fewer queues or
> > > smaller rings. Calling schedule() directly may not be a good way.
> > >
> > > Thanks.
> >
> > Why isn't it a good way?
>
> For a small ring, I don't think it is a good way: we may only deal with one
> buf and then call schedule().
>
> We can call schedule() after processing a certain number of buffers,
> or check need_resched() first.
>
> Thanks.
Wenliang, does
if (need_resched())
schedule();
fix the issue for you?
>
>
> >
> > >
> > > >
> > > > >
> > > > > Thanks.
> > > > >
> > > > >
> > > > >> }
> > > > >> }
> > > > >>
> > > > >> --
> > > > >> 2.20.1
> > > > >>
> >
* Re: [PATCH] virtio_net: suppress cpu stall when free_unused_bufs
[not found] ` <c2f6512e-cef6-04d5-8457-0408f12ca7a9@bytedance.com>
@ 2023-04-27 8:51 ` Xuan Zhuo
[not found] ` <20230427104618.3297348-1-wangwenliang.1995@bytedance.com>
0 siblings, 1 reply; 8+ messages in thread
From: Xuan Zhuo @ 2023-04-27 8:51 UTC (permalink / raw)
To: Wenliang Wang
Cc: Michael S. Tsirkin, netdev, linux-kernel, virtualization,
edumazet, kuba, pabeni, davem
On Thu, 27 Apr 2023 16:49:58 +0800, Wenliang Wang <wangwenliang.1995@bytedance.com> wrote:
> On 4/27/23 4:23 PM, Michael S. Tsirkin wrote:
> > On Thu, Apr 27, 2023 at 04:13:45PM +0800, Xuan Zhuo wrote:
> >> On Thu, 27 Apr 2023 04:12:44 -0400, "Michael S. Tsirkin" <mst@redhat.com> wrote:
> >>> On Thu, Apr 27, 2023 at 03:13:44PM +0800, Xuan Zhuo wrote:
> >>>> On Thu, 27 Apr 2023 15:02:26 +0800, Wenliang Wang <wangwenliang.1995@bytedance.com> wrote:
> >>>>>
> >>>>>
> >>>>> On 4/27/23 2:20 PM, Xuan Zhuo wrote:
> >>>>>> On Thu, 27 Apr 2023 12:34:33 +0800, Wenliang Wang <wangwenliang.1995@bytedance.com> wrote:
> >>>>>>> For multi-queue and large rx-ring-size use case, the following error
> >>>>>>
> >>>>>> Could you give one number as an example?
> >>>>>
> >>>>> 128 queues and 16K queue_size is typical.
> >>>>>
> >>>>>>
> >>>>>>> occurred when free_unused_bufs:
> >>>>>>> rcu: INFO: rcu_sched self-detected stall on CPU.
> >>>>>>>
> >>>>>>> Signed-off-by: Wenliang Wang <wangwenliang.1995@bytedance.com>
> >>>>>>> ---
> >>>>>>> drivers/net/virtio_net.c | 1 +
> >>>>>>> 1 file changed, 1 insertion(+)
> >>>>>>>
> >>>>>>> diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
> >>>>>>> index ea1bd4bb326d..21d8382fd2c7 100644
> >>>>>>> --- a/drivers/net/virtio_net.c
> >>>>>>> +++ b/drivers/net/virtio_net.c
> >>>>>>> @@ -3565,6 +3565,7 @@ static void free_unused_bufs(struct virtnet_info *vi)
> >>>>>>> struct virtqueue *vq = vi->rq[i].vq;
> >>>>>>> while ((buf = virtqueue_detach_unused_buf(vq)) != NULL)
> >>>>>>> virtnet_rq_free_unused_buf(vq, buf);
> >>>>>>> + schedule();
> >>>>>>
> >>>>>> Just for rq?
> >>>>>>
> >>>>>> Do we need to do the same thing for sq?
> >>>>> Rq buffers are pre-allocated; it takes seconds to free the unused rq buffers.
> >>>>>
> >>>>> Sq unused buffers are much fewer, so doing the same for sq is optional.
> >>>>
> >>>> I see.
> >>>>
> >>>> I think we should look for a way that is compatible with fewer queues or
> >>>> smaller rings. Calling schedule() directly may not be a good way.
> >>>>
> >>>> Thanks.
> >>>
> >>> Why isn't it a good way?
> >>
> >> For a small ring, I don't think it is a good way: we may only deal with one
> >> buf and then call schedule().
> >>
> >> We can call schedule() after processing a certain number of buffers,
> >> or check need_resched() first.
> >>
> >> Thanks.
> >
> >
> > Wenliang, does
> > if (need_resched())
> > schedule();
> > fix the issue for you?
> >
> Yeah, it works better.
I prefer to use it in combination with a fixed number (such as 256):
every time 256 buffers are processed, check need_resched().
This can accommodate both large and small rings.
Also, similar logic is needed for sq; although the possibility is
low, the same problem can occur there.
Thanks.
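[Editor's illustration] The batching idea suggested above can be sketched as a plain-C userspace model. This is not the actual kernel patch: `fake_need_resched()`, `fake_schedule()`, `drain_ring()`, and `BATCH` are hypothetical stand-ins for the kernel's need_resched()/schedule() and the proposed fixed number.

```c
/* Hypothetical stand-ins for kernel primitives (illustration only). */
static int resched_calls;                        /* counts how often we yield */
static int fake_need_resched(void) { return 1; } /* pretend a resched is always pending */
static void fake_schedule(void) { resched_calls++; }

#define BATCH 256  /* proposed fixed number of buffers per resched check */

/* Model of the suggested loop: free all buffers, but only consider
 * yielding the CPU once every BATCH buffers, so a small ring pays
 * (almost) nothing while a 16K ring still yields regularly. */
static int drain_ring(int nbufs)
{
    int freed = 0;

    while (freed < nbufs) {  /* models the virtqueue_detach_unused_buf() loop */
        freed++;             /* models virtnet_rq_free_unused_buf(vq, buf) */
        if (freed % BATCH == 0 && fake_need_resched())
            fake_schedule();
    }
    return freed;
}
```

With the 16K rings from the report, this model yields 64 times per queue, while a 256-entry ring yields at most once, which is the compatibility with small rings asked for above.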
> >
> >>
> >>
> >>>
> >>>>
> >>>>>
> >>>>>>
> >>>>>> Thanks.
> >>>>>>
> >>>>>>
> >>>>>>> }
> >>>>>>> }
> >>>>>>>
> >>>>>>> --
> >>>>>>> 2.20.1
> >>>>>>>
> >>>
> >
* Re: [PATCH v2] virtio_net: suppress cpu stall when free_unused_bufs
[not found] ` <20230427104618.3297348-1-wangwenliang.1995@bytedance.com>
@ 2023-04-28 1:09 ` Michael S. Tsirkin
0 siblings, 0 replies; 8+ messages in thread
From: Michael S. Tsirkin @ 2023-04-28 1:09 UTC (permalink / raw)
To: Wenliang Wang
Cc: netdev, linux-kernel, virtualization, edumazet, kuba, pabeni,
davem
On Thu, Apr 27, 2023 at 06:46:18PM +0800, Wenliang Wang wrote:
> For multi-queue and large ring-size use case, the following error
> occurred when free_unused_bufs:
> rcu: INFO: rcu_sched self-detected stall on CPU.
>
> Signed-off-by: Wenliang Wang <wangwenliang.1995@bytedance.com>
Please send vN+1 as a new thread, not as a reply in the existing thread of vN.
> ---
> v2:
> -add need_resched check.
> -apply same logic to sq.
> ---
> drivers/net/virtio_net.c | 4 ++++
> 1 file changed, 4 insertions(+)
>
> diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
> index ea1bd4bb326d..573558b69a60 100644
> --- a/drivers/net/virtio_net.c
> +++ b/drivers/net/virtio_net.c
> @@ -3559,12 +3559,16 @@ static void free_unused_bufs(struct virtnet_info *vi)
> struct virtqueue *vq = vi->sq[i].vq;
> while ((buf = virtqueue_detach_unused_buf(vq)) != NULL)
> virtnet_sq_free_unused_buf(vq, buf);
> + if (need_resched())
> + schedule();
> }
>
> for (i = 0; i < vi->max_queue_pairs; i++) {
> struct virtqueue *vq = vi->rq[i].vq;
> while ((buf = virtqueue_detach_unused_buf(vq)) != NULL)
> virtnet_rq_free_unused_buf(vq, buf);
> + if (need_resched())
> + schedule();
> }
> }
>
> --
> 2.20.1
* Re: [PATCH] virtio_net: suppress cpu stall when free_unused_bufs
[not found] ` <32eb2826-6322-2f3e-9c48-7fd9afc33615@bytedance.com>
@ 2023-04-28 13:56 ` Willem de Bruijn
0 siblings, 0 replies; 8+ messages in thread
From: Willem de Bruijn @ 2023-04-28 13:56 UTC (permalink / raw)
To: Qi Zheng, Michael S. Tsirkin, Xuan Zhuo
Cc: pabeni, netdev, linux-kernel, virtualization, edumazet, kuba,
Wenliang Wang, davem
Qi Zheng wrote:
>
>
> On 2023/4/27 16:23, Michael S. Tsirkin wrote:
> > On Thu, Apr 27, 2023 at 04:13:45PM +0800, Xuan Zhuo wrote:
> >> On Thu, 27 Apr 2023 04:12:44 -0400, "Michael S. Tsirkin" <mst@redhat.com> wrote:
> >>> On Thu, Apr 27, 2023 at 03:13:44PM +0800, Xuan Zhuo wrote:
> >>>> On Thu, 27 Apr 2023 15:02:26 +0800, Wenliang Wang <wangwenliang.1995@bytedance.com> wrote:
> >>>>>
> >>>>>
> >>>>> On 4/27/23 2:20 PM, Xuan Zhuo wrote:
> >>>>>> On Thu, 27 Apr 2023 12:34:33 +0800, Wenliang Wang <wangwenliang.1995@bytedance.com> wrote:
> >>>>>>> For multi-queue and large rx-ring-size use case, the following error
> >>>>>>
> >>>>>> Could you give one number as an example?
> >>>>>
> >>>>> 128 queues and 16K queue_size is typical.
> >>>>>
> >>>>>>
> >>>>>>> occurred when free_unused_bufs:
> >>>>>>> rcu: INFO: rcu_sched self-detected stall on CPU.
> >>>>>>>
> >>>>>>> Signed-off-by: Wenliang Wang <wangwenliang.1995@bytedance.com>
> >>>>>>> ---
> >>>>>>> drivers/net/virtio_net.c | 1 +
> >>>>>>> 1 file changed, 1 insertion(+)
> >>>>>>>
> >>>>>>> diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
> >>>>>>> index ea1bd4bb326d..21d8382fd2c7 100644
> >>>>>>> --- a/drivers/net/virtio_net.c
> >>>>>>> +++ b/drivers/net/virtio_net.c
> >>>>>>> @@ -3565,6 +3565,7 @@ static void free_unused_bufs(struct virtnet_info *vi)
> >>>>>>> struct virtqueue *vq = vi->rq[i].vq;
> >>>>>>> while ((buf = virtqueue_detach_unused_buf(vq)) != NULL)
> >>>>>>> virtnet_rq_free_unused_buf(vq, buf);
> >>>>>>> + schedule();
> >>>>>>
> >>>>>> Just for rq?
> >>>>>>
> >>>>>> Do we need to do the same thing for sq?
> >>>>> Rq buffers are pre-allocated; it takes seconds to free the unused rq buffers.
> >>>>>
> >>>>> Sq unused buffers are much fewer, so doing the same for sq is optional.
> >>>>
> >>>> I see.
> >>>>
> >>>> I think we should look for a way that is compatible with fewer queues or
> >>>> smaller rings. Calling schedule() directly may not be a good way.
> >>>>
> >>>> Thanks.
> >>>
> >>> Why isn't it a good way?
> >>
> >> For a small ring, I don't think it is a good way: we may only deal with one
> >> buf and then call schedule().
> >>
> >> We can call schedule() after processing a certain number of buffers,
> >> or check need_resched() first.
> >>
> >> Thanks.
> >
> >
> > Wenliang, does
> > if (need_resched())
> > schedule();
>
> Can we just use cond_resched()?
I believe that is preferred. But v2 still calls schedule() directly.
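[Editor's illustration] A rough model of why cond_resched() is the preferred spelling: it bundles the need_resched() check and the schedule() call into one helper. This is a userspace sketch with hypothetical stand-ins (`need_resched_flag`, `model_schedule()`, `model_cond_resched()`), not the real kernel implementation, which also carries debug annotations and can compile away on fully preemptible kernels.

```c
/* Hypothetical stand-ins for the kernel's resched machinery. */
static int need_resched_flag;   /* models the TIF_NEED_RESCHED flag */
static int schedule_calls;

static void model_schedule(void)
{
    schedule_calls++;
    need_resched_flag = 0;      /* the scheduler clears the flag */
}

/* Roughly what "if (need_resched()) schedule();" open-codes, and what
 * cond_resched() wraps up in a single call on a non-preemptible kernel. */
static void model_cond_resched(void)
{
    if (need_resched_flag)
        model_schedule();
}
```

In this model, replacing the open-coded pair from v2 with one `cond_resched()`-style helper keeps the behavior identical while matching the usual kernel idiom.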
end of thread, other threads:[~2023-04-28 13:56 UTC | newest]
Thread overview: 8+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
[not found] <20230427043433.2594960-1-wangwenliang.1995@bytedance.com>
2023-04-27 6:20 ` [PATCH] virtio_net: suppress cpu stall when free_unused_bufs Xuan Zhuo
[not found] ` <252ee222-f918-426e-68ef-b3710a60662e@bytedance.com>
2023-04-27 7:13 ` Xuan Zhuo
2023-04-27 8:12 ` Michael S. Tsirkin
2023-04-27 8:13 ` Xuan Zhuo
2023-04-27 8:23 ` Michael S. Tsirkin
[not found] ` <c2f6512e-cef6-04d5-8457-0408f12ca7a9@bytedance.com>
2023-04-27 8:51 ` Xuan Zhuo
[not found] ` <20230427104618.3297348-1-wangwenliang.1995@bytedance.com>
2023-04-28 1:09 ` [PATCH v2] " Michael S. Tsirkin
[not found] ` <32eb2826-6322-2f3e-9c48-7fd9afc33615@bytedance.com>
2023-04-28 13:56 ` [PATCH] " Willem de Bruijn