public inbox for virtualization@lists.linux-foundation.org
From: Bui Quang Minh <minhquangbui99@gmail.com>
To: Jason Xing <kerneljasonxing@gmail.com>
Cc: netdev@vger.kernel.org, "Michael S. Tsirkin" <mst@redhat.com>,
	"Jason Wang" <jasowang@redhat.com>,
	"Xuan Zhuo" <xuanzhuo@linux.alibaba.com>,
	"Eugenio Pérez" <eperezma@redhat.com>,
	"Andrew Lunn" <andrew+netdev@lunn.ch>,
	"David S. Miller" <davem@davemloft.net>,
	"Eric Dumazet" <edumazet@google.com>,
	"Jakub Kicinski" <kuba@kernel.org>,
	"Paolo Abeni" <pabeni@redhat.com>,
	virtualization@lists.linux.dev, linux-kernel@vger.kernel.org
Subject: Re: [PATCH net-next] virtio-net: xsk: Support wakeup on RX side
Date: Sat, 28 Feb 2026 13:25:13 +0700	[thread overview]
Message-ID: <829f4263-dfd2-4664-b66c-0a65f35aa57a@gmail.com> (raw)
In-Reply-To: <CAL+tcoAy7YjkODWp=nCagEJwOzhqt_BL4kcfxpj=FSiAXP_+qg@mail.gmail.com>

On 2/28/26 13:16, Jason Xing wrote:
> On Sat, Feb 28, 2026 at 1:03 PM Bui Quang Minh <minhquangbui99@gmail.com> wrote:
>> Hi Jason,
>>
>> On 2/28/26 10:49, Jason Xing wrote:
>>> Hi Bui,
>>>
>>> On Fri, Feb 27, 2026 at 11:20 PM Bui Quang Minh
>>> <minhquangbui99@gmail.com> wrote:
>>>> When XDP_USE_NEED_WAKEUP is used and the fill ring is empty so no buffer
>>>> is allocated on RX side, allow RX NAPI to be descheduled. This avoids
>>>> wasting CPU cycles on polling. Users will be notified and they need to
>>>> make a wakeup call after refilling the ring.
>>>>
>>>> Signed-off-by: Bui Quang Minh <minhquangbui99@gmail.com>
>>>> ---
>>>>    drivers/net/virtio_net.c | 38 ++++++++++++++++++++++++++++++--------
>>>>    1 file changed, 30 insertions(+), 8 deletions(-)
>>>>
>>>> diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
>>>> index db88dcaefb20..494acc904b2c 100644
>>>> --- a/drivers/net/virtio_net.c
>>>> +++ b/drivers/net/virtio_net.c
>>>> @@ -1454,8 +1454,19 @@ static int virtnet_add_recvbuf_xsk(struct virtnet_info *vi, struct receive_queue
>>>>           xsk_buffs = rq->xsk_buffs;
>>>>
>>>>           num = xsk_buff_alloc_batch(pool, xsk_buffs, rq->vq->num_free);
>>>> -       if (!num)
>>>> +       if (!num) {
>>>> +               if (xsk_uses_need_wakeup(pool)) {
>>>> +                       xsk_set_rx_need_wakeup(pool);
>>>> +                       /* Return 0 instead of -ENOMEM so that NAPI is
>>>> +                        * descheduled.
>>>> +                        */
>>>> +                       return 0;
>>>> +               }
>>>> +
>>>>                   return -ENOMEM;
>>>> +       } else {
>>>> +               xsk_clear_rx_need_wakeup(pool);
>>>> +       }
>>>>
>>>>           len = xsk_pool_get_rx_frame_size(pool) + vi->hdr_len;
>>>>
>>>> @@ -1588,20 +1599,21 @@ static bool virtnet_xsk_xmit(struct send_queue *sq, struct xsk_buff_pool *pool,
>>>>           return sent;
>>>>    }
>>>>
>>>> -static void xsk_wakeup(struct send_queue *sq)
>>>> +static void xsk_wakeup(struct napi_struct *napi, struct virtqueue *vq)
>>>>    {
>>>> -       if (napi_if_scheduled_mark_missed(&sq->napi))
>>>> +       if (napi_if_scheduled_mark_missed(napi))
>>>>                   return;
>>>>
>>>>           local_bh_disable();
>>>> -       virtqueue_napi_schedule(&sq->napi, sq->vq);
>>>> +       virtqueue_napi_schedule(napi, vq);
>>>>           local_bh_enable();
>>>>    }
>>>>
>>>>    static int virtnet_xsk_wakeup(struct net_device *dev, u32 qid, u32 flag)
>>>>    {
>>>>           struct virtnet_info *vi = netdev_priv(dev);
>>>> -       struct send_queue *sq;
>>>> +       struct napi_struct *napi;
>>>> +       struct virtqueue *vq;
>>>>
>>>>           if (!netif_running(dev))
>>>>                   return -ENETDOWN;
>>>> @@ -1609,9 +1621,19 @@ static int virtnet_xsk_wakeup(struct net_device *dev, u32 qid, u32 flag)
>>>>           if (qid >= vi->curr_queue_pairs)
>>>>                   return -EINVAL;
>>>>
>>>> -       sq = &vi->sq[qid];
>>>> +       if (flag == XDP_WAKEUP_TX) {
>>> Better use &?
>> Sorry, I don't get your point. Can you elaborate more?
> Oh, I meant using 'flag & XDP_WAKEUP_TX' is preferable. IIUC, since
> this patch provides a way to enable the RX flag, for virtio_net the
> flag argument can carry both the TX and RX bits at once? Please see
> this call trace: xsk_poll()->xsk_wakeup(xs, pool->cached_need_wakeup).

Oh, you're right. I'll fix it in the next version.

Thanks,
Quang Minh.


Thread overview: 6+ messages
2026-02-27 15:09 [PATCH net-next] virtio-net: xsk: Support wakeup on RX side Bui Quang Minh
2026-02-28  1:52 ` Xuan Zhuo
2026-02-28  3:49 ` Jason Xing
2026-02-28  5:03   ` Bui Quang Minh
2026-02-28  6:16     ` Jason Xing
2026-02-28  6:25       ` Bui Quang Minh [this message]
