Linux virtualization list
* [PATCH 0/2] fix virtio_net/virtio when in busy-polling
@ 2026-03-31 10:26 Longjun Tang
  2026-03-31 10:26 ` [PATCH 1/2] virtio_net: disable cb " Longjun Tang
  2026-03-31 10:26 ` [PATCH 2/2] virtio_ring: return IRQ_HANDLED for stale interrupts when cb disabled Longjun Tang
  0 siblings, 2 replies; 7+ messages in thread
From: Longjun Tang @ 2026-03-31 10:26 UTC (permalink / raw)
  To: mst, jasowang, xuanzhuo, edumazet; +Cc: lange_tang, tanglongjun, virtualization

From: Longjun Tang <tanglongjun@kylinos.cn>

Hi,

About these two patches:

1. virtio_net: disable cb when in busy-polling
In a busy-polling context, the virtqueue can be left with callbacks
enabled even though they are not needed. This can lead to
vring_interrupt() returning IRQ_NONE, so this patch proactively
disables callbacks when busy-polling.

2. virtio_ring: return IRQ_HANDLED for stale interrupts when cb disabled
In vring_interrupt(), IRQ_NONE is returned if the used ring is empty.
However, sometimes, such as with busy-polling, buffers might be consumed
from the used ring before a stale interrupt notification arrives, which
also leads to IRQ_NONE. The kernel's spurious-IRQ detector counts
unhandled IRQ_NONE returns and will permanently disable the interrupt
line if 99,900 out of 100,000 interrupts go unhandled. So this patch
adds is_cb_disabled() to virtqueue_ops and, when more_used() is false
but callbacks are suppressed, returns IRQ_HANDLED instead of IRQ_NONE
so the spurious counter does not accumulate.
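The detector behavior referenced above can be sketched with a toy model
(a simplification of note_interrupt() in kernel/irq/spurious.c; the
struct and function here are illustrative, not the kernel's actual
types):

```c
#include <assert.h>
#include <stdbool.h>

enum irqreturn { IRQ_NONE, IRQ_HANDLED };

/* Toy model of the kernel's spurious-IRQ accounting: each IRQ_NONE
 * return is counted as unhandled, and once 100,000 interrupts have
 * been observed, the line is permanently disabled if more than
 * 99,900 of them went unhandled. */
struct irq_stats {
	unsigned int count;     /* interrupts seen in the current window */
	unsigned int unhandled; /* IRQ_NONE returns in the window */
	bool disabled;          /* line shut down by the detector */
};

static void note_interrupt(struct irq_stats *s, enum irqreturn ret)
{
	if (s->disabled)
		return;
	s->count++;
	if (ret == IRQ_NONE)
		s->unhandled++;
	if (s->count >= 100000) {
		if (s->unhandled > 99900)
			s->disabled = true; /* "nobody cared" */
		s->count = 0;
		s->unhandled = 0;
	}
}
```

A busy-poll workload that keeps draining the used ring while stale
notifications arrive produces exactly this IRQ_NONE stream; returning
IRQ_HANDLED keeps the unhandled counter from growing.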

Stale interrupt: a notification the device posted before it observed
the callback suppression.
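With that definition, the intended handler behavior can be condensed
into a small decision table (model_vring_interrupt() is a hypothetical
stand-in for the logic the patch adds to vring_interrupt(), not the
actual driver code):

```c
#include <assert.h>
#include <stdbool.h>

enum irqreturn { IRQ_NONE, IRQ_HANDLED };

/* Hypothetical condensation of the patched interrupt path: pending
 * work is handled normally; an empty used ring with callbacks
 * suppressed is treated as a stale notification and acknowledged;
 * an empty used ring with callbacks enabled stays spurious. */
static enum irqreturn model_vring_interrupt(bool more_used, bool cb_disabled)
{
	if (more_used)
		return IRQ_HANDLED; /* run the callback as usual */
	if (cb_disabled)
		return IRQ_HANDLED; /* stale notify raced with disable */
	return IRQ_NONE;            /* truly spurious interrupt */
}
```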


Longjun Tang (2):
  virtio_net: disable cb when in busy-polling
  virtio_ring: return IRQ_HANDLED for stale interrupts when cb disabled

 drivers/net/virtio_net.c     |  3 +++
 drivers/virtio/virtio_ring.c | 29 +++++++++++++++++++++++++++++
 2 files changed, 32 insertions(+)

-- 
2.43.0


^ permalink raw reply	[flat|nested] 7+ messages in thread

* [PATCH 1/2] virtio_net: disable cb when in busy-polling
  2026-03-31 10:26 [PATCH 0/2] fix virtio_net/virtio when in busy-polling Longjun Tang
@ 2026-03-31 10:26 ` Longjun Tang
  2026-04-03  2:57   ` Jason Wang
  2026-03-31 10:26 ` [PATCH 2/2] virtio_ring: return IRQ_HANDLED for stale interrupts when cb disabled Longjun Tang
  1 sibling, 1 reply; 7+ messages in thread
From: Longjun Tang @ 2026-03-31 10:26 UTC (permalink / raw)
  To: mst, jasowang, xuanzhuo, edumazet; +Cc: lange_tang, tanglongjun, virtualization

From: Longjun Tang <tanglongjun@kylinos.cn>

virtnet_poll() is shared between NAPI softirq context and the busy-poll
path (sk_busy_loop).  In the busy-poll path the caller drives the loop
directly and does not rely on interrupts to schedule NAPI; keeping
callbacks enabled is wasteful and risks a flood of IRQ_NONE returns that
can trigger the kernel's spurious-IRQ detector.

Signed-off-by: Longjun Tang <tanglongjun@kylinos.cn>
---
 drivers/net/virtio_net.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
index 72d6a9c6a5a2..df365fba2c40 100644
--- a/drivers/net/virtio_net.c
+++ b/drivers/net/virtio_net.c
@@ -3078,6 +3078,9 @@ static int virtnet_poll(struct napi_struct *napi, int budget)
 	unsigned int xdp_xmit = 0;
 	bool napi_complete;
 
+	if (READ_ONCE(napi->state) & NAPIF_STATE_IN_BUSY_POLL)
+		virtqueue_disable_cb(rq->vq);
+
 	virtnet_poll_cleantx(rq, budget);
 
 	received = virtnet_receive(rq, budget, &xdp_xmit);
-- 
2.43.0



* [PATCH 2/2] virtio_ring: return IRQ_HANDLED for stale interrupts when cb disabled
  2026-03-31 10:26 [PATCH 0/2] fix virtio_net/virtio when in busy-polling Longjun Tang
  2026-03-31 10:26 ` [PATCH 1/2] virtio_net: disable cb " Longjun Tang
@ 2026-03-31 10:26 ` Longjun Tang
  2026-04-03  2:55   ` Jason Wang
  1 sibling, 1 reply; 7+ messages in thread
From: Longjun Tang @ 2026-03-31 10:26 UTC (permalink / raw)
  To: mst, jasowang, xuanzhuo, edumazet; +Cc: lange_tang, tanglongjun, virtualization

From: Longjun Tang <tanglongjun@kylinos.cn>

In vring_interrupt(), IRQ_NONE is returned if the used ring is empty.
However, sometimes, such as with busy-polling, buffers might be consumed
from the used ring before a stale interrupt notification arrives, which
also leads to IRQ_NONE.

The kernel's spurious-IRQ detector counts unhandled IRQ_NONE returns
and will permanently disable the interrupt line if 99,900 out of 100,000
interrupts go unhandled.

Add is_cb_disabled() to virtqueue_ops and, when more_used() is false but
callbacks are suppressed, return IRQ_HANDLED instead of IRQ_NONE so the
spurious counter does not accumulate.

Signed-off-by: Longjun Tang <tanglongjun@kylinos.cn>
---
 drivers/virtio/virtio_ring.c | 29 +++++++++++++++++++++++++++++
 1 file changed, 29 insertions(+)

diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c
index 335692d41617..52df932fc4a2 100644
--- a/drivers/virtio/virtio_ring.c
+++ b/drivers/virtio/virtio_ring.c
@@ -185,6 +185,7 @@ struct virtqueue_ops {
 		     unsigned int last_used_idx);
 	void *(*detach_unused_buf)(struct vring_virtqueue *vq);
 	bool (*more_used)(const struct vring_virtqueue *vq);
+	bool (*is_cb_disabled)(const struct vring_virtqueue *vq);
 	int (*resize)(struct vring_virtqueue *vq, u32 num);
 	void (*reset)(struct vring_virtqueue *vq);
 };
@@ -1063,6 +1064,12 @@ static void virtqueue_disable_cb_split(struct vring_virtqueue *vq)
 	}
 }
 
+static bool is_cb_disabled_split(const struct vring_virtqueue *vq)
+{
+	return !!(data_race(vq->split.avail_flags_shadow) &
+		  VRING_AVAIL_F_NO_INTERRUPT);
+}
+
 static unsigned int virtqueue_enable_cb_prepare_split(struct vring_virtqueue *vq)
 {
 	u16 last_used_idx;
@@ -2227,6 +2234,12 @@ static void virtqueue_disable_cb_packed(struct vring_virtqueue *vq)
 	}
 }
 
+static bool is_cb_disabled_packed(const struct vring_virtqueue *vq)
+{
+	return data_race(vq->packed.event_flags_shadow) ==
+	       VRING_PACKED_EVENT_FLAG_DISABLE;
+}
+
 static unsigned int virtqueue_enable_cb_prepare_packed(struct vring_virtqueue *vq)
 {
 	START_USE(vq);
@@ -2644,6 +2657,7 @@ static const struct virtqueue_ops split_ops = {
 	.poll = virtqueue_poll_split,
 	.detach_unused_buf = virtqueue_detach_unused_buf_split,
 	.more_used = more_used_split,
+	.is_cb_disabled = is_cb_disabled_split,
 	.resize = virtqueue_resize_split,
 	.reset = virtqueue_reset_split,
 };
@@ -2658,6 +2672,7 @@ static const struct virtqueue_ops packed_ops = {
 	.poll = virtqueue_poll_packed,
 	.detach_unused_buf = virtqueue_detach_unused_buf_packed,
 	.more_used = more_used_packed,
+	.is_cb_disabled = is_cb_disabled_packed,
 	.resize = virtqueue_resize_packed,
 	.reset = virtqueue_reset_packed,
 };
@@ -2672,6 +2687,7 @@ static const struct virtqueue_ops split_in_order_ops = {
 	.poll = virtqueue_poll_split,
 	.detach_unused_buf = virtqueue_detach_unused_buf_split,
 	.more_used = more_used_split_in_order,
+	.is_cb_disabled = is_cb_disabled_split,
 	.resize = virtqueue_resize_split,
 	.reset = virtqueue_reset_split,
 };
@@ -2686,6 +2702,7 @@ static const struct virtqueue_ops packed_in_order_ops = {
 	.poll = virtqueue_poll_packed,
 	.detach_unused_buf = virtqueue_detach_unused_buf_packed,
 	.more_used = more_used_packed_in_order,
+	.is_cb_disabled = is_cb_disabled_packed,
 	.resize = virtqueue_resize_packed,
 	.reset = virtqueue_reset_packed,
 };
@@ -3231,6 +3248,18 @@ irqreturn_t vring_interrupt(int irq, void *_vq)
 	struct vring_virtqueue *vq = to_vvq(_vq);
 
 	if (!more_used(vq)) {
+		/*
+		 * Stale interrupt: the device posted this notification
+		 * before it observed the callback suppression;
+		 * When more_used returns empty, IRQ_HANDLED should be
+		 * returned for stale interrupts.
+		 */
+		if (VIRTQUEUE_CALL(vq, is_cb_disabled)) {
+			if (vq->event)
+				data_race(vq->event_triggered = true);
+			pr_debug("virtqueue stale interrupt (callbacks disabled) for %p\n", vq);
+			return IRQ_HANDLED;
+		}
 		pr_debug("virtqueue interrupt with no work for %p\n", vq);
 		return IRQ_NONE;
 	}
-- 
2.43.0



* Re: [PATCH 2/2] virtio_ring: return IRQ_HANDLED for stale interrupts when cb disabled
  2026-03-31 10:26 ` [PATCH 2/2] virtio_ring: return IRQ_HANDLED for stale interrupts when cb disabled Longjun Tang
@ 2026-04-03  2:55   ` Jason Wang
  2026-04-07  7:28     ` Lange Tang
  0 siblings, 1 reply; 7+ messages in thread
From: Jason Wang @ 2026-04-03  2:55 UTC (permalink / raw)
  To: Longjun Tang; +Cc: mst, xuanzhuo, edumazet, tanglongjun, virtualization

On Tue, Mar 31, 2026 at 6:27 PM Longjun Tang <lange_tang@163.com> wrote:
>
> From: Longjun Tang <tanglongjun@kylinos.cn>
>
> In vring_interrupt(), IRQ_NONE is returned if the used ring is empty.
> However, sometimes, such as with busy-polling, buffers might be consumed
> from the used ring before a stale interrupt notification arrives, which
> also leads to IRQ_NONE.
>
> The kernel's spurious-IRQ detector counts unhandled IRQ_NONE returns
> and will permanently disable the interrupt line if 99,900 out of 100,000
> interrupts go unhandled.
>
> Add is_cb_disabled() to virtqueue_ops and, when more_used() is false but
> callbacks are suppressed, return IRQ_HANDLED instead of IRQ_NONE so the
> spurious counter does not accumulate.
>
> Signed-off-by: Longjun Tang <tanglongjun@kylinos.cn>
> ---
>  drivers/virtio/virtio_ring.c | 29 +++++++++++++++++++++++++++++
>  1 file changed, 29 insertions(+)
>
> diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c
> index 335692d41617..52df932fc4a2 100644
> --- a/drivers/virtio/virtio_ring.c
> +++ b/drivers/virtio/virtio_ring.c
> @@ -185,6 +185,7 @@ struct virtqueue_ops {
>                      unsigned int last_used_idx);
>         void *(*detach_unused_buf)(struct vring_virtqueue *vq);
>         bool (*more_used)(const struct vring_virtqueue *vq);
> +       bool (*is_cb_disabled)(const struct vring_virtqueue *vq);
>         int (*resize)(struct vring_virtqueue *vq, u32 num);
>         void (*reset)(struct vring_virtqueue *vq);
>  };
> @@ -1063,6 +1064,12 @@ static void virtqueue_disable_cb_split(struct vring_virtqueue *vq)
>         }
>  }
>
> +static bool is_cb_disabled_split(const struct vring_virtqueue *vq)
> +{
> +       return !!(data_race(vq->split.avail_flags_shadow) &
> +                 VRING_AVAIL_F_NO_INTERRUPT);
> +}
> +
>  static unsigned int virtqueue_enable_cb_prepare_split(struct vring_virtqueue *vq)
>  {
>         u16 last_used_idx;
> @@ -2227,6 +2234,12 @@ static void virtqueue_disable_cb_packed(struct vring_virtqueue *vq)
>         }
>  }
>
> +static bool is_cb_disabled_packed(const struct vring_virtqueue *vq)
> +{
> +       return data_race(vq->packed.event_flags_shadow) ==
> +              VRING_PACKED_EVENT_FLAG_DISABLE;
> +}
> +
>  static unsigned int virtqueue_enable_cb_prepare_packed(struct vring_virtqueue *vq)
>  {
>         START_USE(vq);
> @@ -2644,6 +2657,7 @@ static const struct virtqueue_ops split_ops = {
>         .poll = virtqueue_poll_split,
>         .detach_unused_buf = virtqueue_detach_unused_buf_split,
>         .more_used = more_used_split,
> +       .is_cb_disabled = is_cb_disabled_split,
>         .resize = virtqueue_resize_split,
>         .reset = virtqueue_reset_split,
>  };
> @@ -2658,6 +2672,7 @@ static const struct virtqueue_ops packed_ops = {
>         .poll = virtqueue_poll_packed,
>         .detach_unused_buf = virtqueue_detach_unused_buf_packed,
>         .more_used = more_used_packed,
> +       .is_cb_disabled = is_cb_disabled_packed,
>         .resize = virtqueue_resize_packed,
>         .reset = virtqueue_reset_packed,
>  };
> @@ -2672,6 +2687,7 @@ static const struct virtqueue_ops split_in_order_ops = {
>         .poll = virtqueue_poll_split,
>         .detach_unused_buf = virtqueue_detach_unused_buf_split,
>         .more_used = more_used_split_in_order,
> +       .is_cb_disabled = is_cb_disabled_split,
>         .resize = virtqueue_resize_split,
>         .reset = virtqueue_reset_split,
>  };
> @@ -2686,6 +2702,7 @@ static const struct virtqueue_ops packed_in_order_ops = {
>         .poll = virtqueue_poll_packed,
>         .detach_unused_buf = virtqueue_detach_unused_buf_packed,
>         .more_used = more_used_packed_in_order,
> +       .is_cb_disabled = is_cb_disabled_packed,
>         .resize = virtqueue_resize_packed,
>         .reset = virtqueue_reset_packed,
>  };
> @@ -3231,6 +3248,18 @@ irqreturn_t vring_interrupt(int irq, void *_vq)
>         struct vring_virtqueue *vq = to_vvq(_vq);
>
>         if (!more_used(vq)) {
> +               /*
> +                * Stale interrupt: the device posted this notification
> +                * before it observed the callback suppression;
> +                * When more_used returns empty, IRQ_HANDLED should be
> +                * returned for stale interrupts.
> +                */
> +               if (VIRTQUEUE_CALL(vq, is_cb_disabled)) {
> +                       if (vq->event)
> +                               data_race(vq->event_triggered = true);

Why event idx is special here?

Btw, looking at the comment of virtqueue_disable_cb_split:

        /*
         * If device triggered an event already it won't trigger one again:
         * no need to disable.
         */
        if (vq->event_triggered)
                return;

It makes sense only for event index.

Thanks

> +                       pr_debug("virtqueue stale interrupt (callbacks disabled) for %p\n", vq);
> +                       return IRQ_HANDLED;
> +               }
>                 pr_debug("virtqueue interrupt with no work for %p\n", vq);
>                 return IRQ_NONE;
>         }
> --
> 2.43.0
>



* Re: [PATCH 1/2] virtio_net: disable cb when in busy-polling
  2026-03-31 10:26 ` [PATCH 1/2] virtio_net: disable cb " Longjun Tang
@ 2026-04-03  2:57   ` Jason Wang
  2026-04-07  7:02     ` Lange Tang
  0 siblings, 1 reply; 7+ messages in thread
From: Jason Wang @ 2026-04-03  2:57 UTC (permalink / raw)
  To: Longjun Tang; +Cc: mst, xuanzhuo, edumazet, tanglongjun, virtualization

On Tue, Mar 31, 2026 at 6:27 PM Longjun Tang <lange_tang@163.com> wrote:
>
> From: Longjun Tang <tanglongjun@kylinos.cn>
>
> virtnet_poll() is shared between NAPI softirq context and the busy-poll
> path (sk_busy_loop).  In the busy-poll path the caller drives the loop
> directly and does not rely on interrupts to schedule NAPI; keeping
> callbacks enabled is wasteful and risks a flood of IRQ_NONE returns that
> can trigger the kernel's spurious-IRQ detector.
>
> Signed-off-by: Longjun Tang <tanglongjun@kylinos.cn>
> ---
>  drivers/net/virtio_net.c | 3 +++
>  1 file changed, 3 insertions(+)
>
> diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
> index 72d6a9c6a5a2..df365fba2c40 100644
> --- a/drivers/net/virtio_net.c
> +++ b/drivers/net/virtio_net.c
> @@ -3078,6 +3078,9 @@ static int virtnet_poll(struct napi_struct *napi, int budget)
>         unsigned int xdp_xmit = 0;
>         bool napi_complete;
>
> +       if (READ_ONCE(napi->state) & NAPIF_STATE_IN_BUSY_POLL)
> +               virtqueue_disable_cb(rq->vq);
> +

I guess the root cause is virtuqeue_disable_cb() doesn't disable cb in
this case?

static void virtqueue_napi_schedule(struct napi_struct *napi,
                                    struct virtqueue *vq)
{
        if (napi_schedule_prep(napi)) {
                virtqueue_disable_cb(vq);
                __napi_schedule(napi);
        }
}

If there's an interrupt in the middle of NAPIF_STATE_IN_BUSY_POLL,
napi_schedule_prep() will return false, so we lose the chance to
disable cb.

Thanks

>         virtnet_poll_cleantx(rq, budget);
>
>         received = virtnet_receive(rq, budget, &xdp_xmit);
> --
> 2.43.0
>



* Re:Re: [PATCH 1/2] virtio_net: disable cb when in busy-polling
  2026-04-03  2:57   ` Jason Wang
@ 2026-04-07  7:02     ` Lange Tang
  0 siblings, 0 replies; 7+ messages in thread
From: Lange Tang @ 2026-04-07  7:02 UTC (permalink / raw)
  To: jasowang@redhat.com
  Cc: mst@redhat.com, xuanzhuo@linux.alibaba.com, edumazet@google.com,
	Tang Longjun, virtualization@lists.linux.dev

At 2026-04-03 10:57:58, "Jason Wang" <jasowang@redhat.com> wrote:
>On Tue, Mar 31, 2026 at 6:27 PM Longjun Tang <lange_tang@163.com> wrote:
>>
>> From: Longjun Tang <tanglongjun@kylinos.cn>
>>
>> virtnet_poll() is shared between NAPI softirq context and the busy-poll
>> path (sk_busy_loop).  In the busy-poll path the caller drives the loop
>> directly and does not rely on interrupts to schedule NAPI; keeping
>> callbacks enabled is wasteful and risks a flood of IRQ_NONE returns that
>> can trigger the kernel's spurious-IRQ detector.
>>
>> Signed-off-by: Longjun Tang <tanglongjun@kylinos.cn>
>> ---
>>  drivers/net/virtio_net.c | 3 +++
>>  1 file changed, 3 insertions(+)
>>
>> diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
>> index 72d6a9c6a5a2..df365fba2c40 100644
>> --- a/drivers/net/virtio_net.c
>> +++ b/drivers/net/virtio_net.c
>> @@ -3078,6 +3078,9 @@ static int virtnet_poll(struct napi_struct *napi, int budget)
>>         unsigned int xdp_xmit = 0;
>>         bool napi_complete;
>>
>> +       if (READ_ONCE(napi->state) & NAPIF_STATE_IN_BUSY_POLL)
>> +               virtqueue_disable_cb(rq->vq);
>> +
>
>I guess the root cause is virtuqeue_disable_cb() doesn't disable cb in
>this case?
>
>static void virtqueue_napi_schedule(struct napi_struct *napi,
>                                    struct virtqueue *vq)
>{
>        if (napi_schedule_prep(napi)) {
>                virtqueue_disable_cb(vq);
>                __napi_schedule(napi);
>        }
>}
>
>If there's an interrupt in the middle of NAPIF_STATE_IN_BUSY_POLL,
>napi_schedule_prep() will return false, so we lose the chance to
>disable cb.
>

Sorry for the late reply!

Regarding the root cause you mentioned, the chance to disable cb is indeed lost
when the following three conditions are met:
1. In NAPIF_STATE_IN_BUSY_POLL
2. An interrupt occurs
3. more_used() returns true

However, in another scenario:
1. In NAPIF_STATE_IN_BUSY_POLL
2. An interrupt occurs
3. more_used() returns false
In this second scenario, there is no chance to disable cb during the execution
of virtnet_poll() until virtqueue_napi_complete().
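As a rough illustration, this window can be modeled with toy types
(hypothetical names, not the net core API; the sketch assumes
napi_schedule_prep() fails while a busy-poller owns the NAPI instance,
as described above):

```c
#include <assert.h>
#include <stdbool.h>

/* Toy model of the race: on an interrupt, the schedule helper only
 * disables callbacks when the prep step succeeds, and prep fails
 * while the NAPI instance is owned by a busy-poller. */
struct toy_napi { bool in_busy_poll; };
struct toy_vq   { bool cb_disabled; };

static bool toy_napi_schedule_prep(struct toy_napi *n)
{
	/* busy-poll already owns the NAPI instance: prep fails */
	return !n->in_busy_poll;
}

static void toy_virtqueue_napi_schedule(struct toy_napi *n, struct toy_vq *vq)
{
	if (toy_napi_schedule_prep(n))
		vq->cb_disabled = true; /* would be virtqueue_disable_cb() */
	/* else: callbacks stay enabled for the rest of the busy-poll loop */
}
```

So while busy polling, every interrupt leaves callbacks enabled, which
is the window both scenarios above sit in.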

>Thanks
>
>>         virtnet_poll_cleantx(rq, budget);
>>
>>         received = virtnet_receive(rq, budget, &xdp_xmit);
>> --
>> 2.43.0
>>


* Re:Re: [PATCH 2/2] virtio_ring: return IRQ_HANDLED for stale interrupts when cb disabled
  2026-04-03  2:55   ` Jason Wang
@ 2026-04-07  7:28     ` Lange Tang
  0 siblings, 0 replies; 7+ messages in thread
From: Lange Tang @ 2026-04-07  7:28 UTC (permalink / raw)
  To: jasowang@redhat.com
  Cc: mst@redhat.com, xuanzhuo@linux.alibaba.com, edumazet@google.com,
	Tang Longjun, virtualization@lists.linux.dev

At 2026-04-03 10:55:58, "Jason Wang" <jasowang@redhat.com> wrote:
>On Tue, Mar 31, 2026 at 6:27 PM Longjun Tang <lange_tang@163.com> wrote:
>>
>> From: Longjun Tang <tanglongjun@kylinos.cn>
>>
>> In vring_interrupt(), IRQ_NONE is returned if the used ring is empty.
>> However, sometimes, such as with busy-polling, buffers might be consumed
>> from the used ring before a stale interrupt notification arrives, which
>> also leads to IRQ_NONE.
>>
>> The kernel's spurious-IRQ detector counts unhandled IRQ_NONE returns
>> and will permanently disable the interrupt line if 99,900 out of 100,000
>> interrupts go unhandled.
>>
>> Add is_cb_disabled() to virtqueue_ops and, when more_used() is false but
>> callbacks are suppressed, return IRQ_HANDLED instead of IRQ_NONE so the
>> spurious counter does not accumulate.
>>
>> Signed-off-by: Longjun Tang <tanglongjun@kylinos.cn>
>> ---
>>  drivers/virtio/virtio_ring.c | 29 +++++++++++++++++++++++++++++
>>  1 file changed, 29 insertions(+)
>>
>> diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c
>> index 335692d41617..52df932fc4a2 100644
>> --- a/drivers/virtio/virtio_ring.c
>> +++ b/drivers/virtio/virtio_ring.c
>> @@ -185,6 +185,7 @@ struct virtqueue_ops {
>>                      unsigned int last_used_idx);
>>         void *(*detach_unused_buf)(struct vring_virtqueue *vq);
>>         bool (*more_used)(const struct vring_virtqueue *vq);
>> +       bool (*is_cb_disabled)(const struct vring_virtqueue *vq);
>>         int (*resize)(struct vring_virtqueue *vq, u32 num);
>>         void (*reset)(struct vring_virtqueue *vq);
>>  };
>> @@ -1063,6 +1064,12 @@ static void virtqueue_disable_cb_split(struct vring_virtqueue *vq)
>>         }
>>  }
>>
>> +static bool is_cb_disabled_split(const struct vring_virtqueue *vq)
>> +{
>> +       return !!(data_race(vq->split.avail_flags_shadow) &
>> +                 VRING_AVAIL_F_NO_INTERRUPT);
>> +}
>> +
>>  static unsigned int virtqueue_enable_cb_prepare_split(struct vring_virtqueue *vq)
>>  {
>>         u16 last_used_idx;
>> @@ -2227,6 +2234,12 @@ static void virtqueue_disable_cb_packed(struct vring_virtqueue *vq)
>>         }
>>  }
>>
>> +static bool is_cb_disabled_packed(const struct vring_virtqueue *vq)
>> +{
>> +       return data_race(vq->packed.event_flags_shadow) ==
>> +              VRING_PACKED_EVENT_FLAG_DISABLE;
>> +}
>> +
>>  static unsigned int virtqueue_enable_cb_prepare_packed(struct vring_virtqueue *vq)
>>  {
>>         START_USE(vq);
>> @@ -2644,6 +2657,7 @@ static const struct virtqueue_ops split_ops = {
>>         .poll = virtqueue_poll_split,
>>         .detach_unused_buf = virtqueue_detach_unused_buf_split,
>>         .more_used = more_used_split,
>> +       .is_cb_disabled = is_cb_disabled_split,
>>         .resize = virtqueue_resize_split,
>>         .reset = virtqueue_reset_split,
>>  };
>> @@ -2658,6 +2672,7 @@ static const struct virtqueue_ops packed_ops = {
>>         .poll = virtqueue_poll_packed,
>>         .detach_unused_buf = virtqueue_detach_unused_buf_packed,
>>         .more_used = more_used_packed,
>> +       .is_cb_disabled = is_cb_disabled_packed,
>>         .resize = virtqueue_resize_packed,
>>         .reset = virtqueue_reset_packed,
>>  };
>> @@ -2672,6 +2687,7 @@ static const struct virtqueue_ops split_in_order_ops = {
>>         .poll = virtqueue_poll_split,
>>         .detach_unused_buf = virtqueue_detach_unused_buf_split,
>>         .more_used = more_used_split_in_order,
>> +       .is_cb_disabled = is_cb_disabled_split,
>>         .resize = virtqueue_resize_split,
>>         .reset = virtqueue_reset_split,
>>  };
>> @@ -2686,6 +2702,7 @@ static const struct virtqueue_ops packed_in_order_ops = {
>>         .poll = virtqueue_poll_packed,
>>         .detach_unused_buf = virtqueue_detach_unused_buf_packed,
>>         .more_used = more_used_packed_in_order,
>> +       .is_cb_disabled = is_cb_disabled_packed,
>>         .resize = virtqueue_resize_packed,
>>         .reset = virtqueue_reset_packed,
>>  };
>> @@ -3231,6 +3248,18 @@ irqreturn_t vring_interrupt(int irq, void *_vq)
>>         struct vring_virtqueue *vq = to_vvq(_vq);
>>
>>         if (!more_used(vq)) {
>> +               /*
>> +                * Stale interrupt: the device posted this notification
>> +                * before it observed the callback suppression;
>> +                * When more_used returns empty, IRQ_HANDLED should be
>> +                * returned for stale interrupts.
>> +                */
>> +               if (VIRTQUEUE_CALL(vq, is_cb_disabled)) {
>> +                       if (vq->event)
>> +                               data_race(vq->event_triggered = true);
>
>Why event idx is special here?
>
>Btw, looking at the comment of virtqueue_disable_cb_split:
>
>        /*
>         * If device triggered an event already it won't trigger one again:
>         * no need to disable.
>         */
>        if (vq->event_triggered)
>                return;
>
>It makes sense only for event index.

Yes, I will remove this part in the next version.

>
>Thanks
>
>> +                       pr_debug("virtqueue stale interrupt (callbacks disabled) for %p\n", vq);
>> +                       return IRQ_HANDLED;
>> +               }
>>                 pr_debug("virtqueue interrupt with no work for %p\n", vq);
>>                 return IRQ_NONE;
>>         }
>> --
>> 2.43.0
>>


end of thread, other threads:[~2026-04-07  7:28 UTC | newest]

Thread overview: 7+ messages
2026-03-31 10:26 [PATCH 0/2] fix virtio_net/virtio when in busy-polling Longjun Tang
2026-03-31 10:26 ` [PATCH 1/2] virtio_net: disable cb " Longjun Tang
2026-04-03  2:57   ` Jason Wang
2026-04-07  7:02     ` Lange Tang
2026-03-31 10:26 ` [PATCH 2/2] virtio_ring: return IRQ_HANDLED for stale interrupts when cb disabled Longjun Tang
2026-04-03  2:55   ` Jason Wang
2026-04-07  7:28     ` Lange Tang
