* [PATCH net v3 1/3] virtio-net: don't schedule delayed refill worker
2026-01-06 15:04 [PATCH net v3 0/3] virtio-net: fix the deadlock when disabling rx NAPI Bui Quang Minh
@ 2026-01-06 15:04 ` Bui Quang Minh
2026-01-06 15:29 ` Michael S. Tsirkin
2026-01-10 2:12 ` Jakub Kicinski
2026-01-06 15:04 ` [PATCH net v3 2/3] virtio-net: remove unused " Bui Quang Minh
` (2 subsequent siblings)
3 siblings, 2 replies; 11+ messages in thread
From: Bui Quang Minh @ 2026-01-06 15:04 UTC (permalink / raw)
To: netdev
Cc: Michael S. Tsirkin, Jason Wang, Xuan Zhuo, Eugenio Pérez,
Andrew Lunn, David S. Miller, Eric Dumazet, Jakub Kicinski,
Paolo Abeni, Alexei Starovoitov, Daniel Borkmann,
Jesper Dangaard Brouer, John Fastabend, Stanislav Fomichev,
virtualization, linux-kernel, bpf, Bui Quang Minh, stable
When we fail to refill the receive buffers, we schedule a delayed worker
to retry later. However, this worker creates some concurrency issues.
For example, when the worker runs concurrently with virtnet_xdp_set,
both need to temporarily disable queue's NAPI before enabling again.
Without proper synchronization, a deadlock can happen when
napi_disable() is called on an already disabled NAPI. That
napi_disable() call will be stuck and so will the subsequent
napi_enable() call.
To simplify the logic and avoid further problems, we will instead retry
refilling in the next NAPI poll.
Fixes: 4bc12818b363 ("virtio-net: disable delayed refill when pausing rx")
Reported-by: Paolo Abeni <pabeni@redhat.com>
Closes: https://netdev-ctrl.bots.linux.dev/logs/vmksft/drv-hw-dbg/results/400961/3-xdp-py/stderr
Cc: stable@vger.kernel.org
Suggested-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
Signed-off-by: Bui Quang Minh <minhquangbui99@gmail.com>
---
drivers/net/virtio_net.c | 48 +++++++++++++++++++++-------------------
1 file changed, 25 insertions(+), 23 deletions(-)
diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
index 1bb3aeca66c6..f986abf0c236 100644
--- a/drivers/net/virtio_net.c
+++ b/drivers/net/virtio_net.c
@@ -3046,16 +3046,16 @@ static int virtnet_receive(struct receive_queue *rq, int budget,
else
packets = virtnet_receive_packets(vi, rq, budget, xdp_xmit, &stats);
+ u64_stats_set(&stats.packets, packets);
if (rq->vq->num_free > min((unsigned int)budget, virtqueue_get_vring_size(rq->vq)) / 2) {
- if (!try_fill_recv(vi, rq, GFP_ATOMIC)) {
- spin_lock(&vi->refill_lock);
- if (vi->refill_enabled)
- schedule_delayed_work(&vi->refill, 0);
- spin_unlock(&vi->refill_lock);
- }
+ if (!try_fill_recv(vi, rq, GFP_ATOMIC))
+ /* We need to retry refilling in the next NAPI poll so
+ * we must return budget to make sure the NAPI is
+ * repolled.
+ */
+ packets = budget;
}
- u64_stats_set(&stats.packets, packets);
u64_stats_update_begin(&rq->stats.syncp);
for (i = 0; i < ARRAY_SIZE(virtnet_rq_stats_desc); i++) {
size_t offset = virtnet_rq_stats_desc[i].offset;
@@ -3230,9 +3230,10 @@ static int virtnet_open(struct net_device *dev)
for (i = 0; i < vi->max_queue_pairs; i++) {
if (i < vi->curr_queue_pairs)
- /* Make sure we have some buffers: if oom use wq. */
- if (!try_fill_recv(vi, &vi->rq[i], GFP_KERNEL))
- schedule_delayed_work(&vi->refill, 0);
+ /* Pre-fill rq aggressively, to make sure we are ready to
+ * get packets immediately.
+ */
+ try_fill_recv(vi, &vi->rq[i], GFP_KERNEL);
err = virtnet_enable_queue_pair(vi, i);
if (err < 0)
@@ -3472,16 +3473,15 @@ static void __virtnet_rx_resume(struct virtnet_info *vi,
struct receive_queue *rq,
bool refill)
{
- bool running = netif_running(vi->dev);
- bool schedule_refill = false;
+ if (netif_running(vi->dev)) {
+ /* Pre-fill rq aggressively, to make sure we are ready to get
+ * packets immediately.
+ */
+ if (refill)
+ try_fill_recv(vi, rq, GFP_KERNEL);
- if (refill && !try_fill_recv(vi, rq, GFP_KERNEL))
- schedule_refill = true;
- if (running)
virtnet_napi_enable(rq);
-
- if (schedule_refill)
- schedule_delayed_work(&vi->refill, 0);
+ }
}
static void virtnet_rx_resume_all(struct virtnet_info *vi)
@@ -3829,11 +3829,13 @@ static int virtnet_set_queues(struct virtnet_info *vi, u16 queue_pairs)
}
succ:
vi->curr_queue_pairs = queue_pairs;
- /* virtnet_open() will refill when device is going to up. */
- spin_lock_bh(&vi->refill_lock);
- if (dev->flags & IFF_UP && vi->refill_enabled)
- schedule_delayed_work(&vi->refill, 0);
- spin_unlock_bh(&vi->refill_lock);
+ if (dev->flags & IFF_UP) {
+ local_bh_disable();
+ for (int i = 0; i < vi->curr_queue_pairs; ++i)
+ virtqueue_napi_schedule(&vi->rq[i].napi, vi->rq[i].vq);
+
+ local_bh_enable();
+ }
return 0;
}
--
2.43.0
* Re: [PATCH net v3 1/3] virtio-net: don't schedule delayed refill worker
2026-01-06 15:04 ` [PATCH net v3 1/3] virtio-net: don't schedule delayed refill worker Bui Quang Minh
@ 2026-01-06 15:29 ` Michael S. Tsirkin
2026-01-06 15:39 ` Bui Quang Minh
2026-01-10 2:12 ` Jakub Kicinski
1 sibling, 1 reply; 11+ messages in thread
From: Michael S. Tsirkin @ 2026-01-06 15:29 UTC (permalink / raw)
To: Bui Quang Minh
Cc: netdev, Jason Wang, Xuan Zhuo, Eugenio Pérez, Andrew Lunn,
David S. Miller, Eric Dumazet, Jakub Kicinski, Paolo Abeni,
Alexei Starovoitov, Daniel Borkmann, Jesper Dangaard Brouer,
John Fastabend, Stanislav Fomichev, virtualization, linux-kernel,
bpf, stable
On Tue, Jan 06, 2026 at 10:04:36PM +0700, Bui Quang Minh wrote:
> When we fail to refill the receive buffers, we schedule a delayed worker
> to retry later. However, this worker creates some concurrency issues.
> For example, when the worker runs concurrently with virtnet_xdp_set,
> both need to temporarily disable queue's NAPI before enabling again.
> Without proper synchronization, a deadlock can happen when
> napi_disable() is called on an already disabled NAPI. That
> napi_disable() call will be stuck and so will the subsequent
> napi_enable() call.
>
> To simplify the logic and avoid further problems, we will instead retry
> refilling in the next NAPI poll.
>
> Fixes: 4bc12818b363 ("virtio-net: disable delayed refill when pausing rx")
> Reported-by: Paolo Abeni <pabeni@redhat.com>
> Closes: https://netdev-ctrl.bots.linux.dev/logs/vmksft/drv-hw-dbg/results/400961/3-xdp-py/stderr
> Cc: stable@vger.kernel.org
> Suggested-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
> Signed-off-by: Bui Quang Minh <minhquangbui99@gmail.com>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
and CC stable I think. Can you do that pls?
> ---
> drivers/net/virtio_net.c | 48 +++++++++++++++++++++-------------------
> 1 file changed, 25 insertions(+), 23 deletions(-)
>
> diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
> index 1bb3aeca66c6..f986abf0c236 100644
> --- a/drivers/net/virtio_net.c
> +++ b/drivers/net/virtio_net.c
> @@ -3046,16 +3046,16 @@ static int virtnet_receive(struct receive_queue *rq, int budget,
> else
> packets = virtnet_receive_packets(vi, rq, budget, xdp_xmit, &stats);
>
> + u64_stats_set(&stats.packets, packets);
> if (rq->vq->num_free > min((unsigned int)budget, virtqueue_get_vring_size(rq->vq)) / 2) {
> - if (!try_fill_recv(vi, rq, GFP_ATOMIC)) {
> - spin_lock(&vi->refill_lock);
> - if (vi->refill_enabled)
> - schedule_delayed_work(&vi->refill, 0);
> - spin_unlock(&vi->refill_lock);
> - }
> + if (!try_fill_recv(vi, rq, GFP_ATOMIC))
> + /* We need to retry refilling in the next NAPI poll so
> + * we must return budget to make sure the NAPI is
> + * repolled.
> + */
> + packets = budget;
> }
>
> - u64_stats_set(&stats.packets, packets);
> u64_stats_update_begin(&rq->stats.syncp);
> for (i = 0; i < ARRAY_SIZE(virtnet_rq_stats_desc); i++) {
> size_t offset = virtnet_rq_stats_desc[i].offset;
> @@ -3230,9 +3230,10 @@ static int virtnet_open(struct net_device *dev)
>
> for (i = 0; i < vi->max_queue_pairs; i++) {
> if (i < vi->curr_queue_pairs)
> - /* Make sure we have some buffers: if oom use wq. */
> - if (!try_fill_recv(vi, &vi->rq[i], GFP_KERNEL))
> - schedule_delayed_work(&vi->refill, 0);
> + /* Pre-fill rq aggressively, to make sure we are ready to
> + * get packets immediately.
> + */
> + try_fill_recv(vi, &vi->rq[i], GFP_KERNEL);
>
> err = virtnet_enable_queue_pair(vi, i);
> if (err < 0)
> @@ -3472,16 +3473,15 @@ static void __virtnet_rx_resume(struct virtnet_info *vi,
> struct receive_queue *rq,
> bool refill)
> {
> - bool running = netif_running(vi->dev);
> - bool schedule_refill = false;
> + if (netif_running(vi->dev)) {
> + /* Pre-fill rq aggressively, to make sure we are ready to get
> + * packets immediately.
> + */
> + if (refill)
> + try_fill_recv(vi, rq, GFP_KERNEL);
>
> - if (refill && !try_fill_recv(vi, rq, GFP_KERNEL))
> - schedule_refill = true;
> - if (running)
> virtnet_napi_enable(rq);
> -
> - if (schedule_refill)
> - schedule_delayed_work(&vi->refill, 0);
> + }
> }
>
> static void virtnet_rx_resume_all(struct virtnet_info *vi)
> @@ -3829,11 +3829,13 @@ static int virtnet_set_queues(struct virtnet_info *vi, u16 queue_pairs)
> }
> succ:
> vi->curr_queue_pairs = queue_pairs;
> - /* virtnet_open() will refill when device is going to up. */
> - spin_lock_bh(&vi->refill_lock);
> - if (dev->flags & IFF_UP && vi->refill_enabled)
> - schedule_delayed_work(&vi->refill, 0);
> - spin_unlock_bh(&vi->refill_lock);
> + if (dev->flags & IFF_UP) {
> + local_bh_disable();
> + for (int i = 0; i < vi->curr_queue_pairs; ++i)
> + virtqueue_napi_schedule(&vi->rq[i].napi, vi->rq[i].vq);
> +
> + local_bh_enable();
> + }
>
> return 0;
> }
> --
> 2.43.0
* Re: [PATCH net v3 1/3] virtio-net: don't schedule delayed refill worker
2026-01-06 15:29 ` Michael S. Tsirkin
@ 2026-01-06 15:39 ` Bui Quang Minh
0 siblings, 0 replies; 11+ messages in thread
From: Bui Quang Minh @ 2026-01-06 15:39 UTC (permalink / raw)
To: Michael S. Tsirkin
Cc: netdev, Jason Wang, Xuan Zhuo, Eugenio Pérez, Andrew Lunn,
David S. Miller, Eric Dumazet, Jakub Kicinski, Paolo Abeni,
Alexei Starovoitov, Daniel Borkmann, Jesper Dangaard Brouer,
John Fastabend, Stanislav Fomichev, virtualization, linux-kernel,
bpf, stable
On 1/6/26 22:29, Michael S. Tsirkin wrote:
> On Tue, Jan 06, 2026 at 10:04:36PM +0700, Bui Quang Minh wrote:
>> When we fail to refill the receive buffers, we schedule a delayed worker
>> to retry later. However, this worker creates some concurrency issues.
>> For example, when the worker runs concurrently with virtnet_xdp_set,
>> both need to temporarily disable queue's NAPI before enabling again.
>> Without proper synchronization, a deadlock can happen when
>> napi_disable() is called on an already disabled NAPI. That
>> napi_disable() call will be stuck and so will the subsequent
>> napi_enable() call.
>>
>> To simplify the logic and avoid further problems, we will instead retry
>> refilling in the next NAPI poll.
>>
>> Fixes: 4bc12818b363 ("virtio-net: disable delayed refill when pausing rx")
>> Reported-by: Paolo Abeni <pabeni@redhat.com>
>> Closes: https://netdev-ctrl.bots.linux.dev/logs/vmksft/drv-hw-dbg/results/400961/3-xdp-py/stderr
>> Cc: stable@vger.kernel.org
>> Suggested-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
>> Signed-off-by: Bui Quang Minh <minhquangbui99@gmail.com>
> Acked-by: Michael S. Tsirkin <mst@redhat.com>
>
> and CC stable I think. Can you do that pls?
I've added Cc stable already.
Thanks for your review.
>
>> ---
>> drivers/net/virtio_net.c | 48 +++++++++++++++++++++-------------------
>> 1 file changed, 25 insertions(+), 23 deletions(-)
>>
>> diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
>> index 1bb3aeca66c6..f986abf0c236 100644
>> --- a/drivers/net/virtio_net.c
>> +++ b/drivers/net/virtio_net.c
>> @@ -3046,16 +3046,16 @@ static int virtnet_receive(struct receive_queue *rq, int budget,
>> else
>> packets = virtnet_receive_packets(vi, rq, budget, xdp_xmit, &stats);
>>
>> + u64_stats_set(&stats.packets, packets);
>> if (rq->vq->num_free > min((unsigned int)budget, virtqueue_get_vring_size(rq->vq)) / 2) {
>> - if (!try_fill_recv(vi, rq, GFP_ATOMIC)) {
>> - spin_lock(&vi->refill_lock);
>> - if (vi->refill_enabled)
>> - schedule_delayed_work(&vi->refill, 0);
>> - spin_unlock(&vi->refill_lock);
>> - }
>> + if (!try_fill_recv(vi, rq, GFP_ATOMIC))
>> + /* We need to retry refilling in the next NAPI poll so
>> + * we must return budget to make sure the NAPI is
>> + * repolled.
>> + */
>> + packets = budget;
>> }
>>
>> - u64_stats_set(&stats.packets, packets);
>> u64_stats_update_begin(&rq->stats.syncp);
>> for (i = 0; i < ARRAY_SIZE(virtnet_rq_stats_desc); i++) {
>> size_t offset = virtnet_rq_stats_desc[i].offset;
>> @@ -3230,9 +3230,10 @@ static int virtnet_open(struct net_device *dev)
>>
>> for (i = 0; i < vi->max_queue_pairs; i++) {
>> if (i < vi->curr_queue_pairs)
>> - /* Make sure we have some buffers: if oom use wq. */
>> - if (!try_fill_recv(vi, &vi->rq[i], GFP_KERNEL))
>> - schedule_delayed_work(&vi->refill, 0);
>> + /* Pre-fill rq aggressively, to make sure we are ready to
>> + * get packets immediately.
>> + */
>> + try_fill_recv(vi, &vi->rq[i], GFP_KERNEL);
>>
>> err = virtnet_enable_queue_pair(vi, i);
>> if (err < 0)
>> @@ -3472,16 +3473,15 @@ static void __virtnet_rx_resume(struct virtnet_info *vi,
>> struct receive_queue *rq,
>> bool refill)
>> {
>> - bool running = netif_running(vi->dev);
>> - bool schedule_refill = false;
>> + if (netif_running(vi->dev)) {
>> + /* Pre-fill rq aggressively, to make sure we are ready to get
>> + * packets immediately.
>> + */
>> + if (refill)
>> + try_fill_recv(vi, rq, GFP_KERNEL);
>>
>> - if (refill && !try_fill_recv(vi, rq, GFP_KERNEL))
>> - schedule_refill = true;
>> - if (running)
>> virtnet_napi_enable(rq);
>> -
>> - if (schedule_refill)
>> - schedule_delayed_work(&vi->refill, 0);
>> + }
>> }
>>
>> static void virtnet_rx_resume_all(struct virtnet_info *vi)
>> @@ -3829,11 +3829,13 @@ static int virtnet_set_queues(struct virtnet_info *vi, u16 queue_pairs)
>> }
>> succ:
>> vi->curr_queue_pairs = queue_pairs;
>> - /* virtnet_open() will refill when device is going to up. */
>> - spin_lock_bh(&vi->refill_lock);
>> - if (dev->flags & IFF_UP && vi->refill_enabled)
>> - schedule_delayed_work(&vi->refill, 0);
>> - spin_unlock_bh(&vi->refill_lock);
>> + if (dev->flags & IFF_UP) {
>> + local_bh_disable();
>> + for (int i = 0; i < vi->curr_queue_pairs; ++i)
>> + virtqueue_napi_schedule(&vi->rq[i].napi, vi->rq[i].vq);
>> +
>> + local_bh_enable();
>> + }
>>
>> return 0;
>> }
>> --
>> 2.43.0
* Re: [PATCH net v3 1/3] virtio-net: don't schedule delayed refill worker
2026-01-06 15:04 ` [PATCH net v3 1/3] virtio-net: don't schedule delayed refill worker Bui Quang Minh
2026-01-06 15:29 ` Michael S. Tsirkin
@ 2026-01-10 2:12 ` Jakub Kicinski
2026-01-10 8:23 ` Bui Quang Minh
2026-01-10 10:14 ` Michael S. Tsirkin
1 sibling, 2 replies; 11+ messages in thread
From: Jakub Kicinski @ 2026-01-10 2:12 UTC (permalink / raw)
To: Bui Quang Minh
Cc: netdev, Michael S. Tsirkin, Jason Wang, Xuan Zhuo,
Eugenio Pérez, Andrew Lunn, David S. Miller, Eric Dumazet,
Paolo Abeni, Alexei Starovoitov, Daniel Borkmann,
Jesper Dangaard Brouer, John Fastabend, Stanislav Fomichev,
virtualization, linux-kernel, bpf, stable
On Tue, 6 Jan 2026 22:04:36 +0700 Bui Quang Minh wrote:
> When we fail to refill the receive buffers, we schedule a delayed worker
> to retry later. However, this worker creates some concurrency issues.
> For example, when the worker runs concurrently with virtnet_xdp_set,
> both need to temporarily disable queue's NAPI before enabling again.
> Without proper synchronization, a deadlock can happen when
> napi_disable() is called on an already disabled NAPI. That
> napi_disable() call will be stuck and so will the subsequent
> napi_enable() call.
>
> To simplify the logic and avoid further problems, we will instead retry
> refilling in the next NAPI poll.
Happy to see this go FWIW. If it causes issues we should consider
adding some retry logic in the core (NAPI) rather than locally in
the driver..
> Fixes: 4bc12818b363 ("virtio-net: disable delayed refill when pausing rx")
> Reported-by: Paolo Abeni <pabeni@redhat.com>
> Closes: https://netdev-ctrl.bots.linux.dev/logs/vmksft/drv-hw-dbg/results/400961/3-xdp-py/stderr
The Closes should probably point to Paolo's report. We'll wipe these CI
logs sooner or later but the lore archive will stick around.
> @@ -3230,9 +3230,10 @@ static int virtnet_open(struct net_device *dev)
>
> for (i = 0; i < vi->max_queue_pairs; i++) {
> if (i < vi->curr_queue_pairs)
> - /* Make sure we have some buffers: if oom use wq. */
> - if (!try_fill_recv(vi, &vi->rq[i], GFP_KERNEL))
> - schedule_delayed_work(&vi->refill, 0);
> > + /* Pre-fill rq aggressively, to make sure we are ready to
> + * get packets immediately.
> + */
> + try_fill_recv(vi, &vi->rq[i], GFP_KERNEL);
We should enforce _some_ minimal fill level at the time of open().
If the ring is completely empty no traffic will ever flow, right?
Perhaps I missed scheduling the NAPI somewhere..
> err = virtnet_enable_queue_pair(vi, i);
> if (err < 0)
> @@ -3472,16 +3473,15 @@ static void __virtnet_rx_resume(struct virtnet_info *vi,
> struct receive_queue *rq,
> bool refill)
> {
> - bool running = netif_running(vi->dev);
> - bool schedule_refill = false;
> + if (netif_running(vi->dev)) {
> > + /* Pre-fill rq aggressively, to make sure we are ready to get
> + * packets immediately.
> + */
> + if (refill)
> + try_fill_recv(vi, rq, GFP_KERNEL);
Similar thing here? Tho not sure we can fail here..
> - if (refill && !try_fill_recv(vi, rq, GFP_KERNEL))
> - schedule_refill = true;
> - if (running)
> virtnet_napi_enable(rq);
> -
> - if (schedule_refill)
> - schedule_delayed_work(&vi->refill, 0);
> + }
> }
>
> static void virtnet_rx_resume_all(struct virtnet_info *vi)
> @@ -3829,11 +3829,13 @@ static int virtnet_set_queues(struct virtnet_info *vi, u16 queue_pairs)
> }
> succ:
> vi->curr_queue_pairs = queue_pairs;
> - /* virtnet_open() will refill when device is going to up. */
> - spin_lock_bh(&vi->refill_lock);
> - if (dev->flags & IFF_UP && vi->refill_enabled)
> - schedule_delayed_work(&vi->refill, 0);
> - spin_unlock_bh(&vi->refill_lock);
> + if (dev->flags & IFF_UP) {
> + local_bh_disable();
> + for (int i = 0; i < vi->curr_queue_pairs; ++i)
> + virtqueue_napi_schedule(&vi->rq[i].napi, vi->rq[i].vq);
> +
nit: spurious new line
> + local_bh_enable();
> + }
>
> return 0;
> }
* Re: [PATCH net v3 1/3] virtio-net: don't schedule delayed refill worker
2026-01-10 2:12 ` Jakub Kicinski
@ 2026-01-10 8:23 ` Bui Quang Minh
2026-01-10 19:10 ` Jakub Kicinski
2026-01-10 10:14 ` Michael S. Tsirkin
1 sibling, 1 reply; 11+ messages in thread
From: Bui Quang Minh @ 2026-01-10 8:23 UTC (permalink / raw)
To: Jakub Kicinski
Cc: netdev, Michael S. Tsirkin, Jason Wang, Xuan Zhuo,
Eugenio Pérez, Andrew Lunn, David S. Miller, Eric Dumazet,
Paolo Abeni, Alexei Starovoitov, Daniel Borkmann,
Jesper Dangaard Brouer, John Fastabend, Stanislav Fomichev,
virtualization, linux-kernel, bpf, stable
On 1/10/26 09:12, Jakub Kicinski wrote:
> On Tue, 6 Jan 2026 22:04:36 +0700 Bui Quang Minh wrote:
>> When we fail to refill the receive buffers, we schedule a delayed worker
>> to retry later. However, this worker creates some concurrency issues.
>> For example, when the worker runs concurrently with virtnet_xdp_set,
>> both need to temporarily disable queue's NAPI before enabling again.
>> Without proper synchronization, a deadlock can happen when
>> napi_disable() is called on an already disabled NAPI. That
>> napi_disable() call will be stuck and so will the subsequent
>> napi_enable() call.
>>
>> To simplify the logic and avoid further problems, we will instead retry
>> refilling in the next NAPI poll.
> Happy to see this go FWIW. If it causes issues we should consider
> adding some retry logic in the core (NAPI) rather than locally in
> the driver..
>
>> Fixes: 4bc12818b363 ("virtio-net: disable delayed refill when pausing rx")
>> Reported-by: Paolo Abeni <pabeni@redhat.com>
>> Closes: https://netdev-ctrl.bots.linux.dev/logs/vmksft/drv-hw-dbg/results/400961/3-xdp-py/stderr
> The Closes should probably point to Paolo's report. We'll wipe these CI
> logs sooner or later but the lore archive will stick around.
I'll fix it in the next version.
>
>> @@ -3230,9 +3230,10 @@ static int virtnet_open(struct net_device *dev)
>>
>> for (i = 0; i < vi->max_queue_pairs; i++) {
>> if (i < vi->curr_queue_pairs)
>> - /* Make sure we have some buffers: if oom use wq. */
>> - if (!try_fill_recv(vi, &vi->rq[i], GFP_KERNEL))
>> - schedule_delayed_work(&vi->refill, 0);
>> + /* Pre-fill rq aggressively, to make sure we are ready to
>> + * get packets immediately.
>> + */
>> + try_fill_recv(vi, &vi->rq[i], GFP_KERNEL);
> We should enforce _some_ minimal fill level at the time of open().
> If the ring is completely empty no traffic will ever flow, right?
> Perhaps I missed scheduling the NAPI somewhere..
The NAPI is enabled and scheduled in virtnet_napi_enable(). The code
path is like this:
virtnet_enable_queue_pair
-> virtnet_napi_enable
-> virtnet_napi_do_enable
-> virtqueue_napi_schedule
The same happens in __virtnet_rx_resume().
>
>> err = virtnet_enable_queue_pair(vi, i);
>> if (err < 0)
>> @@ -3472,16 +3473,15 @@ static void __virtnet_rx_resume(struct virtnet_info *vi,
>> struct receive_queue *rq,
>> bool refill)
>> {
>> - bool running = netif_running(vi->dev);
>> - bool schedule_refill = false;
>> + if (netif_running(vi->dev)) {
>> + /* Pre-fill rq aggressively, to make sure we are ready to get
>> + * packets immediately.
>> + */
>> + if (refill)
>> + try_fill_recv(vi, rq, GFP_KERNEL);
> Similar thing here? Tho not sure we can fail here..
>
>> - if (refill && !try_fill_recv(vi, rq, GFP_KERNEL))
>> - schedule_refill = true;
>> - if (running)
>> virtnet_napi_enable(rq);
>> -
>> - if (schedule_refill)
>> - schedule_delayed_work(&vi->refill, 0);
>> + }
>> }
>>
>> static void virtnet_rx_resume_all(struct virtnet_info *vi)
>> @@ -3829,11 +3829,13 @@ static int virtnet_set_queues(struct virtnet_info *vi, u16 queue_pairs)
>> }
>> succ:
>> vi->curr_queue_pairs = queue_pairs;
>> - /* virtnet_open() will refill when device is going to up. */
>> - spin_lock_bh(&vi->refill_lock);
>> - if (dev->flags & IFF_UP && vi->refill_enabled)
>> - schedule_delayed_work(&vi->refill, 0);
>> - spin_unlock_bh(&vi->refill_lock);
>> + if (dev->flags & IFF_UP) {
>> + local_bh_disable();
>> + for (int i = 0; i < vi->curr_queue_pairs; ++i)
>> + virtqueue_napi_schedule(&vi->rq[i].napi, vi->rq[i].vq);
>> +
> nit: spurious new line
I'll delete it in the next version.
>
>> + local_bh_enable();
>> + }
>>
>> return 0;
>> }
* Re: [PATCH net v3 1/3] virtio-net: don't schedule delayed refill worker
2026-01-10 8:23 ` Bui Quang Minh
@ 2026-01-10 19:10 ` Jakub Kicinski
0 siblings, 0 replies; 11+ messages in thread
From: Jakub Kicinski @ 2026-01-10 19:10 UTC (permalink / raw)
To: Bui Quang Minh
Cc: netdev, Michael S. Tsirkin, Jason Wang, Xuan Zhuo,
Eugenio Pérez, Andrew Lunn, David S. Miller, Eric Dumazet,
Paolo Abeni, Alexei Starovoitov, Daniel Borkmann,
Jesper Dangaard Brouer, John Fastabend, Stanislav Fomichev,
virtualization, linux-kernel, bpf, stable
On Sat, 10 Jan 2026 15:23:36 +0700 Bui Quang Minh wrote:
> >> @@ -3230,9 +3230,10 @@ static int virtnet_open(struct net_device *dev)
> >>
> >> for (i = 0; i < vi->max_queue_pairs; i++) {
> >> if (i < vi->curr_queue_pairs)
> >> - /* Make sure we have some buffers: if oom use wq. */
> >> - if (!try_fill_recv(vi, &vi->rq[i], GFP_KERNEL))
> >> - schedule_delayed_work(&vi->refill, 0);
> >> + /* Pre-fill rq aggressively, to make sure we are ready to
> >> + * get packets immediately.
> >> + */
> >> + try_fill_recv(vi, &vi->rq[i], GFP_KERNEL);
> > We should enforce _some_ minimal fill level at the time of open().
> > If the ring is completely empty no traffic will ever flow, right?
> > Perhaps I missed scheduling the NAPI somewhere..
>
> The NAPI is enabled and scheduled in virtnet_napi_enable(). The code
> path is like this
>
> virtnet_enable_queue_pair
> -> virtnet_napi_enable
> -> virtnet_napi_do_enable
> -> virtqueue_napi_schedule
>
> The same happens in __virtnet_rx_resume().
I see. Alright, let me fix the nits while applying, no need to respin.
Kinda want this in the tree for a few days before shipping off to Linus.
* Re: [PATCH net v3 1/3] virtio-net: don't schedule delayed refill worker
2026-01-10 2:12 ` Jakub Kicinski
2026-01-10 8:23 ` Bui Quang Minh
@ 2026-01-10 10:14 ` Michael S. Tsirkin
1 sibling, 0 replies; 11+ messages in thread
From: Michael S. Tsirkin @ 2026-01-10 10:14 UTC (permalink / raw)
To: Jakub Kicinski
Cc: Bui Quang Minh, netdev, Jason Wang, Xuan Zhuo, Eugenio Pérez,
Andrew Lunn, David S. Miller, Eric Dumazet, Paolo Abeni,
Alexei Starovoitov, Daniel Borkmann, Jesper Dangaard Brouer,
John Fastabend, Stanislav Fomichev, virtualization, linux-kernel,
bpf, stable
On Fri, Jan 09, 2026 at 06:12:39PM -0800, Jakub Kicinski wrote:
> On Tue, 6 Jan 2026 22:04:36 +0700 Bui Quang Minh wrote:
> > When we fail to refill the receive buffers, we schedule a delayed worker
> > to retry later. However, this worker creates some concurrency issues.
> > For example, when the worker runs concurrently with virtnet_xdp_set,
> > both need to temporarily disable queue's NAPI before enabling again.
> > Without proper synchronization, a deadlock can happen when
> > napi_disable() is called on an already disabled NAPI. That
> > napi_disable() call will be stuck and so will the subsequent
> > napi_enable() call.
> >
> > To simplify the logic and avoid further problems, we will instead retry
> > refilling in the next NAPI poll.
>
> Happy to see this go FWIW. If it causes issues we should consider
> adding some retry logic in the core (NAPI) rather than locally in
> the driver..
>
> > Fixes: 4bc12818b363 ("virtio-net: disable delayed refill when pausing rx")
> > Reported-by: Paolo Abeni <pabeni@redhat.com>
> > Closes: https://netdev-ctrl.bots.linux.dev/logs/vmksft/drv-hw-dbg/results/400961/3-xdp-py/stderr
>
> The Closes should probably point to Paolo's report. We'll wipe these CI
> logs sooner or later but the lore archive will stick around.
>
> > @@ -3230,9 +3230,10 @@ static int virtnet_open(struct net_device *dev)
> >
> > for (i = 0; i < vi->max_queue_pairs; i++) {
> > if (i < vi->curr_queue_pairs)
> > - /* Make sure we have some buffers: if oom use wq. */
> > - if (!try_fill_recv(vi, &vi->rq[i], GFP_KERNEL))
> > - schedule_delayed_work(&vi->refill, 0);
> > + /* Pre-fill rq aggressively, to make sure we are ready to
> > + * get packets immediately.
> > + */
> > + try_fill_recv(vi, &vi->rq[i], GFP_KERNEL);
>
> We should enforce _some_ minimal fill level at the time of open().
> If the ring is completely empty no traffic will ever flow, right?
> Perhaps I missed scheduling the NAPI somewhere..
Practically, single page allocations with GFP_KERNEL don't
really fail. So I think it's fine.
> > err = virtnet_enable_queue_pair(vi, i);
> > if (err < 0)
> > @@ -3472,16 +3473,15 @@ static void __virtnet_rx_resume(struct virtnet_info *vi,
> > struct receive_queue *rq,
> > bool refill)
> > {
> > - bool running = netif_running(vi->dev);
> > - bool schedule_refill = false;
> > + if (netif_running(vi->dev)) {
> > + /* Pre-fill rq aggressively, to make sure we are ready to get
> > + * packets immediately.
> > + */
> > + if (refill)
> > + try_fill_recv(vi, rq, GFP_KERNEL);
>
> Similar thing here? Tho not sure we can fail here..
>
> > - if (refill && !try_fill_recv(vi, rq, GFP_KERNEL))
> > - schedule_refill = true;
> > - if (running)
> > virtnet_napi_enable(rq);
> > -
> > - if (schedule_refill)
> > - schedule_delayed_work(&vi->refill, 0);
> > + }
> > }
> >
> > static void virtnet_rx_resume_all(struct virtnet_info *vi)
> > @@ -3829,11 +3829,13 @@ static int virtnet_set_queues(struct virtnet_info *vi, u16 queue_pairs)
> > }
> > succ:
> > vi->curr_queue_pairs = queue_pairs;
> > - /* virtnet_open() will refill when device is going to up. */
> > - spin_lock_bh(&vi->refill_lock);
> > - if (dev->flags & IFF_UP && vi->refill_enabled)
> > - schedule_delayed_work(&vi->refill, 0);
> > - spin_unlock_bh(&vi->refill_lock);
> > + if (dev->flags & IFF_UP) {
> > + local_bh_disable();
> > + for (int i = 0; i < vi->curr_queue_pairs; ++i)
> > + virtqueue_napi_schedule(&vi->rq[i].napi, vi->rq[i].vq);
> > +
>
> nit: spurious new line
>
> > + local_bh_enable();
> > + }
> >
> > return 0;
> > }
* [PATCH net v3 2/3] virtio-net: remove unused delayed refill worker
2026-01-06 15:04 [PATCH net v3 0/3] virtio-net: fix the deadlock when disabling rx NAPI Bui Quang Minh
2026-01-06 15:04 ` [PATCH net v3 1/3] virtio-net: don't schedule delayed refill worker Bui Quang Minh
@ 2026-01-06 15:04 ` Bui Quang Minh
2026-01-06 15:04 ` [PATCH net v3 3/3] virtio-net: clean up __virtnet_rx_pause/resume Bui Quang Minh
2026-01-12 20:56 ` [PATCH net v3 0/3] virtio-net: fix the deadlock when disabling rx NAPI patchwork-bot+netdevbpf
3 siblings, 0 replies; 11+ messages in thread
From: Bui Quang Minh @ 2026-01-06 15:04 UTC (permalink / raw)
To: netdev
Cc: Michael S. Tsirkin, Jason Wang, Xuan Zhuo, Eugenio Pérez,
Andrew Lunn, David S. Miller, Eric Dumazet, Jakub Kicinski,
Paolo Abeni, Alexei Starovoitov, Daniel Borkmann,
Jesper Dangaard Brouer, John Fastabend, Stanislav Fomichev,
virtualization, linux-kernel, bpf, Bui Quang Minh
Since we switched to retrying the receive buffer refill in the NAPI poll
instead of in a delayed worker, remove all the now unused delayed refill
worker code.
Acked-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Bui Quang Minh <minhquangbui99@gmail.com>
---
drivers/net/virtio_net.c | 86 ----------------------------------------
1 file changed, 86 deletions(-)
diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
index f986abf0c236..a4dbc958689b 100644
--- a/drivers/net/virtio_net.c
+++ b/drivers/net/virtio_net.c
@@ -441,9 +441,6 @@ struct virtnet_info {
/* Packet virtio header size */
u8 hdr_len;
- /* Work struct for delayed refilling if we run low on memory. */
- struct delayed_work refill;
-
/* UDP tunnel support */
bool tx_tnl;
@@ -451,12 +448,6 @@ struct virtnet_info {
bool rx_tnl_csum;
- /* Is delayed refill enabled? */
- bool refill_enabled;
-
- /* The lock to synchronize the access to refill_enabled */
- spinlock_t refill_lock;
-
/* Work struct for config space updates */
struct work_struct config_work;
@@ -720,20 +711,6 @@ static void virtnet_rq_free_buf(struct virtnet_info *vi,
put_page(virt_to_head_page(buf));
}
-static void enable_delayed_refill(struct virtnet_info *vi)
-{
- spin_lock_bh(&vi->refill_lock);
- vi->refill_enabled = true;
- spin_unlock_bh(&vi->refill_lock);
-}
-
-static void disable_delayed_refill(struct virtnet_info *vi)
-{
- spin_lock_bh(&vi->refill_lock);
- vi->refill_enabled = false;
- spin_unlock_bh(&vi->refill_lock);
-}
-
static void enable_rx_mode_work(struct virtnet_info *vi)
{
rtnl_lock();
@@ -2948,42 +2925,6 @@ static void virtnet_napi_disable(struct receive_queue *rq)
napi_disable(napi);
}
-static void refill_work(struct work_struct *work)
-{
- struct virtnet_info *vi =
- container_of(work, struct virtnet_info, refill.work);
- bool still_empty;
- int i;
-
- for (i = 0; i < vi->curr_queue_pairs; i++) {
- struct receive_queue *rq = &vi->rq[i];
-
- /*
- * When queue API support is added in the future and the call
- * below becomes napi_disable_locked, this driver will need to
- * be refactored.
- *
- * One possible solution would be to:
- * - cancel refill_work with cancel_delayed_work (note:
- * non-sync)
- * - cancel refill_work with cancel_delayed_work_sync in
- * virtnet_remove after the netdev is unregistered
- * - wrap all of the work in a lock (perhaps the netdev
- * instance lock)
- * - check netif_running() and return early to avoid a race
- */
- napi_disable(&rq->napi);
- still_empty = !try_fill_recv(vi, rq, GFP_KERNEL);
- virtnet_napi_do_enable(rq->vq, &rq->napi);
-
- /* In theory, this can happen: if we don't get any buffers in
- * we will *never* try to fill again.
- */
- if (still_empty)
- schedule_delayed_work(&vi->refill, HZ/2);
- }
-}
-
static int virtnet_receive_xsk_bufs(struct virtnet_info *vi,
struct receive_queue *rq,
int budget,
@@ -3226,8 +3167,6 @@ static int virtnet_open(struct net_device *dev)
struct virtnet_info *vi = netdev_priv(dev);
int i, err;
- enable_delayed_refill(vi);
-
for (i = 0; i < vi->max_queue_pairs; i++) {
if (i < vi->curr_queue_pairs)
/* Pre-fill rq agressively, to make sure we are ready to
@@ -3252,9 +3191,6 @@ static int virtnet_open(struct net_device *dev)
return 0;
err_enable_qp:
- disable_delayed_refill(vi);
- cancel_delayed_work_sync(&vi->refill);
-
for (i--; i >= 0; i--) {
virtnet_disable_queue_pair(vi, i);
virtnet_cancel_dim(vi, &vi->rq[i].dim);
@@ -3448,24 +3384,12 @@ static void virtnet_rx_pause_all(struct virtnet_info *vi)
{
int i;
- /*
- * Make sure refill_work does not run concurrently to
- * avoid napi_disable race which leads to deadlock.
- */
- disable_delayed_refill(vi);
- cancel_delayed_work_sync(&vi->refill);
for (i = 0; i < vi->max_queue_pairs; i++)
__virtnet_rx_pause(vi, &vi->rq[i]);
}
static void virtnet_rx_pause(struct virtnet_info *vi, struct receive_queue *rq)
{
- /*
- * Make sure refill_work does not run concurrently to
- * avoid napi_disable race which leads to deadlock.
- */
- disable_delayed_refill(vi);
- cancel_delayed_work_sync(&vi->refill);
__virtnet_rx_pause(vi, rq);
}
@@ -3488,7 +3412,6 @@ static void virtnet_rx_resume_all(struct virtnet_info *vi)
{
int i;
- enable_delayed_refill(vi);
for (i = 0; i < vi->max_queue_pairs; i++) {
if (i < vi->curr_queue_pairs)
__virtnet_rx_resume(vi, &vi->rq[i], true);
@@ -3499,7 +3422,6 @@ static void virtnet_rx_resume_all(struct virtnet_info *vi)
static void virtnet_rx_resume(struct virtnet_info *vi, struct receive_queue *rq)
{
- enable_delayed_refill(vi);
__virtnet_rx_resume(vi, rq, true);
}
@@ -3845,10 +3767,6 @@ static int virtnet_close(struct net_device *dev)
struct virtnet_info *vi = netdev_priv(dev);
int i;
- /* Make sure NAPI doesn't schedule refill work */
- disable_delayed_refill(vi);
- /* Make sure refill_work doesn't re-enable napi! */
- cancel_delayed_work_sync(&vi->refill);
/* Prevent the config change callback from changing carrier
* after close
*/
@@ -5804,7 +5722,6 @@ static int virtnet_restore_up(struct virtio_device *vdev)
virtio_device_ready(vdev);
- enable_delayed_refill(vi);
enable_rx_mode_work(vi);
if (netif_running(vi->dev)) {
@@ -6561,7 +6478,6 @@ static int virtnet_alloc_queues(struct virtnet_info *vi)
if (!vi->rq)
goto err_rq;
- INIT_DELAYED_WORK(&vi->refill, refill_work);
for (i = 0; i < vi->max_queue_pairs; i++) {
vi->rq[i].pages = NULL;
netif_napi_add_config(vi->dev, &vi->rq[i].napi, virtnet_poll,
@@ -6903,7 +6819,6 @@ static int virtnet_probe(struct virtio_device *vdev)
INIT_WORK(&vi->config_work, virtnet_config_changed_work);
INIT_WORK(&vi->rx_mode_work, virtnet_rx_mode_work);
- spin_lock_init(&vi->refill_lock);
if (virtio_has_feature(vdev, VIRTIO_NET_F_MRG_RXBUF)) {
vi->mergeable_rx_bufs = true;
@@ -7167,7 +7082,6 @@ static int virtnet_probe(struct virtio_device *vdev)
net_failover_destroy(vi->failover);
free_vqs:
virtio_reset_device(vdev);
- cancel_delayed_work_sync(&vi->refill);
free_receive_page_frags(vi);
virtnet_del_vqs(vi);
free:
--
2.43.0
* [PATCH net v3 3/3] virtio-net: clean up __virtnet_rx_pause/resume
2026-01-06 15:04 [PATCH net v3 0/3] virtio-net: fix the deadlock when disabling rx NAPI Bui Quang Minh
2026-01-06 15:04 ` [PATCH net v3 1/3] virtio-net: don't schedule delayed refill worker Bui Quang Minh
2026-01-06 15:04 ` [PATCH net v3 2/3] virtio-net: remove unused " Bui Quang Minh
@ 2026-01-06 15:04 ` Bui Quang Minh
2026-01-12 20:56 ` [PATCH net v3 0/3] virtio-net: fix the deadlock when disabling rx NAPI patchwork-bot+netdevbpf
3 siblings, 0 replies; 11+ messages in thread
From: Bui Quang Minh @ 2026-01-06 15:04 UTC (permalink / raw)
To: netdev
Cc: Michael S. Tsirkin, Jason Wang, Xuan Zhuo, Eugenio Pérez,
Andrew Lunn, David S. Miller, Eric Dumazet, Jakub Kicinski,
Paolo Abeni, Alexei Starovoitov, Daniel Borkmann,
Jesper Dangaard Brouer, John Fastabend, Stanislav Fomichev,
virtualization, linux-kernel, bpf, Bui Quang Minh
With the delayed refill worker removed, virtnet_rx_pause/resume is now
almost identical to __virtnet_rx_pause/resume. So remove
__virtnet_rx_pause/resume and fold their code into virtnet_rx_pause/resume.
Acked-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Bui Quang Minh <minhquangbui99@gmail.com>
---
drivers/net/virtio_net.c | 30 ++++++++++--------------------
1 file changed, 10 insertions(+), 20 deletions(-)
diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
index a4dbc958689b..745bae756920 100644
--- a/drivers/net/virtio_net.c
+++ b/drivers/net/virtio_net.c
@@ -3369,8 +3369,8 @@ static netdev_tx_t start_xmit(struct sk_buff *skb, struct net_device *dev)
return NETDEV_TX_OK;
}
-static void __virtnet_rx_pause(struct virtnet_info *vi,
- struct receive_queue *rq)
+static void virtnet_rx_pause(struct virtnet_info *vi,
+ struct receive_queue *rq)
{
bool running = netif_running(vi->dev);
@@ -3385,17 +3385,12 @@ static void virtnet_rx_pause_all(struct virtnet_info *vi)
int i;
for (i = 0; i < vi->max_queue_pairs; i++)
- __virtnet_rx_pause(vi, &vi->rq[i]);
+ virtnet_rx_pause(vi, &vi->rq[i]);
}
-static void virtnet_rx_pause(struct virtnet_info *vi, struct receive_queue *rq)
-{
- __virtnet_rx_pause(vi, rq);
-}
-
-static void __virtnet_rx_resume(struct virtnet_info *vi,
- struct receive_queue *rq,
- bool refill)
+static void virtnet_rx_resume(struct virtnet_info *vi,
+ struct receive_queue *rq,
+ bool refill)
{
if (netif_running(vi->dev)) {
/* Pre-fill rq agressively, to make sure we are ready to get
@@ -3414,17 +3409,12 @@ static void virtnet_rx_resume_all(struct virtnet_info *vi)
for (i = 0; i < vi->max_queue_pairs; i++) {
if (i < vi->curr_queue_pairs)
- __virtnet_rx_resume(vi, &vi->rq[i], true);
+ virtnet_rx_resume(vi, &vi->rq[i], true);
else
- __virtnet_rx_resume(vi, &vi->rq[i], false);
+ virtnet_rx_resume(vi, &vi->rq[i], false);
}
}
-static void virtnet_rx_resume(struct virtnet_info *vi, struct receive_queue *rq)
-{
- __virtnet_rx_resume(vi, rq, true);
-}
-
static int virtnet_rx_resize(struct virtnet_info *vi,
struct receive_queue *rq, u32 ring_num)
{
@@ -3438,7 +3428,7 @@ static int virtnet_rx_resize(struct virtnet_info *vi,
if (err)
netdev_err(vi->dev, "resize rx fail: rx queue index: %d err: %d\n", qindex, err);
- virtnet_rx_resume(vi, rq);
+ virtnet_rx_resume(vi, rq, true);
return err;
}
@@ -5811,7 +5801,7 @@ static int virtnet_rq_bind_xsk_pool(struct virtnet_info *vi, struct receive_queu
rq->xsk_pool = pool;
- virtnet_rx_resume(vi, rq);
+ virtnet_rx_resume(vi, rq, true);
if (pool)
return 0;
--
2.43.0
* Re: [PATCH net v3 0/3] virtio-net: fix the deadlock when disabling rx NAPI
2026-01-06 15:04 [PATCH net v3 0/3] virtio-net: fix the deadlock when disabling rx NAPI Bui Quang Minh
` (2 preceding siblings ...)
2026-01-06 15:04 ` [PATCH net v3 3/3] virtio-net: clean up __virtnet_rx_pause/resume Bui Quang Minh
@ 2026-01-12 20:56 ` patchwork-bot+netdevbpf
3 siblings, 0 replies; 11+ messages in thread
From: patchwork-bot+netdevbpf @ 2026-01-12 20:56 UTC (permalink / raw)
To: Bui Quang Minh
Cc: netdev, mst, jasowang, xuanzhuo, eperezma, andrew+netdev, davem,
edumazet, kuba, pabeni, ast, daniel, hawk, john.fastabend, sdf,
virtualization, linux-kernel, bpf
Hello:
This series was applied to netdev/net.git (main)
by Jakub Kicinski <kuba@kernel.org>:
On Tue, 6 Jan 2026 22:04:35 +0700 you wrote:
> Calling napi_disable() on an already disabled napi can cause the
> deadlock. In commit 4bc12818b363 ("virtio-net: disable delayed refill
> when pausing rx"), to avoid the deadlock, when pausing the RX in
> virtnet_rx_pause[_all](), we disable and cancel the delayed refill work.
> However, in the virtnet_rx_resume_all(), we enable the delayed refill
> work too early before enabling all the receive queue napis.
>
> [...]
Here is the summary with links:
- [net,v3,1/3] virtio-net: don't schedule delayed refill worker
https://git.kernel.org/netdev/net/c/fcdef3bcbb2c
- [net,v3,2/3] virtio-net: remove unused delayed refill worker
https://git.kernel.org/netdev/net/c/1e7b90aa7988
- [net,v3,3/3] virtio-net: clean up __virtnet_rx_pause/resume
https://git.kernel.org/netdev/net/c/a0c159647e66
You are awesome, thank you!
--
Deet-doot-dot, I am a bot.
https://korg.docs.kernel.org/patchwork/pwbot.html