From: "Michael S. Tsirkin" <mst@redhat.com>
To: Jinhui Guo <guojinhui.liam@bytedance.com>
Cc: "Eugenio Pérez" <eperezma@redhat.com>,
"Jason Wang" <jasowang@redhat.com>,
"Jiri Pirko" <jiri@resnulli.us>,
"Xuan Zhuo" <xuanzhuo@linux.alibaba.com>,
linux-kernel@vger.kernel.org, stable@vger.kernel.org,
virtualization@lists.linux.dev
Subject: Re: [PATCH] virtio_pci_modern: Use GFP_ATOMIC with spin_lock_irqsave held in virtqueue_exec_admin_cmd()
Date: Wed, 13 May 2026 08:10:49 -0400 [thread overview]
Message-ID: <20260513080456-mutt-send-email-mst@kernel.org> (raw)
In-Reply-To: <20260413100013.32399-1-guojinhui.liam@bytedance.com>
On Mon, Apr 13, 2026 at 06:00:13PM +0800, Jinhui Guo wrote:
> On Mon, Apr 13, 2026 at 03:45:20 -0400, "Michael S. Tsirkin" wrote:
> > GFP_ATOMIC allocations can and will fail. If using them, one must
> > retry, not just propagate failures.
> > Or just switch admin_vq->lock to a mutex?
>
> Hi Michael,
>
> Thank you for the review.
>
> Regarding the suggestion to switch admin_vq->lock to a mutex:
>
> The virtqueue callback vp_modern_avq_done() holds admin_vq->lock and
> runs in an interrupt handler context, making it impractical to replace
> the spinlock with a mutex directly.
>
> I considered deferring the completion to a workqueue so we could safely
> use a mutex, but since this is a bug fix destined for stable@vger.kernel.org,
> doing so would introduce significant code churn (e.g., handling INIT_WORK,
> cancel_work_sync during cleanup, etc.) and increase the risk for backports.
This is not how we do kernel development here. Please fix the bug
upstream first; then we will consider backporting strategies.
> Therefore, using GFP_ATOMIC with the existing spinlock seems to be the most
> minimal and safest approach for a fix.
>
> However, just replacing GFP_KERNEL with GFP_ATOMIC isn't entirely safe
> because of how virtqueue_add_sgs() handles allocation failures. If kmalloc()
> fails under memory pressure with GFP_ATOMIC, the function falls back to using
> direct descriptors. If there are not enough free direct descriptors, it
> ultimately returns -ENOSPC.
>
> In the current code, -ENOSPC is handled with a busy loop:
>
> 	if (ret == -ENOSPC) {
> 		spin_unlock_irqrestore(&admin_vq->lock, flags);
> 		cpu_relax();
> 		goto again;
> 	}
>
> If the -ENOSPC is actually caused by a GFP_ATOMIC allocation failure under
> memory pressure, this cpu_relax() loop will never yield the CPU to memory
> reclaim mechanisms (like kswapd), potentially leading to a soft lockup.
>
> To properly handle both actual queue-full conditions and GFP_ATOMIC failures,
> I propose replacing cpu_relax() with a sleep (e.g., usleep_range(10, 100)).
> This allows memory reclaim to run while we wait.
>
> I plan to send out a v2 patch with this modification:
>
> --- a/drivers/virtio/virtio_pci_modern.c
> +++ b/drivers/virtio/virtio_pci_modern.c
> @@ -101,11 +101,11 @@ static int virtqueue_exec_admin_cmd(struct virtio_pci_admin_vq *admin_vq,
>  		return -EIO;
> 
>  	spin_lock_irqsave(&admin_vq->lock, flags);
> -	ret = virtqueue_add_sgs(vq, sgs, out_num, in_num, cmd, GFP_KERNEL);
> +	ret = virtqueue_add_sgs(vq, sgs, out_num, in_num, cmd, GFP_ATOMIC);
>  	if (ret < 0) {
>  		if (ret == -ENOSPC) {
>  			spin_unlock_irqrestore(&admin_vq->lock, flags);
> -			cpu_relax();
> +			usleep_range(10, 100);
>  			goto again;
>  		}
>  		goto unlock_err;
>
> Does this approach align with your expectations for the fix?
>
> Thanks,
> Jinhui
Nope.
I think we need to get out of peephole mode, where we just look at the
warnings and "fix" them with one-line tweaks, and instead actually analyze
the code paths. The GFP flag only governs the indirect descriptor
allocation, and the VQ already falls back to direct descriptors when that
allocation fails.
The questions are:
- what is going on with the VQ ring state: can we actually get into a
situation where the indirect allocation would succeed but direct
descriptors fail?
- how can callers either prevent failures or get notified when buffers
have been used?
And it is quite possible that the fix in the end is exactly your v1, but
with an analysis in the commit log explaining why it fixes the problem
rather than papering over it.
--
MST
Thread overview: 9+ messages
2026-04-13 7:22 [PATCH] virtio_pci_modern: Use GFP_ATOMIC with spin_lock_irqsave held in virtqueue_exec_admin_cmd() Jinhui Guo
2026-04-13 7:45 ` Michael S. Tsirkin
2026-04-13 9:17 ` David Laight
2026-04-13 12:22 ` Jinhui Guo
2026-04-13 13:33 ` David Laight
2026-04-13 14:14 ` Eugenio Perez Martin
2026-05-13 12:34 ` Michael S. Tsirkin
2026-04-13 10:00 ` Jinhui Guo
2026-05-13 12:10 ` Michael S. Tsirkin [this message]