Re: [PATCH] Fix race bug in nvme_poll_irqdisable()
From: Bjorn Helgaas @ 2026-05-13 18:27 UTC
To: Sungwoo Kim
Cc: Keith Busch, Jens Axboe, Christoph Hellwig, Sagi Grimberg,
	Chao Shi, Weidong Zhu, Dave Tian, linux-nvme, linux-kernel
On Sat, Mar 07, 2026 at 02:46:36PM -0500, Sungwoo Kim wrote:
> In the following scenario, pdev can be disabled by (2) between (1) and
> (3). This sets pdev->msix_enabled = 0. Then, pci_irq_vector() returns
> an MSI-X IRQ (>15) for (1) but an INTx IRQ (<=15) for (3). This
> triggers an "Unbalanced enable" IRQ warning, because (3) tries to
> enable an INTx IRQ that was never disabled.
>
> To fix this, save the IRQ number in a local variable and ensure that
> disable_irq() and enable_irq() operate on the same IRQ number. Even if
> pci_free_irq_vectors() frees the IRQ concurrently, disable_irq() and
> enable_irq() on a stale IRQ number are still valid and safe, and the
> depth accounting remains balanced.
>
> task 1:
> nvme_poll_irqdisable()
>   disable_irq(pci_irq_vector(pdev, nvmeq->cq_vector))  ...(1)
>   enable_irq(pci_irq_vector(pdev, nvmeq->cq_vector))   ...(3)
>
> task 2:
> nvme_reset_work()
>   nvme_dev_disable()
>     pdev->msix_enabled = 0;                            ...(2)
> ...
>  static void nvme_poll_irqdisable(struct nvme_queue *nvmeq)
>  {
>  	struct pci_dev *pdev = to_pci_dev(nvmeq->dev->dev);
> +	int irq;
>
>  	WARN_ON_ONCE(test_bit(NVMEQ_POLLED, &nvmeq->flags));
>
> -	disable_irq(pci_irq_vector(pdev, nvmeq->cq_vector));
> +	irq = pci_irq_vector(pdev, nvmeq->cq_vector);
> +	disable_irq(irq);
>  	spin_lock(&nvmeq->cq_poll_lock);
>  	nvme_poll_cq(nvmeq, NULL);
>  	spin_unlock(&nvmeq->cq_poll_lock);
> -	enable_irq(pci_irq_vector(pdev, nvmeq->cq_vector));
> +	enable_irq(irq);
An internal run of sashiko complained about this, and I think it's
right. As the commit log mentions, the cached IRQ number is stale if
nvme_reset_work() frees all the vectors between (1) and (3). It's
likely the pci_alloc_irq_vectors() in nvme_pci_enable() will get the
same IRQ number, but it would be a coincidence, and it doesn't feel
like a good idea to rely on it.
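
For background, the reason the two calls can return different numbers
is the fallback in pci_irq_vector(). Paraphrasing from my reading of a
recent tree (simplified sketch, not verbatim):

int pci_irq_vector(struct pci_dev *dev, unsigned int nr)
{
	unsigned int irq;

	/* With MSI/MSI-X disabled, fall back to the legacy INTx line */
	if (!dev->msi_enabled && !dev->msix_enabled)
		return !nr ? dev->irq : -EINVAL;

	irq = msi_get_virq(&dev->dev, nr);
	return irq ? irq : -EINVAL;
}

So once nvme_dev_disable() clears msix_enabled, the same
(pdev, cq_vector) lookup silently switches from the MSI-X Linux IRQ to
dev->irq (or to -EINVAL for the non-zero vectors).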
First sashiko review:

  This commit caches the IRQ number in nvme_poll_irqdisable() to
  ensure disable_irq() and enable_irq() operate on the same IRQ
  number, preventing an unbalanced enable warning if the device is
  concurrently disabled.

  If pci_free_irq_vectors() frees the IRQ concurrently, is it possible
  for the IRQ number to be reallocated to a completely different
  device before enable_irq() is called? Could this cause an unbalanced
  enable or incorrectly unmask the interrupt for the new device?
Second sashiko review:

  The commit caches the return value of pci_irq_vector() into a local
  variable to ensure disable_irq() and enable_irq() operate on the
  same IRQ number, preventing an unbalanced enable warning when
  pdev->msix_enabled is cleared concurrently.

  If pdev->msix_enabled is cleared concurrently and nvmeq->cq_vector >
  0, pci_irq_vector() will return -EINVAL. Since disable_irq() takes
  an unsigned int, does this result in passing a very large unsigned
  number (like 0xffffffea) to the IRQ subsystem?
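
That's a fair point: -EINVAL (-22) becomes 0xffffffea as an unsigned
int. A minimal guard would at least keep a negative errno out of the
IRQ core; sketch only, untested, and the early return is my
assumption, not something the patch proposes:

static void nvme_poll_irqdisable(struct nvme_queue *nvmeq)
{
	struct pci_dev *pdev = to_pci_dev(nvmeq->dev->dev);
	int irq;

	WARN_ON_ONCE(test_bit(NVMEQ_POLLED, &nvmeq->flags));

	irq = pci_irq_vector(pdev, nvmeq->cq_vector);
	if (irq < 0)	/* vector gone, e.g. -EINVAL after nvme_dev_disable() */
		return;

	disable_irq(irq);
	spin_lock(&nvmeq->cq_poll_lock);
	nvme_poll_cq(nvmeq, NULL);
	spin_unlock(&nvmeq->cq_poll_lock);
	enable_irq(irq);
}

This doesn't close the reallocation race discussed below, but it does
avoid handing 0xffffffea to disable_irq().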
  The commit message notes that operating on a stale IRQ number is
  valid and safe. However, if pci_free_irq_vectors() frees the MSI-X
  vector concurrently, couldn't this cached irq number be recycled and
  reallocated to a completely different device before enable_irq(irq)
  executes? If the new device had explicitly disabled its IRQ, does
  this code inadvertently enable it, breaking the other device's
  synchronization?
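
For reference, the warning everyone is trying to avoid comes from
__enable_irq() in kernel/irq/manage.c, which fires when the
descriptor's disable depth is already zero. Roughly (abbreviated from
memory; I've dropped the suspended-IRQ handling):

static void __enable_irq(struct irq_desc *desc)
{
	switch (desc->depth) {
	case 0:
		WARN(1, KERN_WARNING "Unbalanced enable for IRQ %d\n",
		     irq_desc_get_irq(desc));
		break;
	case 1:
		/* depth 1 -> 0: actually unmask and resend pending irqs */
		irq_enable(desc);
		check_irq_resend(desc, false);
		fallthrough;
	default:
		desc->depth--;
	}
}

Since the depth lives in the irq_desc, a freed-and-reallocated IRQ
number means enable_irq() can land on a fresh descriptor whose depth
is zero, which either triggers this warning or, as the review asks,
unmasks an interrupt the new owner had deliberately disabled.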