* [PATCH 0/2] nvme pci: two fixes on nvme_setup_irqs
@ 2018-12-26 10:37 Ming Lei
  2018-12-26 10:37 ` [PATCH 1/2] nvme pci: fix nvme_setup_irqs() Ming Lei
  2018-12-26 10:37 ` [PATCH 2/2] nvme pci: try to allocate multiple irq vectors again in case of -EINVAL Ming Lei
  0 siblings, 2 replies; 12+ messages in thread
From: Ming Lei @ 2018-12-26 10:37 UTC (permalink / raw)


Hi,

The first patch fixes the case in which pci_alloc_irq_vectors_affinity()
returns -ENOSPC.

The second patch fixes the case in which pci_alloc_irq_vectors_affinity()
returns -EINVAL.


Ming Lei (2):
  nvme pci: fix nvme_setup_irqs()
  nvme pci: try to allocate multiple irq vectors again in case of
    -EINVAL

 drivers/nvme/host/pci.c | 26 ++++++++++++++------------
 1 file changed, 14 insertions(+), 12 deletions(-)

Cc: Keith Busch <keith.busch at intel.com>
Cc: Jens Axboe <axboe at fb.com>

-- 
2.9.5

^ permalink raw reply	[flat|nested] 12+ messages in thread

* [PATCH 1/2] nvme pci: fix nvme_setup_irqs()
  2018-12-26 10:37 [PATCH 0/2] nvme pci: two fixes on nvme_setup_irqs Ming Lei
@ 2018-12-26 10:37 ` Ming Lei
  2018-12-26 10:37 ` [PATCH 2/2] nvme pci: try to allocate multiple irq vectors again in case of -EINVAL Ming Lei
  1 sibling, 0 replies; 12+ messages in thread
From: Ming Lei @ 2018-12-26 10:37 UTC (permalink / raw)


When pci_alloc_irq_vectors_affinity() returns -ENOSPC, we retry the
allocation and still ask for multiple irq vectors, and in that retry
the irq queue count actually covers the admin queue. The code doesn't
account for that, so the number of allocated irq vectors may end up
equal to the sum of io_queues[HCTX_TYPE_DEFAULT] and
io_queues[HCTX_TYPE_READ], leaving no vector for the admin queue. This
breaks nvme_pci_map_queues() and triggers a warning from
pci_irq_get_affinity().

The irq queue count has to cover the admin queue; this patch makes
that explicit in nvme_calc_io_queues().

Cc: Keith Busch <keith.busch at intel.com>
Cc: Jens Axboe <axboe at fb.com>
Signed-off-by: Ming Lei <ming.lei at redhat.com>
---
 drivers/nvme/host/pci.c | 21 +++++++++++++--------
 1 file changed, 13 insertions(+), 8 deletions(-)
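
For illustration, here is a minimal user-space sketch of the accounting
this patch introduces; it mirrors the patched nvme_calc_io_queues()
logic, but as a standalone toy rather than the kernel code:

#include <stdio.h>

static void calc_io_queues(unsigned int irq_queues,
			   unsigned int write_queues,
			   unsigned int *def_q, unsigned int *read_q)
{
	unsigned int w = write_queues;

	/* one vector shared between the admin and a single io queue */
	if (irq_queues <= 2) {
		*def_q = 1;
		*read_q = 0;
		return;
	}

	/* leave room for one read queue and one admin vector */
	if (w >= irq_queues)
		w = irq_queues - 2;

	if (!w) {
		*def_q = irq_queues - 1;	/* one vector kept for admin */
		*read_q = 0;
	} else {
		*def_q = w;
		*read_q = irq_queues - w - 1;
	}
}

int main(void)
{
	unsigned int def_q, read_q;

	/* e.g. 8 vectors, write_queues = 3: 3 + 4 + 1 (admin) == 8 */
	calc_io_queues(8, 3, &def_q, &read_q);
	printf("default=%u read=%u admin=1\n", def_q, read_q);
	return 0;
}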

diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
index 5a0bf6a24d50..584ea7a57122 100644
--- a/drivers/nvme/host/pci.c
+++ b/drivers/nvme/host/pci.c
@@ -2028,14 +2028,18 @@ static int nvme_setup_host_mem(struct nvme_dev *dev)
 	return ret;
 }
 
+/* irq_queues covers admin queue */
 static void nvme_calc_io_queues(struct nvme_dev *dev, unsigned int irq_queues)
 {
 	unsigned int this_w_queues = write_queues;
 
+	WARN_ON(!irq_queues);
+
 	/*
-	 * Setup read/write queue split
+	 * Setup read/write queue split, assign admin queue one independent
+	 * irq vector if irq_queues is > 1.
 	 */
-	if (irq_queues == 1) {
+	if (irq_queues <= 2) {
 		dev->io_queues[HCTX_TYPE_DEFAULT] = 1;
 		dev->io_queues[HCTX_TYPE_READ] = 0;
 		return;
@@ -2043,21 +2047,21 @@ static void nvme_calc_io_queues(struct nvme_dev *dev, unsigned int irq_queues)
 
 	/*
 	 * If 'write_queues' is set, ensure it leaves room for at least
-	 * one read queue
+	 * one read queue and one admin queue
 	 */
 	if (this_w_queues >= irq_queues)
-		this_w_queues = irq_queues - 1;
+		this_w_queues = irq_queues - 2;
 
 	/*
 	 * If 'write_queues' is set to zero, reads and writes will share
 	 * a queue set.
 	 */
 	if (!this_w_queues) {
-		dev->io_queues[HCTX_TYPE_DEFAULT] = irq_queues;
+		dev->io_queues[HCTX_TYPE_DEFAULT] = irq_queues - 1;
 		dev->io_queues[HCTX_TYPE_READ] = 0;
 	} else {
 		dev->io_queues[HCTX_TYPE_DEFAULT] = this_w_queues;
-		dev->io_queues[HCTX_TYPE_READ] = irq_queues - this_w_queues;
+		dev->io_queues[HCTX_TYPE_READ] = irq_queues - this_w_queues - 1;
 	}
 }
 
@@ -2082,7 +2086,7 @@ static int nvme_setup_irqs(struct nvme_dev *dev, unsigned int nr_io_queues)
 		this_p_queues = nr_io_queues - 1;
 		irq_queues = 1;
 	} else {
-		irq_queues = nr_io_queues - this_p_queues;
+		irq_queues = nr_io_queues - this_p_queues + 1;
 	}
 	dev->io_queues[HCTX_TYPE_POLL] = this_p_queues;
 
@@ -2102,8 +2106,9 @@ static int nvme_setup_irqs(struct nvme_dev *dev, unsigned int nr_io_queues)
 		 * If we got a failure and we're down to asking for just
 		 * 1 + 1 queues, just ask for a single vector. We'll share
 		 * that between the single IO queue and the admin queue.
+		 * Otherwise, we assign one independent vector to admin queue.
 		 */
-		if (result >= 0 && irq_queues > 1)
+		if (irq_queues > 1)
 			irq_queues = irq_sets[0] + irq_sets[1] + 1;
 
 		result = pci_alloc_irq_vectors_affinity(pdev, irq_queues,
-- 
2.9.5

^ permalink raw reply related	[flat|nested] 12+ messages in thread

* [PATCH 2/2] nvme pci: try to allocate multiple irq vectors again in case of -EINVAL
  2018-12-26 10:37 [PATCH 0/2] nvme pci: two fixes on nvme_setup_irqs Ming Lei
  2018-12-26 10:37 ` [PATCH 1/2] nvme pci: fix nvme_setup_irqs() Ming Lei
@ 2018-12-26 10:37 ` Ming Lei
  2018-12-26 18:20   ` Christoph Hellwig
  1 sibling, 1 reply; 12+ messages in thread
From: Ming Lei @ 2018-12-26 10:37 UTC (permalink / raw)


It is observed on QEMU that pci_alloc_irq_vectors_affinity() may
return -EINVAL when the requested number is too big (such as 64).

However, the allocation can be done successfully if we ask for
a smaller number, such as 63.

So reduce the queue count and retry the allocation in case of -EINVAL.

Cc: Keith Busch <keith.busch at intel.com>
Cc: Jens Axboe <axboe at fb.com>
Signed-off-by: Ming Lei <ming.lei at redhat.com>
---
 drivers/nvme/host/pci.c | 5 +----
 1 file changed, 1 insertion(+), 4 deletions(-)
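
For illustration, a small user-space sketch of the retry loop after
this patch. alloc_vectors() is a toy stand-in for
pci_alloc_irq_vectors_affinity() that mimics the QEMU behaviour above;
the 63/64 threshold is just the observed example:

#include <errno.h>
#include <stdio.h>

/* toy: fail with -EINVAL when asked for more than 63 vectors */
static int alloc_vectors(unsigned int nr)
{
	return nr > 63 ? -EINVAL : (int)nr;
}

static int setup_irqs(unsigned int irq_queues)
{
	int result;

	do {
		result = alloc_vectors(irq_queues);
		/* -EINVAL is now retried just like -ENOSPC */
		if (result == -ENOSPC || result == -EINVAL) {
			if (!--irq_queues)
				return result;
			continue;
		} else if (result <= 0)
			return -EIO;
		break;
	} while (1);

	return result;
}

int main(void)
{
	/* 64 fails once with -EINVAL, the retry with 63 succeeds */
	printf("allocated %d vectors\n", setup_irqs(64));
	return 0;
}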

diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
index 584ea7a57122..d839bbe408c3 100644
--- a/drivers/nvme/host/pci.c
+++ b/drivers/nvme/host/pci.c
@@ -2121,14 +2121,11 @@ static int nvme_setup_irqs(struct nvme_dev *dev, unsigned int nr_io_queues)
 		 * to decrease our ask. If we get EINVAL, the platform
 		 * likely does not. Back down to ask for just one vector.
 		 */
-		if (result == -ENOSPC) {
+		if (result == -ENOSPC || result == -EINVAL) {
 			irq_queues--;
 			if (!irq_queues)
 				return result;
 			continue;
-		} else if (result == -EINVAL) {
-			irq_queues = 1;
-			continue;
 		} else if (result <= 0)
 			return -EIO;
 		break;
-- 
2.9.5

^ permalink raw reply related	[flat|nested] 12+ messages in thread

* [PATCH 2/2] nvme pci: try to allocate multiple irq vectors again in case of -EINVAL
  2018-12-26 10:37 ` [PATCH 2/2] nvme pci: try to allocate multiple irq vectors again in case of -EINVAL Ming Lei
@ 2018-12-26 18:20   ` Christoph Hellwig
  2018-12-27  8:21       ` Ming Lei
  0 siblings, 1 reply; 12+ messages in thread
From: Christoph Hellwig @ 2018-12-26 18:20 UTC (permalink / raw)


On Wed, Dec 26, 2018 at 06:37:55PM +0800, Ming Lei wrote:
> It is observed on QEMU that pci_alloc_irq_vectors_affinity() may
> return -EINVAL when the requested number is too big (such as 64).

Which is not how this API is supposed to work and documented to work.

We need to fix pci_alloc_irq_vectors_affinity to not return a spurious
error and just return the allocated number of vectors instead of
hacking around that in drivers.

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [PATCH 2/2] nvme pci: try to allocate multiple irq vectors again in case of -EINVAL
@ 2018-12-27  8:21       ` Ming Lei
  0 siblings, 0 replies; 12+ messages in thread
From: Ming Lei @ 2018-12-27  8:21 UTC (permalink / raw)
  To: Christoph Hellwig, Bjorn Helgaas
  Cc: Keith Busch, Jens Axboe, linux-nvme, linux-pci

On Wed, Dec 26, 2018 at 07:20:27PM +0100, Christoph Hellwig wrote:
> On Wed, Dec 26, 2018 at 06:37:55PM +0800, Ming Lei wrote:
> > It is observed on QEMU that pci_alloc_irq_vectors_affinity() may
> > return -EINVAL when the requested number is too big (such as 64).
> 
> Which is not how this API is supposed to work and documented to work.
> 
> We need to fix pci_alloc_irq_vectors_affinity to not return a spurious
> error and just return the allocated number of vectors instead of
> hacking around that in drivers.

Yeah, you are right.

The issue is that the QEMU nvme-pci device is MSI-X capable only, and
has no MSI capability.

__pci_enable_msix_range() actually returns -ENOSPC, but __pci_enable_msi_range()
returns -EINVAL because dev->msi_cap is zero.
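
For context, the relevant fallback order inside
pci_alloc_irq_vectors_affinity() looks roughly like this (paraphrased,
not the exact code):

	if (flags & PCI_IRQ_MSIX) {
		vecs = __pci_enable_msix_range(dev, NULL, min_vecs,
					       max_vecs, affd);
		if (vecs > 0)
			return vecs;
		/* here: vecs == -ENOSPC for the QEMU nvme device */
	}

	if (flags & PCI_IRQ_MSI) {
		vecs = __pci_enable_msi_range(dev, min_vecs, max_vecs, affd);
		if (vecs > 0)
			return vecs;
		/* here: vecs is overwritten with -EINVAL (msi_cap == 0) */
	}

	...

	return vecs;	/* the caller sees -EINVAL, the -ENOSPC is lost */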

Maybe we need the following fix?

diff --git a/drivers/pci/msi.c b/drivers/pci/msi.c
index 265ed3e4c920..b0bf260dc154 100644
--- a/drivers/pci/msi.c
+++ b/drivers/pci/msi.c
@@ -1186,7 +1186,7 @@ int pci_alloc_irq_vectors_affinity(struct pci_dev *dev, unsigned int min_vecs,
                        return vecs;
        }
 
-       if (flags & PCI_IRQ_MSI) {
+       if ((flags & PCI_IRQ_MSI) && dev->msi_cap) {
                vecs = __pci_enable_msi_range(dev, min_vecs, max_vecs, affd);
                if (vecs > 0)
                        return vecs;


Thanks,
Ming

^ permalink raw reply related	[flat|nested] 12+ messages in thread

* Re: [PATCH 2/2] nvme pci: try to allocate multiple irq vectors again in case of -EINVAL
@ 2018-12-27 13:08         ` Christoph Hellwig
  0 siblings, 0 replies; 12+ messages in thread
From: Christoph Hellwig @ 2018-12-27 13:08 UTC (permalink / raw)
  To: Ming Lei
  Cc: Christoph Hellwig, Bjorn Helgaas, Keith Busch, Jens Axboe,
	linux-nvme, linux-pci

On Thu, Dec 27, 2018 at 04:21:38PM +0800, Ming Lei wrote:
> On Wed, Dec 26, 2018 at 07:20:27PM +0100, Christoph Hellwig wrote:
> > On Wed, Dec 26, 2018 at 06:37:55PM +0800, Ming Lei wrote:
> > It is observed on QEMU that pci_alloc_irq_vectors_affinity() may
> > return -EINVAL when the requested number is too big (such as 64).
> > 
> > Which is not how this API is supposed to work and documented to work.
> > 
> > We need to fix pci_alloc_irq_vectors_affinity to not return a spurious
> > error and just return the allocated number of vectors instead of
> > hacking around that in drivers.
> 
> Yeah, you are right.
> 
> > The issue is that the QEMU nvme-pci device is MSI-X capable only, and
> > has no MSI capability.
> 
> __pci_enable_msix_range() actually returns -ENOSPC, but __pci_enable_msi_range()
> returns -EINVAL because dev->msi_cap is zero.
> 
> Maybe we need the following fix?

Should it matter?  We still get a negative vecs back, and still fall
back to the next option.  Unless there are no irqs available at all
for the selected types, pci_alloc_irq_vectors_affinity should never
return an error.

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [PATCH 2/2] nvme pci: try to allocate multiple irq vectors again in case of -EINVAL
@ 2018-12-27 22:16           ` Ming Lei
  0 siblings, 0 replies; 12+ messages in thread
From: Ming Lei @ 2018-12-27 22:16 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: Jens Axboe, linux-pci, linux-nvme, Keith Busch, Bjorn Helgaas

On Thu, Dec 27, 2018 at 02:08:34PM +0100, Christoph Hellwig wrote:
> On Thu, Dec 27, 2018 at 04:21:38PM +0800, Ming Lei wrote:
> > On Wed, Dec 26, 2018 at 07:20:27PM +0100, Christoph Hellwig wrote:
> > > On Wed, Dec 26, 2018 at 06:37:55PM +0800, Ming Lei wrote:
> > > > It is observed on QEMU that pci_alloc_irq_vectors_affinity() may
> > > > return -EINVAL when the requested number is too big (such as 64).
> > > 
> > > Which is not how this API is supposed to work and documented to work.
> > > 
> > > We need to fix pci_alloc_irq_vectors_affinity to not return a spurious
> > > error and just return the allocated number of vectors instead of
> > > hacking around that in drivers.
> > 
> > Yeah, you are right.
> > 
> > The issue is that the QEMU nvme-pci device is MSI-X capable only, and
> > has no MSI capability.
> > 
> > __pci_enable_msix_range() actually returns -ENOSPC, but __pci_enable_msi_range()
> > returns -EINVAL because dev->msi_cap is zero.
> > 
> > Maybe we need the following fix?
> 
> Should it matter?  We still get a negative vecs back, and still fall
> back to the next option.  Unless there are no irqs available at all
> for the selected types, pci_alloc_irq_vectors_affinity should never
> return an error.

The patch in my last email does fix this issue.

In this case, the NVMe device's MSI-X table has 64 entries, so
__pci_enable_msix_range() returns -ENOSPC when we ask for 65.

However, the subsequent __pci_enable_msi_range() returns -EINVAL
because the NVMe device isn't MSI capable, and that error is what
pci_alloc_irq_vectors_affinity() finally returns to the NVMe driver.
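
Condensed, the sequence on this QEMU setup is (illustrative trace, not
actual log output):

	pci_alloc_irq_vectors_affinity(pdev, min_vecs, 65, ...)
	  __pci_enable_msix_range() -> -ENOSPC  (only 64 table entries)
	  __pci_enable_msi_range()  -> -EINVAL  (dev->msi_cap == 0)
	  => -EINVAL is what the NVMe driver sees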

Of course, -EINVAL makes a difference because the current code only
tries to allocate one irq vector in this case. And it shouldn't be
returned from pci_alloc_irq_vectors_affinity() at all, given there are
enough MSI-X entries for the fallback, right?

Thanks,
Ming

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [PATCH 2/2] nvme pci: try to allocate multiple irq vectors again in case of -EINVAL
@ 2018-12-31 21:51           ` Bjorn Helgaas
  0 siblings, 0 replies; 12+ messages in thread
From: Bjorn Helgaas @ 2018-12-31 21:51 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: Ming Lei, Jens Axboe, linux-pci, linux-nvme, Keith Busch

On Thu, Dec 27, 2018 at 02:08:34PM +0100, Christoph Hellwig wrote:
> On Thu, Dec 27, 2018 at 04:21:38PM +0800, Ming Lei wrote:
> > On Wed, Dec 26, 2018 at 07:20:27PM +0100, Christoph Hellwig wrote:
> > > On Wed, Dec 26, 2018 at 06:37:55PM +0800, Ming Lei wrote:
> > > > > It is observed on QEMU that pci_alloc_irq_vectors_affinity() may
> > > > > return -EINVAL when the requested number is too big (such as 64).
> > > 
> > > Which is not how this API is supposed to work and documented to work.
> > > 
> > > We need to fix pci_alloc_irq_vectors_affinity to not return a spurious
> > > error and just return the allocated number of vectors instead of
> > > hacking around that in drivers.
> > 
> > Yeah, you are right.
> > 
> > The issue is that the QEMU nvme-pci device is MSI-X capable only, and
> > has no MSI capability.
> > 
> > __pci_enable_msix_range() actually returns -ENOSPC, but __pci_enable_msi_range()
> > returns -EINVAL because dev->msi_cap is zero.
> > 
> > Maybe we need the following fix?
> 
> Should it matter?  We still get a negative vecs back, and still fall
> back to the next option.  

I'm not sure how it matters either, since
pci_alloc_irq_vectors_affinity() will fail either way.  It *would* be
nice to return the correct error in case the caller uses it to emit a
message.

But if the caller wants to use -ENOSPC to reduce @min_vecs and try
again, that sounds like an incorrect use of the interface -- the
caller should have just used the smaller @min_vecs the first time
around.
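
For reference, the usage the documentation envisions looks roughly
like this (a sketch; example_setup() is hypothetical, and the literal
65 is just this thread's example value):

	static int example_setup(struct pci_dev *pdev)
	{
		const struct irq_affinity affd = { .pre_vectors = 1 };

		/*
		 * The core falls back MSI-X -> MSI -> legacy on its own,
		 * and should fail only if even @min_vecs (here 1) vectors
		 * cannot be allocated, in which case -ENOSPC comes back.
		 */
		return pci_alloc_irq_vectors_affinity(pdev, 1, 65,
				PCI_IRQ_ALL_TYPES | PCI_IRQ_AFFINITY, &affd);
	}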

> Unless there are no irqs available at all
> for the selected types, pci_alloc_irq_vectors_affinity should never
> return an error.

I don't quite understand this last sentence.  If @min_vecs == 5 and
the device only supports 4 MSI-X and 4 MSI vectors, the function
comment says we should fail with -ENOSPC.

Bjorn

^ permalink raw reply	[flat|nested] 12+ messages in thread

end of thread, other threads:[~2018-12-31 21:51 UTC | newest]

Thread overview: 12+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2018-12-26 10:37 [PATCH 0/2] nvme pci: two fixes on nvme_setup_irqs Ming Lei
2018-12-26 10:37 ` [PATCH 1/2] nvme pci: fix nvme_setup_irqs() Ming Lei
2018-12-26 10:37 ` [PATCH 2/2] nvme pci: try to allocate multiple irq vectors again in case of -EINVAL Ming Lei
2018-12-26 18:20   ` Christoph Hellwig
2018-12-27  8:21     ` Ming Lei
2018-12-27 13:08       ` Christoph Hellwig
2018-12-27 22:16         ` Ming Lei
2018-12-31 21:51         ` Bjorn Helgaas
