From: keith.busch@intel.com (Keith Busch)
Subject: [PATCHv2 2/4] nvme-pci: Distribute io queue types after creation
Date: Fri, 4 Jan 2019 11:35:27 -0700 [thread overview]
Message-ID: <20190104183527.GB12508@localhost.localdomain> (raw)
In-Reply-To: <20190104181726.GA25730@lst.de>
On Fri, Jan 04, 2019 at 07:17:26PM +0100, Christoph Hellwig wrote:
> On Fri, Jan 04, 2019 at 08:53:24AM -0700, Keith Busch wrote:
> > On Fri, Jan 04, 2019 at 03:21:07PM +0800, Ming Lei wrote:
> > > Thinking about the patch further: after pci_alloc_irq_vectors_affinity()
> > > returns, the queue count for non-polled queues can't be changed at will,
> > > because we have to make sure all CPUs are spread across each queue type,
> > > and that mapping has already been fixed by pci_alloc_irq_vectors_affinity().
> > >
> > > So it looks like the approach in this patch may be wrong.
> >
> > That's a bit of a problem, and not a new one. We have always had to
> > allocate vectors before creating IRQ-driven CQs, but the vector affinity
> > is set before we know whether the queue pair can be created. Should
> > queue creation fail, some CPUs may be left without a queue.
> >
> > Does this mean the pci msi API is wrong? It seems like we'd need to
> > initially allocate vectors without PCI_IRQ_AFFINITY, then have the
> > kernel set affinity only after completing the queue-pair setup.
>
> We can't just easily do that, as we want to allocate the memory for
> the descriptors on the correct node. But we can just free the
> vectors and try again if we have to.
I've come to the same realization that switching modes after allocation
can't be easily accommodated. Teardown and retry with a reduced queue
count looks like the easiest solution.