linux-pci.vger.kernel.org archive mirror
* [PATCH] PCI/MSI: don't try to apply MSI(-X) affinity for single vectors
@ 2017-07-26 20:17 Christoph Hellwig
  2017-08-02 18:24 ` Bjorn Helgaas
From: Christoph Hellwig @ 2017-07-26 20:17 UTC
  To: helgaas; +Cc: linux-pci

We'll always get NULL back in that case, so skip the call and the
resulting warning.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 drivers/pci/msi.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/pci/msi.c b/drivers/pci/msi.c
index 253d92409bb3..19653e5cb68f 100644
--- a/drivers/pci/msi.c
+++ b/drivers/pci/msi.c
@@ -538,7 +538,7 @@ msi_setup_entry(struct pci_dev *dev, int nvec, const struct irq_affinity *affd)
 	struct msi_desc *entry;
 	u16 control;
 
-	if (affd) {
+	if (affd && nvec > 1) {
 		masks = irq_create_affinity_masks(nvec, affd);
 		if (!masks)
 			dev_err(&dev->dev, "can't allocate MSI affinity masks for %d vectors\n",
@@ -679,7 +679,7 @@ static int msix_setup_entries(struct pci_dev *dev, void __iomem *base,
 	struct msi_desc *entry;
 	int ret, i;
 
-	if (affd) {
+	if (affd && nvec > 1) {
 		masks = irq_create_affinity_masks(nvec, affd);
 		if (!masks)
 			dev_err(&dev->dev, "can't allocate MSI-X affinity masks for %d vectors\n",
-- 
2.11.0


* Re: [PATCH] PCI/MSI: don't try to apply MSI(-X) affinity for single vectors
  2017-07-26 20:17 [PATCH] PCI/MSI: don't try to apply MSI(-X) affinity for single vectors Christoph Hellwig
@ 2017-08-02 18:24 ` Bjorn Helgaas
  2017-08-14 20:33   ` Bjorn Helgaas
  2017-08-21 18:39   ` Christoph Hellwig
From: Bjorn Helgaas @ 2017-08-02 18:24 UTC
  To: Christoph Hellwig; +Cc: linux-pci

On Wed, Jul 26, 2017 at 10:17:41PM +0200, Christoph Hellwig wrote:
> We'll always get NULL back in that case, so skip the call and the
> resulting warning.

1. I'm not sure PCI_IRQ_AFFINITY was the right name.  IIUC, an
MSI/MSI-X vector is always basically bound to a CPU, so we always have
affinity.  The only difference with PCI_IRQ_AFFINITY is that instead
of binding them all to the same CPU, we spread them around.  Maybe
PCI_IRQ_SPREAD would be more suggestive.  But whatever, it is what it
is, and I'll expand the changelog to something like this:

  Calling pci_alloc_irq_vectors() with PCI_IRQ_AFFINITY indicates
  that we should spread the MSI vectors around the available CPUs.
  But if we're only allocating one vector, there's nothing to spread
  around.

2. The patch makes sense in that if we're only allocating a single
vector, there's nothing to spread around and there's no need to
allocate a cpumask.  But I haven't figured out why we get a warning.
I assume it's because we're getting NULL back when we call
irq_create_affinity_masks() with nvecs==1, but that only happens if
affv==0 or the zalloc fails, and I don't see why either would be the
case.
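For discussion's sake, the condition the patch introduces can be modeled
as a standalone sketch (not the kernel source; wants_affinity_masks is a
name made up here purely for illustration):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

struct irq_affinity { int pre_vectors; int post_vectors; };

/* Model of the patched test: build affinity masks only when a
 * descriptor was passed AND there is more than one vector, since a
 * single vector leaves nothing to spread around. */
static bool wants_affinity_masks(const struct irq_affinity *affd, int nvec)
{
	return affd && nvec > 1;
}
```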

> Signed-off-by: Christoph Hellwig <hch@lst.de>
> ---
>  drivers/pci/msi.c | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
> 
> diff --git a/drivers/pci/msi.c b/drivers/pci/msi.c
> index 253d92409bb3..19653e5cb68f 100644
> --- a/drivers/pci/msi.c
> +++ b/drivers/pci/msi.c
> @@ -538,7 +538,7 @@ msi_setup_entry(struct pci_dev *dev, int nvec, const struct irq_affinity *affd)
>  	struct msi_desc *entry;
>  	u16 control;
>  
> -	if (affd) {
> +	if (affd && nvec > 1) {
>  		masks = irq_create_affinity_masks(nvec, affd);
>  		if (!masks)
>  			dev_err(&dev->dev, "can't allocate MSI affinity masks for %d vectors\n",
> @@ -679,7 +679,7 @@ static int msix_setup_entries(struct pci_dev *dev, void __iomem *base,
>  	struct msi_desc *entry;
>  	int ret, i;
>  
> -	if (affd) {
> +	if (affd && nvec > 1) {
>  		masks = irq_create_affinity_masks(nvec, affd);
>  		if (!masks)
>  			dev_err(&dev->dev, "can't allocate MSI-X affinity masks for %d vectors\n",
> -- 
> 2.11.0
> 


* Re: [PATCH] PCI/MSI: don't try to apply MSI(-X) affinity for single vectors
  2017-08-02 18:24 ` Bjorn Helgaas
@ 2017-08-14 20:33   ` Bjorn Helgaas
  2017-08-21 18:39   ` Christoph Hellwig
From: Bjorn Helgaas @ 2017-08-14 20:33 UTC
  To: Christoph Hellwig; +Cc: linux-pci

On Wed, Aug 02, 2017 at 01:24:58PM -0500, Bjorn Helgaas wrote:
> On Wed, Jul 26, 2017 at 10:17:41PM +0200, Christoph Hellwig wrote:
> > We'll always get NULL back in that case, so skip the call and the
> > resulting warning.
> ...

> 2. The patch makes sense in that if we're only allocating a single
> vector, there's nothing to spread around and there's no need to
> allocate a cpumask.  But I haven't figured out why we get a warning.
> I assume it's because we're getting NULL back when we call
> irq_create_affinity_masks() with nvecs==1, but that only happens if
> affv==0 or the zalloc fails, and I don't see why either would be the
> case.

Waiting for clarification on this question...

> > Signed-off-by: Christoph Hellwig <hch@lst.de>
> > ---
> >  drivers/pci/msi.c | 4 ++--
> >  1 file changed, 2 insertions(+), 2 deletions(-)
> > 
> > diff --git a/drivers/pci/msi.c b/drivers/pci/msi.c
> > index 253d92409bb3..19653e5cb68f 100644
> > --- a/drivers/pci/msi.c
> > +++ b/drivers/pci/msi.c
> > @@ -538,7 +538,7 @@ msi_setup_entry(struct pci_dev *dev, int nvec, const struct irq_affinity *affd)
> >  	struct msi_desc *entry;
> >  	u16 control;
> >  
> > -	if (affd) {
> > +	if (affd && nvec > 1) {
> >  		masks = irq_create_affinity_masks(nvec, affd);
> >  		if (!masks)
> >  			dev_err(&dev->dev, "can't allocate MSI affinity masks for %d vectors\n",
> > @@ -679,7 +679,7 @@ static int msix_setup_entries(struct pci_dev *dev, void __iomem *base,
> >  	struct msi_desc *entry;
> >  	int ret, i;
> >  
> > -	if (affd) {
> > +	if (affd && nvec > 1) {
> >  		masks = irq_create_affinity_masks(nvec, affd);
> >  		if (!masks)
> >  			dev_err(&dev->dev, "can't allocate MSI-X affinity masks for %d vectors\n",
> > -- 
> > 2.11.0
> > 


* Re: [PATCH] PCI/MSI: don't try to apply MSI(-X) affinity for single vectors
  2017-08-02 18:24 ` Bjorn Helgaas
  2017-08-14 20:33   ` Bjorn Helgaas
@ 2017-08-21 18:39   ` Christoph Hellwig
  2017-08-26  0:02     ` Bjorn Helgaas
From: Christoph Hellwig @ 2017-08-21 18:39 UTC
  To: Bjorn Helgaas; +Cc: Christoph Hellwig, linux-pci

On Wed, Aug 02, 2017 at 01:24:58PM -0500, Bjorn Helgaas wrote:
> On Wed, Jul 26, 2017 at 10:17:41PM +0200, Christoph Hellwig wrote:
> > We'll always get NULL back in that case, so skip the call and the
> > resulting warning.
> 
> 1. I'm not sure PCI_IRQ_AFFINITY was the right name.  IIUC, an
> MSI/MSI-X vector is always basically bound to a CPU,

This will depend on your architecture.

> so we always have
> affinity.  The only difference with PCI_IRQ_AFFINITY is that instead
> of binding them all to the same CPU, we spread them around.  Maybe
> PCI_IRQ_SPREAD would be more suggestive.  But whatever, it is what it
> is, and I'll expand the changelog to something like this:

Yes, that might be a better name.  We don't have that many callers
yet, so we could probably still change it.

> 
>   Calling pci_alloc_irq_vectors() with PCI_IRQ_AFFINITY indicates
>   that we should spread the MSI vectors around the available CPUs.
>   But if we're only allocating one vector, there's nothing to spread
>   around.

Ok.

> 2. The patch makes sense in that if we're only allocating a single
> vector, there's nothing to spread around and there's no need to
> allocate a cpumask.  But I haven't figured out why we get a warning.
> I assume it's because we're getting NULL back when we call
> irq_create_affinity_masks() with nvecs==1, but that only happens if
> affv==0 or the zalloc fails, and I don't see why either would be the
> case.

It happens for the !CONFIG_SMP case.  It also happens when pre_vectors
or post_vectors reduces the affinity vector count to 1 inside
irq_create_affinity_masks, so maybe this patch isn't the best approach;
the warning should either be moved into irq_create_affinity_masks or
just removed entirely.
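The NULL-return paths described above can be sketched roughly like this
(a simplified standalone model, not the kernel implementation; the
reservation arithmetic and the exact threshold are assumptions about the
rough shape of the logic, and CONFIG_SMP is toggled by hand here):

```c
#include <assert.h>
#include <stddef.h>
#include <stdlib.h>

#define CONFIG_SMP 1	/* comment out to exercise the UP stub path */

struct irq_affinity { int pre_vectors; int post_vectors; };

/*
 * Rough model of irq_create_affinity_masks(): it can return NULL not
 * only on allocation failure but also when there is nothing left to
 * spread, which is why a dev_err() on a NULL return misfires.
 */
static int *create_affinity_masks(int nvecs, const struct irq_affinity *affd)
{
#ifndef CONFIG_SMP
	(void)nvecs; (void)affd;
	return NULL;	/* UP stub: spreading is meaningless */
#else
	int affv = nvecs - affd->pre_vectors - affd->post_vectors;

	if (affv <= 0)	/* pre/post reservations consumed every vector */
		return NULL;
	return calloc(nvecs, sizeof(int));
#endif
}
```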


* Re: [PATCH] PCI/MSI: don't try to apply MSI(-X) affinity for single vectors
  2017-08-21 18:39   ` Christoph Hellwig
@ 2017-08-26  0:02     ` Bjorn Helgaas
From: Bjorn Helgaas @ 2017-08-26  0:02 UTC
  To: Christoph Hellwig; +Cc: linux-pci

On Mon, Aug 21, 2017 at 08:39:05PM +0200, Christoph Hellwig wrote:
> On Wed, Aug 02, 2017 at 01:24:58PM -0500, Bjorn Helgaas wrote:
> > On Wed, Jul 26, 2017 at 10:17:41PM +0200, Christoph Hellwig wrote:
> > > We'll always get NULL back in that case, so skip the call and the
> > > resulting warning.
> > 
> > 1. I'm not sure PCI_IRQ_AFFINITY was the right name.  IIUC, an
> > MSI/MSI-X vector is always basically bound to a CPU,
> 
> This will depend on your architecture.
> 
> > so we always have
> > affinity.  The only difference with PCI_IRQ_AFFINITY is that instead
> > of binding them all to the same CPU, we spread them around.  Maybe
> > PCI_IRQ_SPREAD would be more suggestive.  But whatever, it is what it
> > is, and I'll expand the changelog to something like this:
> 
> Yes, that might be a better name.  We don't have that many callers
> yet, so we could probably still change it.
> 
> > 
> >   Calling pci_alloc_irq_vectors() with PCI_IRQ_AFFINITY indicates
> >   that we should spread the MSI vectors around the available CPUs.
> >   But if we're only allocating one vector, there's nothing to spread
> >   around.
> 
> Ok.
> 
> > 2. The patch makes sense in that if we're only allocating a single
> > vector, there's nothing to spread around and there's no need to
> > allocate a cpumask.  But I haven't figured out why we get a warning.
> > I assume it's because we're getting NULL back when we call
> > irq_create_affinity_masks() with nvecs==1, but that only happens if
> > affv==0 or the zalloc fails, and I don't see why either would be the
> > case.
> 
> It happens for the !CONFIG_SMP case.  It also happens when pre_vectors
> or post_vectors reduces the affinity vector count to 1 inside
> irq_create_affinity_masks, so maybe this patch isn't the best approach;
> the warning should either be moved into irq_create_affinity_masks or
> just removed entirely.

Oh, thanks, I totally missed the !CONFIG_SMP case, sorry about that.

I applied the follow-on patch, which I think obsoletes this one.

