linuxppc-dev.lists.ozlabs.org archive mirror
From: Michael Ellerman <michael@ellerman.id.au>
To: Mike Qiu <qiudayu@linux.vnet.ibm.com>
Cc: tglx@linutronix.de, linuxppc-dev@lists.ozlabs.org,
	linux-kernel@vger.kernel.org
Subject: Re: [PATCH 0/3] Enable multiple MSI feature in pSeries
Date: Mon, 04 Feb 2013 16:56:15 +1100
Message-ID: <1359957375.25414.31.camel@concordia>
In-Reply-To: <510F2FDB.6020303@linux.vnet.ibm.com>

On Mon, 2013-02-04 at 11:49 +0800, Mike Qiu wrote:
> > On Tue, 2013-01-15 at 15:38 +0800, Mike Qiu wrote:
> > > Currently the multiple MSI feature hasn't been enabled in pSeries;
> > > these patches try to enable it.
> > Hi Mike,
> > 
> > > These patches have been tested using the ipr driver, and the driver
> > > patch was written by Wen Xiong <wenxiong@linux.vnet.ibm.com>:
> > So who wrote these patches? Normally we would expect the original author
> > to post the patches if at all possible.
> Hi Michael
> 
> These multiple MSI patches were written by me. As you know, this
> feature has not been enabled before, and it needs a device driver to
> verify that it works properly. So I tested my patches using Wen
> Xiong's ipr patches, which have already been sent out to the mailing
> list.
> 
> I'm the original author :)

Ah OK, sorry, that was more or less clear from your mail but I just
misunderstood.

> > > [PATCH 0/7] Add support for new IBM SAS controllers
> > I would like to see the full series, including the driver enablement.
> Yep, but the driver patches were written by Wen Xiong and have
> already been sent out.

OK, you mean this series?

http://thread.gmane.org/gmane.linux.scsi/79639


> I just used her patches to test mine. Any device that supports
> multiple MSI can use this feature, not only the IBM SAS controllers.
> I also tested my patches with the Broadcom tg3 card, and that works
> OK as well.

You mean drivers/net/ethernet/broadcom/tg3.c ? I don't see where it
calls pci_enable_msi_block() ?

All devices /can/ use it, but the driver needs to be updated. Currently
we have two drivers that do so (in Linus' tree), plus the updated IPR.
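
For reference, the driver side looks roughly like the minimal sketch
below, using the 3.8-era pci_enable_msi_block() interface
(my_setup_msi, my_msi_handler and "my_dev" are made-up names, and the
error unwinding is trimmed):

/*
 * Minimal sketch of a driver requesting a block of MSIs.  Names are
 * illustrative; a real driver would also unwind on failure.
 */
#include <linux/pci.h>
#include <linux/interrupt.h>
#include <linux/errno.h>

static irqreturn_t my_msi_handler(int irq, void *data)
{
	/* per-vector work would go here */
	return IRQ_HANDLED;
}

static int my_setup_msi(struct pci_dev *pdev, void *my_dev)
{
	int i, rc, nvec = 2;	/* MSI only grants powers of two */

	rc = pci_enable_msi_block(pdev, nvec);
	if (rc)			/* >0: only that many available, <0: error */
		return rc < 0 ? rc : -ENOSPC;

	/* the nvec interrupts are contiguous, starting at pdev->irq */
	for (i = 0; i < nvec; i++) {
		rc = request_irq(pdev->irq + i, my_msi_handler, 0,
				 "my_dev", my_dev);
		if (rc)
			return rc;	/* real code would free earlier irqs */
	}
	return 0;
}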

> > > Test platform: one pSeries partition on POWER7 with one CPU core
> > >                (4 SMT threads) and a RAID bus controller:
> > >                IBM PCI-E IPR SAS Adapter (ASIC)
> > > OS version: SUSE Linux Enterprise Server 11 SP2 (ppc64) with a
> > >             3.8-rc3 kernel
> > > 
> > > IRQs 21 and 22 are assigned to the ipr device, which supports 2 MSIs.
> > > 
> > > The test result is shown by 'cat /proc/interrupts':
> > >           CPU0       CPU1       CPU2       CPU3       
> > > 21:          6          5          5          5      XICS Level     host1-0
> > > 22:        817        814        816        813      XICS Level     host1-1
> > This shows that you are correctly configuring two MSIs.
> > 
> > But the key advantage of using multiple interrupts is to distribute load
> > across CPUs and improve performance. So I would like to see some
> > performance numbers that show that there is a real benefit for all the
> > extra complexity in the code.

> Yes, the system only supports two MSIs. Anyway, I will try to do some
> performance tests to show the real benefit.
> But that actually needs the driver's cooperation. As the data above
> show, there seems to be a problem with how the interrupts are used:
> IRQ 21 is barely used and most of the load goes to IRQ 22. I will
> discuss this with the driver author to find out why, and once she has
> fixed it I will post the performance results.

Yeah that would be good.

I really dislike that we have a separate API for multi-MSI vs MSI-X, and
pci_enable_msi_block() also pushes the contiguous power-of-2 allocation
into the irq domain layer, which is unpleasant. So if we really must do
multi-MSI I would like to do it differently.
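
For contrast, a rough sketch of what the MSI-X path looks like with
pci_enable_msix(): the caller picks which table entries it wants and
gets back one Linux irq per entry, with no contiguity or power-of-two
requirement (my_setup_msix and MY_MSIX_NVEC are illustrative names):

#include <linux/pci.h>
#include <linux/errno.h>

#define MY_MSIX_NVEC	2

static int my_setup_msix(struct pci_dev *pdev,
			 struct msix_entry entries[MY_MSIX_NVEC])
{
	int i, rc;

	for (i = 0; i < MY_MSIX_NVEC; i++)
		entries[i].entry = i;	/* which MSI-X table slots we want */

	rc = pci_enable_msix(pdev, entries, MY_MSIX_NVEC);
	if (rc)			/* >0: only that many available, <0: error */
		return rc < 0 ? rc : -ENOSPC;

	/*
	 * entries[i].vector now holds the Linux irq for each message;
	 * request_irq() each one as usual.  Nothing here forces the
	 * vectors to be contiguous or power-of-two sized.
	 */
	return 0;
}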

cheers


Thread overview: 27+ messages
2013-01-15  7:38 [PATCH 0/3] Enable multiple MSI feature in pSeries Mike Qiu
2013-01-15  7:38 ` [PATCH 1/3] irq: Set multiple MSI descriptor data for multiple IRQs Mike Qiu
2013-06-05 23:03   ` Grant Likely
2013-01-15  7:38 ` [PATCH 2/3] irq: Add hw continuous IRQs map to virtual continuous IRQs support Mike Qiu
2013-03-05  2:23   ` Michael Ellerman
2013-03-05  7:19     ` Mike Qiu
2013-03-06  3:54       ` Michael Ellerman
2013-03-06  5:34         ` Mike Qiu
2013-03-06  5:42           ` Michael Ellerman
2013-03-06  7:02             ` Mike Qiu
2013-03-05  2:41   ` Paul Mundt
2013-03-05  7:44     ` Mike Qiu
2013-01-15  7:38 ` [PATCH 3/3] powerpc/pci: Enable pSeries multiple MSI feature Mike Qiu
2013-01-31  2:10 ` [PATCH 0/3] Enable multiple MSI feature in pSeries Mike
2013-02-04  3:23 ` Michael Ellerman
2013-02-04  3:49   ` Mike Qiu
2013-02-04  5:56     ` Michael Ellerman [this message]
2013-02-04  6:43       ` Mike Qiu
2013-03-01  3:07 ` Mike
2013-03-01  3:08 ` Mike
2013-03-01  3:54   ` Michael Ellerman
2013-03-04  3:14     ` Mike Qiu
2013-03-05  0:28       ` Michael Ellerman
2013-05-21 14:45 ` Alexander Gordeev
2013-05-22  0:15   ` Benjamin Herrenschmidt
2013-05-22  6:16     ` Mike Qiu
2013-05-22  5:57   ` Mike Qiu
