From: pratyush.anand@st.com (Pratyush Anand)
To: linux-arm-kernel@lists.infradead.org
Subject: [PATCH v4 0/6] Add Keystone PCIe controller driver
Date: Mon, 14 Jul 2014 11:33:34 +0530	[thread overview]
Message-ID: <20140714060334.GB2930@pratyush-vbox> (raw)
In-Reply-To: <1405110995-24676-1-git-send-email-m-karicheri2@ti.com>

Oh, I see my reply from Gmail is not readable at all (I forgot to
switch to plain text). Please ignore the last mail; I am replying again
here.

On Sat, Jul 12, 2014 at 04:36:29AM +0800, Murali Karicheri wrote:

[...]

> Murali Karicheri (6):
>   PCI: designware: add rd[wr]_other_conf API
>   PCI: designware: refactor MSI code to work with v3.65 dw hardware

For the above two patches, you can add my Reviewed-by.

>   PCI: designware: refactor host init code to re-use on keystone PCI
>   PCI: designware: enhance dw core driver to support keystone PCI host

Instead of using a version number and then changing a few functions
from static to global, I would have used the same philosophy of adding
callbacks wherever needed. Maybe the maintainers can give their view;
instead of patches 3 and 4 I would have gone with something like this,
where kc_pcie can be passed through pp->plat_data.
 
diff --git a/drivers/pci/host/pcie-designware.c b/drivers/pci/host/pcie-designware.c
index 905941c..b216192 100644
--- a/drivers/pci/host/pcie-designware.c
+++ b/drivers/pci/host/pcie-designware.c
@@ -490,16 +490,21 @@ int __init dw_pcie_host_init(struct pcie_port *pp)
        }
 
        if (IS_ENABLED(CONFIG_PCI_MSI)) {
-               pp->irq_domain = irq_domain_add_linear(pp->dev->of_node,
-                                       MAX_MSI_IRQS, &msi_domain_ops,
-                                       &dw_pcie_msi_chip);
-               if (!pp->irq_domain) {
-                       dev_err(pp->dev, "irq domain init failed\n");
-                       return -ENXIO;
-               }
+               if (!pp->ops->msi_init) {
+                       pp->irq_domain = irq_domain_add_linear(pp->dev->of_node,
+                                               MAX_MSI_IRQS, &msi_domain_ops,
+                                               &dw_pcie_msi_chip);
+                       if (!pp->irq_domain) {
+                               dev_err(pp->dev, "irq domain init failed\n");
+                               return -ENXIO;
+                       }
 
-               for (i = 0; i < MAX_MSI_IRQS; i++)
-                       irq_create_mapping(pp->irq_domain, i);
+                       for (i = 0; i < MAX_MSI_IRQS; i++)
+                               irq_create_mapping(pp->irq_domain, i);
+               } else {
+                       pp->ops->msi_init(pp, &msi_domain_ops,
+                                               &dw_pcie_msi_chip);
+               }
        }
 
        if (pp->ops->host_init)
@@ -759,6 +764,9 @@ static struct pci_bus *dw_pcie_scan_bus(int nr, struct pci_sys_data *sys)
                BUG();
        }
 
+       if (bus && pp->ops->scan_bus)
+               bus = pp->ops->scan_bus(pp);
+
        return bus;
 }
 
diff --git a/drivers/pci/host/pcie-designware.h b/drivers/pci/host/pcie-designware.h
index 387f69e..39ce496 100644
--- a/drivers/pci/host/pcie-designware.h
+++ b/drivers/pci/host/pcie-designware.h
@@ -52,6 +52,7 @@ struct pcie_port {
        struct irq_domain       *irq_domain;
        unsigned long           msi_data;
        DECLARE_BITMAP(msi_irq_in_use, MAX_MSI_IRQS);
+       void                    *plat_data;
 };
 
 struct pcie_host_ops {
@@ -70,6 +71,9 @@ struct pcie_host_ops {
        void (*msi_set_irq)(struct pcie_port *pp, int irq);
        void (*msi_clear_irq)(struct pcie_port *pp, int irq);
        u32 (*get_msi_data)(struct pcie_port *pp);
+       struct pci_bus *(*scan_bus)(struct pcie_port *pp);
+       void (*msi_init)(struct pcie_port *pp, const struct irq_domain_ops *ops,
+                       struct msi_chip *chip);
 };
 
 int dw_pcie_cfg_read(void __iomem *addr, int where, int size, u32 *val);
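
For completeness, the Keystone side could then look something like the
sketch below (not compile-tested; every ks_* identifier is made up here
just to show the flow): the driver stashes its private structure in
pp->plat_data before calling dw_pcie_host_init() and recovers it inside
the callbacks.

/* Sketch against pcie-designware.h with the callbacks above applied;
 * all ks_* names are hypothetical. */

struct keystone_pcie {
	struct pcie_port	pp;		/* embedded DW port */
	void __iomem		*va_app_base;	/* application registers */
};

/* hypothetical v3.65 register setup, stubbed out for the sketch */
static void ks_pcie_enable_msi_mode(struct keystone_pcie *ks_pcie)
{
	/* would program the MSI enables through ks_pcie->va_app_base */
}

static void ks_pcie_msi_init(struct pcie_port *pp,
			     const struct irq_domain_ops *ops,
			     struct msi_chip *chip)
{
	struct keystone_pcie *ks_pcie = pp->plat_data;
	int i;

	/* same linear domain as the core default ... */
	pp->irq_domain = irq_domain_add_linear(pp->dev->of_node,
					       MAX_MSI_IRQS, ops, chip);
	if (!pp->irq_domain) {
		dev_err(pp->dev, "irq domain init failed\n");
		return;
	}

	for (i = 0; i < MAX_MSI_IRQS; i++)
		irq_create_mapping(pp->irq_domain, i);

	/* ... plus the v3.65 specific bits */
	ks_pcie_enable_msi_mode(ks_pcie);
}

static struct pcie_host_ops ks_pcie_host_ops = {
	.msi_init	= ks_pcie_msi_init,
};

static int __init ks_pcie_add_pcie_port(struct keystone_pcie *ks_pcie)
{
	struct pcie_port *pp = &ks_pcie->pp;

	pp->plat_data = ks_pcie;	/* recovered in the callbacks */
	pp->ops = &ks_pcie_host_ops;

	return dw_pcie_host_init(pp);
}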


~Pratyush

Thread overview: 10+ messages
2014-07-11 20:36 [PATCH v4 0/6] Add Keystone PCIe controller driver Murali Karicheri
2014-07-11 20:36 ` [PATCH v4 1/6] PCI: designware: add rd[wr]_other_conf API Murali Karicheri
2014-07-11 20:36 ` [PATCH v4 2/6] PCI: designware: refactor MSI code to work with v3.65 dw hardware Murali Karicheri
2014-07-11 20:36 ` [PATCH v4 3/6] PCI: designware: refactor host init code to re-use on keystone PCI Murali Karicheri
2014-07-11 20:36 ` [PATCH v4 4/6] PCI: designware: enhance dw core driver to support keystone PCI host controller Murali Karicheri
2014-07-11 20:36 ` [PATCH v4 5/6] PCI: add PCI controller for keystone PCIe h/w Murali Karicheri
2014-07-11 20:36 ` [PATCH v4 6/6] PCI: keystone: Update maintainer information Murali Karicheri
2014-07-14  6:03 ` Pratyush Anand [this message]
2014-07-14 12:23   ` [PATCH v4 0/6] Add Keystone PCIe controller driver Jingoo Han
2014-07-14 14:28   ` Murali Karicheri
