* [PATCH v6 1/9] PCI: dwc: Add IRQ chained API support
2018-01-26 16:35 [PATCH v6 0/9] PCI: dwc: MSI-X feature Gustavo Pimentel
@ 2018-01-26 16:35 ` Gustavo Pimentel
2018-02-09 11:10 ` Marc Zyngier
2018-01-26 16:35 ` [PATCH v6 2/9] PCI: dwc: exynos: Switch to use the IRQ chained API Gustavo Pimentel
` (8 subsequent siblings)
9 siblings, 1 reply; 18+ messages in thread
From: Gustavo Pimentel @ 2018-01-26 16:35 UTC (permalink / raw)
To: marc.zyngier, Joao.Pinto, bhelgaas, jingoohan1, kishon
Cc: linux-pci, m-karicheri2, thomas.petazzoni, minghuan.Lian,
mingkai.hu, tie-fei.zang, hongxing.zhu, l.stach, niklas.cassel,
jesper.nilsson, wangzhou1, gabriele.paoloni, svarbanov, nsekhar,
Gustavo Pimentel
Add an IRQ chained API to pcie-designware that aims to replace the currently
implemented IRQ domain hierarchy API.
Although the IRQ domain hierarchy API is still available, pcie-designware
will now use the IRQ chained API.
Signed-off-by: Gustavo Pimentel <gustavo.pimentel@synopsys.com>
Tested-by: Niklas Cassel <niklas.cassel@axis.com>
---
Change v1->v2:
- num_vectors is now not configurable by the Device Tree. Now it is 32 by
default and can be overridden by any specific SoC driver.
Change v2->v3:
- Nothing changed, just to follow the patch set version.
Change v3->v4:
- Moved Kishon's fixes (PCI end point error and a dra7xx warning) from
v3-0007 patch file to here.
- Undo the change of the function signature to be more coherent with the
host mode specific functions (Thanks Kishon).
- Refactor the added functions in order to use host_data so that getting
pp back from pci again could be avoided. (Thanks Kishon)
- Changed summary line to match the drivers/PCI convention and changelog to
maintain the consistency (thanks Bjorn).
Change v4->v5:
- Implemented Kishon's MSI multiple messages fix (thanks Kishon).
Change v5->v6:
- Fixed rebase problem detected by Niklas (thanks Niklas).
drivers/pci/dwc/pcie-designware-host.c | 296 +++++++++++++++++++++++++++++----
drivers/pci/dwc/pcie-designware.h | 18 ++
2 files changed, 286 insertions(+), 28 deletions(-)
diff --git a/drivers/pci/dwc/pcie-designware-host.c b/drivers/pci/dwc/pcie-designware-host.c
index bf558df..f7b2bce 100644
--- a/drivers/pci/dwc/pcie-designware-host.c
+++ b/drivers/pci/dwc/pcie-designware-host.c
@@ -11,6 +11,7 @@
* published by the Free Software Foundation.
*/
+#include <linux/irqchip/chained_irq.h>
#include <linux/irqdomain.h>
#include <linux/of_address.h>
#include <linux/of_pci.h>
@@ -53,6 +54,36 @@ static struct irq_chip dw_msi_irq_chip = {
.irq_unmask = pci_msi_unmask_irq,
};
+static void dw_msi_ack_irq(struct irq_data *d)
+{
+ irq_chip_ack_parent(d);
+}
+
+static void dw_msi_mask_irq(struct irq_data *d)
+{
+ pci_msi_mask_irq(d);
+ irq_chip_mask_parent(d);
+}
+
+static void dw_msi_unmask_irq(struct irq_data *d)
+{
+ pci_msi_unmask_irq(d);
+ irq_chip_unmask_parent(d);
+}
+
+static struct irq_chip dw_pcie_msi_irq_chip = {
+ .name = "PCI-MSI",
+ .irq_ack = dw_msi_ack_irq,
+ .irq_mask = dw_msi_mask_irq,
+ .irq_unmask = dw_msi_unmask_irq,
+};
+
+static struct msi_domain_info dw_pcie_msi_domain_info = {
+ .flags = (MSI_FLAG_USE_DEF_DOM_OPS | MSI_FLAG_USE_DEF_CHIP_OPS |
+ MSI_FLAG_PCI_MSIX | MSI_FLAG_MULTI_PCI_MSI),
+ .chip = &dw_pcie_msi_irq_chip,
+};
+
/* MSI int handler */
irqreturn_t dw_handle_msi_irq(struct pcie_port *pp)
{
@@ -81,6 +112,196 @@ irqreturn_t dw_handle_msi_irq(struct pcie_port *pp)
return ret;
}
+/* Chained MSI interrupt service routine */
+static void dw_chained_msi_isr(struct irq_desc *desc)
+{
+ struct irq_chip *chip = irq_desc_get_chip(desc);
+ struct pcie_port *pp;
+
+ chained_irq_enter(chip, desc);
+
+ pp = irq_desc_get_handler_data(desc);
+ dw_handle_msi_irq(pp);
+
+ chained_irq_exit(chip, desc);
+}
+
+static void dw_pci_setup_msi_msg(struct irq_data *data, struct msi_msg *msg)
+{
+ struct pcie_port *pp = irq_data_get_irq_chip_data(data);
+ struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
+ u64 msi_target;
+
+ if (pp->ops->get_msi_addr)
+ msi_target = pp->ops->get_msi_addr(pp);
+ else
+ msi_target = (u64)pp->msi_data;
+
+ msg->address_lo = lower_32_bits(msi_target);
+ msg->address_hi = upper_32_bits(msi_target);
+
+ if (pp->ops->get_msi_data)
+ msg->data = pp->ops->get_msi_data(pp, data->hwirq);
+ else
+ msg->data = data->hwirq;
+
+ dev_dbg(pci->dev, "msi#%d address_hi %#x address_lo %#x\n",
+ (int)data->hwirq, msg->address_hi, msg->address_lo);
+}
+
+static int dw_pci_msi_set_affinity(struct irq_data *irq_data,
+ const struct cpumask *mask, bool force)
+{
+ return -EINVAL;
+}
+
+static void dw_pci_bottom_mask(struct irq_data *data)
+{
+ struct pcie_port *pp = irq_data_get_irq_chip_data(data);
+ unsigned int res, bit, ctrl;
+ unsigned long flags;
+
+ spin_lock_irqsave(&pp->lock, flags);
+
+ if (pp->ops->msi_clear_irq)
+ pp->ops->msi_clear_irq(pp, data->hwirq);
+ else {
+ ctrl = data->hwirq / 32;
+ res = ctrl * 12;
+ bit = data->hwirq % 32;
+
+ pp->irq_status[ctrl] &= ~(1 << bit);
+ dw_pcie_wr_own_conf(pp, PCIE_MSI_INTR0_ENABLE + res, 4,
+ pp->irq_status[ctrl]);
+ }
+
+ spin_unlock_irqrestore(&pp->lock, flags);
+}
+
+static void dw_pci_bottom_unmask(struct irq_data *data)
+{
+ struct pcie_port *pp = irq_data_get_irq_chip_data(data);
+ unsigned int res, bit, ctrl;
+ unsigned long flags;
+
+ spin_lock_irqsave(&pp->lock, flags);
+
+ if (pp->ops->msi_set_irq)
+ pp->ops->msi_set_irq(pp, data->hwirq);
+ else {
+ ctrl = data->hwirq / 32;
+ res = ctrl * 12;
+ bit = data->hwirq % 32;
+
+ pp->irq_status[ctrl] |= 1 << bit;
+ dw_pcie_wr_own_conf(pp, PCIE_MSI_INTR0_ENABLE + res, 4,
+ pp->irq_status[ctrl]);
+ }
+
+ spin_unlock_irqrestore(&pp->lock, flags);
+}
+
+static void dw_pci_bottom_ack(struct irq_data *d)
+{
+ struct msi_desc *msi = irq_data_get_msi_desc(d);
+ struct pcie_port *pp;
+
+ pp = (struct pcie_port *) msi_desc_to_pci_sysdata(msi);
+
+ if (pp->ops->msi_irq_ack)
+ pp->ops->msi_irq_ack(d->hwirq, pp);
+}
+
+static struct irq_chip dw_pci_msi_bottom_irq_chip = {
+ .name = "DWPCI-MSI",
+ .irq_ack = dw_pci_bottom_ack,
+ .irq_compose_msi_msg = dw_pci_setup_msi_msg,
+ .irq_set_affinity = dw_pci_msi_set_affinity,
+ .irq_mask = dw_pci_bottom_mask,
+ .irq_unmask = dw_pci_bottom_unmask,
+};
+
+static int dw_pcie_irq_domain_alloc(struct irq_domain *domain,
+ unsigned int virq, unsigned int nr_irqs,
+ void *args)
+{
+ struct pcie_port *pp = domain->host_data;
+ unsigned long flags;
+ unsigned long bit;
+ u32 i;
+
+ spin_lock_irqsave(&pp->lock, flags);
+
+ bit = bitmap_find_free_region(pp->msi_irq_in_use, pp->num_vectors,
+ order_base_2(nr_irqs));
+
+ if (bit < 0) {
+ spin_unlock_irqrestore(&pp->lock, flags);
+ return -ENOSPC;
+ }
+
+ spin_unlock_irqrestore(&pp->lock, flags);
+
+ for (i = 0; i < nr_irqs; i++)
+ irq_domain_set_info(domain, virq + i, bit + i,
+ &dw_pci_msi_bottom_irq_chip,
+ pp, handle_level_irq,
+ NULL, NULL);
+
+ return 0;
+}
+
+static void dw_pcie_irq_domain_free(struct irq_domain *domain,
+ unsigned int virq, unsigned int nr_irqs)
+{
+ struct irq_data *data = irq_domain_get_irq_data(domain, virq);
+ struct pcie_port *pp = irq_data_get_irq_chip_data(data);
+ unsigned long flags;
+
+ spin_lock_irqsave(&pp->lock, flags);
+ bitmap_release_region(pp->msi_irq_in_use, data->hwirq,
+ order_base_2(nr_irqs));
+ spin_unlock_irqrestore(&pp->lock, flags);
+}
+
+static const struct irq_domain_ops dw_pcie_msi_domain_ops = {
+ .alloc = dw_pcie_irq_domain_alloc,
+ .free = dw_pcie_irq_domain_free,
+};
+
+int dw_pcie_allocate_domains(struct pcie_port *pp)
+{
+ struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
+ struct fwnode_handle *fwnode = of_node_to_fwnode(pci->dev->of_node);
+
+ pp->irq_domain = irq_domain_create_linear(fwnode, pp->num_vectors,
+ &dw_pcie_msi_domain_ops, pp);
+ if (!pp->irq_domain) {
+ dev_err(pci->dev, "failed to create IRQ domain\n");
+ return -ENOMEM;
+ }
+
+ pp->msi_domain = pci_msi_create_irq_domain(fwnode,
+ &dw_pcie_msi_domain_info,
+ pp->irq_domain);
+ if (!pp->msi_domain) {
+ dev_err(pci->dev, "failed to create MSI domain\n");
+ irq_domain_remove(pp->irq_domain);
+ return -ENOMEM;
+ }
+
+ return 0;
+}
+
+void dw_pcie_free_msi(struct pcie_port *pp)
+{
+ irq_set_chained_handler(pp->msi_irq, NULL);
+ irq_set_handler_data(pp->msi_irq, NULL);
+
+ irq_domain_remove(pp->msi_domain);
+ irq_domain_remove(pp->irq_domain);
+}
+
void dw_pcie_msi_init(struct pcie_port *pp)
{
struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
@@ -99,20 +320,21 @@ void dw_pcie_msi_init(struct pcie_port *pp)
/* program the msi_data */
dw_pcie_wr_own_conf(pp, PCIE_MSI_ADDR_LO, 4,
- (u32)(msi_target & 0xffffffff));
+ lower_32_bits(msi_target));
dw_pcie_wr_own_conf(pp, PCIE_MSI_ADDR_HI, 4,
- (u32)(msi_target >> 32 & 0xffffffff));
+ upper_32_bits(msi_target));
}
static void dw_pcie_msi_clear_irq(struct pcie_port *pp, int irq)
{
- unsigned int res, bit, val;
+ unsigned int res, bit, ctrl;
- res = (irq / 32) * 12;
+ ctrl = irq / 32;
+ res = ctrl * 12;
bit = irq % 32;
- dw_pcie_rd_own_conf(pp, PCIE_MSI_INTR0_ENABLE + res, 4, &val);
- val &= ~(1 << bit);
- dw_pcie_wr_own_conf(pp, PCIE_MSI_INTR0_ENABLE + res, 4, val);
+ pp->irq_status[ctrl] &= ~(1 << bit);
+ dw_pcie_wr_own_conf(pp, PCIE_MSI_INTR0_ENABLE + res, 4,
+ pp->irq_status[ctrl]);
}
static void clear_irq_range(struct pcie_port *pp, unsigned int irq_base,
@@ -134,13 +356,14 @@ static void clear_irq_range(struct pcie_port *pp, unsigned int irq_base,
static void dw_pcie_msi_set_irq(struct pcie_port *pp, int irq)
{
- unsigned int res, bit, val;
+ unsigned int res, bit, ctrl;
- res = (irq / 32) * 12;
+ ctrl = irq / 32;
+ res = ctrl * 12;
bit = irq % 32;
- dw_pcie_rd_own_conf(pp, PCIE_MSI_INTR0_ENABLE + res, 4, &val);
- val |= 1 << bit;
- dw_pcie_wr_own_conf(pp, PCIE_MSI_INTR0_ENABLE + res, 4, val);
+ pp->irq_status[ctrl] |= 1 << bit;
+ dw_pcie_wr_own_conf(pp, PCIE_MSI_INTR0_ENABLE + res, 4,
+ pp->irq_status[ctrl]);
}
static int assign_irq(int no_irqs, struct msi_desc *desc, int *pos)
@@ -288,11 +511,13 @@ int dw_pcie_host_init(struct pcie_port *pp)
struct device *dev = pci->dev;
struct device_node *np = dev->of_node;
struct platform_device *pdev = to_platform_device(dev);
+ struct resource_entry *win, *tmp;
struct pci_bus *bus, *child;
struct pci_host_bridge *bridge;
struct resource *cfg_res;
- int i, ret;
- struct resource_entry *win, *tmp;
+ int ret;
+
+ spin_lock_init(&pci->pp.lock);
cfg_res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "config");
if (cfg_res) {
@@ -391,18 +616,33 @@ int dw_pcie_host_init(struct pcie_port *pp)
pci->num_viewport = 2;
if (IS_ENABLED(CONFIG_PCI_MSI)) {
- if (!pp->ops->msi_host_init) {
- pp->irq_domain = irq_domain_add_linear(dev->of_node,
- MAX_MSI_IRQS, &msi_domain_ops,
- &dw_pcie_msi_chip);
- if (!pp->irq_domain) {
- dev_err(dev, "irq domain init failed\n");
- ret = -ENXIO;
+ /*
+ * If a specific SoC driver needs to change the
+ * default number of vectors, it needs to implement
+ * the set_num_vectors callback.
+ */
+ if (!pp->ops->set_num_vectors) {
+ pp->num_vectors = MSI_DEF_NUM_VECTORS;
+ } else {
+ pp->ops->set_num_vectors(pp);
+
+ if (pp->num_vectors > MAX_MSI_IRQS ||
+ pp->num_vectors == 0) {
+ dev_err(dev,
+ "Invalid number of vectors\n");
goto error;
}
+ }
- for (i = 0; i < MAX_MSI_IRQS; i++)
- irq_create_mapping(pp->irq_domain, i);
+ if (!pp->ops->msi_host_init) {
+ ret = dw_pcie_allocate_domains(pp);
+ if (ret)
+ goto error;
+
+ if (pp->msi_irq)
+ irq_set_chained_handler_and_data(pp->msi_irq,
+ dw_chained_msi_isr,
+ pp);
} else {
ret = pp->ops->msi_host_init(pp, &dw_pcie_msi_chip);
if (ret < 0)
@@ -424,10 +664,6 @@ int dw_pcie_host_init(struct pcie_port *pp)
bridge->ops = &dw_pcie_ops;
bridge->map_irq = of_irq_parse_and_map_pci;
bridge->swizzle_irq = pci_common_swizzle;
- if (IS_ENABLED(CONFIG_PCI_MSI)) {
- bridge->msi = &dw_pcie_msi_chip;
- dw_pcie_msi_chip.dev = dev;
- }
ret = pci_scan_root_bus_bridge(bridge);
if (ret)
@@ -596,11 +832,15 @@ static u8 dw_pcie_iatu_unroll_enabled(struct dw_pcie *pci)
void dw_pcie_setup_rc(struct pcie_port *pp)
{
- u32 val;
+ u32 val, ctrl;
struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
dw_pcie_setup(pci);
+ /* Initialize IRQ Status array */
+ for (ctrl = 0; ctrl < MAX_MSI_CTRLS; ctrl++)
+ dw_pcie_rd_own_conf(pp, PCIE_MSI_INTR0_ENABLE + (ctrl * 12), 4,
+ &pp->irq_status[ctrl]);
/* setup RC BARs */
dw_pcie_writel_dbi(pci, PCI_BASE_ADDRESS_0, 0x00000004);
dw_pcie_writel_dbi(pci, PCI_BASE_ADDRESS_1, 0x00000000);
diff --git a/drivers/pci/dwc/pcie-designware.h b/drivers/pci/dwc/pcie-designware.h
index 8190465..b603e88 100644
--- a/drivers/pci/dwc/pcie-designware.h
+++ b/drivers/pci/dwc/pcie-designware.h
@@ -117,6 +117,7 @@
*/
#define MAX_MSI_IRQS 32
#define MAX_MSI_CTRLS (MAX_MSI_IRQS / 32)
+#define MSI_DEF_NUM_VECTORS 32
/* Maximum number of inbound/outbound iATUs */
#define MAX_IATU_IN 256
@@ -152,7 +153,9 @@ struct dw_pcie_host_ops {
phys_addr_t (*get_msi_addr)(struct pcie_port *pp);
u32 (*get_msi_data)(struct pcie_port *pp, int pos);
void (*scan_bus)(struct pcie_port *pp);
+ void (*set_num_vectors)(struct pcie_port *pp);
int (*msi_host_init)(struct pcie_port *pp, struct msi_controller *chip);
+ void (*msi_irq_ack)(int irq, struct pcie_port *pp);
};
struct pcie_port {
@@ -177,7 +180,11 @@ struct pcie_port {
const struct dw_pcie_host_ops *ops;
int msi_irq;
struct irq_domain *irq_domain;
+ struct irq_domain *msi_domain;
dma_addr_t msi_data;
+ u32 num_vectors;
+ u32 irq_status[MAX_MSI_CTRLS];
+ spinlock_t lock;
DECLARE_BITMAP(msi_irq_in_use, MAX_MSI_IRQS);
};
@@ -319,8 +326,10 @@ static inline void dw_pcie_dbi_ro_wr_dis(struct dw_pcie *pci)
#ifdef CONFIG_PCIE_DW_HOST
irqreturn_t dw_handle_msi_irq(struct pcie_port *pp);
void dw_pcie_msi_init(struct pcie_port *pp);
+void dw_pcie_free_msi(struct pcie_port *pp);
void dw_pcie_setup_rc(struct pcie_port *pp);
int dw_pcie_host_init(struct pcie_port *pp);
+int dw_pcie_allocate_domains(struct pcie_port *pp);
#else
static inline irqreturn_t dw_handle_msi_irq(struct pcie_port *pp)
{
@@ -331,6 +340,10 @@ static inline void dw_pcie_msi_init(struct pcie_port *pp)
{
}
+static inline void dw_pcie_free_msi(struct pcie_port *pp)
+{
+}
+
static inline void dw_pcie_setup_rc(struct pcie_port *pp)
{
}
@@ -339,6 +352,11 @@ static inline int dw_pcie_host_init(struct pcie_port *pp)
{
return 0;
}
+
+static inline int dw_pcie_allocate_domains(struct pcie_port *pp)
+{
+ return 0;
+}
#endif
#ifdef CONFIG_PCIE_DW_EP
--
2.7.4
* Re: [PATCH v6 1/9] PCI: dwc: Add IRQ chained API support
2018-01-26 16:35 ` [PATCH v6 1/9] PCI: dwc: Add IRQ chained API support Gustavo Pimentel
@ 2018-02-09 11:10 ` Marc Zyngier
2018-02-12 17:58 ` Gustavo Pimentel
0 siblings, 1 reply; 18+ messages in thread
From: Marc Zyngier @ 2018-02-09 11:10 UTC (permalink / raw)
To: Gustavo Pimentel, Joao.Pinto, bhelgaas, jingoohan1, kishon
Cc: linux-pci, m-karicheri2, thomas.petazzoni, minghuan.Lian,
mingkai.hu, tie-fei.zang, hongxing.zhu, l.stach, niklas.cassel,
jesper.nilsson, wangzhou1, gabriele.paoloni, svarbanov, nsekhar
Hi Gustavo,
On 26/01/18 16:35, Gustavo Pimentel wrote:
> Adds a IRQ chained API to pcie-designware, that aims to replace the current
> IRQ domain hierarchy API implemented.
>
> Although the IRQ domain hierarchy API is still available, pcie-designware
> will use now the IRQ chained API.
I'm a bit thrown by this comment, as I don't think it reflects the
reality of the code.
What you have here is a domain hierarchy (generic PCI MSI layer and HW
specific domain), *and* a multiplexer (chained interrupt) that funnels
all your MSIs into a single parent interrupt. What you're moving away
from is the msi_controller API.
This has no impact on the code, but it helps to use the correct terminology.
>
> Signed-off-by: Gustavo Pimentel <gustavo.pimentel@synopsys.com>
> Tested-by: Niklas Cassel <niklas.cassel@axis.com>
> ---
> Change v1->v2:
> - num_vectors is now not configurable by the Device Tree. Now it is 32 by
> default and can be overridden by any specific SoC driver.
> Change v2->v3:
> - Nothing changed, just to follow the patch set version.
> Change v3->v4:
> - Moved Kishon's fixes (PCI end point error and a dra7xx warning) from
> v3-0007 patch file to here.
> - Undo the change of the function signature to be more coherent with the
> host mode specific functions (Thanks Kishon).
> - Refactor the added functions in order to used host_data so that getting
> pp again back from pci could be avoided. (Thanks Kishon)
> - Changed summary line to match the drivers/PCI convention and changelog to
> maintain the consistency (thanks Bjorn).
> Change v4->v5:
> - Implemented Kishon MSI multiple messages fix (thanks Kishon).
> Change v5->v6:
> - Fixed rebase problem detected by Niklas (thanks Niklas).
>
> drivers/pci/dwc/pcie-designware-host.c | 296 +++++++++++++++++++++++++++++----
> drivers/pci/dwc/pcie-designware.h | 18 ++
> 2 files changed, 286 insertions(+), 28 deletions(-)
>
> diff --git a/drivers/pci/dwc/pcie-designware-host.c b/drivers/pci/dwc/pcie-designware-host.c
> index bf558df..f7b2bce 100644
> --- a/drivers/pci/dwc/pcie-designware-host.c
> +++ b/drivers/pci/dwc/pcie-designware-host.c
> @@ -11,6 +11,7 @@
> * published by the Free Software Foundation.
> */
>
> +#include <linux/irqchip/chained_irq.h>
> #include <linux/irqdomain.h>
> #include <linux/of_address.h>
> #include <linux/of_pci.h>
> @@ -53,6 +54,36 @@ static struct irq_chip dw_msi_irq_chip = {
> .irq_unmask = pci_msi_unmask_irq,
> };
>
> +static void dw_msi_ack_irq(struct irq_data *d)
> +{
> + irq_chip_ack_parent(d);
> +}
> +
> +static void dw_msi_mask_irq(struct irq_data *d)
> +{
> + pci_msi_mask_irq(d);
> + irq_chip_mask_parent(d);
> +}
> +
> +static void dw_msi_unmask_irq(struct irq_data *d)
> +{
> + pci_msi_unmask_irq(d);
> + irq_chip_unmask_parent(d);
> +}
> +
> +static struct irq_chip dw_pcie_msi_irq_chip = {
> + .name = "PCI-MSI",
> + .irq_ack = dw_msi_ack_irq,
> + .irq_mask = dw_msi_mask_irq,
> + .irq_unmask = dw_msi_unmask_irq,
> +};
> +
> +static struct msi_domain_info dw_pcie_msi_domain_info = {
> + .flags = (MSI_FLAG_USE_DEF_DOM_OPS | MSI_FLAG_USE_DEF_CHIP_OPS |
> + MSI_FLAG_PCI_MSIX | MSI_FLAG_MULTI_PCI_MSI),
> + .chip = &dw_pcie_msi_irq_chip,
> +};
> +
> /* MSI int handler */
> irqreturn_t dw_handle_msi_irq(struct pcie_port *pp)
> {
> @@ -81,6 +112,196 @@ irqreturn_t dw_handle_msi_irq(struct pcie_port *pp)
> return ret;
> }
>
> +/* Chained MSI interrupt service routine */
> +static void dw_chained_msi_isr(struct irq_desc *desc)
> +{
> + struct irq_chip *chip = irq_desc_get_chip(desc);
> + struct pcie_port *pp;
> +
> + chained_irq_enter(chip, desc);
> +
> + pp = irq_desc_get_handler_data(desc);
> + dw_handle_msi_irq(pp);
> +
> + chained_irq_exit(chip, desc);
> +}
Nit: once you're done with the msi_controller stuff (in patch #8), it'd
be good to move dw_handle_msi_irq() inside this function, as the
irqreturn_t return type doesn't make much sense any more.
> +
> +static void dw_pci_setup_msi_msg(struct irq_data *data, struct msi_msg *msg)
> +{
> + struct pcie_port *pp = irq_data_get_irq_chip_data(data);
> + struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
> + u64 msi_target;
> +
> + if (pp->ops->get_msi_addr)
> + msi_target = pp->ops->get_msi_addr(pp);
> + else
> + msi_target = (u64)pp->msi_data;
> +
> + msg->address_lo = lower_32_bits(msi_target);
> + msg->address_hi = upper_32_bits(msi_target);
> +
> + if (pp->ops->get_msi_data)
> + msg->data = pp->ops->get_msi_data(pp, data->hwirq);
> + else
> + msg->data = data->hwirq;
> +
> + dev_dbg(pci->dev, "msi#%d address_hi %#x address_lo %#x\n",
> + (int)data->hwirq, msg->address_hi, msg->address_lo);
> +}
> +
> +static int dw_pci_msi_set_affinity(struct irq_data *irq_data,
> + const struct cpumask *mask, bool force)
> +{
> + return -EINVAL;
> +}
> +
> +static void dw_pci_bottom_mask(struct irq_data *data)
> +{
> + struct pcie_port *pp = irq_data_get_irq_chip_data(data);
> + unsigned int res, bit, ctrl;
> + unsigned long flags;
> +
> + spin_lock_irqsave(&pp->lock, flags);
Consider turning this lock (and the corresponding locking primitives) to
a raw spin_lock. This won't have any additional effect for the mainline
kernel, but will make it work correctly if you use the RT patches.
> +
> + if (pp->ops->msi_clear_irq)
> + pp->ops->msi_clear_irq(pp, data->hwirq);
> + else {
Please use { } on both sides of the else.
> + ctrl = data->hwirq / 32;
> + res = ctrl * 12;
> + bit = data->hwirq % 32;
> +
> + pp->irq_status[ctrl] &= ~(1 << bit);
> + dw_pcie_wr_own_conf(pp, PCIE_MSI_INTR0_ENABLE + res, 4,
> + pp->irq_status[ctrl]);
> + }
> +
> + spin_unlock_irqrestore(&pp->lock, flags);
> +}
> +
> +static void dw_pci_bottom_unmask(struct irq_data *data)
> +{
> + struct pcie_port *pp = irq_data_get_irq_chip_data(data);
> + unsigned int res, bit, ctrl;
> + unsigned long flags;
> +
> + spin_lock_irqsave(&pp->lock, flags);
> +
> + if (pp->ops->msi_set_irq)
> + pp->ops->msi_set_irq(pp, data->hwirq);
> + else {
> + ctrl = data->hwirq / 32;
> + res = ctrl * 12;
> + bit = data->hwirq % 32;
> +
> + pp->irq_status[ctrl] |= 1 << bit;
> + dw_pcie_wr_own_conf(pp, PCIE_MSI_INTR0_ENABLE + res, 4,
> + pp->irq_status[ctrl]);
> + }
> +
> + spin_unlock_irqrestore(&pp->lock, flags);
> +}
> +
> +static void dw_pci_bottom_ack(struct irq_data *d)
> +{
> + struct msi_desc *msi = irq_data_get_msi_desc(d);
> + struct pcie_port *pp;
> +
> + pp = (struct pcie_port *) msi_desc_to_pci_sysdata(msi);
msi_desc_to_pci_sysdata returns a void*, so you can remove this cast.
> +
> + if (pp->ops->msi_irq_ack)
> + pp->ops->msi_irq_ack(d->hwirq, pp);
> +}
> +
> +static struct irq_chip dw_pci_msi_bottom_irq_chip = {
> + .name = "DWPCI-MSI",
> + .irq_ack = dw_pci_bottom_ack,
> + .irq_compose_msi_msg = dw_pci_setup_msi_msg,
> + .irq_set_affinity = dw_pci_msi_set_affinity,
> + .irq_mask = dw_pci_bottom_mask,
> + .irq_unmask = dw_pci_bottom_unmask,
> +};
> +
> +static int dw_pcie_irq_domain_alloc(struct irq_domain *domain,
> + unsigned int virq, unsigned int nr_irqs,
> + void *args)
> +{
> + struct pcie_port *pp = domain->host_data;
> + unsigned long flags;
> + unsigned long bit;
> + u32 i;
> +
> + spin_lock_irqsave(&pp->lock, flags);
> +
> + bit = bitmap_find_free_region(pp->msi_irq_in_use, pp->num_vectors,
> + order_base_2(nr_irqs));
> +
> + if (bit < 0) {
> + spin_unlock_irqrestore(&pp->lock, flags);
> + return -ENOSPC;
> + }
> +
> + spin_unlock_irqrestore(&pp->lock, flags);
> +
> + for (i = 0; i < nr_irqs; i++)
> + irq_domain_set_info(domain, virq + i, bit + i,
> + &dw_pci_msi_bottom_irq_chip,
> + pp, handle_level_irq,
Are you sure about this "handle_level_irq"? MSIs are definitely edge
triggered, not level.
> + NULL, NULL);
> +
> + return 0;
> +}
> +
> +static void dw_pcie_irq_domain_free(struct irq_domain *domain,
> + unsigned int virq, unsigned int nr_irqs)
> +{
> + struct irq_data *data = irq_domain_get_irq_data(domain, virq);
> + struct pcie_port *pp = irq_data_get_irq_chip_data(data);
> + unsigned long flags;
> +
> + spin_lock_irqsave(&pp->lock, flags);
> + bitmap_release_region(pp->msi_irq_in_use, data->hwirq,
> + order_base_2(nr_irqs));
> + spin_unlock_irqrestore(&pp->lock, flags);
> +}
> +
> +static const struct irq_domain_ops dw_pcie_msi_domain_ops = {
> + .alloc = dw_pcie_irq_domain_alloc,
> + .free = dw_pcie_irq_domain_free,
> +};
> +
> +int dw_pcie_allocate_domains(struct pcie_port *pp)
> +{
> + struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
> + struct fwnode_handle *fwnode = of_node_to_fwnode(pci->dev->of_node);
> +
> + pp->irq_domain = irq_domain_create_linear(fwnode, pp->num_vectors,
> + &dw_pcie_msi_domain_ops, pp);
> + if (!pp->irq_domain) {
> + dev_err(pci->dev, "failed to create IRQ domain\n");
> + return -ENOMEM;
> + }
> +
> + pp->msi_domain = pci_msi_create_irq_domain(fwnode,
> + &dw_pcie_msi_domain_info,
> + pp->irq_domain);
> + if (!pp->msi_domain) {
> + dev_err(pci->dev, "failed to create MSI domain\n");
> + irq_domain_remove(pp->irq_domain);
> + return -ENOMEM;
> + }
> +
> + return 0;
> +}
> +
> +void dw_pcie_free_msi(struct pcie_port *pp)
> +{
> + irq_set_chained_handler(pp->msi_irq, NULL);
> + irq_set_handler_data(pp->msi_irq, NULL);
> +
> + irq_domain_remove(pp->msi_domain);
> + irq_domain_remove(pp->irq_domain);
> +}
> +
> void dw_pcie_msi_init(struct pcie_port *pp)
> {
> struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
> @@ -99,20 +320,21 @@ void dw_pcie_msi_init(struct pcie_port *pp)
>
> /* program the msi_data */
> dw_pcie_wr_own_conf(pp, PCIE_MSI_ADDR_LO, 4,
> - (u32)(msi_target & 0xffffffff));
> + lower_32_bits(msi_target));
> dw_pcie_wr_own_conf(pp, PCIE_MSI_ADDR_HI, 4,
> - (u32)(msi_target >> 32 & 0xffffffff));
> + upper_32_bits(msi_target));
> }
>
> static void dw_pcie_msi_clear_irq(struct pcie_port *pp, int irq)
> {
> - unsigned int res, bit, val;
> + unsigned int res, bit, ctrl;
>
> - res = (irq / 32) * 12;
> + ctrl = irq / 32;
> + res = ctrl * 12;
> bit = irq % 32;
> - dw_pcie_rd_own_conf(pp, PCIE_MSI_INTR0_ENABLE + res, 4, &val);
> - val &= ~(1 << bit);
> - dw_pcie_wr_own_conf(pp, PCIE_MSI_INTR0_ENABLE + res, 4, val);
> + pp->irq_status[ctrl] &= ~(1 << bit);
> + dw_pcie_wr_own_conf(pp, PCIE_MSI_INTR0_ENABLE + res, 4,
> + pp->irq_status[ctrl]);
> }
>
> static void clear_irq_range(struct pcie_port *pp, unsigned int irq_base,
> @@ -134,13 +356,14 @@ static void clear_irq_range(struct pcie_port *pp, unsigned int irq_base,
>
> static void dw_pcie_msi_set_irq(struct pcie_port *pp, int irq)
> {
> - unsigned int res, bit, val;
> + unsigned int res, bit, ctrl;
>
> - res = (irq / 32) * 12;
> + ctrl = irq / 32;
> + res = ctrl * 12;
> bit = irq % 32;
> - dw_pcie_rd_own_conf(pp, PCIE_MSI_INTR0_ENABLE + res, 4, &val);
> - val |= 1 << bit;
> - dw_pcie_wr_own_conf(pp, PCIE_MSI_INTR0_ENABLE + res, 4, val);
> + pp->irq_status[ctrl] |= 1 << bit;
> + dw_pcie_wr_own_conf(pp, PCIE_MSI_INTR0_ENABLE + res, 4,
> + pp->irq_status[ctrl]);
> }
>
> static int assign_irq(int no_irqs, struct msi_desc *desc, int *pos)
> @@ -288,11 +511,13 @@ int dw_pcie_host_init(struct pcie_port *pp)
> struct device *dev = pci->dev;
> struct device_node *np = dev->of_node;
> struct platform_device *pdev = to_platform_device(dev);
> + struct resource_entry *win, *tmp;
> struct pci_bus *bus, *child;
> struct pci_host_bridge *bridge;
> struct resource *cfg_res;
> - int i, ret;
> - struct resource_entry *win, *tmp;
> + int ret;
> +
> + spin_lock_init(&pci->pp.lock);
>
> cfg_res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "config");
> if (cfg_res) {
> @@ -391,18 +616,33 @@ int dw_pcie_host_init(struct pcie_port *pp)
> pci->num_viewport = 2;
>
> if (IS_ENABLED(CONFIG_PCI_MSI)) {
> - if (!pp->ops->msi_host_init) {
> - pp->irq_domain = irq_domain_add_linear(dev->of_node,
> - MAX_MSI_IRQS, &msi_domain_ops,
> - &dw_pcie_msi_chip);
> - if (!pp->irq_domain) {
> - dev_err(dev, "irq domain init failed\n");
> - ret = -ENXIO;
> + /*
> + * If a specific SoC driver needs to change the
> + * default number of vectors, it needs to implement
> + * the set_num_vectors callback.
> + */
> + if (!pp->ops->set_num_vectors) {
> + pp->num_vectors = MSI_DEF_NUM_VECTORS;
> + } else {
> + pp->ops->set_num_vectors(pp);
> +
> + if (pp->num_vectors > MAX_MSI_IRQS ||
> + pp->num_vectors == 0) {
> + dev_err(dev,
> + "Invalid number of vectors\n");
> goto error;
> }
> + }
>
> - for (i = 0; i < MAX_MSI_IRQS; i++)
> - irq_create_mapping(pp->irq_domain, i);
> + if (!pp->ops->msi_host_init) {
> + ret = dw_pcie_allocate_domains(pp);
> + if (ret)
> + goto error;
> +
> + if (pp->msi_irq)
> + irq_set_chained_handler_and_data(pp->msi_irq,
> + dw_chained_msi_isr,
> + pp);
> } else {
> ret = pp->ops->msi_host_init(pp, &dw_pcie_msi_chip);
> if (ret < 0)
> @@ -424,10 +664,6 @@ int dw_pcie_host_init(struct pcie_port *pp)
> bridge->ops = &dw_pcie_ops;
> bridge->map_irq = of_irq_parse_and_map_pci;
> bridge->swizzle_irq = pci_common_swizzle;
> - if (IS_ENABLED(CONFIG_PCI_MSI)) {
> - bridge->msi = &dw_pcie_msi_chip;
> - dw_pcie_msi_chip.dev = dev;
> - }
>
> ret = pci_scan_root_bus_bridge(bridge);
> if (ret)
> @@ -596,11 +832,15 @@ static u8 dw_pcie_iatu_unroll_enabled(struct dw_pcie *pci)
>
> void dw_pcie_setup_rc(struct pcie_port *pp)
> {
> - u32 val;
> + u32 val, ctrl;
> struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
>
> dw_pcie_setup(pci);
>
> + /* Initialize IRQ Status array */
> + for (ctrl = 0; ctrl < MAX_MSI_CTRLS; ctrl++)
> + dw_pcie_rd_own_conf(pp, PCIE_MSI_INTR0_ENABLE + (ctrl * 12), 4,
> + &pp->irq_status[ctrl]);
> /* setup RC BARs */
> dw_pcie_writel_dbi(pci, PCI_BASE_ADDRESS_0, 0x00000004);
> dw_pcie_writel_dbi(pci, PCI_BASE_ADDRESS_1, 0x00000000);
> diff --git a/drivers/pci/dwc/pcie-designware.h b/drivers/pci/dwc/pcie-designware.h
> index 8190465..b603e88 100644
> --- a/drivers/pci/dwc/pcie-designware.h
> +++ b/drivers/pci/dwc/pcie-designware.h
> @@ -117,6 +117,7 @@
> */
> #define MAX_MSI_IRQS 32
> #define MAX_MSI_CTRLS (MAX_MSI_IRQS / 32)
> +#define MSI_DEF_NUM_VECTORS 32
>
> /* Maximum number of inbound/outbound iATUs */
> #define MAX_IATU_IN 256
> @@ -152,7 +153,9 @@ struct dw_pcie_host_ops {
> phys_addr_t (*get_msi_addr)(struct pcie_port *pp);
> u32 (*get_msi_data)(struct pcie_port *pp, int pos);
> void (*scan_bus)(struct pcie_port *pp);
> + void (*set_num_vectors)(struct pcie_port *pp);
> int (*msi_host_init)(struct pcie_port *pp, struct msi_controller *chip);
> + void (*msi_irq_ack)(int irq, struct pcie_port *pp);
> };
>
> struct pcie_port {
> @@ -177,7 +180,11 @@ struct pcie_port {
> const struct dw_pcie_host_ops *ops;
> int msi_irq;
> struct irq_domain *irq_domain;
> + struct irq_domain *msi_domain;
> dma_addr_t msi_data;
> + u32 num_vectors;
> + u32 irq_status[MAX_MSI_CTRLS];
> + spinlock_t lock;
> DECLARE_BITMAP(msi_irq_in_use, MAX_MSI_IRQS);
> };
>
> @@ -319,8 +326,10 @@ static inline void dw_pcie_dbi_ro_wr_dis(struct dw_pcie *pci)
> #ifdef CONFIG_PCIE_DW_HOST
> irqreturn_t dw_handle_msi_irq(struct pcie_port *pp);
> void dw_pcie_msi_init(struct pcie_port *pp);
> +void dw_pcie_free_msi(struct pcie_port *pp);
> void dw_pcie_setup_rc(struct pcie_port *pp);
> int dw_pcie_host_init(struct pcie_port *pp);
> +int dw_pcie_allocate_domains(struct pcie_port *pp);
> #else
> static inline irqreturn_t dw_handle_msi_irq(struct pcie_port *pp)
> {
> @@ -331,6 +340,10 @@ static inline void dw_pcie_msi_init(struct pcie_port *pp)
> {
> }
>
> +static inline void dw_pcie_free_msi(struct pcie_port *pp)
> +{
> +}
> +
> static inline void dw_pcie_setup_rc(struct pcie_port *pp)
> {
> }
> @@ -339,6 +352,11 @@ static inline int dw_pcie_host_init(struct pcie_port *pp)
> {
> return 0;
> }
> +
> +static inline int dw_pcie_allocate_domains(struct pcie_port *pp)
> +{
> + return 0;
> +}
> #endif
>
> #ifdef CONFIG_PCIE_DW_EP
>
Thanks,
M.
--
Jazz is not dead. It just smells funny...
* Re: [PATCH v6 1/9] PCI: dwc: Add IRQ chained API support
2018-02-09 11:10 ` Marc Zyngier
@ 2018-02-12 17:58 ` Gustavo Pimentel
2018-02-12 18:15 ` Marc Zyngier
0 siblings, 1 reply; 18+ messages in thread
From: Gustavo Pimentel @ 2018-02-12 17:58 UTC (permalink / raw)
To: Marc Zyngier, Joao.Pinto@synopsys.com, bhelgaas@google.com,
jingoohan1@gmail.com, kishon@ti.com
Cc: linux-pci@vger.kernel.org, m-karicheri2@ti.com,
thomas.petazzoni@free-electrons.com, minghuan.Lian@freescale.com,
mingkai.hu@freescale.com, tie-fei.zang@freescale.com,
hongxing.zhu@nxp.com, l.stach@pengutronix.de,
niklas.cassel@axis.com, jesper.nilsson@axis.com,
wangzhou1@hisilicon.com, gabriele.paoloni@huawei.com,
svarbanov@mm-sol.com, nsekhar@ti.com, Lorenzo Pieralisi
Hi Marc,
On 09/02/2018 11:10, Marc Zyngier wrote:
> Hi Gustavo,
>
> On 26/01/18 16:35, Gustavo Pimentel wrote:
>> Adds a IRQ chained API to pcie-designware, that aims to replace the current
>> IRQ domain hierarchy API implemented.
>>
>> Although the IRQ domain hierarchy API is still available, pcie-designware
>> will use now the IRQ chained API.
>
> I'm a bit thrown by this comment, as I don't think it reflects the
> reality of the code.
>
> What you have here is a domain hierarchy (generic PCI MSI layer and HW
> specific domain), *and* a multiplexer (chained interrupt) that funnels
> all your MSIs into a single parent interrupt. What you're moving away
> from is the msi_controller API.
>
> This has no impact on the code, but it helps to use the correct terminology.
>
Ok, I'll fix the terminology and descriptions.
>>
>> Signed-off-by: Gustavo Pimentel <gustavo.pimentel@synopsys.com>
>> Tested-by: Niklas Cassel <niklas.cassel@axis.com>
>> ---
>> Change v1->v2:
>> - num_vectors is now not configurable by the Device Tree. Now it is 32 by
>> default and can be overridden by any specific SoC driver.
>> Change v2->v3:
>> - Nothing changed, just to follow the patch set version.
>> Change v3->v4:
>> - Moved Kishon's fixes (PCI end point error and a dra7xx warning) from
>> v3-0007 patch file to here.
>> - Undo the change of the function signature to be more coherent with the
>> host mode specific functions (Thanks Kishon).
>> - Refactor the added functions in order to used host_data so that getting
>> pp again back from pci could be avoided. (Thanks Kishon)
>> - Changed summary line to match the drivers/PCI convention and changelog to
>> maintain the consistency (thanks Bjorn).
>> Change v4->v5:
>> - Implemented Kishon MSI multiple messages fix (thanks Kishon).
>> Change v5->v6:
>> - Fixed rebase problem detected by Niklas (thanks Niklas).
>>
>> drivers/pci/dwc/pcie-designware-host.c | 296 +++++++++++++++++++++++++++++----
>> drivers/pci/dwc/pcie-designware.h | 18 ++
>> 2 files changed, 286 insertions(+), 28 deletions(-)
>>
>> diff --git a/drivers/pci/dwc/pcie-designware-host.c b/drivers/pci/dwc/pcie-designware-host.c
>> index bf558df..f7b2bce 100644
>> --- a/drivers/pci/dwc/pcie-designware-host.c
>> +++ b/drivers/pci/dwc/pcie-designware-host.c
>> @@ -11,6 +11,7 @@
>> * published by the Free Software Foundation.
>> */
>>
>> +#include <linux/irqchip/chained_irq.h>
>> #include <linux/irqdomain.h>
>> #include <linux/of_address.h>
>> #include <linux/of_pci.h>
>> @@ -53,6 +54,36 @@ static struct irq_chip dw_msi_irq_chip = {
>> .irq_unmask = pci_msi_unmask_irq,
>> };
>>
>> +static void dw_msi_ack_irq(struct irq_data *d)
>> +{
>> + irq_chip_ack_parent(d);
>> +}
>> +
>> +static void dw_msi_mask_irq(struct irq_data *d)
>> +{
>> + pci_msi_mask_irq(d);
>> + irq_chip_mask_parent(d);
>> +}
>> +
>> +static void dw_msi_unmask_irq(struct irq_data *d)
>> +{
>> + pci_msi_unmask_irq(d);
>> + irq_chip_unmask_parent(d);
>> +}
>> +
>> +static struct irq_chip dw_pcie_msi_irq_chip = {
>> + .name = "PCI-MSI",
>> + .irq_ack = dw_msi_ack_irq,
>> + .irq_mask = dw_msi_mask_irq,
>> + .irq_unmask = dw_msi_unmask_irq,
>> +};
>> +
>> +static struct msi_domain_info dw_pcie_msi_domain_info = {
>> + .flags = (MSI_FLAG_USE_DEF_DOM_OPS | MSI_FLAG_USE_DEF_CHIP_OPS |
>> + MSI_FLAG_PCI_MSIX | MSI_FLAG_MULTI_PCI_MSI),
>> + .chip = &dw_pcie_msi_irq_chip,
>> +};
>> +
>> /* MSI int handler */
>> irqreturn_t dw_handle_msi_irq(struct pcie_port *pp)
>> {
>> @@ -81,6 +112,196 @@ irqreturn_t dw_handle_msi_irq(struct pcie_port *pp)
>> return ret;
>> }
>>
>> +/* Chained MSI interrupt service routine */
>> +static void dw_chained_msi_isr(struct irq_desc *desc)
>> +{
>> + struct irq_chip *chip = irq_desc_get_chip(desc);
>> + struct pcie_port *pp;
>> +
>> + chained_irq_enter(chip, desc);
>> +
>> + pp = irq_desc_get_handler_data(desc);
>> + dw_handle_msi_irq(pp);
>> +
>> + chained_irq_exit(chip, desc);
>> +}
>
> Nit: once you're done with the msi_controller stuff (in patch #8), it'd
> be good to move dw_handle_msi_irq() inside this function, as the
> irqreturn_t return type doesn't make much sense any more.
Hmm, I like your suggestion; however, there are two drivers (HiSilicon STB and
STMicroelectronics SPEAr13xx) that use the dw_handle_msi_irq function directly. I
would suggest doing this "cleanup" in another patch series, just to keep this
patch simple, if you don't mind.
>
>> +
>> +static void dw_pci_setup_msi_msg(struct irq_data *data, struct msi_msg *msg)
>> +{
>> + struct pcie_port *pp = irq_data_get_irq_chip_data(data);
>> + struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
>> + u64 msi_target;
>> +
>> + if (pp->ops->get_msi_addr)
>> + msi_target = pp->ops->get_msi_addr(pp);
>> + else
>> + msi_target = (u64)pp->msi_data;
>> +
>> + msg->address_lo = lower_32_bits(msi_target);
>> + msg->address_hi = upper_32_bits(msi_target);
>> +
>> + if (pp->ops->get_msi_data)
>> + msg->data = pp->ops->get_msi_data(pp, data->hwirq);
>> + else
>> + msg->data = data->hwirq;
>> +
>> + dev_dbg(pci->dev, "msi#%d address_hi %#x address_lo %#x\n",
>> + (int)data->hwirq, msg->address_hi, msg->address_lo);
>> +}
>> +
>> +static int dw_pci_msi_set_affinity(struct irq_data *irq_data,
>> + const struct cpumask *mask, bool force)
>> +{
>> + return -EINVAL;
>> +}
>> +
>> +static void dw_pci_bottom_mask(struct irq_data *data)
>> +{
>> + struct pcie_port *pp = irq_data_get_irq_chip_data(data);
>> + unsigned int res, bit, ctrl;
>> + unsigned long flags;
>> +
>> + spin_lock_irqsave(&pp->lock, flags);
>
> Consider turning this lock (and the corresponding locking primitives) to
> a raw spin_lock. This won't have any additional effect for the mainline
> kernel, but will make it work correctly if you use the RT patches.
Ok, I'll do it.
>
>> +
>> + if (pp->ops->msi_clear_irq)
>> + pp->ops->msi_clear_irq(pp, data->hwirq);
>> + else {
>
> Please use { } on both sides of the else.
Done.
>
>> + ctrl = data->hwirq / 32;
>> + res = ctrl * 12;
>> + bit = data->hwirq % 32;
>> +
>> + pp->irq_status[ctrl] &= ~(1 << bit);
>> + dw_pcie_wr_own_conf(pp, PCIE_MSI_INTR0_ENABLE + res, 4,
>> + pp->irq_status[ctrl]);
>> + }
>> +
>> + spin_unlock_irqrestore(&pp->lock, flags);
>> +}
>> +
>> +static void dw_pci_bottom_unmask(struct irq_data *data)
>> +{
>> + struct pcie_port *pp = irq_data_get_irq_chip_data(data);
>> + unsigned int res, bit, ctrl;
>> + unsigned long flags;
>> +
>> + spin_lock_irqsave(&pp->lock, flags);
>> +
>> + if (pp->ops->msi_set_irq)
>> + pp->ops->msi_set_irq(pp, data->hwirq);
>> + else {
>> + ctrl = data->hwirq / 32;
>> + res = ctrl * 12;
>> + bit = data->hwirq % 32;
>> +
>> + pp->irq_status[ctrl] |= 1 << bit;
>> + dw_pcie_wr_own_conf(pp, PCIE_MSI_INTR0_ENABLE + res, 4,
>> + pp->irq_status[ctrl]);
>> + }
>> +
>> + spin_unlock_irqrestore(&pp->lock, flags);
>> +}
>> +
>> +static void dw_pci_bottom_ack(struct irq_data *d)
>> +{
>> + struct msi_desc *msi = irq_data_get_msi_desc(d);
>> + struct pcie_port *pp;
>> +
>> + pp = (struct pcie_port *) msi_desc_to_pci_sysdata(msi);
>
> msi_desc_to_pci_sysdata returns a void*, so you can remove this cast.
Done.
>
>> +
>> + if (pp->ops->msi_irq_ack)
>> + pp->ops->msi_irq_ack(d->hwirq, pp);
>> +}
>> +
>> +static struct irq_chip dw_pci_msi_bottom_irq_chip = {
>> + .name = "DWPCI-MSI",
>> + .irq_ack = dw_pci_bottom_ack,
>> + .irq_compose_msi_msg = dw_pci_setup_msi_msg,
>> + .irq_set_affinity = dw_pci_msi_set_affinity,
>> + .irq_mask = dw_pci_bottom_mask,
>> + .irq_unmask = dw_pci_bottom_unmask,
>> +};
>> +
>> +static int dw_pcie_irq_domain_alloc(struct irq_domain *domain,
>> + unsigned int virq, unsigned int nr_irqs,
>> + void *args)
>> +{
>> + struct pcie_port *pp = domain->host_data;
>> + unsigned long flags;
>> + unsigned long bit;
>> + u32 i;
>> +
>> + spin_lock_irqsave(&pp->lock, flags);
>> +
>> + bit = bitmap_find_free_region(pp->msi_irq_in_use, pp->num_vectors,
>> + order_base_2(nr_irqs));
>> +
>> + if (bit < 0) {
>> + spin_unlock_irqrestore(&pp->lock, flags);
>> + return -ENOSPC;
>> + }
>> +
>> + spin_unlock_irqrestore(&pp->lock, flags);
>> +
>> + for (i = 0; i < nr_irqs; i++)
>> + irq_domain_set_info(domain, virq + i, bit + i,
>> + &dw_pci_msi_bottom_irq_chip,
>> + pp, handle_level_irq,
>
> Are you sure about this "handle_level_irq"? MSIs are definitely edge
> triggered, not level.
Done.
>
>> + NULL, NULL);
>> +
>> + return 0;
>> +}
>> +
>> +static void dw_pcie_irq_domain_free(struct irq_domain *domain,
>> + unsigned int virq, unsigned int nr_irqs)
>> +{
>> + struct irq_data *data = irq_domain_get_irq_data(domain, virq);
>> + struct pcie_port *pp = irq_data_get_irq_chip_data(data);
>> + unsigned long flags;
>> +
>> + spin_lock_irqsave(&pp->lock, flags);
>> + bitmap_release_region(pp->msi_irq_in_use, data->hwirq,
>> + order_base_2(nr_irqs));
>> + spin_unlock_irqrestore(&pp->lock, flags);
>> +}
>> +
>> +static const struct irq_domain_ops dw_pcie_msi_domain_ops = {
>> + .alloc = dw_pcie_irq_domain_alloc,
>> + .free = dw_pcie_irq_domain_free,
>> +};
>> +
>> +int dw_pcie_allocate_domains(struct pcie_port *pp)
>> +{
>> + struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
>> + struct fwnode_handle *fwnode = of_node_to_fwnode(pci->dev->of_node);
>> +
>> + pp->irq_domain = irq_domain_create_linear(fwnode, pp->num_vectors,
>> + &dw_pcie_msi_domain_ops, pp);
>> + if (!pp->irq_domain) {
>> + dev_err(pci->dev, "failed to create IRQ domain\n");
>> + return -ENOMEM;
>> + }
>> +
>> + pp->msi_domain = pci_msi_create_irq_domain(fwnode,
>> + &dw_pcie_msi_domain_info,
>> + pp->irq_domain);
>> + if (!pp->msi_domain) {
>> + dev_err(pci->dev, "failed to create MSI domain\n");
>> + irq_domain_remove(pp->irq_domain);
>> + return -ENOMEM;
>> + }
>> +
>> + return 0;
>> +}
>> +
>> +void dw_pcie_free_msi(struct pcie_port *pp)
>> +{
>> + irq_set_chained_handler(pp->msi_irq, NULL);
>> + irq_set_handler_data(pp->msi_irq, NULL);
>> +
>> + irq_domain_remove(pp->msi_domain);
>> + irq_domain_remove(pp->irq_domain);
>> +}
>> +
>> void dw_pcie_msi_init(struct pcie_port *pp)
>> {
>> struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
>> @@ -99,20 +320,21 @@ void dw_pcie_msi_init(struct pcie_port *pp)
>>
>> /* program the msi_data */
>> dw_pcie_wr_own_conf(pp, PCIE_MSI_ADDR_LO, 4,
>> - (u32)(msi_target & 0xffffffff));
>> + lower_32_bits(msi_target));
>> dw_pcie_wr_own_conf(pp, PCIE_MSI_ADDR_HI, 4,
>> - (u32)(msi_target >> 32 & 0xffffffff));
>> + upper_32_bits(msi_target));
>> }
>>
>> static void dw_pcie_msi_clear_irq(struct pcie_port *pp, int irq)
>> {
>> - unsigned int res, bit, val;
>> + unsigned int res, bit, ctrl;
>>
>> - res = (irq / 32) * 12;
>> + ctrl = irq / 32;
>> + res = ctrl * 12;
>> bit = irq % 32;
>> - dw_pcie_rd_own_conf(pp, PCIE_MSI_INTR0_ENABLE + res, 4, &val);
>> - val &= ~(1 << bit);
>> - dw_pcie_wr_own_conf(pp, PCIE_MSI_INTR0_ENABLE + res, 4, val);
>> + pp->irq_status[ctrl] &= ~(1 << bit);
>> + dw_pcie_wr_own_conf(pp, PCIE_MSI_INTR0_ENABLE + res, 4,
>> + pp->irq_status[ctrl]);
>> }
>>
>> static void clear_irq_range(struct pcie_port *pp, unsigned int irq_base,
>> @@ -134,13 +356,14 @@ static void clear_irq_range(struct pcie_port *pp, unsigned int irq_base,
>>
>> static void dw_pcie_msi_set_irq(struct pcie_port *pp, int irq)
>> {
>> - unsigned int res, bit, val;
>> + unsigned int res, bit, ctrl;
>>
>> - res = (irq / 32) * 12;
>> + ctrl = irq / 32;
>> + res = ctrl * 12;
>> bit = irq % 32;
>> - dw_pcie_rd_own_conf(pp, PCIE_MSI_INTR0_ENABLE + res, 4, &val);
>> - val |= 1 << bit;
>> - dw_pcie_wr_own_conf(pp, PCIE_MSI_INTR0_ENABLE + res, 4, val);
>> + pp->irq_status[ctrl] |= 1 << bit;
>> + dw_pcie_wr_own_conf(pp, PCIE_MSI_INTR0_ENABLE + res, 4,
>> + pp->irq_status[ctrl]);
>> }
>>
>> static int assign_irq(int no_irqs, struct msi_desc *desc, int *pos)
>> @@ -288,11 +511,13 @@ int dw_pcie_host_init(struct pcie_port *pp)
>> struct device *dev = pci->dev;
>> struct device_node *np = dev->of_node;
>> struct platform_device *pdev = to_platform_device(dev);
>> + struct resource_entry *win, *tmp;
>> struct pci_bus *bus, *child;
>> struct pci_host_bridge *bridge;
>> struct resource *cfg_res;
>> - int i, ret;
>> - struct resource_entry *win, *tmp;
>> + int ret;
>> +
>> + spin_lock_init(&pci->pp.lock);
>>
>> cfg_res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "config");
>> if (cfg_res) {
>> @@ -391,18 +616,33 @@ int dw_pcie_host_init(struct pcie_port *pp)
>> pci->num_viewport = 2;
>>
>> if (IS_ENABLED(CONFIG_PCI_MSI)) {
>> - if (!pp->ops->msi_host_init) {
>> - pp->irq_domain = irq_domain_add_linear(dev->of_node,
>> - MAX_MSI_IRQS, &msi_domain_ops,
>> - &dw_pcie_msi_chip);
>> - if (!pp->irq_domain) {
>> - dev_err(dev, "irq domain init failed\n");
>> - ret = -ENXIO;
>> + /*
>> + * If a specific SoC driver needs to change the
>> + * default number of vectors, it needs to implement
>> + * the set_num_vectors callback.
>> + */
>> + if (!pp->ops->set_num_vectors) {
>> + pp->num_vectors = MSI_DEF_NUM_VECTORS;
>> + } else {
>> + pp->ops->set_num_vectors(pp);
>> +
>> + if (pp->num_vectors > MAX_MSI_IRQS ||
>> + pp->num_vectors == 0) {
>> + dev_err(dev,
>> + "Invalid number of vectors\n");
>> goto error;
>> }
>> + }
>>
>> - for (i = 0; i < MAX_MSI_IRQS; i++)
>> - irq_create_mapping(pp->irq_domain, i);
>> + if (!pp->ops->msi_host_init) {
>> + ret = dw_pcie_allocate_domains(pp);
>> + if (ret)
>> + goto error;
>> +
>> + if (pp->msi_irq)
>> + irq_set_chained_handler_and_data(pp->msi_irq,
>> + dw_chained_msi_isr,
>> + pp);
>> } else {
>> ret = pp->ops->msi_host_init(pp, &dw_pcie_msi_chip);
>> if (ret < 0)
>> @@ -424,10 +664,6 @@ int dw_pcie_host_init(struct pcie_port *pp)
>> bridge->ops = &dw_pcie_ops;
>> bridge->map_irq = of_irq_parse_and_map_pci;
>> bridge->swizzle_irq = pci_common_swizzle;
>> - if (IS_ENABLED(CONFIG_PCI_MSI)) {
>> - bridge->msi = &dw_pcie_msi_chip;
>> - dw_pcie_msi_chip.dev = dev;
>> - }
>>
>> ret = pci_scan_root_bus_bridge(bridge);
>> if (ret)
>> @@ -596,11 +832,15 @@ static u8 dw_pcie_iatu_unroll_enabled(struct dw_pcie *pci)
>>
>> void dw_pcie_setup_rc(struct pcie_port *pp)
>> {
>> - u32 val;
>> + u32 val, ctrl;
>> struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
>>
>> dw_pcie_setup(pci);
>>
>> + /* Initialize IRQ Status array */
>> + for (ctrl = 0; ctrl < MAX_MSI_CTRLS; ctrl++)
>> + dw_pcie_rd_own_conf(pp, PCIE_MSI_INTR0_ENABLE + (ctrl * 12), 4,
>> + &pp->irq_status[ctrl]);
>> /* setup RC BARs */
>> dw_pcie_writel_dbi(pci, PCI_BASE_ADDRESS_0, 0x00000004);
>> dw_pcie_writel_dbi(pci, PCI_BASE_ADDRESS_1, 0x00000000);
>> diff --git a/drivers/pci/dwc/pcie-designware.h b/drivers/pci/dwc/pcie-designware.h
>> index 8190465..b603e88 100644
>> --- a/drivers/pci/dwc/pcie-designware.h
>> +++ b/drivers/pci/dwc/pcie-designware.h
>> @@ -117,6 +117,7 @@
>> */
>> #define MAX_MSI_IRQS 32
>> #define MAX_MSI_CTRLS (MAX_MSI_IRQS / 32)
>> +#define MSI_DEF_NUM_VECTORS 32
>>
>> /* Maximum number of inbound/outbound iATUs */
>> #define MAX_IATU_IN 256
>> @@ -152,7 +153,9 @@ struct dw_pcie_host_ops {
>> phys_addr_t (*get_msi_addr)(struct pcie_port *pp);
>> u32 (*get_msi_data)(struct pcie_port *pp, int pos);
>> void (*scan_bus)(struct pcie_port *pp);
>> + void (*set_num_vectors)(struct pcie_port *pp);
>> int (*msi_host_init)(struct pcie_port *pp, struct msi_controller *chip);
>> + void (*msi_irq_ack)(int irq, struct pcie_port *pp);
>> };
>>
>> struct pcie_port {
>> @@ -177,7 +180,11 @@ struct pcie_port {
>> const struct dw_pcie_host_ops *ops;
>> int msi_irq;
>> struct irq_domain *irq_domain;
>> + struct irq_domain *msi_domain;
>> dma_addr_t msi_data;
>> + u32 num_vectors;
>> + u32 irq_status[MAX_MSI_CTRLS];
>> + spinlock_t lock;
>> DECLARE_BITMAP(msi_irq_in_use, MAX_MSI_IRQS);
>> };
>>
>> @@ -319,8 +326,10 @@ static inline void dw_pcie_dbi_ro_wr_dis(struct dw_pcie *pci)
>> #ifdef CONFIG_PCIE_DW_HOST
>> irqreturn_t dw_handle_msi_irq(struct pcie_port *pp);
>> void dw_pcie_msi_init(struct pcie_port *pp);
>> +void dw_pcie_free_msi(struct pcie_port *pp);
>> void dw_pcie_setup_rc(struct pcie_port *pp);
>> int dw_pcie_host_init(struct pcie_port *pp);
>> +int dw_pcie_allocate_domains(struct pcie_port *pp);
>> #else
>> static inline irqreturn_t dw_handle_msi_irq(struct pcie_port *pp)
>> {
>> @@ -331,6 +340,10 @@ static inline void dw_pcie_msi_init(struct pcie_port *pp)
>> {
>> }
>>
>> +static inline void dw_pcie_free_msi(struct pcie_port *pp)
>> +{
>> +}
>> +
>> static inline void dw_pcie_setup_rc(struct pcie_port *pp)
>> {
>> }
>> @@ -339,6 +352,11 @@ static inline int dw_pcie_host_init(struct pcie_port *pp)
>> {
>> return 0;
>> }
>> +
>> +static inline int dw_pcie_allocate_domains(struct pcie_port *pp)
>> +{
>> + return 0;
>> +}
>> #endif
>>
>> #ifdef CONFIG_PCIE_DW_EP
>>
>
> Thanks,
>
> M.
>
Thanks,
Gustavo
* Re: [PATCH v6 1/9] PCI: dwc: Add IRQ chained API support
2018-02-12 17:58 ` Gustavo Pimentel
@ 2018-02-12 18:15 ` Marc Zyngier
0 siblings, 0 replies; 18+ messages in thread
From: Marc Zyngier @ 2018-02-12 18:15 UTC (permalink / raw)
To: Gustavo Pimentel, Joao.Pinto@synopsys.com, bhelgaas@google.com,
jingoohan1@gmail.com, kishon@ti.com
Cc: linux-pci@vger.kernel.org, m-karicheri2@ti.com,
thomas.petazzoni@free-electrons.com, minghuan.Lian@freescale.com,
mingkai.hu@freescale.com, tie-fei.zang@freescale.com,
hongxing.zhu@nxp.com, l.stach@pengutronix.de,
niklas.cassel@axis.com, jesper.nilsson@axis.com,
wangzhou1@hisilicon.com, gabriele.paoloni@huawei.com,
svarbanov@mm-sol.com, nsekhar@ti.com, Lorenzo Pieralisi
On 12/02/18 17:58, Gustavo Pimentel wrote:
> Hi Marc,
>
> On 09/02/2018 11:10, Marc Zyngier wrote:
>> Hi Gustavo,
>>
>> On 26/01/18 16:35, Gustavo Pimentel wrote:
>>> Adds a IRQ chained API to pcie-designware, that aims to replace the current
>>> IRQ domain hierarchy API implemented.
>>>
>>> Although the IRQ domain hierarchy API is still available, pcie-designware
>>> will use now the IRQ chained API.
>>
>> I'm a bit thrown by this comment, as I don't think it reflects the
>> reality of the code.
>>
>> What you have here is a domain hierarchy (generic PCI MSI layer and HW
>> specific domain), *and* a multiplexer (chained interrupt) that funnels
>> all your MSIs into a single parent interrupt. What you're moving away
>> from is the msi_controller API.
>>
>> This has no impact on the code, but it helps to use the correct terminology.
>>
>
> Ok, I'll fix the terminology and descriptions.
>
>>>
>>> Signed-off-by: Gustavo Pimentel <gustavo.pimentel@synopsys.com>
>>> Tested-by: Niklas Cassel <niklas.cassel@axis.com>
>>> ---
>>> Change v1->v2:
>>> - num_vectors is now not configurable by the Device Tree. Now it is 32 by
>>> default and can be overridden by any specific SoC driver.
>>> Change v2->v3:
>>> - Nothing changed, just to follow the patch set version.
>>> Change v3->v4:
>>> - Moved Kishon's fixes (PCI end point error and a dra7xx warning) from
>>> v3-0007 patch file to here.
>>> - Undo the change of the function signature to be more coherent with the
>>> host mode specific functions (Thanks Kishon).
>>> - Refactor the added functions in order to used host_data so that getting
>>> pp again back from pci could be avoided. (Thanks Kishon)
>>> - Changed summary line to match the drivers/PCI convention and changelog to
>>> maintain the consistency (thanks Bjorn).
>>> Change v4->v5:
>>> - Implemented Kishon MSI multiple messages fix (thanks Kishon).
>>> Change v5->v6:
>>> - Fixed rebase problem detected by Niklas (thanks Niklas).
>>>
>>> drivers/pci/dwc/pcie-designware-host.c | 296 +++++++++++++++++++++++++++++----
>>> drivers/pci/dwc/pcie-designware.h | 18 ++
>>> 2 files changed, 286 insertions(+), 28 deletions(-)
>>>
>>> diff --git a/drivers/pci/dwc/pcie-designware-host.c b/drivers/pci/dwc/pcie-designware-host.c
>>> index bf558df..f7b2bce 100644
>>> --- a/drivers/pci/dwc/pcie-designware-host.c
>>> +++ b/drivers/pci/dwc/pcie-designware-host.c
>>> @@ -11,6 +11,7 @@
>>> * published by the Free Software Foundation.
>>> */
>>>
>>> +#include <linux/irqchip/chained_irq.h>
>>> #include <linux/irqdomain.h>
>>> #include <linux/of_address.h>
>>> #include <linux/of_pci.h>
>>> @@ -53,6 +54,36 @@ static struct irq_chip dw_msi_irq_chip = {
>>> .irq_unmask = pci_msi_unmask_irq,
>>> };
>>>
>>> +static void dw_msi_ack_irq(struct irq_data *d)
>>> +{
>>> + irq_chip_ack_parent(d);
>>> +}
>>> +
>>> +static void dw_msi_mask_irq(struct irq_data *d)
>>> +{
>>> + pci_msi_mask_irq(d);
>>> + irq_chip_mask_parent(d);
>>> +}
>>> +
>>> +static void dw_msi_unmask_irq(struct irq_data *d)
>>> +{
>>> + pci_msi_unmask_irq(d);
>>> + irq_chip_unmask_parent(d);
>>> +}
>>> +
>>> +static struct irq_chip dw_pcie_msi_irq_chip = {
>>> + .name = "PCI-MSI",
>>> + .irq_ack = dw_msi_ack_irq,
>>> + .irq_mask = dw_msi_mask_irq,
>>> + .irq_unmask = dw_msi_unmask_irq,
>>> +};
>>> +
>>> +static struct msi_domain_info dw_pcie_msi_domain_info = {
>>> + .flags = (MSI_FLAG_USE_DEF_DOM_OPS | MSI_FLAG_USE_DEF_CHIP_OPS |
>>> + MSI_FLAG_PCI_MSIX | MSI_FLAG_MULTI_PCI_MSI),
>>> + .chip = &dw_pcie_msi_irq_chip,
>>> +};
>>> +
>>> /* MSI int handler */
>>> irqreturn_t dw_handle_msi_irq(struct pcie_port *pp)
>>> {
>>> @@ -81,6 +112,196 @@ irqreturn_t dw_handle_msi_irq(struct pcie_port *pp)
>>> return ret;
>>> }
>>>
>>> +/* Chained MSI interrupt service routine */
>>> +static void dw_chained_msi_isr(struct irq_desc *desc)
>>> +{
>>> + struct irq_chip *chip = irq_desc_get_chip(desc);
>>> + struct pcie_port *pp;
>>> +
>>> + chained_irq_enter(chip, desc);
>>> +
>>> + pp = irq_desc_get_handler_data(desc);
>>> + dw_handle_msi_irq(pp);
>>> +
>>> + chained_irq_exit(chip, desc);
>>> +}
>>
>> Nit: once you're done with the msi_controller stuff (in patch #8), it'd
>> be good to move dw_handle_msi_irq() inside this function, as the
>> irqreturn_t return type doesn't make much sense any more.
>
> Hmm, I like your suggestion; however, there are two drivers (HiSilicon STB and
> STMicroelectronics SPEAr13xx) that use the dw_handle_msi_irq function directly. I
> would suggest doing this "cleanup" in another patch series, just to keep this
> patch simple, if you don't mind.
/me rolls eyes. Holy crap, I didn't notice that. This is terrifying. OK,
let's put that on the list of "things that should not be", and let's
focus on your series.
Thanks,
M.
--
Jazz is not dead. It just smells funny...
* [PATCH v6 2/9] PCI: dwc: exynos: Switch to use the IRQ chained API
2018-01-26 16:35 [PATCH v6 0/9] PCI: dwc: MSI-X feature Gustavo Pimentel
2018-01-26 16:35 ` [PATCH v6 1/9] PCI: dwc: Add IRQ chained API support Gustavo Pimentel
@ 2018-01-26 16:35 ` Gustavo Pimentel
2018-01-26 16:35 ` [PATCH v6 3/9] PCI: dwc: imx6: " Gustavo Pimentel
` (7 subsequent siblings)
9 siblings, 0 replies; 18+ messages in thread
From: Gustavo Pimentel @ 2018-01-26 16:35 UTC (permalink / raw)
To: marc.zyngier, Joao.Pinto, bhelgaas, jingoohan1, kishon
Cc: linux-pci, m-karicheri2, thomas.petazzoni, minghuan.Lian,
mingkai.hu, tie-fei.zang, hongxing.zhu, l.stach, niklas.cassel,
jesper.nilsson, wangzhou1, gabriele.paoloni, svarbanov, nsekhar,
Gustavo Pimentel
Changes Exynos SoC specific driver to use the IRQ chained API available in
pcie-designware.
Signed-off-by: Gustavo Pimentel <gustavo.pimentel@synopsys.com>
Acked-by: Jingoo Han <jingoohan1@gmail.com>
---
Change v1->v2:
- Nothing changed, just to follow the patch set version.
Change v2->v3:
- Nothing changed, just to follow the patch set version.
Change v3->v4:
- Changed summary line to match the drivers/PCI convention and changelog to
maintain the consistency (thanks Bjorn).
Change v4->v5:
- Nothing changed, just to follow the patch set version.
Change v5->v6:
- Nothing changed, just to follow the patch set version.
drivers/pci/dwc/pci-exynos.c | 18 ------------------
1 file changed, 18 deletions(-)
diff --git a/drivers/pci/dwc/pci-exynos.c b/drivers/pci/dwc/pci-exynos.c
index 56f32ae..0a6b283d 100644
--- a/drivers/pci/dwc/pci-exynos.c
+++ b/drivers/pci/dwc/pci-exynos.c
@@ -297,15 +297,6 @@ static irqreturn_t exynos_pcie_irq_handler(int irq, void *arg)
return IRQ_HANDLED;
}
-static irqreturn_t exynos_pcie_msi_irq_handler(int irq, void *arg)
-{
- struct exynos_pcie *ep = arg;
- struct dw_pcie *pci = ep->pci;
- struct pcie_port *pp = &pci->pp;
-
- return dw_handle_msi_irq(pp);
-}
-
static void exynos_pcie_msi_init(struct exynos_pcie *ep)
{
struct dw_pcie *pci = ep->pci;
@@ -431,15 +422,6 @@ static int __init exynos_add_pcie_port(struct exynos_pcie *ep,
dev_err(dev, "failed to get msi irq\n");
return pp->msi_irq;
}
-
- ret = devm_request_irq(dev, pp->msi_irq,
- exynos_pcie_msi_irq_handler,
- IRQF_SHARED | IRQF_NO_THREAD,
- "exynos-pcie", ep);
- if (ret) {
- dev_err(dev, "failed to request msi irq\n");
- return ret;
- }
}
pp->root_bus_nr = -1;
--
2.7.4
* [PATCH v6 3/9] PCI: dwc: imx6: Switch to use the IRQ chained API
2018-01-26 16:35 [PATCH v6 0/9] PCI: dwc: MSI-X feature Gustavo Pimentel
2018-01-26 16:35 ` [PATCH v6 1/9] PCI: dwc: Add IRQ chained API support Gustavo Pimentel
2018-01-26 16:35 ` [PATCH v6 2/9] PCI: dwc: exynos: Switch to use the IRQ chained API Gustavo Pimentel
@ 2018-01-26 16:35 ` Gustavo Pimentel
2018-01-26 16:35 ` [PATCH v6 4/9] PCI: dwc: artpec6: " Gustavo Pimentel
` (6 subsequent siblings)
9 siblings, 0 replies; 18+ messages in thread
From: Gustavo Pimentel @ 2018-01-26 16:35 UTC (permalink / raw)
To: marc.zyngier, Joao.Pinto, bhelgaas, jingoohan1, kishon
Cc: linux-pci, m-karicheri2, thomas.petazzoni, minghuan.Lian,
mingkai.hu, tie-fei.zang, hongxing.zhu, l.stach, niklas.cassel,
jesper.nilsson, wangzhou1, gabriele.paoloni, svarbanov, nsekhar,
Gustavo Pimentel
Changes iMX6 SoC specific driver to use the IRQ chained API available in
pcie-designware.
Signed-off-by: Gustavo Pimentel <gustavo.pimentel@synopsys.com>
---
Change v1->v2:
- Nothing changed, just to follow the patch set version.
Change v2->v3:
- Nothing changed, just to follow the patch set version.
Change v3->v4:
- Changed summary line to match the drivers/PCI convention and changelog to
maintain the consistency (thanks Bjorn).
Change v4->v5:
- Nothing changed, just to follow the patch set version.
Change v5->v6:
- Nothing changed, just to follow the patch set version.
drivers/pci/dwc/pci-imx6.c | 18 ------------------
1 file changed, 18 deletions(-)
diff --git a/drivers/pci/dwc/pci-imx6.c b/drivers/pci/dwc/pci-imx6.c
index b734835..d2314cb 100644
--- a/drivers/pci/dwc/pci-imx6.c
+++ b/drivers/pci/dwc/pci-imx6.c
@@ -545,15 +545,6 @@ static int imx6_pcie_wait_for_speed_change(struct imx6_pcie *imx6_pcie)
return -EINVAL;
}
-static irqreturn_t imx6_pcie_msi_handler(int irq, void *arg)
-{
- struct imx6_pcie *imx6_pcie = arg;
- struct dw_pcie *pci = imx6_pcie->pci;
- struct pcie_port *pp = &pci->pp;
-
- return dw_handle_msi_irq(pp);
-}
-
static int imx6_pcie_establish_link(struct imx6_pcie *imx6_pcie)
{
struct dw_pcie *pci = imx6_pcie->pci;
@@ -677,15 +668,6 @@ static int imx6_add_pcie_port(struct imx6_pcie *imx6_pcie,
dev_err(dev, "failed to get MSI irq\n");
return -ENODEV;
}
-
- ret = devm_request_irq(dev, pp->msi_irq,
- imx6_pcie_msi_handler,
- IRQF_SHARED | IRQF_NO_THREAD,
- "mx6-pcie-msi", imx6_pcie);
- if (ret) {
- dev_err(dev, "failed to request MSI irq\n");
- return ret;
- }
}
pp->root_bus_nr = -1;
--
2.7.4
* [PATCH v6 4/9] PCI: dwc: artpec6: Switch to use the IRQ chained API
2018-01-26 16:35 [PATCH v6 0/9] PCI: dwc: MSI-X feature Gustavo Pimentel
` (2 preceding siblings ...)
2018-01-26 16:35 ` [PATCH v6 3/9] PCI: dwc: imx6: " Gustavo Pimentel
@ 2018-01-26 16:35 ` Gustavo Pimentel
2018-01-26 16:35 ` [PATCH v6 5/9] PCI: dwc: designware: " Gustavo Pimentel
` (5 subsequent siblings)
9 siblings, 0 replies; 18+ messages in thread
From: Gustavo Pimentel @ 2018-01-26 16:35 UTC (permalink / raw)
To: marc.zyngier, Joao.Pinto, bhelgaas, jingoohan1, kishon
Cc: linux-pci, m-karicheri2, thomas.petazzoni, minghuan.Lian,
mingkai.hu, tie-fei.zang, hongxing.zhu, l.stach, niklas.cassel,
jesper.nilsson, wangzhou1, gabriele.paoloni, svarbanov, nsekhar,
Gustavo Pimentel
Changes ARTPEC-6 SoC specific driver to use the IRQ chained API available
in pcie-designware.
Signed-off-by: Gustavo Pimentel <gustavo.pimentel@synopsys.com>
---
Change v1->v2:
- Nothing changed, just to follow the patch set version.
Change v2->v3:
- Nothing changed, just to follow the patch set version.
Change v3->v4:
- Changed summary line to match the drivers/PCI convention and changelog to
maintain the consistency (thanks Bjorn).
Change v4->v5:
- Nothing changed, just to follow the patch set version.
Change v5->v6:
- Nothing changed, just to follow the patch set version.
drivers/pci/dwc/pcie-artpec6.c | 18 ------------------
1 file changed, 18 deletions(-)
diff --git a/drivers/pci/dwc/pcie-artpec6.c b/drivers/pci/dwc/pcie-artpec6.c
index b1e9820..5ec02ed 100644
--- a/drivers/pci/dwc/pcie-artpec6.c
+++ b/drivers/pci/dwc/pcie-artpec6.c
@@ -386,15 +386,6 @@ static const struct dw_pcie_host_ops artpec6_pcie_host_ops = {
.host_init = artpec6_pcie_host_init,
};
-static irqreturn_t artpec6_pcie_msi_handler(int irq, void *arg)
-{
- struct artpec6_pcie *artpec6_pcie = arg;
- struct dw_pcie *pci = artpec6_pcie->pci;
- struct pcie_port *pp = &pci->pp;
-
- return dw_handle_msi_irq(pp);
-}
-
static int artpec6_add_pcie_port(struct artpec6_pcie *artpec6_pcie,
struct platform_device *pdev)
{
@@ -409,15 +400,6 @@ static int artpec6_add_pcie_port(struct artpec6_pcie *artpec6_pcie,
dev_err(dev, "failed to get MSI irq\n");
return pp->msi_irq;
}
-
- ret = devm_request_irq(dev, pp->msi_irq,
- artpec6_pcie_msi_handler,
- IRQF_SHARED | IRQF_NO_THREAD,
- "artpec6-pcie-msi", artpec6_pcie);
- if (ret) {
- dev_err(dev, "failed to request MSI irq\n");
- return ret;
- }
}
pp->root_bus_nr = -1;
--
2.7.4
* [PATCH v6 5/9] PCI: dwc: designware: Switch to use the IRQ chained API
2018-01-26 16:35 [PATCH v6 0/9] PCI: dwc: MSI-X feature Gustavo Pimentel
` (3 preceding siblings ...)
2018-01-26 16:35 ` [PATCH v6 4/9] PCI: dwc: artpec6: " Gustavo Pimentel
@ 2018-01-26 16:35 ` Gustavo Pimentel
2018-01-26 16:35 ` [PATCH v6 6/9] PCI: dwc: qcom: " Gustavo Pimentel
` (4 subsequent siblings)
9 siblings, 0 replies; 18+ messages in thread
From: Gustavo Pimentel @ 2018-01-26 16:35 UTC (permalink / raw)
To: marc.zyngier, Joao.Pinto, bhelgaas, jingoohan1, kishon
Cc: linux-pci, m-karicheri2, thomas.petazzoni, minghuan.Lian,
mingkai.hu, tie-fei.zang, hongxing.zhu, l.stach, niklas.cassel,
jesper.nilsson, wangzhou1, gabriele.paoloni, svarbanov, nsekhar,
Gustavo Pimentel
Changes designware generic driver to use the IRQ chained API available in
pcie-designware.
Signed-off-by: Gustavo Pimentel <gustavo.pimentel@synopsys.com>
---
Change v1->v2:
- Nothing changed, just to follow the patch set version.
Change v2->v3:
- Nothing changed, just to follow the patch set version.
Change v3->v4:
- Changed summary line to match the drivers/PCI convention and changelog to
maintain the consistency (thanks Bjorn).
Change v4->v5:
- Nothing changed, just to follow the patch set version.
Change v5->v6:
- Nothing changed, just to follow the patch set version.
drivers/pci/dwc/pcie-designware-plat.c | 16 ----------------
1 file changed, 16 deletions(-)
diff --git a/drivers/pci/dwc/pcie-designware-plat.c b/drivers/pci/dwc/pcie-designware-plat.c
index 168e238..c148152 100644
--- a/drivers/pci/dwc/pcie-designware-plat.c
+++ b/drivers/pci/dwc/pcie-designware-plat.c
@@ -28,13 +28,6 @@ struct dw_plat_pcie {
struct dw_pcie *pci;
};
-static irqreturn_t dw_plat_pcie_msi_irq_handler(int irq, void *arg)
-{
- struct pcie_port *pp = arg;
-
- return dw_handle_msi_irq(pp);
-}
-
static int dw_plat_pcie_host_init(struct pcie_port *pp)
{
struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
@@ -66,15 +59,6 @@ static int dw_plat_add_pcie_port(struct pcie_port *pp,
pp->msi_irq = platform_get_irq(pdev, 0);
if (pp->msi_irq < 0)
return pp->msi_irq;
-
- ret = devm_request_irq(dev, pp->msi_irq,
- dw_plat_pcie_msi_irq_handler,
- IRQF_SHARED | IRQF_NO_THREAD,
- "dw-plat-pcie-msi", pp);
- if (ret) {
- dev_err(dev, "failed to request MSI IRQ\n");
- return ret;
- }
}
pp->root_bus_nr = -1;
--
2.7.4
* [PATCH v6 6/9] PCI: dwc: qcom: Switch to use the IRQ chained API
2018-01-26 16:35 [PATCH v6 0/9] PCI: dwc: MSI-X feature Gustavo Pimentel
` (4 preceding siblings ...)
2018-01-26 16:35 ` [PATCH v6 5/9] PCI: dwc: designware: " Gustavo Pimentel
@ 2018-01-26 16:35 ` Gustavo Pimentel
2018-01-26 16:35 ` [PATCH v6 7/9] PCI: dwc: keystone: " Gustavo Pimentel
` (3 subsequent siblings)
9 siblings, 0 replies; 18+ messages in thread
From: Gustavo Pimentel @ 2018-01-26 16:35 UTC (permalink / raw)
To: marc.zyngier, Joao.Pinto, bhelgaas, jingoohan1, kishon
Cc: linux-pci, m-karicheri2, thomas.petazzoni, minghuan.Lian,
mingkai.hu, tie-fei.zang, hongxing.zhu, l.stach, niklas.cassel,
jesper.nilsson, wangzhou1, gabriele.paoloni, svarbanov, nsekhar,
Gustavo Pimentel
Changes the QCOM SoC-specific driver to use the IRQ chained API available
in pcie-designware.
Signed-off-by: Gustavo Pimentel <gustavo.pimentel@synopsys.com>
---
Change v1->v2:
- Nothing changed, just to follow the patch set version.
Change v2->v3:
- Nothing changed, just to follow the patch set version.
Change v3->v4:
- Changed summary line to match the drivers/PCI convention and changelog to
maintain the consistency (thanks Bjorn).
Change v4->v5:
- Nothing changed, just to follow the patch set version.
Change v5->v6:
- Nothing changed, just to follow the patch set version.
drivers/pci/dwc/pcie-qcom.c | 16 ----------------
1 file changed, 16 deletions(-)
diff --git a/drivers/pci/dwc/pcie-qcom.c b/drivers/pci/dwc/pcie-qcom.c
index b3adc10..6e70073 100644
--- a/drivers/pci/dwc/pcie-qcom.c
+++ b/drivers/pci/dwc/pcie-qcom.c
@@ -188,13 +188,6 @@ static void qcom_ep_reset_deassert(struct qcom_pcie *pcie)
usleep_range(PERST_DELAY_US, PERST_DELAY_US + 500);
}
-static irqreturn_t qcom_pcie_msi_irq_handler(int irq, void *arg)
-{
- struct pcie_port *pp = arg;
-
- return dw_handle_msi_irq(pp);
-}
-
static int qcom_pcie_establish_link(struct qcom_pcie *pcie)
{
struct dw_pcie *pci = pcie->pci;
@@ -1270,15 +1263,6 @@ static int qcom_pcie_probe(struct platform_device *pdev)
pp->msi_irq = platform_get_irq_byname(pdev, "msi");
if (pp->msi_irq < 0)
return pp->msi_irq;
-
- ret = devm_request_irq(dev, pp->msi_irq,
- qcom_pcie_msi_irq_handler,
- IRQF_SHARED | IRQF_NO_THREAD,
- "qcom-pcie-msi", pp);
- if (ret) {
- dev_err(dev, "cannot request msi irq\n");
- return ret;
- }
}
ret = phy_init(pcie->phy);
--
2.7.4
* [PATCH v6 7/9] PCI: dwc: keystone: Switch to use the IRQ chained API
2018-01-26 16:35 [PATCH v6 0/9] PCI: dwc: MSI-X feature Gustavo Pimentel
` (5 preceding siblings ...)
2018-01-26 16:35 ` [PATCH v6 6/9] PCI: dwc: qcom: " Gustavo Pimentel
@ 2018-01-26 16:35 ` Gustavo Pimentel
2018-01-26 16:35 ` [PATCH v6 8/9] PCI: dwc: Remove IRQ domain hierarchy API support Gustavo Pimentel
` (2 subsequent siblings)
9 siblings, 0 replies; 18+ messages in thread
From: Gustavo Pimentel @ 2018-01-26 16:35 UTC (permalink / raw)
To: marc.zyngier, Joao.Pinto, bhelgaas, jingoohan1, kishon
Cc: linux-pci, m-karicheri2, thomas.petazzoni, minghuan.Lian,
mingkai.hu, tie-fei.zang, hongxing.zhu, l.stach, niklas.cassel,
jesper.nilsson, wangzhou1, gabriele.paoloni, svarbanov, nsekhar,
Gustavo Pimentel
Changes the KeyStone SoC-specific driver to use the IRQ chained API
available in pcie-designware.
Adds a new callback (ks_dw_pcie_msi_irq_ack) to dw_pcie_host_ops to handle
KeyStone-specific behavior.
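For context, the ack path this callback implements pairs a register/bit decomposition with a status write, as the hunk below shows. The sketch here is illustrative only: update_reg_offset_bit_pos() is not part of this patch, so the interleaved `% 8` / `>> 3` split and the MSI0_IRQ_STATUS value are assumptions; only the `MSI0_IRQ_STATUS + (reg_offset << 4)` / `BIT(bit_pos)` shape is taken from the diff.

```c
#include <assert.h>
#include <stdint.h>

#define MSI0_IRQ_STATUS 0x104	/* hypothetical base offset */
#define BIT(n) (1U << (n))

/*
 * Hypothetical decomposition (assumption): MSI vectors interleaved
 * across 8 status registers, banks spaced 0x10 apart.
 */
static void update_reg_offset_bit_pos(uint32_t offset, uint32_t *reg_offset,
				      uint32_t *bit_pos)
{
	*reg_offset = offset % 8;
	*bit_pos = offset >> 3;
}

/* Compute the (address, value) pair the ack write would use. */
static void msi_irq_ack(uint32_t irq, uint32_t *addr, uint32_t *val)
{
	uint32_t reg_offset, bit_pos;

	update_reg_offset_bit_pos(irq, &reg_offset, &bit_pos);
	*addr = MSI0_IRQ_STATUS + (reg_offset << 4);
	*val = BIT(bit_pos);
}
```

Under these assumptions, vector 9 lands in the second register bank (0x114) at bit 1; the real mapping depends on the SoC's MSI wiring.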
Signed-off-by: Gustavo Pimentel <gustavo.pimentel@synopsys.com>
---
Change v1->v2:
- Removed hardcoded num_vectors configuration (now it is done in the core
driver by default).
Change v2->v3:
- Added Kishon's fixes (PCI end point error and a dra7xx warning).
Change v3->v4:
- Moved Kishon's fixes to v4-0001 patch file.
- Moved msi_host_init function signature change to a new v4-0008 patch file.
- Changed summary line to match the drivers/PCI convention and changelog to
maintain the consistency (thanks Bjorn).
Change v4->v5:
- Fixed rebase problem.
- Implemented Kishon's suggestion.
Change v5->v6:
- Nothing changed, just to follow the patch set version.
drivers/pci/dwc/pci-keystone-dw.c | 89 ++-------------------------------------
drivers/pci/dwc/pci-keystone.c | 1 +
drivers/pci/dwc/pci-keystone.h | 1 +
3 files changed, 6 insertions(+), 85 deletions(-)
diff --git a/drivers/pci/dwc/pci-keystone-dw.c b/drivers/pci/dwc/pci-keystone-dw.c
index 2fb20b8..ace498e 100644
--- a/drivers/pci/dwc/pci-keystone-dw.c
+++ b/drivers/pci/dwc/pci-keystone-dw.c
@@ -124,20 +124,15 @@ void ks_dw_pcie_handle_msi_irq(struct keystone_pcie *ks_pcie, int offset)
}
}
-static void ks_dw_pcie_msi_irq_ack(struct irq_data *d)
+void ks_dw_pcie_msi_irq_ack(int irq, struct pcie_port *pp)
{
- u32 offset, reg_offset, bit_pos;
+ u32 reg_offset, bit_pos;
struct keystone_pcie *ks_pcie;
- struct msi_desc *msi;
- struct pcie_port *pp;
struct dw_pcie *pci;
- msi = irq_data_get_msi_desc(d);
- pp = (struct pcie_port *) msi_desc_to_pci_sysdata(msi);
pci = to_dw_pcie_from_pp(pp);
ks_pcie = to_keystone_pcie(pci);
- offset = d->irq - irq_linear_revmap(pp->irq_domain, 0);
- update_reg_offset_bit_pos(offset, &reg_offset, &bit_pos);
+ update_reg_offset_bit_pos(irq, &reg_offset, &bit_pos);
ks_dw_app_writel(ks_pcie, MSI0_IRQ_STATUS + (reg_offset << 4),
BIT(bit_pos));
@@ -166,85 +161,9 @@ void ks_dw_pcie_msi_clear_irq(struct pcie_port *pp, int irq)
BIT(bit_pos));
}
-static void ks_dw_pcie_msi_irq_mask(struct irq_data *d)
-{
- struct msi_desc *msi;
- struct pcie_port *pp;
- u32 offset;
-
- msi = irq_data_get_msi_desc(d);
- pp = (struct pcie_port *) msi_desc_to_pci_sysdata(msi);
- offset = d->irq - irq_linear_revmap(pp->irq_domain, 0);
-
- /* Mask the end point if PVM implemented */
- if (IS_ENABLED(CONFIG_PCI_MSI)) {
- if (msi->msi_attrib.maskbit)
- pci_msi_mask_irq(d);
- }
-
- ks_dw_pcie_msi_clear_irq(pp, offset);
-}
-
-static void ks_dw_pcie_msi_irq_unmask(struct irq_data *d)
-{
- struct msi_desc *msi;
- struct pcie_port *pp;
- u32 offset;
-
- msi = irq_data_get_msi_desc(d);
- pp = (struct pcie_port *) msi_desc_to_pci_sysdata(msi);
- offset = d->irq - irq_linear_revmap(pp->irq_domain, 0);
-
- /* Mask the end point if PVM implemented */
- if (IS_ENABLED(CONFIG_PCI_MSI)) {
- if (msi->msi_attrib.maskbit)
- pci_msi_unmask_irq(d);
- }
-
- ks_dw_pcie_msi_set_irq(pp, offset);
-}
-
-static struct irq_chip ks_dw_pcie_msi_irq_chip = {
- .name = "Keystone-PCIe-MSI-IRQ",
- .irq_ack = ks_dw_pcie_msi_irq_ack,
- .irq_mask = ks_dw_pcie_msi_irq_mask,
- .irq_unmask = ks_dw_pcie_msi_irq_unmask,
-};
-
-static int ks_dw_pcie_msi_map(struct irq_domain *domain, unsigned int irq,
- irq_hw_number_t hwirq)
-{
- irq_set_chip_and_handler(irq, &ks_dw_pcie_msi_irq_chip,
- handle_level_irq);
- irq_set_chip_data(irq, domain->host_data);
-
- return 0;
-}
-
-static const struct irq_domain_ops ks_dw_pcie_msi_domain_ops = {
- .map = ks_dw_pcie_msi_map,
-};
-
int ks_dw_pcie_msi_host_init(struct pcie_port *pp, struct msi_controller *chip)
{
- struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
- struct keystone_pcie *ks_pcie = to_keystone_pcie(pci);
- struct device *dev = pci->dev;
- int i;
-
- pp->irq_domain = irq_domain_add_linear(ks_pcie->msi_intc_np,
- MAX_MSI_IRQS,
- &ks_dw_pcie_msi_domain_ops,
- chip);
- if (!pp->irq_domain) {
- dev_err(dev, "irq domain init failed\n");
- return -ENXIO;
- }
-
- for (i = 0; i < MAX_MSI_IRQS; i++)
- irq_create_mapping(pp->irq_domain, i);
-
- return 0;
+ return dw_pcie_allocate_domains(pp);
}
void ks_dw_pcie_enable_legacy_irqs(struct keystone_pcie *ks_pcie)
diff --git a/drivers/pci/dwc/pci-keystone.c b/drivers/pci/dwc/pci-keystone.c
index 3940559..3909c50 100644
--- a/drivers/pci/dwc/pci-keystone.c
+++ b/drivers/pci/dwc/pci-keystone.c
@@ -300,6 +300,7 @@ static const struct dw_pcie_host_ops keystone_pcie_host_ops = {
.msi_clear_irq = ks_dw_pcie_msi_clear_irq,
.get_msi_addr = ks_dw_pcie_get_msi_addr,
.msi_host_init = ks_dw_pcie_msi_host_init,
+ .msi_irq_ack = ks_dw_pcie_msi_irq_ack,
.scan_bus = ks_dw_pcie_v3_65_scan_bus,
};
diff --git a/drivers/pci/dwc/pci-keystone.h b/drivers/pci/dwc/pci-keystone.h
index 30b7bc2..aed3c73 100644
--- a/drivers/pci/dwc/pci-keystone.h
+++ b/drivers/pci/dwc/pci-keystone.h
@@ -53,6 +53,7 @@ int ks_dw_pcie_rd_other_conf(struct pcie_port *pp, struct pci_bus *bus,
unsigned int devfn, int where, int size, u32 *val);
void ks_dw_pcie_setup_rc_app_regs(struct keystone_pcie *ks_pcie);
void ks_dw_pcie_initiate_link_train(struct keystone_pcie *ks_pcie);
+void ks_dw_pcie_msi_irq_ack(int i, struct pcie_port *pp);
void ks_dw_pcie_msi_set_irq(struct pcie_port *pp, int irq);
void ks_dw_pcie_msi_clear_irq(struct pcie_port *pp, int irq);
void ks_dw_pcie_v3_65_scan_bus(struct pcie_port *pp);
--
2.7.4
* [PATCH v6 8/9] PCI: dwc: Remove IRQ domain hierarchy API support
2018-01-26 16:35 [PATCH v6 0/9] PCI: dwc: MSI-X feature Gustavo Pimentel
` (6 preceding siblings ...)
2018-01-26 16:35 ` [PATCH v6 7/9] PCI: dwc: keystone: " Gustavo Pimentel
@ 2018-01-26 16:35 ` Gustavo Pimentel
2018-01-26 16:35 ` [PATCH v6 9/9] PCI: dwc: Expand maximum number of IRQs from 32 to 256 Gustavo Pimentel
2018-02-12 12:23 ` [PATCH v6 0/9] PCI: dwc: MSI-X feature Lorenzo Pieralisi
9 siblings, 0 replies; 18+ messages in thread
From: Gustavo Pimentel @ 2018-01-26 16:35 UTC (permalink / raw)
To: marc.zyngier, Joao.Pinto, bhelgaas, jingoohan1, kishon
Cc: linux-pci, m-karicheri2, thomas.petazzoni, minghuan.Lian,
mingkai.hu, tie-fei.zang, hongxing.zhu, l.stach, niklas.cassel,
jesper.nilsson, wangzhou1, gabriele.paoloni, svarbanov, nsekhar,
Gustavo Pimentel
Removes the unused IRQ domain hierarchy API support from pcie-designware.
Signed-off-by: Gustavo Pimentel <gustavo.pimentel@synopsys.com>
---
Change v1->v2:
- Nothing changed, just to follow the patch set version.
Change v2->v3:
- Nothing changed, just to follow the patch set version.
Change v3->v4:
- Patch renamed from v3-0008 to v4-0009.
- Changed summary line to match the drivers/PCI convention and changelog to
maintain the consistency (thanks Bjorn).
Change v4->v5:
- Nothing changed, just to follow the patch set version.
Change v5->v6:
- Nothing changed, just to follow the patch set version.
drivers/pci/dwc/pci-keystone-dw.c | 2 +-
drivers/pci/dwc/pci-keystone.h | 3 +-
drivers/pci/dwc/pci-layerscape.c | 3 +-
drivers/pci/dwc/pcie-designware-host.c | 190 +--------------------------------
drivers/pci/dwc/pcie-designware.h | 2 +-
5 files changed, 5 insertions(+), 195 deletions(-)
diff --git a/drivers/pci/dwc/pci-keystone-dw.c b/drivers/pci/dwc/pci-keystone-dw.c
index ace498e..7705814 100644
--- a/drivers/pci/dwc/pci-keystone-dw.c
+++ b/drivers/pci/dwc/pci-keystone-dw.c
@@ -161,7 +161,7 @@ void ks_dw_pcie_msi_clear_irq(struct pcie_port *pp, int irq)
BIT(bit_pos));
}
-int ks_dw_pcie_msi_host_init(struct pcie_port *pp, struct msi_controller *chip)
+int ks_dw_pcie_msi_host_init(struct pcie_port *pp)
{
return dw_pcie_allocate_domains(pp);
}
diff --git a/drivers/pci/dwc/pci-keystone.h b/drivers/pci/dwc/pci-keystone.h
index aed3c73..8ac2f12 100644
--- a/drivers/pci/dwc/pci-keystone.h
+++ b/drivers/pci/dwc/pci-keystone.h
@@ -57,6 +57,5 @@ void ks_dw_pcie_msi_irq_ack(int i, struct pcie_port *pp);
void ks_dw_pcie_msi_set_irq(struct pcie_port *pp, int irq);
void ks_dw_pcie_msi_clear_irq(struct pcie_port *pp, int irq);
void ks_dw_pcie_v3_65_scan_bus(struct pcie_port *pp);
-int ks_dw_pcie_msi_host_init(struct pcie_port *pp,
- struct msi_controller *chip);
+int ks_dw_pcie_msi_host_init(struct pcie_port *pp);
int ks_dw_pcie_link_up(struct dw_pcie *pci);
diff --git a/drivers/pci/dwc/pci-layerscape.c b/drivers/pci/dwc/pci-layerscape.c
index 8f34c2f..e77ff65 100644
--- a/drivers/pci/dwc/pci-layerscape.c
+++ b/drivers/pci/dwc/pci-layerscape.c
@@ -185,8 +185,7 @@ static int ls1021_pcie_host_init(struct pcie_port *pp)
return ls_pcie_host_init(pp);
}
-static int ls_pcie_msi_host_init(struct pcie_port *pp,
- struct msi_controller *chip)
+static int ls_pcie_msi_host_init(struct pcie_port *pp)
{
struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
struct device *dev = pci->dev;
diff --git a/drivers/pci/dwc/pcie-designware-host.c b/drivers/pci/dwc/pcie-designware-host.c
index f7b2bce..9309055 100644
--- a/drivers/pci/dwc/pcie-designware-host.c
+++ b/drivers/pci/dwc/pcie-designware-host.c
@@ -46,14 +46,6 @@ static int dw_pcie_wr_own_conf(struct pcie_port *pp, int where, int size,
return dw_pcie_write(pci->dbi_base + where, size, val);
}
-static struct irq_chip dw_msi_irq_chip = {
- .name = "PCI-MSI",
- .irq_enable = pci_msi_unmask_irq,
- .irq_disable = pci_msi_mask_irq,
- .irq_mask = pci_msi_mask_irq,
- .irq_unmask = pci_msi_unmask_irq,
-};
-
static void dw_msi_ack_irq(struct irq_data *d)
{
irq_chip_ack_parent(d);
@@ -325,186 +317,6 @@ void dw_pcie_msi_init(struct pcie_port *pp)
upper_32_bits(msi_target));
}
-static void dw_pcie_msi_clear_irq(struct pcie_port *pp, int irq)
-{
- unsigned int res, bit, ctrl;
-
- ctrl = irq / 32;
- res = ctrl * 12;
- bit = irq % 32;
- pp->irq_status[ctrl] &= ~(1 << bit);
- dw_pcie_wr_own_conf(pp, PCIE_MSI_INTR0_ENABLE + res, 4,
- pp->irq_status[ctrl]);
-}
-
-static void clear_irq_range(struct pcie_port *pp, unsigned int irq_base,
- unsigned int nvec, unsigned int pos)
-{
- unsigned int i;
-
- for (i = 0; i < nvec; i++) {
- irq_set_msi_desc_off(irq_base, i, NULL);
- /* Disable corresponding interrupt on MSI controller */
- if (pp->ops->msi_clear_irq)
- pp->ops->msi_clear_irq(pp, pos + i);
- else
- dw_pcie_msi_clear_irq(pp, pos + i);
- }
-
- bitmap_release_region(pp->msi_irq_in_use, pos, order_base_2(nvec));
-}
-
-static void dw_pcie_msi_set_irq(struct pcie_port *pp, int irq)
-{
- unsigned int res, bit, ctrl;
-
- ctrl = irq / 32;
- res = ctrl * 12;
- bit = irq % 32;
- pp->irq_status[ctrl] |= 1 << bit;
- dw_pcie_wr_own_conf(pp, PCIE_MSI_INTR0_ENABLE + res, 4,
- pp->irq_status[ctrl]);
-}
-
-static int assign_irq(int no_irqs, struct msi_desc *desc, int *pos)
-{
- int irq, pos0, i;
- struct pcie_port *pp;
-
- pp = (struct pcie_port *)msi_desc_to_pci_sysdata(desc);
- pos0 = bitmap_find_free_region(pp->msi_irq_in_use, MAX_MSI_IRQS,
- order_base_2(no_irqs));
- if (pos0 < 0)
- goto no_valid_irq;
-
- irq = irq_find_mapping(pp->irq_domain, pos0);
- if (!irq)
- goto no_valid_irq;
-
- /*
- * irq_create_mapping (called from dw_pcie_host_init) pre-allocates
- * descs so there is no need to allocate descs here. We can therefore
- * assume that if irq_find_mapping above returns non-zero, then the
- * descs are also successfully allocated.
- */
-
- for (i = 0; i < no_irqs; i++) {
- if (irq_set_msi_desc_off(irq, i, desc) != 0) {
- clear_irq_range(pp, irq, i, pos0);
- goto no_valid_irq;
- }
- /*Enable corresponding interrupt in MSI interrupt controller */
- if (pp->ops->msi_set_irq)
- pp->ops->msi_set_irq(pp, pos0 + i);
- else
- dw_pcie_msi_set_irq(pp, pos0 + i);
- }
-
- *pos = pos0;
- desc->nvec_used = no_irqs;
- desc->msi_attrib.multiple = order_base_2(no_irqs);
-
- return irq;
-
-no_valid_irq:
- *pos = pos0;
- return -ENOSPC;
-}
-
-static void dw_msi_setup_msg(struct pcie_port *pp, unsigned int irq, u32 pos)
-{
- struct msi_msg msg;
- u64 msi_target;
-
- if (pp->ops->get_msi_addr)
- msi_target = pp->ops->get_msi_addr(pp);
- else
- msi_target = (u64)pp->msi_data;
-
- msg.address_lo = (u32)(msi_target & 0xffffffff);
- msg.address_hi = (u32)(msi_target >> 32 & 0xffffffff);
-
- if (pp->ops->get_msi_data)
- msg.data = pp->ops->get_msi_data(pp, pos);
- else
- msg.data = pos;
-
- pci_write_msi_msg(irq, &msg);
-}
-
-static int dw_msi_setup_irq(struct msi_controller *chip, struct pci_dev *pdev,
- struct msi_desc *desc)
-{
- int irq, pos;
- struct pcie_port *pp = pdev->bus->sysdata;
-
- if (desc->msi_attrib.is_msix)
- return -EINVAL;
-
- irq = assign_irq(1, desc, &pos);
- if (irq < 0)
- return irq;
-
- dw_msi_setup_msg(pp, irq, pos);
-
- return 0;
-}
-
-static int dw_msi_setup_irqs(struct msi_controller *chip, struct pci_dev *pdev,
- int nvec, int type)
-{
-#ifdef CONFIG_PCI_MSI
- int irq, pos;
- struct msi_desc *desc;
- struct pcie_port *pp = pdev->bus->sysdata;
-
- /* MSI-X interrupts are not supported */
- if (type == PCI_CAP_ID_MSIX)
- return -EINVAL;
-
- WARN_ON(!list_is_singular(&pdev->dev.msi_list));
- desc = list_entry(pdev->dev.msi_list.next, struct msi_desc, list);
-
- irq = assign_irq(nvec, desc, &pos);
- if (irq < 0)
- return irq;
-
- dw_msi_setup_msg(pp, irq, pos);
-
- return 0;
-#else
- return -EINVAL;
-#endif
-}
-
-static void dw_msi_teardown_irq(struct msi_controller *chip, unsigned int irq)
-{
- struct irq_data *data = irq_get_irq_data(irq);
- struct msi_desc *msi = irq_data_get_msi_desc(data);
- struct pcie_port *pp = (struct pcie_port *)msi_desc_to_pci_sysdata(msi);
-
- clear_irq_range(pp, irq, 1, data->hwirq);
-}
-
-static struct msi_controller dw_pcie_msi_chip = {
- .setup_irq = dw_msi_setup_irq,
- .setup_irqs = dw_msi_setup_irqs,
- .teardown_irq = dw_msi_teardown_irq,
-};
-
-static int dw_pcie_msi_map(struct irq_domain *domain, unsigned int irq,
- irq_hw_number_t hwirq)
-{
- irq_set_chip_and_handler(irq, &dw_msi_irq_chip, handle_simple_irq);
- irq_set_chip_data(irq, domain->host_data);
-
- return 0;
-}
-
-static const struct irq_domain_ops msi_domain_ops = {
- .map = dw_pcie_msi_map,
-};
-
int dw_pcie_host_init(struct pcie_port *pp)
{
struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
@@ -644,7 +456,7 @@ int dw_pcie_host_init(struct pcie_port *pp)
dw_chained_msi_isr,
pp);
} else {
- ret = pp->ops->msi_host_init(pp, &dw_pcie_msi_chip);
+ ret = pp->ops->msi_host_init(pp);
if (ret < 0)
goto error;
}
diff --git a/drivers/pci/dwc/pcie-designware.h b/drivers/pci/dwc/pcie-designware.h
index b603e88..c77c2e0 100644
--- a/drivers/pci/dwc/pcie-designware.h
+++ b/drivers/pci/dwc/pcie-designware.h
@@ -154,7 +154,7 @@ struct dw_pcie_host_ops {
u32 (*get_msi_data)(struct pcie_port *pp, int pos);
void (*scan_bus)(struct pcie_port *pp);
void (*set_num_vectors)(struct pcie_port *pp);
- int (*msi_host_init)(struct pcie_port *pp, struct msi_controller *chip);
+ int (*msi_host_init)(struct pcie_port *pp);
void (*msi_irq_ack)(int irq, struct pcie_port *pp);
};
--
2.7.4
* [PATCH v6 9/9] PCI: dwc: Expand maximum number of IRQs from 32 to 256
2018-01-26 16:35 [PATCH v6 0/9] PCI: dwc: MSI-X feature Gustavo Pimentel
` (7 preceding siblings ...)
2018-01-26 16:35 ` [PATCH v6 8/9] PCI: dwc: Remove IRQ domain hierarchy API support Gustavo Pimentel
@ 2018-01-26 16:35 ` Gustavo Pimentel
2018-02-12 12:23 ` [PATCH v6 0/9] PCI: dwc: MSI-X feature Lorenzo Pieralisi
9 siblings, 0 replies; 18+ messages in thread
From: Gustavo Pimentel @ 2018-01-26 16:35 UTC (permalink / raw)
To: marc.zyngier, Joao.Pinto, bhelgaas, jingoohan1, kishon
Cc: linux-pci, m-karicheri2, thomas.petazzoni, minghuan.Lian,
mingkai.hu, tie-fei.zang, hongxing.zhu, l.stach, niklas.cassel,
jesper.nilsson, wangzhou1, gabriele.paoloni, svarbanov, nsekhar,
Gustavo Pimentel
The Synopsys PCIe Root Complex supports up to 256 IRQs distributed across 8
controller registers.
With this in mind, the maximum number of IRQs was changed to 256, and the
number of controllers is now calculated based on the number of vectors used
by the specific SoC driver.
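The arithmetic behind that calculation can be seen in isolation. This sketch is not driver code: the helper names are illustrative, but the `num_vectors / MAX_MSI_IRQS_PER_CTRL` division and the 12-byte per-controller register stride (`ctrl * 12`) mirror the hunks below.

```c
#include <assert.h>

#define MAX_MSI_IRQS		256
#define MAX_MSI_IRQS_PER_CTRL	32
#define MAX_MSI_CTRLS		(MAX_MSI_IRQS / MAX_MSI_IRQS_PER_CTRL)

/* Number of controller register banks actually scanned for a given SoC. */
static unsigned int num_ctrls(unsigned int num_vectors)
{
	return num_vectors / MAX_MSI_IRQS_PER_CTRL;
}

/* Which controller and bit a given MSI vector lands on, and the byte
 * offset of that controller's register bank (12-byte stride). */
static unsigned int msi_ctrl(unsigned int irq)
{
	return irq / MAX_MSI_IRQS_PER_CTRL;
}

static unsigned int msi_bit(unsigned int irq)
{
	return irq % MAX_MSI_IRQS_PER_CTRL;
}

static unsigned int msi_reg_off(unsigned int irq)
{
	return msi_ctrl(irq) * 12;
}
```

With the default 32 vectors only one controller bank is walked; a SoC driver advertising 256 vectors uses all 8.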
Signed-off-by: Gustavo Pimentel <gustavo.pimentel@synopsys.com>
---
Change v1->v2:
- New patch file.
Change v2->v3:
- Nothing changed, just to follow the patch set version.
Change v3->v4:
- Patch renamed from v3-0009 to v4-0010.
- Changed summary line to match the drivers/PCI convention and changelog to
maintain the consistency (thanks Bjorn).
Change v4->v5:
- Nothing changed, just to follow the patch set version.
Change v5->v6:
- Nothing changed, just to follow the patch set version.
drivers/pci/dwc/pcie-designware-host.c | 12 ++++++++----
drivers/pci/dwc/pcie-designware.h | 10 +++-------
2 files changed, 11 insertions(+), 11 deletions(-)
diff --git a/drivers/pci/dwc/pcie-designware-host.c b/drivers/pci/dwc/pcie-designware-host.c
index 9309055..1daf591 100644
--- a/drivers/pci/dwc/pcie-designware-host.c
+++ b/drivers/pci/dwc/pcie-designware-host.c
@@ -79,11 +79,13 @@ static struct msi_domain_info dw_pcie_msi_domain_info = {
/* MSI int handler */
irqreturn_t dw_handle_msi_irq(struct pcie_port *pp)
{
- u32 val;
int i, pos, irq;
+ u32 val, num_ctrls;
irqreturn_t ret = IRQ_NONE;
- for (i = 0; i < MAX_MSI_CTRLS; i++) {
+ num_ctrls = pp->num_vectors / MAX_MSI_IRQS_PER_CTRL;
+
+ for (i = 0; i < num_ctrls; i++) {
dw_pcie_rd_own_conf(pp, PCIE_MSI_INTR0_STATUS + i * 12, 4,
&val);
if (!val)
@@ -644,13 +646,15 @@ static u8 dw_pcie_iatu_unroll_enabled(struct dw_pcie *pci)
void dw_pcie_setup_rc(struct pcie_port *pp)
{
- u32 val, ctrl;
+ u32 val, ctrl, num_ctrls;
struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
dw_pcie_setup(pci);
+ num_ctrls = pp->num_vectors / MAX_MSI_IRQS_PER_CTRL;
+
/* Initialize IRQ Status array */
- for (ctrl = 0; ctrl < MAX_MSI_CTRLS; ctrl++)
+ for (ctrl = 0; ctrl < num_ctrls; ctrl++)
dw_pcie_rd_own_conf(pp, PCIE_MSI_INTR0_ENABLE + (ctrl * 12), 4,
&pp->irq_status[ctrl]);
/* setup RC BARs */
diff --git a/drivers/pci/dwc/pcie-designware.h b/drivers/pci/dwc/pcie-designware.h
index c77c2e0..074bd14 100644
--- a/drivers/pci/dwc/pcie-designware.h
+++ b/drivers/pci/dwc/pcie-designware.h
@@ -110,13 +110,9 @@
#define MSI_MESSAGE_DATA_32 0x58
#define MSI_MESSAGE_DATA_64 0x5C
-/*
- * Maximum number of MSI IRQs can be 256 per controller. But keep
- * it 32 as of now. Probably we will never need more than 32. If needed,
- * then increment it in multiple of 32.
- */
-#define MAX_MSI_IRQS 32
-#define MAX_MSI_CTRLS (MAX_MSI_IRQS / 32)
+#define MAX_MSI_IRQS 256
+#define MAX_MSI_IRQS_PER_CTRL 32
+#define MAX_MSI_CTRLS (MAX_MSI_IRQS / MAX_MSI_IRQS_PER_CTRL)
#define MSI_DEF_NUM_VECTORS 32
/* Maximum number of inbound/outbound iATUs */
--
2.7.4
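The reworked dw_handle_msi_irq() above now walks only the controllers a SoC actually uses. Its dispatch shape can be exercised in isolation; in this sketch the status array stands in for the PCIE_MSI_INTR0_STATUS register reads, and the function name is illustrative rather than the driver's.

```c
#include <assert.h>
#include <stdint.h>

#define MAX_MSI_IRQS_PER_CTRL 32

/*
 * Simulated dispatch: scan per-controller status words and record each
 * pending vector number, mirroring the loop structure of the reworked
 * dw_handle_msi_irq(). Returns the number of pending vectors found.
 */
static int dispatch_msi(const uint32_t *status, unsigned int num_ctrls,
			int *vectors, int max)
{
	unsigned int i, pos;
	int n = 0;

	for (i = 0; i < num_ctrls; i++) {
		uint32_t val = status[i];

		for (pos = 0; val && pos < MAX_MSI_IRQS_PER_CTRL; pos++) {
			if (val & (1U << pos)) {
				if (n < max)
					vectors[n] = i * MAX_MSI_IRQS_PER_CTRL + pos;
				n++;
				val &= ~(1U << pos);	/* ack in the sim */
			}
		}
	}
	return n;
}
```

Two status words with bits 0, 2 and 31 set yield vectors 0, 2 and 63, matching the per-controller numbering used throughout the series.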
* Re: [PATCH v6 0/9] PCI: dwc: MSI-X feature
2018-01-26 16:35 [PATCH v6 0/9] PCI: dwc: MSI-X feature Gustavo Pimentel
` (8 preceding siblings ...)
2018-01-26 16:35 ` [PATCH v6 9/9] PCI: dwc: Expand maximum number of IRQs from 32 to 256 Gustavo Pimentel
@ 2018-02-12 12:23 ` Lorenzo Pieralisi
2018-02-12 18:14 ` Gustavo Pimentel
9 siblings, 1 reply; 18+ messages in thread
From: Lorenzo Pieralisi @ 2018-02-12 12:23 UTC (permalink / raw)
To: Gustavo Pimentel
Cc: marc.zyngier, Joao.Pinto, bhelgaas, jingoohan1, kishon, linux-pci,
m-karicheri2, thomas.petazzoni, minghuan.Lian, mingkai.hu,
tie-fei.zang, hongxing.zhu, l.stach, niklas.cassel,
jesper.nilsson, wangzhou1, gabriele.paoloni, svarbanov, nsekhar
On Fri, Jan 26, 2018 at 04:35:35PM +0000, Gustavo Pimentel wrote:
> Replaces the use of IRQ domain hierarchy by the IRQ chained used by
> pcie-designware and each SoC specific driver in order to allow new and
> complex features like MSI-X.
>
> Adds Synopsys Root Complex support for MSI-X feature.
Hi Gustavo,
please CC me for future patch versions. I have a question for you,
the series definitely looks like it is going in the right direction
but I do not understand the cover letter.
- You are not replacing the IRQ domain hierarchy, you are actually
introducing it properly. It may be just nomenclature but I thought
I would mention that.
- I really do not understand why this series is advertised as
"implementing MSI-X". I want to understand why MSI-X it is not
currently supported and why this series would enable it, I
honestly do not understand the reasoning behind this, it looks
a bit fuzzy.
Please note I am just asking clarifications on the $SUBJECT for my own
understanding, the series is definitely the right thing to do and I am
happy to target v4.17 for it.
Thanks,
Lorenzo
> Expands the maximum number of IRQs from 32 to 256 distributed by
> a maximum of 8 controller registers.
>
> The patch set was made against the Bjorn's next branch.
>
> Gustavo Pimentel (9):
> PCI: dwc: Add IRQ chained API support
> PCI: dwc: exynos: Switch to use the IRQ chained API
> PCI: dwc: imx6: Switch to use the IRQ chained API
> PCI: dwc: artpec6: Switch to use the IRQ chained API
> PCI: dwc: designware: Switch to use the IRQ chained API
> PCI: dwc: qcom: Switch to use the IRQ chained API
> PCI: dwc: keystone: Switch to use the IRQ chained API
> PCI: dwc: Remove IRQ domain hierarchy API support
> PCI: dwc: Expand maximum number of IRQs from 32 to 256
>
> drivers/pci/dwc/pci-exynos.c | 18 --
> drivers/pci/dwc/pci-imx6.c | 18 --
> drivers/pci/dwc/pci-keystone-dw.c | 91 +-------
> drivers/pci/dwc/pci-keystone.c | 1 +
> drivers/pci/dwc/pci-keystone.h | 4 +-
> drivers/pci/dwc/pci-layerscape.c | 3 +-
> drivers/pci/dwc/pcie-artpec6.c | 18 --
> drivers/pci/dwc/pcie-designware-host.c | 402 +++++++++++++++++++--------------
> drivers/pci/dwc/pcie-designware-plat.c | 16 --
> drivers/pci/dwc/pcie-designware.h | 30 ++-
> drivers/pci/dwc/pcie-qcom.c | 16 --
> 11 files changed, 260 insertions(+), 357 deletions(-)
>
> --
> 2.7.4
>
>
* Re: [PATCH v6 0/9] PCI: dwc: MSI-X feature
2018-02-12 12:23 ` [PATCH v6 0/9] PCI: dwc: MSI-X feature Lorenzo Pieralisi
@ 2018-02-12 18:14 ` Gustavo Pimentel
2018-02-13 12:29 ` Lorenzo Pieralisi
0 siblings, 1 reply; 18+ messages in thread
From: Gustavo Pimentel @ 2018-02-12 18:14 UTC (permalink / raw)
To: Lorenzo Pieralisi
Cc: marc.zyngier@arm.com, Joao.Pinto@synopsys.com,
bhelgaas@google.com, jingoohan1@gmail.com, kishon@ti.com,
linux-pci@vger.kernel.org, m-karicheri2@ti.com,
thomas.petazzoni@free-electrons.com, minghuan.Lian@freescale.com,
mingkai.hu@freescale.com, tie-fei.zang@freescale.com,
hongxing.zhu@nxp.com, l.stach@pengutronix.de,
niklas.cassel@axis.com, jesper.nilsson@axis.com,
wangzhou1@hisilicon.com, gabriele.paoloni@huawei.com,
svarbanov@mm-sol.com, nsekhar@ti.com
Hi Lorenzo,
On 12/02/2018 12:23, Lorenzo Pieralisi wrote:
> On Fri, Jan 26, 2018 at 04:35:35PM +0000, Gustavo Pimentel wrote:
>> Replaces the use of IRQ domain hierarchy by the IRQ chained used by
>> pcie-designware and each SoC specific driver in order to allow new and
>> complex features like MSI-X.
>>
>> Adds Synopsys Root Complex support for MSI-X feature.
>
> Hi Gustavo,
>
> please CC me for future patch versions. I have a question for you,
> the series definitely looks like it is going in the right direction
> but I do not understand the cover letter.
>
> - You are not replacing the IRQ domain hierarchy, you are actually
> introducing it properly. It may be just nomenclature but I thought
> I would mention that.
> - I really do not understand why this series is advertised as
> "implementing MSI-X". I want to understand why MSI-X it is not
> currently supported and why this series would enable it, I
> honestly do not understand the reasoning behind this, it looks
> a bit fuzzy.
>
> Please note I am just asking clarifications on the $SUBJECT for my own
> understanding, the series is definitely the right thing to do and I am
> happy to target v4.17 for it.
Marc Zyngier, like you, pointed out that the description is confusing and
does not correspond to what is done in the code, and now I see it too.
Please feel free to review and point out anything wrong; only in this way can I
evolve. :)
>
> Thanks,
> Lorenzo
>
>> Expands the maximum number of IRQs from 32 to 256 distributed by
>> a maximum of 8 controller registers.
>>
>> The patch set was made against the Bjorn's next branch.
>>
>> Gustavo Pimentel (9):
>> PCI: dwc: Add IRQ chained API support
>> PCI: dwc: exynos: Switch to use the IRQ chained API
>> PCI: dwc: imx6: Switch to use the IRQ chained API
>> PCI: dwc: artpec6: Switch to use the IRQ chained API
>> PCI: dwc: designware: Switch to use the IRQ chained API
>> PCI: dwc: qcom: Switch to use the IRQ chained API
>> PCI: dwc: keystone: Switch to use the IRQ chained API
>> PCI: dwc: Remove IRQ domain hierarchy API support
>> PCI: dwc: Expand maximum number of IRQs from 32 to 256
>>
>> drivers/pci/dwc/pci-exynos.c | 18 --
>> drivers/pci/dwc/pci-imx6.c | 18 --
>> drivers/pci/dwc/pci-keystone-dw.c | 91 +-------
>> drivers/pci/dwc/pci-keystone.c | 1 +
>> drivers/pci/dwc/pci-keystone.h | 4 +-
>> drivers/pci/dwc/pci-layerscape.c | 3 +-
>> drivers/pci/dwc/pcie-artpec6.c | 18 --
>> drivers/pci/dwc/pcie-designware-host.c | 402 +++++++++++++++++++--------------
>> drivers/pci/dwc/pcie-designware-plat.c | 16 --
>> drivers/pci/dwc/pcie-designware.h | 30 ++-
>> drivers/pci/dwc/pcie-qcom.c | 16 --
>> 11 files changed, 260 insertions(+), 357 deletions(-)
>>
>> --
>> 2.7.4
>>
>>
Thanks,
Gustavo
* Re: [PATCH v6 0/9] PCI: dwc: MSI-X feature
2018-02-12 18:14 ` Gustavo Pimentel
@ 2018-02-13 12:29 ` Lorenzo Pieralisi
2018-02-13 12:36 ` Lucas Stach
0 siblings, 1 reply; 18+ messages in thread
From: Lorenzo Pieralisi @ 2018-02-13 12:29 UTC (permalink / raw)
To: Gustavo Pimentel
Cc: marc.zyngier@arm.com, Joao.Pinto@synopsys.com,
bhelgaas@google.com, jingoohan1@gmail.com, kishon@ti.com,
linux-pci@vger.kernel.org, m-karicheri2@ti.com,
thomas.petazzoni@free-electrons.com, minghuan.Lian@freescale.com,
mingkai.hu@freescale.com, tie-fei.zang@freescale.com,
hongxing.zhu@nxp.com, l.stach@pengutronix.de,
niklas.cassel@axis.com, jesper.nilsson@axis.com,
wangzhou1@hisilicon.com, gabriele.paoloni@huawei.com,
svarbanov@mm-sol.com, nsekhar@ti.com
On Mon, Feb 12, 2018 at 06:14:29PM +0000, Gustavo Pimentel wrote:
> Hi Lorenzo,
>
> On 12/02/2018 12:23, Lorenzo Pieralisi wrote:
> > On Fri, Jan 26, 2018 at 04:35:35PM +0000, Gustavo Pimentel wrote:
> >> Replaces the use of IRQ domain hierarchy by the IRQ chained used by
> >> pcie-designware and each SoC specific driver in order to allow new and
> >> complex features like MSI-X.
> >>
> >> Adds Synopsys Root Complex support for MSI-X feature.
> >
> > Hi Gustavo,
> >
> > please CC me for future patch versions. I have a question for you,
> > the series definitely looks like it is going in the right direction
> > but I do not understand the cover letter.
> >
> > - You are not replacing the IRQ domain hierarchy, you are actually
> > introducing it properly. It may be just nomenclature but I thought
> > I would mention that.
> > - I really do not understand why this series is advertised as
> > "implementing MSI-X". I want to understand why MSI-X it is not
> > currently supported and why this series would enable it, I
> > honestly do not understand the reasoning behind this, it looks
> > a bit fuzzy.
> >
> > Please note I am just asking clarifications on the $SUBJECT for my own
> > understanding, the series is definitely the right thing to do and I am
> > happy to target v4.17 for it.
>
> Marc Zyngier, like you, pointed out that the description is confusing and
> does not correspond to what is done in the code, and now I see it too.
>
> Please feel free to review and point out anything wrong; only in this way can I
> evolve. :)
Sure. Let's start by discussing this:
https://patchwork.kernel.org/patch/5708521/
If it is a HW issue your patches can't solve it. If it is a SW issue
patch above (and current mainline code - commit 19c5392eb1c1) is wrong.
Before updating the code I want to understand what the problem is with
MSI-X in current mainline.
To be as clear as I can: this series is doing the *right* thing, I want
to understand what the current driver is doing with MSIs and why, so
that we can help you update the code correctly.
Lorenzo
> > Thanks,
> > Lorenzo
> >
> >> Expands the maximum number of IRQs from 32 to 256 distributed by
> >> a maximum of 8 controller registers.
> >>
> >> The patch set was made against the Bjorn's next branch.
> >>
> >> Gustavo Pimentel (9):
> >> PCI: dwc: Add IRQ chained API support
> >> PCI: dwc: exynos: Switch to use the IRQ chained API
> >> PCI: dwc: imx6: Switch to use the IRQ chained API
> >> PCI: dwc: artpec6: Switch to use the IRQ chained API
> >> PCI: dwc: designware: Switch to use the IRQ chained API
> >> PCI: dwc: qcom: Switch to use the IRQ chained API
> >> PCI: dwc: keystone: Switch to use the IRQ chained API
> >> PCI: dwc: Remove IRQ domain hierarchy API support
> >> PCI: dwc: Expand maximum number of IRQs from 32 to 256
> >>
> >> drivers/pci/dwc/pci-exynos.c | 18 --
> >> drivers/pci/dwc/pci-imx6.c | 18 --
> >> drivers/pci/dwc/pci-keystone-dw.c | 91 +-------
> >> drivers/pci/dwc/pci-keystone.c | 1 +
> >> drivers/pci/dwc/pci-keystone.h | 4 +-
> >> drivers/pci/dwc/pci-layerscape.c | 3 +-
> >> drivers/pci/dwc/pcie-artpec6.c | 18 --
> >> drivers/pci/dwc/pcie-designware-host.c | 402 +++++++++++++++++++--------------
> >> drivers/pci/dwc/pcie-designware-plat.c | 16 --
> >> drivers/pci/dwc/pcie-designware.h | 30 ++-
> >> drivers/pci/dwc/pcie-qcom.c | 16 --
> >> 11 files changed, 260 insertions(+), 357 deletions(-)
> >>
> >> --
> >> 2.7.4
> >>
> >>
>
> Thanks,
> Gustavo
>
* Re: [PATCH v6 0/9] PCI: dwc: MSI-X feature
2018-02-13 12:29 ` Lorenzo Pieralisi
@ 2018-02-13 12:36 ` Lucas Stach
2018-02-13 14:04 ` Lorenzo Pieralisi
0 siblings, 1 reply; 18+ messages in thread
From: Lucas Stach @ 2018-02-13 12:36 UTC (permalink / raw)
To: Lorenzo Pieralisi, Gustavo Pimentel
Cc: marc.zyngier@arm.com, Joao.Pinto@synopsys.com,
bhelgaas@google.com, jingoohan1@gmail.com, kishon@ti.com,
linux-pci@vger.kernel.org, m-karicheri2@ti.com,
thomas.petazzoni@free-electrons.com, minghuan.Lian@freescale.com,
mingkai.hu@freescale.com, tie-fei.zang@freescale.com,
hongxing.zhu@nxp.com, niklas.cassel@axis.com,
jesper.nilsson@axis.com, wangzhou1@hisilicon.com,
gabriele.paoloni@huawei.com, svarbanov@mm-sol.com, nsekhar@ti.com
On Tuesday, 13.02.2018, 12:29 +0000, Lorenzo Pieralisi wrote:
> On Mon, Feb 12, 2018 at 06:14:29PM +0000, Gustavo Pimentel wrote:
> > Hi Lorenzo,
> >
> > On 12/02/2018 12:23, Lorenzo Pieralisi wrote:
> > > On Fri, Jan 26, 2018 at 04:35:35PM +0000, Gustavo Pimentel wrote:
> > > > Replaces the use of IRQ domain hierarchy by the IRQ chained used by
> > > > pcie-designware and each SoC specific driver in order to allow new
> > > > and complex features like MSI-X.
> > > >
> > > > Adds Synopsys Root Complex support for MSI-X feature.
> > >
> > > Hi Gustavo,
> > >
> > > please CC me for future patch versions. I have a question for you,
> > > the series definitely looks like it is going in the right direction
> > > but I do not understand the cover letter.
> > >
> > > - You are not replacing the IRQ domain hierarchy, you are actually
> > >   introducing it properly. It may be just nomenclature but I thought
> > >   I would mention that.
> > > - I really do not understand why this series is advertised as
> > >   "implementing MSI-X". I want to understand why MSI-X is not
> > >   currently supported and why this series would enable it; I honestly
> > >   do not understand the reasoning behind this, it looks a bit fuzzy.
> > >
> > > Please note I am just asking clarifications on the $SUBJECT for my own
> > > understanding, the series is definitely the right thing to do and I am
> > > happy to target v4.17 for it.
> >
> > Marc Zyngier, like you, also pointed out that the description is
> > confusing and does not correspond to what is done in the code; I see
> > it now too.
> >
> > Please feel free to review and point out anything wrong; only in this
> > way can I evolve. :)
>
> Sure. Let's start by discussing this:
>
> https://patchwork.kernel.org/patch/5708521/
>
> If it is a HW issue your patches can't solve it. If it is a SW issue the
> patch above (and current mainline code - commit 19c5392eb1c1) is wrong.
>
> Before updating the code I want to understand what the problem is with
> MSI-X in current mainline.
>
> To be as clear as I can: this series is doing the *right* thing, I want
> to understand what the current driver is doing with MSIs and why, so
> that we can help you update the code correctly.
> that we can help you update the code correctly.
It's partly a software issue and partly a wrong understanding of MSI-X
on my side at the time. The DW hardware doesn't support different MSI
target addresses for individual MSIs, but that's just an optional
feature of MSI-X and not required to implement it correctly. Enabling
MSI-X on the DW hardware via a proper software implementation is fine.
Regards,
Lucas
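To make the constraint Lucas describes concrete: every vector targets one shared doorbell address, and only the data payload distinguishes them; per-vector target addresses are the optional MSI-X feature the DW core lacks. The following is a minimal standalone C sketch of that idea; the type, function name, and doorbell value are illustrative assumptions, not the driver's actual code or register layout.

```c
#include <stdint.h>

/* Hypothetical sketch (not the driver's API): the controller exposes a
 * single doorbell address.  Every vector's message carries that same
 * address; only the data payload (the hardware IRQ number) differs.
 * MSI-X optionally allows a distinct address per vector, but that is
 * not required for a correct MSI-X implementation. */
struct msi_msg_sketch {
    uint64_t address; /* doorbell the endpoint writes to */
    uint32_t data;    /* identifies which vector fired */
};

static const uint64_t MSI_DOORBELL = 0xfee00000u; /* example value only */

static struct msi_msg_sketch compose_msi_msg_sketch(unsigned int hwirq)
{
    struct msi_msg_sketch msg = {
        .address = MSI_DOORBELL, /* identical for every vector */
        .data = hwirq,           /* unique per vector */
    };
    return msg;
}
```

Since the address never varies, correctness rests entirely on the data field being unique per vector, which is exactly what a software MSI-X implementation on this hardware has to guarantee.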
* Re: [PATCH v6 0/9] PCI: dwc: MSI-X feature
2018-02-13 12:36 ` Lucas Stach
@ 2018-02-13 14:04 ` Lorenzo Pieralisi
0 siblings, 0 replies; 18+ messages in thread
From: Lorenzo Pieralisi @ 2018-02-13 14:04 UTC (permalink / raw)
To: Lucas Stach
Cc: Gustavo Pimentel, marc.zyngier@arm.com, Joao.Pinto@synopsys.com,
bhelgaas@google.com, jingoohan1@gmail.com, kishon@ti.com,
linux-pci@vger.kernel.org, m-karicheri2@ti.com,
thomas.petazzoni@free-electrons.com, minghuan.Lian@freescale.com,
mingkai.hu@freescale.com, tie-fei.zang@freescale.com,
hongxing.zhu@nxp.com, niklas.cassel@axis.com,
jesper.nilsson@axis.com, wangzhou1@hisilicon.com,
gabriele.paoloni@huawei.com, svarbanov@mm-sol.com, nsekhar@ti.com
On Tue, Feb 13, 2018 at 01:36:35PM +0100, Lucas Stach wrote:
> On Tuesday, 13.02.2018, 12:29 +0000, Lorenzo Pieralisi wrote:
> > On Mon, Feb 12, 2018 at 06:14:29PM +0000, Gustavo Pimentel wrote:
> > > Hi Lorenzo,
> > >
> > > On 12/02/2018 12:23, Lorenzo Pieralisi wrote:
> > > > On Fri, Jan 26, 2018 at 04:35:35PM +0000, Gustavo Pimentel wrote:
> > > > > Replaces the use of IRQ domain hierarchy by the IRQ chained used
> > > > > by pcie-designware and each SoC specific driver in order to allow
> > > > > new and complex features like MSI-X.
> > > > >
> > > > > Adds Synopsys Root Complex support for MSI-X feature.
> > > >
> > > > Hi Gustavo,
> > > >
> > > > please CC me for future patch versions. I have a question for you,
> > > > the series definitely looks like it is going in the right direction
> > > > but I do not understand the cover letter.
> > > >
> > > > - You are not replacing the IRQ domain hierarchy, you are actually
> > > >   introducing it properly. It may be just nomenclature but I thought
> > > >   I would mention that.
> > > > - I really do not understand why this series is advertised as
> > > >   "implementing MSI-X". I want to understand why MSI-X is not
> > > >   currently supported and why this series would enable it; I
> > > >   honestly do not understand the reasoning behind this, it looks
> > > >   a bit fuzzy.
> > > >
> > > > Please note I am just asking clarifications on the $SUBJECT for my
> > > > own understanding, the series is definitely the right thing to do
> > > > and I am happy to target v4.17 for it.
> > >
> > > Marc Zyngier, like you, also pointed out that the description is
> > > confusing and does not correspond to what is done in the code; I see
> > > it now too.
> > >
> > > Please feel free to review and point out anything wrong; only in
> > > this way can I evolve. :)
> >
> > Sure. Let's start by discussing this:
> >
> > https://patchwork.kernel.org/patch/5708521/
> >
> > If it is a HW issue your patches can't solve it. If it is a SW issue
> > the patch above (and current mainline code - commit 19c5392eb1c1) is
> > wrong.
> >
> > Before updating the code I want to understand what the problem is with
> > MSI-X in current mainline.
> >
> > To be as clear as I can: this series is doing the *right* thing, I
> > want to understand what the current driver is doing with MSIs and why,
> > so that we can help you update the code correctly.
>
> It's partly a software issue and partly a wrong understanding of MSI-X
> on my side at the time. The DW hardware doesn't support different MSI
> target addresses for individual MSIs, but that's just an optional
> feature of MSI-X and not required to implement it correctly. Enabling
> MSI-X on the DW hardware via a proper software implementation is fine.
Ok, thanks for clarifying; that matches my understanding. I completely
agree that to enable MSI-X we must move to a hierarchical IRQ domain
implementation (where core IRQ code deals with IRQ allocation and
mapping on behalf of drivers) instead of abusing the IRQ layer even
further, so this series is definitely the way to go. I just wanted to
understand why DW does not support MSI-X.
Thanks,
Lorenzo
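For readers of the archive: the 32-to-256 expansion mentioned in the cover letter comes from the chained handler demultiplexing up to eight 32-bit MSI status registers, dispatching hardware IRQ ctrl * 32 + bit for every bit set. The loop below is a standalone sketch of that dispatch pattern under assumed names; it is not the driver's actual code, and the real handler would hand each hwirq to the generic IRQ layer rather than a plain callback.

```c
#include <stdint.h>

#define MAX_MSI_CTRLS 8   /* 32-bit status registers (assumed count) */
#define MSI_PER_CTRL  32  /* vectors per register: 8 * 32 = 256 */

/* Illustrative chained-handler demultiplex loop (hypothetical names):
 * scan each status register and, for every pending bit, dispatch the
 * hardware IRQ number derived from the register index and bit position. */
static unsigned int demux_msi_sketch(const uint32_t status[MAX_MSI_CTRLS],
                                     void (*dispatch)(unsigned int hwirq))
{
    unsigned int handled = 0;

    for (unsigned int ctrl = 0; ctrl < MAX_MSI_CTRLS; ctrl++) {
        uint32_t pending = status[ctrl];

        while (pending) {
            /* lowest pending bit first; __builtin_ctz is GCC/Clang */
            unsigned int bit = (unsigned int)__builtin_ctz(pending);

            dispatch(ctrl * MSI_PER_CTRL + bit);
            pending &= pending - 1; /* clear the lowest set bit */
            handled++;
        }
    }
    return handled;
}
```

The per-register scan is what makes the vector count a simple product of register count and register width, which is why widening from one register to eight yields 256 vectors without changing the dispatch logic.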