* [RFC PATCH 0/2] Add Qualcomm PCIe ECAM root complex driver
@ 2024-04-04 19:11 Mayank Rana
2024-04-04 19:11 ` [RFC PATCH 1/2] dt-bindings: pcie: Document QCOM PCIE ECAM compatible root complex Mayank Rana
` (2 more replies)
0 siblings, 3 replies; 27+ messages in thread
From: Mayank Rana @ 2024-04-04 19:11 UTC (permalink / raw)
To: linux-pci, lpieralisi, kw, robh, bhelgaas, andersson,
manivannan.sadhasivam, krzysztof.kozlowski+dt, conor+dt,
devicetree
Cc: linux-arm-msm, quic_ramkri, quic_nkela, quic_shazhuss,
quic_msarkar, quic_nitegupt, Mayank Rana
On some Qualcomm platforms, firmware takes care of the system resources
related to the PCIe PHY and controller, brings up the PCIe link, and
programs a static iATU configuration so that the PCIe controller operates
in an ECAM compliant mode. Hence, add a Qualcomm PCIe ECAM root complex driver.
Tested:
- Validated NVMe functionality with PCIe0 and PCIe1 on the SA8775P Ride platform
Mayank Rana (2):
dt-bindings: pcie: Document QCOM PCIE ECAM compatible root complex
PCI: Add Qualcomm PCIe ECAM root complex driver
.../devicetree/bindings/pci/qcom,pcie-ecam.yaml | 94 ++++
drivers/pci/controller/Kconfig | 12 +
drivers/pci/controller/Makefile | 1 +
drivers/pci/controller/pcie-qcom-ecam.c | 575 +++++++++++++++++++++
4 files changed, 682 insertions(+)
create mode 100644 Documentation/devicetree/bindings/pci/qcom,pcie-ecam.yaml
create mode 100644 drivers/pci/controller/pcie-qcom-ecam.c
--
2.7.4
^ permalink raw reply [flat|nested] 27+ messages in thread
* [RFC PATCH 1/2] dt-bindings: pcie: Document QCOM PCIE ECAM compatible root complex
2024-04-04 19:11 [RFC PATCH 0/2] Add Qualcomm PCIe ECAM root complex driver Mayank Rana
@ 2024-04-04 19:11 ` Mayank Rana
2024-04-04 19:30 ` Krzysztof Kozlowski
2024-04-04 19:11 ` [RFC PATCH 2/2] PCI: Add Qualcomm PCIe ECAM root complex driver Mayank Rana
2024-04-04 19:33 ` [RFC PATCH 0/2] " Krzysztof Kozlowski
2 siblings, 1 reply; 27+ messages in thread
From: Mayank Rana @ 2024-04-04 19:11 UTC (permalink / raw)
To: linux-pci, lpieralisi, kw, robh, bhelgaas, andersson,
manivannan.sadhasivam, krzysztof.kozlowski+dt, conor+dt,
devicetree
Cc: linux-arm-msm, quic_ramkri, quic_nkela, quic_shazhuss,
quic_msarkar, quic_nitegupt, Mayank Rana
On some Qualcomm platforms, firmware configures the PCIe controller in RC
mode with static iATU window mappings of the configuration space for the
entire supported bus range, making it ECAM compatible. Firmware also
manages the PCIe PHY as well as the required system resources. Document the
properties and configuration required to power up the QCOM PCIe ECAM
compatible root complex and PHY for PCIe functionality.
Signed-off-by: Mayank Rana <quic_mrana@quicinc.com>
---
.../devicetree/bindings/pci/qcom,pcie-ecam.yaml | 94 ++++++++++++++++++++++
1 file changed, 94 insertions(+)
create mode 100644 Documentation/devicetree/bindings/pci/qcom,pcie-ecam.yaml
diff --git a/Documentation/devicetree/bindings/pci/qcom,pcie-ecam.yaml b/Documentation/devicetree/bindings/pci/qcom,pcie-ecam.yaml
new file mode 100644
index 00000000..c209f12
--- /dev/null
+++ b/Documentation/devicetree/bindings/pci/qcom,pcie-ecam.yaml
@@ -0,0 +1,94 @@
+# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
+%YAML 1.2
+---
+$id: http://devicetree.org/schemas/pci/qcom,pcie-ecam.yaml#
+$schema: http://devicetree.org/meta-schemas/core.yaml#
+
+title: Qualcomm ECAM compliant PCI express root complex
+
+description: |
+ Qualcomm SOC based ECAM compatible PCIe root complex supporting MSI controller.
+ Firmware configures PCIe controller in RC mode with static iATU window mappings
+ of configuration space for entire supported bus range in ECAM compatible mode.
+
+maintainers:
+ - Mayank Rana <quic_mrana@quicinc.com>
+
+allOf:
+ - $ref: /schemas/pci/pci-bus.yaml#
+ - $ref: /schemas/power-domain/power-domain-consumer.yaml
+
+properties:
+ compatible:
+ const: qcom,pcie-ecam-rc
+
+ reg:
+ minItems: 1
+ description: ECAM address space covering the root port through the end of the supported bus range
+
+ interrupts:
+ minItems: 1
+ maxItems: 8
+
+ ranges:
+ minItems: 2
+ maxItems: 3
+
+ iommu-map:
+ minItems: 1
+ maxItems: 16
+
+ power-domains:
+ maxItems: 1
+ description: A phandle to a node which provides a way to communicate with firmware
+ for enabling the PCIe controller and PHY, as well as for managing all system
+ resources needed to make both the controller and PHY operational for PCIe functionality.
+
+ dma-coherent: true
+
+required:
+ - compatible
+ - reg
+ - interrupts
+ - ranges
+ - power-domains
+ - device_type
+ - linux,pci-domain
+ - bus-range
+
+unevaluatedProperties: false
+
+examples:
+ - |
+ #include <dt-bindings/interrupt-controller/arm-gic.h>
+ soc {
+ #address-cells = <2>;
+ #size-cells = <2>;
+ pcie0: pci@1c00000 {
+ compatible = "qcom,pcie-ecam-rc";
+ reg = <0x4 0x00000000 0 0x10000000>;
+ device_type = "pci";
+ #address-cells = <3>;
+ #size-cells = <2>;
+ ranges = <0x01000000 0x0 0x40000000 0x0 0x40000000 0x0 0x100000>,
+ <0x02000000 0x0 0x40100000 0x0 0x40100000 0x0 0x1ff00000>,
+ <0x43000000 0x4 0x10100000 0x4 0x10100000 0x0 0x100000>;
+ bus-range = <0x00 0xff>;
+ dma-coherent;
+ linux,pci-domain = <0>;
+ power-domains = <&scmi5_pd 0>;
+ power-domain-names = "power";
+ iommu-map = <0x0 &pcie_smmu 0x0000 0x1>,
+ <0x100 &pcie_smmu 0x0001 0x1>;
+ interrupt-parent = <&intc>;
+ interrupts = <GIC_SPI 307 IRQ_TYPE_LEVEL_HIGH>,
+ <GIC_SPI 308 IRQ_TYPE_LEVEL_HIGH>,
+ <GIC_SPI 309 IRQ_TYPE_LEVEL_HIGH>,
+ <GIC_SPI 312 IRQ_TYPE_LEVEL_HIGH>,
+ <GIC_SPI 313 IRQ_TYPE_LEVEL_HIGH>,
+ <GIC_SPI 314 IRQ_TYPE_LEVEL_HIGH>,
+ <GIC_SPI 374 IRQ_TYPE_LEVEL_HIGH>,
+ <GIC_SPI 375 IRQ_TYPE_LEVEL_HIGH>;
+ };
+ };
+...
--
2.7.4
* [RFC PATCH 2/2] PCI: Add Qualcomm PCIe ECAM root complex driver
2024-04-04 19:11 [RFC PATCH 0/2] Add Qualcomm PCIe ECAM root complex driver Mayank Rana
2024-04-04 19:11 ` [RFC PATCH 1/2] dt-bindings: pcie: Document QCOM PCIE ECAM compatible root complex Mayank Rana
@ 2024-04-04 19:11 ` Mayank Rana
2024-04-04 19:33 ` Krzysztof Kozlowski
` (2 more replies)
2024-04-04 19:33 ` [RFC PATCH 0/2] " Krzysztof Kozlowski
2 siblings, 3 replies; 27+ messages in thread
From: Mayank Rana @ 2024-04-04 19:11 UTC (permalink / raw)
To: linux-pci, lpieralisi, kw, robh, bhelgaas, andersson,
manivannan.sadhasivam, krzysztof.kozlowski+dt, conor+dt,
devicetree
Cc: linux-arm-msm, quic_ramkri, quic_nkela, quic_shazhuss,
quic_msarkar, quic_nitegupt, Mayank Rana
On some Qualcomm platforms, firmware configures the PCIe controller in
ECAM mode, allowing static memory allocation for the configuration space
of the supported bus range. Firmware also takes care of bringing up the
PCIe PHY and performing the operations required to bring the PCIe link
into D0, and it manages the system resources (e.g. clocks, regulators,
resets, and bus voting). Hence, add a Qualcomm PCIe ECAM root complex
driver which enumerates the PCIe root complex and the connected PCIe
devices. Firmware won't enumerate or power up the PCIe root complex until
this driver issues power-domain based notifications to move the PCIe link
between D0 and D3cold.
This driver also supports MSI functionality using the PCIe controller
based MSI controller, as GIC ITS based MSI functionality is not available
on some platforms.
Signed-off-by: Mayank Rana <quic_mrana@quicinc.com>
---
drivers/pci/controller/Kconfig | 12 +
drivers/pci/controller/Makefile | 1 +
drivers/pci/controller/pcie-qcom-ecam.c | 575 ++++++++++++++++++++++++++++++++
3 files changed, 588 insertions(+)
create mode 100644 drivers/pci/controller/pcie-qcom-ecam.c
diff --git a/drivers/pci/controller/Kconfig b/drivers/pci/controller/Kconfig
index e534c02..abbd9f2 100644
--- a/drivers/pci/controller/Kconfig
+++ b/drivers/pci/controller/Kconfig
@@ -353,6 +353,18 @@ config PCIE_XILINX_CPM
Say 'Y' here if you want kernel support for the
Xilinx Versal CPM host bridge.
+config PCIE_QCOM_ECAM
+ tristate "QCOM PCIe ECAM host controller"
+ depends on ARCH_QCOM && PCI
+ depends on OF
+ select PCI_MSI
+ select PCI_HOST_COMMON
+ select IRQ_DOMAIN
+ help
+ Say 'Y' here if you want to use the ECAM shift mode compatible Qualcomm
+ PCIe root complex host controller. The controller is programmed by
+ firmware to provide an ECAM compatible memory address space.
+
source "drivers/pci/controller/cadence/Kconfig"
source "drivers/pci/controller/dwc/Kconfig"
source "drivers/pci/controller/mobiveil/Kconfig"
diff --git a/drivers/pci/controller/Makefile b/drivers/pci/controller/Makefile
index f2b19e6..2f1ee1e 100644
--- a/drivers/pci/controller/Makefile
+++ b/drivers/pci/controller/Makefile
@@ -40,6 +40,7 @@ obj-$(CONFIG_PCI_LOONGSON) += pci-loongson.o
obj-$(CONFIG_PCIE_HISI_ERR) += pcie-hisi-error.o
obj-$(CONFIG_PCIE_APPLE) += pcie-apple.o
obj-$(CONFIG_PCIE_MT7621) += pcie-mt7621.o
+obj-$(CONFIG_PCIE_QCOM_ECAM) += pcie-qcom-ecam.o
# pcie-hisi.o quirks are needed even without CONFIG_PCIE_DW
obj-y += dwc/
diff --git a/drivers/pci/controller/pcie-qcom-ecam.c b/drivers/pci/controller/pcie-qcom-ecam.c
new file mode 100644
index 00000000..5b4c68b
--- /dev/null
+++ b/drivers/pci/controller/pcie-qcom-ecam.c
@@ -0,0 +1,575 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Qualcomm PCIe ECAM root host controller driver
+ * Copyright (c) 2024 Qualcomm Innovation Center, Inc. All rights reserved.
+ */
+#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+
+#include <linux/irq.h>
+#include <linux/irqchip/chained_irq.h>
+#include <linux/irqdomain.h>
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/msi.h>
+#include <linux/of_address.h>
+#include <linux/of_irq.h>
+#include <linux/of_pci.h>
+#include <linux/pci.h>
+#include <linux/pci-ecam.h>
+#include <linux/platform_device.h>
+#include <linux/pm_domain.h>
+#include <linux/pm_runtime.h>
+#include <linux/slab.h>
+#include <linux/types.h>
+
+#define PCIE_MSI_CTRL_BASE (0x820)
+#define PCIE_MSI_CTRL_SIZE (0x68)
+#define PCIE_MSI_CTRL_ADDR_OFFS (0x0)
+#define PCIE_MSI_CTRL_UPPER_ADDR_OFFS (0x4)
+#define PCIE_MSI_CTRL_INT_N_EN_OFFS(n) (0x8 + 0xc * (n))
+#define PCIE_MSI_CTRL_INT_N_MASK_OFFS(n) (0xc + 0xc * (n))
+#define PCIE_MSI_CTRL_INT_N_STATUS_OFFS(n) (0x10 + 0xc * (n))
+
+#define MSI_DB_ADDR 0xa0000000
+#define MSI_IRQ_PER_GRP (32)
+
+/**
+ * struct qcom_msi_irq - MSI IRQ information
+ * @client: pointer to MSI client struct
+ * @grp: group the irq belongs to
+ * @grp_index: index in group
+ * @hwirq: hwirq number
+ * @virq: virq number
+ * @pos: position in MSI bitmap
+ */
+struct qcom_msi_irq {
+ struct qcom_msi_client *client;
+ struct qcom_msi_grp *grp;
+ unsigned int grp_index;
+ unsigned int hwirq;
+ unsigned int virq;
+ u32 pos;
+};
+
+/**
+ * struct qcom_msi_grp - MSI group information
+ * @int_en_reg: memory-mapped interrupt enable register address
+ * @int_mask_reg: memory-mapped interrupt mask register address
+ * @int_status_reg: memory-mapped interrupt status register address
+ * @mask: tracks masked/unmasked MSI
+ * @irqs: per-group array of MSI IRQ information
+ */
+struct qcom_msi_grp {
+ void __iomem *int_en_reg;
+ void __iomem *int_mask_reg;
+ void __iomem *int_status_reg;
+ u32 mask;
+ struct qcom_msi_irq irqs[MSI_IRQ_PER_GRP];
+};
+
+/**
+ * struct qcom_msi - PCIe controller based MSI controller information
+ * @clients: list for tracking clients
+ * @dev: platform device node
+ * @nr_hwirqs: total number of hardware IRQs
+ * @nr_virqs: total number of virqs
+ * @nr_grps: total number of groups
+ * @grps: pointer to all groups information
+ * @bitmap: tracks used/unused MSI
+ * @mutex: for modifying MSI client list and bitmap
+ * @inner_domain: parent domain; gen irq related
+ * @msi_domain: child domain; pcie related
+ * @msi_db_addr: MSI doorbell address
+ * @cfg_lock: lock for configuring MSI controller registers
+ * @pcie_msi_cfg: memory-mapped MSI controller register space
+ */
+struct qcom_msi {
+ struct list_head clients;
+ struct device *dev;
+ int nr_hwirqs;
+ int nr_virqs;
+ int nr_grps;
+ struct qcom_msi_grp *grps;
+ unsigned long *bitmap;
+ struct mutex mutex;
+ struct irq_domain *inner_domain;
+ struct irq_domain *msi_domain;
+ phys_addr_t msi_db_addr;
+ spinlock_t cfg_lock;
+ void __iomem *pcie_msi_cfg;
+};
+
+/**
+ * struct qcom_msi_client - structure for each client of MSI controller
+ * @node: list to track number of MSI clients
+ * @msi: client specific MSI controller based resource pointer
+ * @dev: the client's struct device (from its pci_dev)
+ * @nr_irqs: number of irqs allocated for client
+ * @msi_addr: MSI doorbell address
+ */
+struct qcom_msi_client {
+ struct list_head node;
+ struct qcom_msi *msi;
+ struct device *dev;
+ unsigned int nr_irqs;
+ phys_addr_t msi_addr;
+};
+
+static void qcom_msi_handler(struct irq_desc *desc)
+{
+ struct irq_chip *chip = irq_desc_get_chip(desc);
+ struct qcom_msi_grp *msi_grp;
+ u32 status;
+ int i;
+
+ chained_irq_enter(chip, desc);
+
+ msi_grp = irq_desc_get_handler_data(desc);
+ status = readl_relaxed(msi_grp->int_status_reg);
+ status ^= (msi_grp->mask & status);
+ writel(status, msi_grp->int_status_reg);
+
+ for (i = 0; status; i++, status >>= 1)
+ if (status & 0x1)
+ generic_handle_irq(msi_grp->irqs[i].virq);
+
+ chained_irq_exit(chip, desc);
+}
+
+static void qcom_msi_mask_irq(struct irq_data *data)
+{
+ struct irq_data *parent_data;
+ struct qcom_msi_irq *msi_irq;
+ struct qcom_msi_grp *msi_grp;
+ struct qcom_msi *msi;
+ unsigned long flags;
+
+ parent_data = data->parent_data;
+ if (!parent_data)
+ return;
+
+ msi_irq = irq_data_get_irq_chip_data(parent_data);
+ msi = msi_irq->client->msi;
+ msi_grp = msi_irq->grp;
+
+ spin_lock_irqsave(&msi->cfg_lock, flags);
+ pci_msi_mask_irq(data);
+ msi_grp->mask |= BIT(msi_irq->grp_index);
+ writel(msi_grp->mask, msi_grp->int_mask_reg);
+ spin_unlock_irqrestore(&msi->cfg_lock, flags);
+}
+
+static void qcom_msi_unmask_irq(struct irq_data *data)
+{
+ struct irq_data *parent_data;
+ struct qcom_msi_irq *msi_irq;
+ struct qcom_msi_grp *msi_grp;
+ struct qcom_msi *msi;
+ unsigned long flags;
+
+ parent_data = data->parent_data;
+ if (!parent_data)
+ return;
+
+ msi_irq = irq_data_get_irq_chip_data(parent_data);
+ msi = msi_irq->client->msi;
+ msi_grp = msi_irq->grp;
+
+ spin_lock_irqsave(&msi->cfg_lock, flags);
+ msi_grp->mask &= ~BIT(msi_irq->grp_index);
+ writel(msi_grp->mask, msi_grp->int_mask_reg);
+ pci_msi_unmask_irq(data);
+ spin_unlock_irqrestore(&msi->cfg_lock, flags);
+}
+
+static struct irq_chip qcom_msi_irq_chip = {
+ .name = "qcom_pci_msi",
+ .irq_enable = qcom_msi_unmask_irq,
+ .irq_disable = qcom_msi_mask_irq,
+ .irq_mask = qcom_msi_mask_irq,
+ .irq_unmask = qcom_msi_unmask_irq,
+};
+
+static int qcom_msi_domain_prepare(struct irq_domain *domain, struct device *dev,
+ int nvec, msi_alloc_info_t *arg)
+{
+ struct qcom_msi *msi = domain->parent->host_data;
+ struct qcom_msi_client *client;
+
+ client = kzalloc(sizeof(*client), GFP_KERNEL);
+ if (!client)
+ return -ENOMEM;
+
+ client->msi = msi;
+ client->dev = dev;
+ client->msi_addr = msi->msi_db_addr;
+ mutex_lock(&msi->mutex);
+ list_add_tail(&client->node, &msi->clients);
+ mutex_unlock(&msi->mutex);
+
+ /* zero out struct for pcie msi framework */
+ memset(arg, 0, sizeof(*arg));
+ return 0;
+}
+
+static struct msi_domain_ops qcom_msi_domain_ops = {
+ .msi_prepare = qcom_msi_domain_prepare,
+};
+
+static struct msi_domain_info qcom_msi_domain_info = {
+ .flags = (MSI_FLAG_USE_DEF_DOM_OPS | MSI_FLAG_USE_DEF_CHIP_OPS |
+ MSI_FLAG_MULTI_PCI_MSI | MSI_FLAG_PCI_MSIX),
+ .ops = &qcom_msi_domain_ops,
+ .chip = &qcom_msi_irq_chip,
+};
+
+static int qcom_msi_irq_set_affinity(struct irq_data *data,
+ const struct cpumask *mask, bool force)
+{
+ struct irq_data *parent_data = irq_get_irq_data(irqd_to_hwirq(data));
+ int ret = 0;
+
+ if (!parent_data)
+ return -ENODEV;
+
+ /* set affinity for MSI HW IRQ */
+ if (parent_data->chip->irq_set_affinity)
+ ret = parent_data->chip->irq_set_affinity(parent_data, mask, force);
+
+ return ret;
+}
+
+static void qcom_msi_irq_compose_msi_msg(struct irq_data *data, struct msi_msg *msg)
+{
+ struct irq_data *parent_data = irq_get_irq_data(irqd_to_hwirq(data));
+ struct qcom_msi_irq *msi_irq = irq_data_get_irq_chip_data(data);
+ struct qcom_msi_client *client = msi_irq->client;
+
+ if (!parent_data)
+ return;
+
+ msg->address_lo = lower_32_bits(client->msi_addr);
+ msg->address_hi = upper_32_bits(client->msi_addr);
+ msg->data = msi_irq->pos;
+}
+
+static struct irq_chip qcom_msi_bottom_irq_chip = {
+ .name = "qcom_msi",
+ .irq_set_affinity = qcom_msi_irq_set_affinity,
+ .irq_compose_msi_msg = qcom_msi_irq_compose_msi_msg,
+};
+
+static int qcom_msi_irq_domain_alloc(struct irq_domain *domain, unsigned int virq,
+ unsigned int nr_irqs, void *args)
+{
+ struct device *dev = ((msi_alloc_info_t *)args)->desc->dev;
+ struct qcom_msi_client *tmp, *client = NULL;
+ struct qcom_msi *msi = domain->host_data;
+ int i, ret = 0;
+ int pos;
+
+ mutex_lock(&msi->mutex);
+ list_for_each_entry(tmp, &msi->clients, node) {
+ if (tmp->dev == dev) {
+ client = tmp;
+ break;
+ }
+ }
+
+ if (!client) {
+ dev_err(msi->dev, "failed to find MSI client dev\n");
+ ret = -ENODEV;
+ goto out;
+ }
+
+ pos = bitmap_find_next_zero_area(msi->bitmap, msi->nr_virqs, 0,
+ nr_irqs, nr_irqs - 1);
+ if (pos >= msi->nr_virqs) {
+ ret = -ENOSPC;
+ goto out;
+ }
+
+ bitmap_set(msi->bitmap, pos, nr_irqs);
+ for (i = 0; i < nr_irqs; i++) {
+ u32 grp = pos / MSI_IRQ_PER_GRP;
+ u32 index = pos % MSI_IRQ_PER_GRP;
+ struct qcom_msi_irq *msi_irq = &msi->grps[grp].irqs[index];
+
+ msi_irq->virq = virq + i;
+ msi_irq->client = client;
+ irq_domain_set_info(domain, msi_irq->virq,
+ msi_irq->hwirq,
+ &qcom_msi_bottom_irq_chip, msi_irq,
+ handle_simple_irq, NULL, NULL);
+ client->nr_irqs++;
+ pos++;
+ }
+out:
+ mutex_unlock(&msi->mutex);
+ return ret;
+}
+
+static void qcom_msi_irq_domain_free(struct irq_domain *domain, unsigned int virq,
+ unsigned int nr_irqs)
+{
+ struct irq_data *data = irq_domain_get_irq_data(domain, virq);
+ struct qcom_msi_client *client;
+ struct qcom_msi_irq *msi_irq;
+ struct qcom_msi *msi;
+
+ if (!data)
+ return;
+
+ msi_irq = irq_data_get_irq_chip_data(data);
+ client = msi_irq->client;
+ msi = client->msi;
+
+ mutex_lock(&msi->mutex);
+ bitmap_clear(msi->bitmap, msi_irq->pos, nr_irqs);
+
+ client->nr_irqs -= nr_irqs;
+ if (!client->nr_irqs) {
+ list_del(&client->node);
+ kfree(client);
+ }
+ mutex_unlock(&msi->mutex);
+
+ irq_domain_free_irqs_parent(domain, virq, nr_irqs);
+}
+
+static const struct irq_domain_ops msi_domain_ops = {
+ .alloc = qcom_msi_irq_domain_alloc,
+ .free = qcom_msi_irq_domain_free,
+};
+
+static int qcom_msi_alloc_domains(struct qcom_msi *msi)
+{
+ msi->inner_domain = irq_domain_add_linear(NULL, msi->nr_virqs,
+ &msi_domain_ops, msi);
+ if (!msi->inner_domain) {
+ dev_err(msi->dev, "failed to create IRQ inner domain\n");
+ return -ENOMEM;
+ }
+
+ msi->msi_domain = pci_msi_create_irq_domain(of_node_to_fwnode(msi->dev->of_node),
+ &qcom_msi_domain_info, msi->inner_domain);
+ if (!msi->msi_domain) {
+ dev_err(msi->dev, "failed to create MSI domain\n");
+ irq_domain_remove(msi->inner_domain);
+ return -ENOMEM;
+ }
+
+ return 0;
+}
+
+static int qcom_msi_irq_setup(struct qcom_msi *msi)
+{
+ struct qcom_msi_grp *msi_grp;
+ struct qcom_msi_irq *msi_irq;
+ int i, index, ret;
+ unsigned int irq;
+
+ /* setup each MSI group. nr_hwirqs == nr_grps */
+ for (i = 0; i < msi->nr_hwirqs; i++) {
+ irq = irq_of_parse_and_map(msi->dev->of_node, i);
+ if (!irq) {
+ dev_err(msi->dev,
+ "MSI: failed to parse/map interrupt\n");
+ ret = -ENODEV;
+ goto free_irqs;
+ }
+
+ msi_grp = &msi->grps[i];
+ msi_grp->int_en_reg = msi->pcie_msi_cfg +
+ PCIE_MSI_CTRL_INT_N_EN_OFFS(i);
+ msi_grp->int_mask_reg = msi->pcie_msi_cfg +
+ PCIE_MSI_CTRL_INT_N_MASK_OFFS(i);
+ msi_grp->int_status_reg = msi->pcie_msi_cfg +
+ PCIE_MSI_CTRL_INT_N_STATUS_OFFS(i);
+
+ for (index = 0; index < MSI_IRQ_PER_GRP; index++) {
+ msi_irq = &msi_grp->irqs[index];
+
+ msi_irq->grp = msi_grp;
+ msi_irq->grp_index = index;
+ msi_irq->pos = (i * MSI_IRQ_PER_GRP) + index;
+ msi_irq->hwirq = irq;
+ }
+
+ irq_set_chained_handler_and_data(irq, qcom_msi_handler, msi_grp);
+ }
+
+ return 0;
+
+free_irqs:
+ for (--i; i >= 0; i--) {
+ irq = msi->grps[i].irqs[0].hwirq;
+
+ irq_set_chained_handler_and_data(irq, NULL, NULL);
+ irq_dispose_mapping(irq);
+ }
+
+ return ret;
+}
+
+static void qcom_msi_config(struct irq_domain *domain)
+{
+ struct qcom_msi *msi;
+ int i;
+
+ msi = domain->parent->host_data;
+
+ /* program termination address */
+ writel(msi->msi_db_addr, msi->pcie_msi_cfg + PCIE_MSI_CTRL_ADDR_OFFS);
+ writel(0, msi->pcie_msi_cfg + PCIE_MSI_CTRL_UPPER_ADDR_OFFS);
+
+ /* restore mask and enable all interrupts for each group */
+ for (i = 0; i < msi->nr_grps; i++) {
+ struct qcom_msi_grp *msi_grp = &msi->grps[i];
+
+ writel(msi_grp->mask, msi_grp->int_mask_reg);
+ writel(~0, msi_grp->int_en_reg);
+ }
+}
+
+static void qcom_msi_deinit(struct qcom_msi *msi)
+{
+ irq_domain_remove(msi->msi_domain);
+ irq_domain_remove(msi->inner_domain);
+}
+
+static struct qcom_msi *qcom_msi_init(struct device *dev)
+{
+ struct qcom_msi *msi;
+ u64 addr;
+ int ret;
+
+ msi = devm_kzalloc(dev, sizeof(*msi), GFP_KERNEL);
+ if (!msi)
+ return ERR_PTR(-ENOMEM);
+
+ msi->dev = dev;
+ mutex_init(&msi->mutex);
+ spin_lock_init(&msi->cfg_lock);
+ INIT_LIST_HEAD(&msi->clients);
+
+ msi->msi_db_addr = MSI_DB_ADDR;
+ msi->nr_hwirqs = of_irq_count(dev->of_node);
+ if (!msi->nr_hwirqs) {
+ dev_err(msi->dev, "no hwirqs found\n");
+ return ERR_PTR(-ENODEV);
+ }
+
+ if (of_property_read_reg(dev->of_node, 0, &addr, NULL) < 0) {
+ dev_err(msi->dev, "failed to get reg address\n");
+ return ERR_PTR(-ENODEV);
+ }
+
+ dev_dbg(msi->dev, "hwirq:%d pcie_msi_cfg:%llx\n", msi->nr_hwirqs, addr);
+ msi->pcie_msi_cfg = devm_ioremap(dev, addr + PCIE_MSI_CTRL_BASE, PCIE_MSI_CTRL_SIZE);
+ if (!msi->pcie_msi_cfg)
+ return ERR_PTR(-ENOMEM);
+
+ msi->nr_virqs = msi->nr_hwirqs * MSI_IRQ_PER_GRP;
+ msi->nr_grps = msi->nr_hwirqs;
+ msi->grps = devm_kcalloc(dev, msi->nr_grps, sizeof(*msi->grps), GFP_KERNEL);
+ if (!msi->grps)
+ return ERR_PTR(-ENOMEM);
+
+ msi->bitmap = devm_kcalloc(dev, BITS_TO_LONGS(msi->nr_virqs),
+ sizeof(*msi->bitmap), GFP_KERNEL);
+ if (!msi->bitmap)
+ return ERR_PTR(-ENOMEM);
+
+ ret = qcom_msi_alloc_domains(msi);
+ if (ret)
+ return ERR_PTR(ret);
+
+ ret = qcom_msi_irq_setup(msi);
+ if (ret) {
+ qcom_msi_deinit(msi);
+ return ERR_PTR(ret);
+ }
+
+ qcom_msi_config(msi->msi_domain);
+ return msi;
+}
+
+static int qcom_pcie_ecam_suspend_noirq(struct device *dev)
+{
+ return pm_runtime_put_sync(dev);
+}
+
+static int qcom_pcie_ecam_resume_noirq(struct device *dev)
+{
+ return pm_runtime_get_sync(dev);
+}
+
+static int qcom_pcie_ecam_probe(struct platform_device *pdev)
+{
+ struct device *dev = &pdev->dev;
+ struct qcom_msi *msi;
+ int ret;
+
+ ret = devm_pm_runtime_enable(dev);
+ if (ret)
+ return ret;
+
+ ret = pm_runtime_resume_and_get(dev);
+ if (ret < 0) {
+ dev_err(dev, "fail to enable pcie controller: %d\n", ret);
+ return ret;
+ }
+
+ msi = qcom_msi_init(dev);
+ if (IS_ERR(msi)) {
+ pm_runtime_put_sync(dev);
+ return PTR_ERR(msi);
+ }
+
+ ret = pci_host_common_probe(pdev);
+ if (ret) {
+ dev_err(dev, "pci_host_common_probe() failed:%d\n", ret);
+ qcom_msi_deinit(msi);
+ pm_runtime_put_sync(dev);
+ }
+
+ return ret;
+}
+
+static const struct dev_pm_ops qcom_pcie_ecam_pm_ops = {
+ NOIRQ_SYSTEM_SLEEP_PM_OPS(qcom_pcie_ecam_suspend_noirq,
+ qcom_pcie_ecam_resume_noirq)
+};
+
+static const struct pci_ecam_ops qcom_pcie_ecam_ops = {
+ .pci_ops = {
+ .map_bus = pci_ecam_map_bus,
+ .read = pci_generic_config_read,
+ .write = pci_generic_config_write,
+ }
+};
+
+static const struct of_device_id qcom_pcie_ecam_of_match[] = {
+ {
+ .compatible = "qcom,pcie-ecam-rc",
+ .data = &qcom_pcie_ecam_ops,
+ },
+ { },
+};
+MODULE_DEVICE_TABLE(of, qcom_pcie_ecam_of_match);
+
+static struct platform_driver qcom_pcie_ecam_driver = {
+ .probe = qcom_pcie_ecam_probe,
+ .driver = {
+ .name = "qcom-pcie-ecam-rc",
+ .suppress_bind_attrs = true,
+ .of_match_table = qcom_pcie_ecam_of_match,
+ .probe_type = PROBE_PREFER_ASYNCHRONOUS,
+ .pm = &qcom_pcie_ecam_pm_ops,
+ },
+};
+module_platform_driver(qcom_pcie_ecam_driver);
+
+MODULE_DESCRIPTION("Qualcomm PCIe ECAM root complex driver");
+MODULE_LICENSE("GPL");
--
2.7.4
* Re: [RFC PATCH 1/2] dt-bindings: pcie: Document QCOM PCIE ECAM compatible root complex
2024-04-04 19:11 ` [RFC PATCH 1/2] dt-bindings: pcie: Document QCOM PCIE ECAM compatible root complex Mayank Rana
@ 2024-04-04 19:30 ` Krzysztof Kozlowski
2024-04-08 19:09 ` Mayank Rana
0 siblings, 1 reply; 27+ messages in thread
From: Krzysztof Kozlowski @ 2024-04-04 19:30 UTC (permalink / raw)
To: Mayank Rana, linux-pci, lpieralisi, kw, robh, bhelgaas, andersson,
manivannan.sadhasivam, krzysztof.kozlowski+dt, conor+dt,
devicetree
Cc: linux-arm-msm, quic_ramkri, quic_nkela, quic_shazhuss,
quic_msarkar, quic_nitegupt
On 04/04/2024 21:11, Mayank Rana wrote:
> On some Qualcomm platforms, firmware configures the PCIe controller in RC
On which?
Your commit message or binding must answer all such questions.
> mode with static iATU window mappings of the configuration space for the
> entire supported bus range, making it ECAM compatible. Firmware also
> manages the PCIe PHY as well as the required system resources. Document the
> properties and configuration required to power up the QCOM PCIe ECAM
> compatible root complex and PHY for PCIe functionality.
>
> Signed-off-by: Mayank Rana <quic_mrana@quicinc.com>
> ---
> .../devicetree/bindings/pci/qcom,pcie-ecam.yaml | 94 ++++++++++++++++++++++
> 1 file changed, 94 insertions(+)
> create mode 100644 Documentation/devicetree/bindings/pci/qcom,pcie-ecam.yaml
>
> diff --git a/Documentation/devicetree/bindings/pci/qcom,pcie-ecam.yaml b/Documentation/devicetree/bindings/pci/qcom,pcie-ecam.yaml
> new file mode 100644
> index 00000000..c209f12
> --- /dev/null
> +++ b/Documentation/devicetree/bindings/pci/qcom,pcie-ecam.yaml
> @@ -0,0 +1,94 @@
> +# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
> +%YAML 1.2
> +---
> +$id: http://devicetree.org/schemas/pci/qcom,pcie-ecam.yaml#
> +$schema: http://devicetree.org/meta-schemas/core.yaml#
> +
> +title: Qualcomm ECAM compliant PCI express root complex
> +
> +description: |
You do not need '|' unless you need to preserve formatting.
> + Qualcomm SOC based ECAM compatible PCIe root complex supporting MSI controller.
Which SoC?
> + Firmware configures PCIe controller in RC mode with static iATU window mappings
> + of configuration space for entire supported bus range in ECAM compatible mode.
> +
> +maintainers:
> + - Mayank Rana <quic_mrana@quicinc.com>
> +
> +allOf:
> + - $ref: /schemas/pci/pci-bus.yaml#
> + - $ref: /schemas/power-domain/power-domain-consumer.yaml
> +
> +properties:
> + compatible:
> + const: qcom,pcie-ecam-rc
No, this must have SoC specific compatibles.
> +
> + reg:
> + minItems: 1
maxItems instead
> + description: ECAM address space covering the root port through the end of the supported bus range
> +
> + interrupts:
> + minItems: 1
> + maxItems: 8
This is way too unspecific.
> +
> + ranges:
> + minItems: 2
> + maxItems: 3
Why variable?
> +
> + iommu-map:
> + minItems: 1
> + maxItems: 16
Why variable?
Open existing bindings and look how it is done.
> +
> + power-domains:
> + maxItems: 1
> + description: A phandle to a node which provides a way to communicate with firmware
> + for enabling the PCIe controller and PHY, as well as for managing all system
> + resources needed to make both the controller and PHY operational for PCIe functionality.
This description does not tell me much. Say something specific. And drop
redundant parts like phandle.
> +
> + dma-coherent: true
> +
> +required:
> + - compatible
> + - reg
> + - interrupts
> + - ranges
> + - power-domains
> + - device_type
> + - linux,pci-domain
> + - bus-range
> +
> +unevaluatedProperties: false
> +
> +examples:
> + - |
> + #include <dt-bindings/interrupt-controller/arm-gic.h>
> + soc {
> + #address-cells = <2>;
> + #size-cells = <2>;
> + pcie0: pci@1c00000 {
> + compatible = "qcom,pcie-ecam-rc";
> + reg = <0x4 0x00000000 0 0x10000000>;
> + device_type = "pci";
> + #address-cells = <3>;
> + #size-cells = <2>;
> + ranges = <0x01000000 0x0 0x40000000 0x0 0x40000000 0x0 0x100000>,
> + <0x02000000 0x0 0x40100000 0x0 0x40100000 0x0 0x1ff00000>,
> + <0x43000000 0x4 0x10100000 0x4 0x10100000 0x0 0x100000>;
Follow DTS coding style about placement and alignment.
Best regards,
Krzysztof
* Re: [RFC PATCH 2/2] PCI: Add Qualcomm PCIe ECAM root complex driver
2024-04-04 19:11 ` [RFC PATCH 2/2] PCI: Add Qualcomm PCIe ECAM root complex driver Mayank Rana
@ 2024-04-04 19:33 ` Krzysztof Kozlowski
2024-04-05 5:30 ` Manivannan Sadhasivam
2024-04-05 18:30 ` Bjorn Helgaas
2 siblings, 0 replies; 27+ messages in thread
From: Krzysztof Kozlowski @ 2024-04-04 19:33 UTC (permalink / raw)
To: Mayank Rana, linux-pci, lpieralisi, kw, robh, bhelgaas, andersson,
manivannan.sadhasivam, krzysztof.kozlowski+dt, conor+dt,
devicetree
Cc: linux-arm-msm, quic_ramkri, quic_nkela, quic_shazhuss,
quic_msarkar, quic_nitegupt
On 04/04/2024 21:11, Mayank Rana wrote:
> On some Qualcomm platforms, firmware configures the PCIe controller in
> ECAM mode, allowing static memory allocation for the configuration space
> of the supported bus range. Firmware also takes care of bringing up the
> PCIe PHY and performing the operations required to bring the PCIe link
> into D0, and it manages the system resources (e.g. clocks, regulators,
> resets, and bus voting). Hence, add a Qualcomm PCIe ECAM root complex
> driver which enumerates the PCIe root complex and the connected PCIe
> devices. Firmware won't enumerate or power up the PCIe root complex until
> this driver issues power-domain based notifications to move the PCIe link
> between D0 and D3cold.
...
> +
> +static int qcom_pcie_ecam_suspend_noirq(struct device *dev)
> +{
> + return pm_runtime_put_sync(dev);
> +}
> +
> +static int qcom_pcie_ecam_resume_noirq(struct device *dev)
> +{
> + return pm_runtime_get_sync(dev);
> +}
> +
> +static int qcom_pcie_ecam_probe(struct platform_device *pdev)
> +{
> + struct device *dev = &pdev->dev;
> + struct qcom_msi *msi;
> + int ret;
> +
> + ret = devm_pm_runtime_enable(dev);
> + if (ret)
> + return ret;
> +
> + ret = pm_runtime_resume_and_get(dev);
> + if (ret < 0) {
> + dev_err(dev, "fail to enable pcie controller: %d\n", ret);
> + return ret;
> + }
> +
> + msi = qcom_msi_init(dev);
> + if (IS_ERR(msi)) {
> + pm_runtime_put_sync(dev);
> + return PTR_ERR(msi);
> + }
> +
> + ret = pci_host_common_probe(pdev);
> + if (ret) {
> + dev_err(dev, "pci_host_common_probe() failed:%d\n", ret);
Don't print the function name; say something useful instead. The above
error message is so unhelpful that you should just drop it.
> + qcom_msi_deinit(msi);
> + pm_runtime_put_sync(dev);
> + }
> +
> + return ret;
> +}
> +
> +static const struct dev_pm_ops qcom_pcie_ecam_pm_ops = {
> + NOIRQ_SYSTEM_SLEEP_PM_OPS(qcom_pcie_ecam_suspend_noirq,
> + qcom_pcie_ecam_resume_noirq)
> +};
> +
> +static const struct pci_ecam_ops qcom_pcie_ecam_ops = {
> + .pci_ops = {
> + .map_bus = pci_ecam_map_bus,
> + .read = pci_generic_config_read,
> + .write = pci_generic_config_write,
> + }
> +};
> +
> +static const struct of_device_id qcom_pcie_ecam_of_match[] = {
> + {
> + .compatible = "qcom,pcie-ecam-rc",
> + .data = &qcom_pcie_ecam_ops,
Why do you have ops/match data for generic compatible?
Best regards,
Krzysztof
* Re: [RFC PATCH 0/2] Add Qualcomm PCIe ECAM root complex driver
2024-04-04 19:11 [RFC PATCH 0/2] Add Qualcomm PCIe ECAM root complex driver Mayank Rana
2024-04-04 19:11 ` [RFC PATCH 1/2] dt-bindings: pcie: Document QCOM PCIE ECAM compatible root complex Mayank Rana
2024-04-04 19:11 ` [RFC PATCH 2/2] PCI: Add Qualcomm PCIe ECAM root complex driver Mayank Rana
@ 2024-04-04 19:33 ` Krzysztof Kozlowski
2024-04-04 23:02 ` Mayank Rana
2 siblings, 1 reply; 27+ messages in thread
From: Krzysztof Kozlowski @ 2024-04-04 19:33 UTC (permalink / raw)
To: Mayank Rana, linux-pci, lpieralisi, kw, robh, bhelgaas, andersson,
manivannan.sadhasivam, krzysztof.kozlowski+dt, conor+dt,
devicetree
Cc: linux-arm-msm, quic_ramkri, quic_nkela, quic_shazhuss,
quic_msarkar, quic_nitegupt
On 04/04/2024 21:11, Mayank Rana wrote:
> On some Qualcomm platforms, firmware takes care of the system resources
> related to the PCIe PHY and controller, as well as bringing up the PCIe
> link and applying a static iATU configuration so that the PCIe
> controller works in ECAM-compliant mode. Hence, add a Qualcomm PCIe ECAM
> root complex driver.
>
> Tested:
> - Validated NVMe functionality with PCIe0 and PCIe1 on the SA8775p-ride platform
>
RFC means the code is not ready, right? Please get internal review done
and send it when it is ready. I am not sure whether you expect any
reviews: some people send an RFC and do not expect reviews, some do. I
have no clue, and I do not want to waste my time. Please clarify what you
expect from maintainers regarding this contribution.
Best regards,
Krzysztof
^ permalink raw reply [flat|nested] 27+ messages in thread
* Re: [RFC PATCH 0/2] Add Qualcomm PCIe ECAM root complex driver
2024-04-04 19:33 ` [RFC PATCH 0/2] " Krzysztof Kozlowski
@ 2024-04-04 23:02 ` Mayank Rana
2024-04-05 6:50 ` Krzysztof Kozlowski
0 siblings, 1 reply; 27+ messages in thread
From: Mayank Rana @ 2024-04-04 23:02 UTC (permalink / raw)
To: Krzysztof Kozlowski, linux-pci, lpieralisi, kw, robh, bhelgaas,
andersson, manivannan.sadhasivam, krzysztof.kozlowski+dt,
conor+dt, devicetree
Cc: linux-arm-msm, quic_ramkri, quic_nkela, quic_shazhuss,
quic_msarkar, quic_nitegupt
Hi Krzysztof
On 4/4/2024 12:33 PM, Krzysztof Kozlowski wrote:
> On 04/04/2024 21:11, Mayank Rana wrote:
>> On some Qualcomm platforms, firmware takes care of the system resources
>> related to the PCIe PHY and controller, as well as bringing up the PCIe
>> link and applying a static iATU configuration so that the PCIe
>> controller works in ECAM-compliant mode. Hence, add a Qualcomm PCIe ECAM
>> root complex driver.
>>
>> Tested:
>> - Validated NVMe functionality with PCIe0 and PCIe1 on the SA8775p-ride platform
>>
>
> RFC means the code is not ready, right? Please get internal review done
> and send it when it is ready. I am not sure whether you expect any
> reviews: some people send an RFC and do not expect reviews, some do. I
> have no clue, and I do not want to waste my time. Please clarify what
> you expect from maintainers regarding this contribution.
>
> Best regards,
> Krzysztof
>
Thanks for the initial comments.
Yes, this is work in progress; there is more functionality planned for
this driver. The purpose of sending this initial change is to get
feedback and review comments on the approach of a generic Qualcomm PCIe
ECAM driver and on the MSI functionality used with it. I missed
mentioning this in the cover letter. So please help review and provide
feedback.
Regards,
Mayank
^ permalink raw reply [flat|nested] 27+ messages in thread
* Re: [RFC PATCH 2/2] PCI: Add Qualcomm PCIe ECAM root complex driver
2024-04-04 19:11 ` [RFC PATCH 2/2] PCI: Add Qualcomm PCIe ECAM root complex driver Mayank Rana
2024-04-04 19:33 ` Krzysztof Kozlowski
@ 2024-04-05 5:30 ` Manivannan Sadhasivam
2024-04-05 17:41 ` Mayank Rana
2024-04-05 18:30 ` Bjorn Helgaas
2 siblings, 1 reply; 27+ messages in thread
From: Manivannan Sadhasivam @ 2024-04-05 5:30 UTC (permalink / raw)
To: Mayank Rana
Cc: linux-pci, lpieralisi, kw, robh, bhelgaas, andersson,
krzysztof.kozlowski+dt, conor+dt, devicetree, linux-arm-msm,
quic_ramkri, quic_nkela, quic_shazhuss, quic_msarkar,
quic_nitegupt
On Thu, Apr 04, 2024 at 12:11:24PM -0700, Mayank Rana wrote:
> On some Qualcomm platforms, firmware configures the PCIe controller into
> ECAM mode, allowing static memory allocation for the configuration space
> of the supported bus range. Firmware also takes care of bringing up the
> PCIe PHY and performing the operations required to bring the PCIe link
> into D0. Firmware also manages system resources (e.g.
> clocks/regulators/resets/bus voting). Hence, add a Qualcomm PCIe ECAM
> root complex driver which enumerates the PCIe root complex and connected
> PCIe devices. Firmware won't enumerate or power up the PCIe root complex
> until this driver invokes a power-domain-based notification to bring the
> PCIe link into D0/D3cold mode.
>
Is this an in-house PCIe IP of Qualcomm or the same DWC IP that is used in other
SoCs?
- Mani
> This driver also supports MSI functionality using the PCIe controller
> based MSI controller, as GIC ITS based MSI functionality is not
> available on some platforms.
>
> Signed-off-by: Mayank Rana <quic_mrana@quicinc.com>
> ---
> drivers/pci/controller/Kconfig | 12 +
> drivers/pci/controller/Makefile | 1 +
> drivers/pci/controller/pcie-qcom-ecam.c | 575 ++++++++++++++++++++++++++++++++
> 3 files changed, 588 insertions(+)
> create mode 100644 drivers/pci/controller/pcie-qcom-ecam.c
>
> diff --git a/drivers/pci/controller/Kconfig b/drivers/pci/controller/Kconfig
> index e534c02..abbd9f2 100644
> --- a/drivers/pci/controller/Kconfig
> +++ b/drivers/pci/controller/Kconfig
> @@ -353,6 +353,18 @@ config PCIE_XILINX_CPM
> Say 'Y' here if you want kernel support for the
> Xilinx Versal CPM host bridge.
>
> +config PCIE_QCOM_ECAM
> + tristate "QCOM PCIe ECAM host controller"
> + depends on ARCH_QCOM && PCI
> + depends on OF
> + select PCI_MSI
> + select PCI_HOST_COMMON
> + select IRQ_DOMAIN
> + help
> + Say 'Y' here if you want to use ECAM shift mode compatible Qualcomm
> + PCIe root host controller. The controller is programmed using firmware
> + to support ECAM compatible memory address space.
> +
> source "drivers/pci/controller/cadence/Kconfig"
> source "drivers/pci/controller/dwc/Kconfig"
> source "drivers/pci/controller/mobiveil/Kconfig"
> diff --git a/drivers/pci/controller/Makefile b/drivers/pci/controller/Makefile
> index f2b19e6..2f1ee1e 100644
> --- a/drivers/pci/controller/Makefile
> +++ b/drivers/pci/controller/Makefile
> @@ -40,6 +40,7 @@ obj-$(CONFIG_PCI_LOONGSON) += pci-loongson.o
> obj-$(CONFIG_PCIE_HISI_ERR) += pcie-hisi-error.o
> obj-$(CONFIG_PCIE_APPLE) += pcie-apple.o
> obj-$(CONFIG_PCIE_MT7621) += pcie-mt7621.o
> +obj-$(CONFIG_PCIE_QCOM_ECAM) += pcie-qcom-ecam.o
>
> # pcie-hisi.o quirks are needed even without CONFIG_PCIE_DW
> obj-y += dwc/
> diff --git a/drivers/pci/controller/pcie-qcom-ecam.c b/drivers/pci/controller/pcie-qcom-ecam.c
> new file mode 100644
> index 00000000..5b4c68b
> --- /dev/null
> +++ b/drivers/pci/controller/pcie-qcom-ecam.c
> @@ -0,0 +1,575 @@
> +// SPDX-License-Identifier: GPL-2.0-only
> +/*
> + * Qualcomm PCIe ECAM root host controller driver
> + * Copyright (c) 2024 Qualcomm Innovation Center, Inc. All rights reserved.
> + */
> +#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
> +
> +#include <linux/irq.h>
> +#include <linux/irqchip/chained_irq.h>
> +#include <linux/irqdomain.h>
> +#include <linux/kernel.h>
> +#include <linux/module.h>
> +#include <linux/msi.h>
> +#include <linux/of_address.h>
> +#include <linux/of_irq.h>
> +#include <linux/of_pci.h>
> +#include <linux/pci.h>
> +#include <linux/pci-ecam.h>
> +#include <linux/platform_device.h>
> +#include <linux/pm_domain.h>
> +#include <linux/pm_runtime.h>
> +#include <linux/slab.h>
> +#include <linux/types.h>
> +
> +#define PCIE_MSI_CTRL_BASE (0x820)
> +#define PCIE_MSI_CTRL_SIZE (0x68)
> +#define PCIE_MSI_CTRL_ADDR_OFFS (0x0)
> +#define PCIE_MSI_CTRL_UPPER_ADDR_OFFS (0x4)
> +#define PCIE_MSI_CTRL_INT_N_EN_OFFS(n) (0x8 + 0xc * (n))
> +#define PCIE_MSI_CTRL_INT_N_MASK_OFFS(n) (0xc + 0xc * (n))
> +#define PCIE_MSI_CTRL_INT_N_STATUS_OFFS(n) (0x10 + 0xc * (n))
> +
> +#define MSI_DB_ADDR 0xa0000000
> +#define MSI_IRQ_PER_GRP (32)
> +
> +/**
> + * struct qcom_msi_irq - MSI IRQ information
> + * @client: pointer to MSI client struct
> + * @grp: group the irq belongs to
> + * @grp_index: index in group
> + * @hwirq: hwirq number
> + * @virq: virq number
> + * @pos: position in MSI bitmap
> + */
> +struct qcom_msi_irq {
> + struct qcom_msi_client *client;
> + struct qcom_msi_grp *grp;
> + unsigned int grp_index;
> + unsigned int hwirq;
> + unsigned int virq;
> + u32 pos;
> +};
> +
> +/**
> + * struct qcom_msi_grp - MSI group information
> + * @int_en_reg: memory-mapped interrupt enable register address
> + * @int_mask_reg: memory-mapped interrupt mask register address
> + * @int_status_reg: memory-mapped interrupt status register address
> + * @mask: tracks masked/unmasked MSI
> + * @irqs: structure to MSI IRQ information
> + */
> +struct qcom_msi_grp {
> + void __iomem *int_en_reg;
> + void __iomem *int_mask_reg;
> + void __iomem *int_status_reg;
> + u32 mask;
> + struct qcom_msi_irq irqs[MSI_IRQ_PER_GRP];
> +};
> +
> +/**
> + * struct qcom_msi - PCIe controller based MSI controller information
> + * @clients: list for tracking clients
> + * @dev: platform device node
> + * @nr_hwirqs: total number of hardware IRQs
> + * @nr_virqs: total number of virqs
> + * @nr_grps: total number of groups
> + * @grps: pointer to all groups information
> + * @bitmap: tracks used/unused MSI
> + * @mutex: for modifying MSI client list and bitmap
> + * @inner_domain: parent domain; gen irq related
> + * @msi_domain: child domain; pcie related
> + * @msi_db_addr: MSI doorbell address
> + * @cfg_lock: lock for configuring MSI controller registers
> + * @pcie_msi_cfg: memory-mapped MSI controller register space
> + */
> +struct qcom_msi {
> + struct list_head clients;
> + struct device *dev;
> + int nr_hwirqs;
> + int nr_virqs;
> + int nr_grps;
> + struct qcom_msi_grp *grps;
> + unsigned long *bitmap;
> + struct mutex mutex;
> + struct irq_domain *inner_domain;
> + struct irq_domain *msi_domain;
> + phys_addr_t msi_db_addr;
> + spinlock_t cfg_lock;
> + void __iomem *pcie_msi_cfg;
> +};
> +
> +/**
> + * struct qcom_msi_client - structure for each client of MSI controller
> + * @node: list to track number of MSI clients
> + * @msi: client specific MSI controller based resource pointer
> + * @dev: client's dev of pci_dev
> + * @nr_irqs: number of irqs allocated for client
> + * @msi_addr: MSI doorbell address
> + */
> +struct qcom_msi_client {
> + struct list_head node;
> + struct qcom_msi *msi;
> + struct device *dev;
> + unsigned int nr_irqs;
> + phys_addr_t msi_addr;
> +};
> +
> +static void qcom_msi_handler(struct irq_desc *desc)
> +{
> + struct irq_chip *chip = irq_desc_get_chip(desc);
> + struct qcom_msi_grp *msi_grp;
> + u32 status;
> + int i;
> +
> + chained_irq_enter(chip, desc);
> +
> + msi_grp = irq_desc_get_handler_data(desc);
> + status = readl_relaxed(msi_grp->int_status_reg);
> + status ^= (msi_grp->mask & status);
> + writel(status, msi_grp->int_status_reg);
> +
> + for (i = 0; status; i++, status >>= 1)
> + if (status & 0x1)
> + generic_handle_irq(msi_grp->irqs[i].virq);
> +
> + chained_irq_exit(chip, desc);
> +}
> +
> +static void qcom_msi_mask_irq(struct irq_data *data)
> +{
> + struct irq_data *parent_data;
> + struct qcom_msi_irq *msi_irq;
> + struct qcom_msi_grp *msi_grp;
> + struct qcom_msi *msi;
> + unsigned long flags;
> +
> + parent_data = data->parent_data;
> + if (!parent_data)
> + return;
> +
> + msi_irq = irq_data_get_irq_chip_data(parent_data);
> + msi = msi_irq->client->msi;
> + msi_grp = msi_irq->grp;
> +
> + spin_lock_irqsave(&msi->cfg_lock, flags);
> + pci_msi_mask_irq(data);
> + msi_grp->mask |= BIT(msi_irq->grp_index);
> + writel(msi_grp->mask, msi_grp->int_mask_reg);
> + spin_unlock_irqrestore(&msi->cfg_lock, flags);
> +}
> +
> +static void qcom_msi_unmask_irq(struct irq_data *data)
> +{
> + struct irq_data *parent_data;
> + struct qcom_msi_irq *msi_irq;
> + struct qcom_msi_grp *msi_grp;
> + struct qcom_msi *msi;
> + unsigned long flags;
> +
> + parent_data = data->parent_data;
> + if (!parent_data)
> + return;
> +
> + msi_irq = irq_data_get_irq_chip_data(parent_data);
> + msi = msi_irq->client->msi;
> + msi_grp = msi_irq->grp;
> +
> + spin_lock_irqsave(&msi->cfg_lock, flags);
> + msi_grp->mask &= ~BIT(msi_irq->grp_index);
> + writel(msi_grp->mask, msi_grp->int_mask_reg);
> + pci_msi_unmask_irq(data);
> + spin_unlock_irqrestore(&msi->cfg_lock, flags);
> +}
> +
> +static struct irq_chip qcom_msi_irq_chip = {
> + .name = "qcom_pci_msi",
> + .irq_enable = qcom_msi_unmask_irq,
> + .irq_disable = qcom_msi_mask_irq,
> + .irq_mask = qcom_msi_mask_irq,
> + .irq_unmask = qcom_msi_unmask_irq,
> +};
> +
> +static int qcom_msi_domain_prepare(struct irq_domain *domain, struct device *dev,
> + int nvec, msi_alloc_info_t *arg)
> +{
> + struct qcom_msi *msi = domain->parent->host_data;
> + struct qcom_msi_client *client;
> +
> + client = kzalloc(sizeof(*client), GFP_KERNEL);
> + if (!client)
> + return -ENOMEM;
> +
> + client->msi = msi;
> + client->dev = dev;
> + client->msi_addr = msi->msi_db_addr;
> + mutex_lock(&msi->mutex);
> + list_add_tail(&client->node, &msi->clients);
> + mutex_unlock(&msi->mutex);
> +
> + /* zero out struct for pcie msi framework */
> + memset(arg, 0, sizeof(*arg));
> + return 0;
> +}
> +
> +static struct msi_domain_ops qcom_msi_domain_ops = {
> + .msi_prepare = qcom_msi_domain_prepare,
> +};
> +
> +static struct msi_domain_info qcom_msi_domain_info = {
> + .flags = (MSI_FLAG_USE_DEF_DOM_OPS | MSI_FLAG_USE_DEF_CHIP_OPS |
> + MSI_FLAG_MULTI_PCI_MSI | MSI_FLAG_PCI_MSIX),
> + .ops = &qcom_msi_domain_ops,
> + .chip = &qcom_msi_irq_chip,
> +};
> +
> +static int qcom_msi_irq_set_affinity(struct irq_data *data,
> + const struct cpumask *mask, bool force)
> +{
> + struct irq_data *parent_data = irq_get_irq_data(irqd_to_hwirq(data));
> + int ret = 0;
> +
> + if (!parent_data)
> + return -ENODEV;
> +
> + /* set affinity for MSI HW IRQ */
> + if (parent_data->chip->irq_set_affinity)
> + ret = parent_data->chip->irq_set_affinity(parent_data, mask, force);
> +
> + return ret;
> +}
> +
> +static void qcom_msi_irq_compose_msi_msg(struct irq_data *data, struct msi_msg *msg)
> +{
> + struct irq_data *parent_data = irq_get_irq_data(irqd_to_hwirq(data));
> + struct qcom_msi_irq *msi_irq = irq_data_get_irq_chip_data(data);
> + struct qcom_msi_client *client = msi_irq->client;
> +
> + if (!parent_data)
> + return;
> +
> + msg->address_lo = lower_32_bits(client->msi_addr);
> + msg->address_hi = upper_32_bits(client->msi_addr);
> + msg->data = msi_irq->pos;
> +}
> +
> +static struct irq_chip qcom_msi_bottom_irq_chip = {
> + .name = "qcom_msi",
> + .irq_set_affinity = qcom_msi_irq_set_affinity,
> + .irq_compose_msi_msg = qcom_msi_irq_compose_msi_msg,
> +};
> +
> +static int qcom_msi_irq_domain_alloc(struct irq_domain *domain, unsigned int virq,
> + unsigned int nr_irqs, void *args)
> +{
> + struct device *dev = ((msi_alloc_info_t *)args)->desc->dev;
> + struct qcom_msi_client *tmp, *client = NULL;
> + struct qcom_msi *msi = domain->host_data;
> + int i, ret = 0;
> + int pos;
> +
> + mutex_lock(&msi->mutex);
> + list_for_each_entry(tmp, &msi->clients, node) {
> + if (tmp->dev == dev) {
> + client = tmp;
> + break;
> + }
> + }
> +
> + if (!client) {
> + dev_err(msi->dev, "failed to find MSI client dev\n");
> + ret = -ENODEV;
> + goto out;
> + }
> +
> + pos = bitmap_find_next_zero_area(msi->bitmap, msi->nr_virqs, 0,
> + nr_irqs, nr_irqs - 1);
> + if (pos >= msi->nr_virqs) {
> + ret = -ENOSPC;
> + goto out;
> + }
> +
> + bitmap_set(msi->bitmap, pos, nr_irqs);
> + for (i = 0; i < nr_irqs; i++) {
> + u32 grp = pos / MSI_IRQ_PER_GRP;
> + u32 index = pos % MSI_IRQ_PER_GRP;
> + struct qcom_msi_irq *msi_irq = &msi->grps[grp].irqs[index];
> +
> + msi_irq->virq = virq + i;
> + msi_irq->client = client;
> + irq_domain_set_info(domain, msi_irq->virq,
> + msi_irq->hwirq,
> + &qcom_msi_bottom_irq_chip, msi_irq,
> + handle_simple_irq, NULL, NULL);
> + client->nr_irqs++;
> + pos++;
> + }
> +out:
> + mutex_unlock(&msi->mutex);
> + return ret;
> +}
> +
> +static void qcom_msi_irq_domain_free(struct irq_domain *domain, unsigned int virq,
> + unsigned int nr_irqs)
> +{
> + struct irq_data *data = irq_domain_get_irq_data(domain, virq);
> + struct qcom_msi_client *client;
> + struct qcom_msi_irq *msi_irq;
> + struct qcom_msi *msi;
> +
> + if (!data)
> + return;
> +
> + msi_irq = irq_data_get_irq_chip_data(data);
> + client = msi_irq->client;
> + msi = client->msi;
> +
> + mutex_lock(&msi->mutex);
> + bitmap_clear(msi->bitmap, msi_irq->pos, nr_irqs);
> +
> + client->nr_irqs -= nr_irqs;
> + if (!client->nr_irqs) {
> + list_del(&client->node);
> + kfree(client);
> + }
> + mutex_unlock(&msi->mutex);
> +
> + irq_domain_free_irqs_parent(domain, virq, nr_irqs);
> +}
> +
> +static const struct irq_domain_ops msi_domain_ops = {
> + .alloc = qcom_msi_irq_domain_alloc,
> + .free = qcom_msi_irq_domain_free,
> +};
> +
> +static int qcom_msi_alloc_domains(struct qcom_msi *msi)
> +{
> + msi->inner_domain = irq_domain_add_linear(NULL, msi->nr_virqs,
> + &msi_domain_ops, msi);
> + if (!msi->inner_domain) {
> + dev_err(msi->dev, "failed to create IRQ inner domain\n");
> + return -ENOMEM;
> + }
> +
> + msi->msi_domain = pci_msi_create_irq_domain(of_node_to_fwnode(msi->dev->of_node),
> + &qcom_msi_domain_info, msi->inner_domain);
> + if (!msi->msi_domain) {
> + dev_err(msi->dev, "failed to create MSI domain\n");
> + irq_domain_remove(msi->inner_domain);
> + return -ENOMEM;
> + }
> +
> + return 0;
> +}
> +
> +static int qcom_msi_irq_setup(struct qcom_msi *msi)
> +{
> + struct qcom_msi_grp *msi_grp;
> + struct qcom_msi_irq *msi_irq;
> + int i, index, ret;
> + unsigned int irq;
> +
> + /* setup each MSI group. nr_hwirqs == nr_grps */
> + for (i = 0; i < msi->nr_hwirqs; i++) {
> + irq = irq_of_parse_and_map(msi->dev->of_node, i);
> + if (!irq) {
> + dev_err(msi->dev,
> + "MSI: failed to parse/map interrupt\n");
> + ret = -ENODEV;
> + goto free_irqs;
> + }
> +
> + msi_grp = &msi->grps[i];
> + msi_grp->int_en_reg = msi->pcie_msi_cfg +
> + PCIE_MSI_CTRL_INT_N_EN_OFFS(i);
> + msi_grp->int_mask_reg = msi->pcie_msi_cfg +
> + PCIE_MSI_CTRL_INT_N_MASK_OFFS(i);
> + msi_grp->int_status_reg = msi->pcie_msi_cfg +
> + PCIE_MSI_CTRL_INT_N_STATUS_OFFS(i);
> +
> + for (index = 0; index < MSI_IRQ_PER_GRP; index++) {
> + msi_irq = &msi_grp->irqs[index];
> +
> + msi_irq->grp = msi_grp;
> + msi_irq->grp_index = index;
> + msi_irq->pos = (i * MSI_IRQ_PER_GRP) + index;
> + msi_irq->hwirq = irq;
> + }
> +
> + irq_set_chained_handler_and_data(irq, qcom_msi_handler, msi_grp);
> + }
> +
> + return 0;
> +
> +free_irqs:
> + for (--i; i >= 0; i--) {
> + irq = msi->grps[i].irqs[0].hwirq;
> +
> + irq_set_chained_handler_and_data(irq, NULL, NULL);
> + irq_dispose_mapping(irq);
> + }
> +
> + return ret;
> +}
> +
> +static void qcom_msi_config(struct irq_domain *domain)
> +{
> + struct qcom_msi *msi;
> + int i;
> +
> + msi = domain->parent->host_data;
> +
> + /* program termination address */
> + writel(msi->msi_db_addr, msi->pcie_msi_cfg + PCIE_MSI_CTRL_ADDR_OFFS);
> + writel(0, msi->pcie_msi_cfg + PCIE_MSI_CTRL_UPPER_ADDR_OFFS);
> +
> + /* restore mask and enable all interrupts for each group */
> + for (i = 0; i < msi->nr_grps; i++) {
> + struct qcom_msi_grp *msi_grp = &msi->grps[i];
> +
> + writel(msi_grp->mask, msi_grp->int_mask_reg);
> + writel(~0, msi_grp->int_en_reg);
> + }
> +}
> +
> +static void qcom_msi_deinit(struct qcom_msi *msi)
> +{
> + irq_domain_remove(msi->msi_domain);
> + irq_domain_remove(msi->inner_domain);
> +}
> +
> +static struct qcom_msi *qcom_msi_init(struct device *dev)
> +{
> + struct qcom_msi *msi;
> + u64 addr;
> + int ret;
> +
> + msi = devm_kzalloc(dev, sizeof(*msi), GFP_KERNEL);
> + if (!msi)
> + return ERR_PTR(-ENOMEM);
> +
> + msi->dev = dev;
> + mutex_init(&msi->mutex);
> + spin_lock_init(&msi->cfg_lock);
> + INIT_LIST_HEAD(&msi->clients);
> +
> + msi->msi_db_addr = MSI_DB_ADDR;
> + msi->nr_hwirqs = of_irq_count(dev->of_node);
> + if (!msi->nr_hwirqs) {
> + dev_err(msi->dev, "no hwirqs found\n");
> + return ERR_PTR(-ENODEV);
> + }
> +
> + if (of_property_read_reg(dev->of_node, 0, &addr, NULL) < 0) {
> + dev_err(msi->dev, "failed to get reg address\n");
> + return ERR_PTR(-ENODEV);
> + }
> +
> + dev_dbg(msi->dev, "hwirq:%d pcie_msi_cfg:%llx\n", msi->nr_hwirqs, addr);
> + msi->pcie_msi_cfg = devm_ioremap(dev, addr + PCIE_MSI_CTRL_BASE, PCIE_MSI_CTRL_SIZE);
> + if (!msi->pcie_msi_cfg)
> + return ERR_PTR(-ENOMEM);
> +
> + msi->nr_virqs = msi->nr_hwirqs * MSI_IRQ_PER_GRP;
> + msi->nr_grps = msi->nr_hwirqs;
> + msi->grps = devm_kcalloc(dev, msi->nr_grps, sizeof(*msi->grps), GFP_KERNEL);
> + if (!msi->grps)
> + return ERR_PTR(-ENOMEM);
> +
> + msi->bitmap = devm_kcalloc(dev, BITS_TO_LONGS(msi->nr_virqs),
> + sizeof(*msi->bitmap), GFP_KERNEL);
> + if (!msi->bitmap)
> + return ERR_PTR(-ENOMEM);
> +
> + ret = qcom_msi_alloc_domains(msi);
> + if (ret)
> + return ERR_PTR(ret);
> +
> + ret = qcom_msi_irq_setup(msi);
> + if (ret) {
> + qcom_msi_deinit(msi);
> + return ERR_PTR(ret);
> + }
> +
> + qcom_msi_config(msi->msi_domain);
> + return msi;
> +}
> +
> +static int qcom_pcie_ecam_suspend_noirq(struct device *dev)
> +{
> + return pm_runtime_put_sync(dev);
> +}
> +
> +static int qcom_pcie_ecam_resume_noirq(struct device *dev)
> +{
> + return pm_runtime_get_sync(dev);
> +}
> +
> +static int qcom_pcie_ecam_probe(struct platform_device *pdev)
> +{
> + struct device *dev = &pdev->dev;
> + struct qcom_msi *msi;
> + int ret;
> +
> + ret = devm_pm_runtime_enable(dev);
> + if (ret)
> + return ret;
> +
> + ret = pm_runtime_resume_and_get(dev);
> + if (ret < 0) {
> + dev_err(dev, "failed to enable PCIe controller: %d\n", ret);
> + return ret;
> + }
> +
> + msi = qcom_msi_init(dev);
> + if (IS_ERR(msi)) {
> + pm_runtime_put_sync(dev);
> + return PTR_ERR(msi);
> + }
> +
> + ret = pci_host_common_probe(pdev);
> + if (ret) {
> + dev_err(dev, "pci_host_common_probe() failed:%d\n", ret);
> + qcom_msi_deinit(msi);
> + pm_runtime_put_sync(dev);
> + }
> +
> + return ret;
> +}
> +
> +static const struct dev_pm_ops qcom_pcie_ecam_pm_ops = {
> + NOIRQ_SYSTEM_SLEEP_PM_OPS(qcom_pcie_ecam_suspend_noirq,
> + qcom_pcie_ecam_resume_noirq)
> +};
> +
> +static const struct pci_ecam_ops qcom_pcie_ecam_ops = {
> + .pci_ops = {
> + .map_bus = pci_ecam_map_bus,
> + .read = pci_generic_config_read,
> + .write = pci_generic_config_write,
> + }
> +};
> +
> +static const struct of_device_id qcom_pcie_ecam_of_match[] = {
> + {
> + .compatible = "qcom,pcie-ecam-rc",
> + .data = &qcom_pcie_ecam_ops,
> + },
> + { },
> +};
> +MODULE_DEVICE_TABLE(of, qcom_pcie_ecam_of_match);
> +
> +static struct platform_driver qcom_pcie_ecam_driver = {
> + .probe = qcom_pcie_ecam_probe,
> + .driver = {
> + .name = "qcom-pcie-ecam-rc",
> + .suppress_bind_attrs = true,
> + .of_match_table = qcom_pcie_ecam_of_match,
> + .probe_type = PROBE_PREFER_ASYNCHRONOUS,
> + .pm = &qcom_pcie_ecam_pm_ops,
> + },
> +};
> +module_platform_driver(qcom_pcie_ecam_driver);
> +
> +MODULE_DESCRIPTION("Qualcomm PCIe ECAM root complex driver");
> +MODULE_LICENSE("GPL");
> --
> 2.7.4
>
--
மணிவண்ணன் சதாசிவம்
^ permalink raw reply [flat|nested] 27+ messages in thread
* Re: [RFC PATCH 0/2] Add Qualcomm PCIe ECAM root complex driver
2024-04-04 23:02 ` Mayank Rana
@ 2024-04-05 6:50 ` Krzysztof Kozlowski
2024-04-05 17:45 ` Mayank Rana
0 siblings, 1 reply; 27+ messages in thread
From: Krzysztof Kozlowski @ 2024-04-05 6:50 UTC (permalink / raw)
To: Mayank Rana, linux-pci, lpieralisi, kw, robh, bhelgaas, andersson,
manivannan.sadhasivam, krzysztof.kozlowski+dt, conor+dt,
devicetree
Cc: linux-arm-msm, quic_ramkri, quic_nkela, quic_shazhuss,
quic_msarkar, quic_nitegupt
On 05/04/2024 01:02, Mayank Rana wrote:
> Hi Krzysztof
>
> On 4/4/2024 12:33 PM, Krzysztof Kozlowski wrote:
>> On 04/04/2024 21:11, Mayank Rana wrote:
>>> On some Qualcomm platforms, firmware takes care of the system resources
>>> related to the PCIe PHY and controller, as well as bringing up the PCIe
>>> link and applying a static iATU configuration so that the PCIe
>>> controller works in ECAM-compliant mode. Hence, add a Qualcomm PCIe ECAM
>>> root complex driver.
>>>
>>> Tested:
>>> - Validated NVMe functionality with PCIe0 and PCIe1 on the SA8775p-ride platform
>>>
>>
>> RFC means the code is not ready, right? Please get internal review done
>> and send it when it is ready. I am not sure whether you expect any
>> reviews: some people send an RFC and do not expect reviews, some do. I
>> have no clue, and I do not want to waste my time. Please clarify what
>> you expect from maintainers regarding this contribution.
>>
>> Best regards,
>> Krzysztof
>>
> Thanks for the initial comments.
> Yes, this is work in progress; there is more functionality planned for
> this driver. The purpose of sending this initial change is to get
> feedback and review comments on the approach of a generic Qualcomm PCIe
> ECAM driver and on the MSI functionality used with it. I missed
> mentioning this in the cover letter. So please help review and provide
> feedback.
Thanks for the explanation. Work in progress, as in not ready to be
merged? Then I am sorry, but I am not going to review unfinished work. I
have many more *finished* patches to review first. You can help with
those too...
Best regards,
Krzysztof
^ permalink raw reply [flat|nested] 27+ messages in thread
* Re: [RFC PATCH 2/2] PCI: Add Qualcomm PCIe ECAM root complex driver
2024-04-05 5:30 ` Manivannan Sadhasivam
@ 2024-04-05 17:41 ` Mayank Rana
2024-04-06 4:17 ` Manivannan Sadhasivam
0 siblings, 1 reply; 27+ messages in thread
From: Mayank Rana @ 2024-04-05 17:41 UTC (permalink / raw)
To: Manivannan Sadhasivam
Cc: linux-pci, lpieralisi, kw, robh, bhelgaas, andersson,
krzysztof.kozlowski+dt, conor+dt, devicetree, linux-arm-msm,
quic_ramkri, quic_nkela, quic_shazhuss, quic_msarkar,
quic_nitegupt
Hi Mani
On 4/4/2024 10:30 PM, Manivannan Sadhasivam wrote:
> On Thu, Apr 04, 2024 at 12:11:24PM -0700, Mayank Rana wrote:
>> On some Qualcomm platforms, firmware configures the PCIe controller into
>> ECAM mode, allowing static memory allocation for the configuration space
>> of the supported bus range. Firmware also takes care of bringing up the
>> PCIe PHY and performing the operations required to bring the PCIe link
>> into D0. Firmware also manages system resources (e.g.
>> clocks/regulators/resets/bus voting). Hence, add a Qualcomm PCIe ECAM
>> root complex driver which enumerates the PCIe root complex and connected
>> PCIe devices. Firmware won't enumerate or power up the PCIe root complex
>> until this driver invokes a power-domain-based notification to bring the
>> PCIe link into D0/D3cold mode.
>>
>
> Is this an in-house PCIe IP of Qualcomm or the same DWC IP that is used in other
> SoCs?
>
> - Mani
For now, the driver is validated on the SA8775p-ride platform, which
uses the DWC PCIe IP. That said, this driver doesn't need to know which
PCIe controller and PHY IP are used, or their programming sequence, as
those are taken care of by firmware.
>> This driver also supports MSI functionality using the PCIe controller
>> based MSI controller, as GIC ITS based MSI functionality is not
>> available on some platforms.
>>
>> Signed-off-by: Mayank Rana <quic_mrana@quicinc.com>
>> ---
>> drivers/pci/controller/Kconfig | 12 +
>> drivers/pci/controller/Makefile | 1 +
>> drivers/pci/controller/pcie-qcom-ecam.c | 575 ++++++++++++++++++++++++++++++++
>> 3 files changed, 588 insertions(+)
>> create mode 100644 drivers/pci/controller/pcie-qcom-ecam.c
>>
>> diff --git a/drivers/pci/controller/Kconfig b/drivers/pci/controller/Kconfig
>> index e534c02..abbd9f2 100644
>> --- a/drivers/pci/controller/Kconfig
>> +++ b/drivers/pci/controller/Kconfig
>> @@ -353,6 +353,18 @@ config PCIE_XILINX_CPM
>> Say 'Y' here if you want kernel support for the
>> Xilinx Versal CPM host bridge.
>>
>> +config PCIE_QCOM_ECAM
>> + tristate "QCOM PCIe ECAM host controller"
>> + depends on ARCH_QCOM && PCI
>> + depends on OF
>> + select PCI_MSI
>> + select PCI_HOST_COMMON
>> + select IRQ_DOMAIN
>> + help
>> + Say 'Y' here if you want to use ECAM shift mode compatible Qualcomm
>> + PCIe root host controller. The controller is programmed using firmware
>> + to support ECAM compatible memory address space.
>> +
>> source "drivers/pci/controller/cadence/Kconfig"
>> source "drivers/pci/controller/dwc/Kconfig"
>> source "drivers/pci/controller/mobiveil/Kconfig"
>> diff --git a/drivers/pci/controller/Makefile b/drivers/pci/controller/Makefile
>> index f2b19e6..2f1ee1e 100644
>> --- a/drivers/pci/controller/Makefile
>> +++ b/drivers/pci/controller/Makefile
>> @@ -40,6 +40,7 @@ obj-$(CONFIG_PCI_LOONGSON) += pci-loongson.o
>> obj-$(CONFIG_PCIE_HISI_ERR) += pcie-hisi-error.o
>> obj-$(CONFIG_PCIE_APPLE) += pcie-apple.o
>> obj-$(CONFIG_PCIE_MT7621) += pcie-mt7621.o
>> +obj-$(CONFIG_PCIE_QCOM_ECAM) += pcie-qcom-ecam.o
>>
>> # pcie-hisi.o quirks are needed even without CONFIG_PCIE_DW
>> obj-y += dwc/
>> diff --git a/drivers/pci/controller/pcie-qcom-ecam.c b/drivers/pci/controller/pcie-qcom-ecam.c
>> new file mode 100644
>> index 00000000..5b4c68b
>> --- /dev/null
>> +++ b/drivers/pci/controller/pcie-qcom-ecam.c
>> @@ -0,0 +1,575 @@
>> +// SPDX-License-Identifier: GPL-2.0-only
>> +/*
>> + * Qualcomm PCIe ECAM root host controller driver
>> + * Copyright (c) 2024 Qualcomm Innovation Center, Inc. All rights reserved.
>> + */
>> +#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
>> +
>> +#include <linux/irq.h>
>> +#include <linux/irqchip/chained_irq.h>
>> +#include <linux/irqdomain.h>
>> +#include <linux/kernel.h>
>> +#include <linux/module.h>
>> +#include <linux/msi.h>
>> +#include <linux/of_address.h>
>> +#include <linux/of_irq.h>
>> +#include <linux/of_pci.h>
>> +#include <linux/pci.h>
>> +#include <linux/pci-ecam.h>
>> +#include <linux/platform_device.h>
>> +#include <linux/pm_domain.h>
>> +#include <linux/pm_runtime.h>
>> +#include <linux/slab.h>
>> +#include <linux/types.h>
>> +
>> +#define PCIE_MSI_CTRL_BASE (0x820)
>> +#define PCIE_MSI_CTRL_SIZE (0x68)
>> +#define PCIE_MSI_CTRL_ADDR_OFFS (0x0)
>> +#define PCIE_MSI_CTRL_UPPER_ADDR_OFFS (0x4)
>> +#define PCIE_MSI_CTRL_INT_N_EN_OFFS(n) (0x8 + 0xc * (n))
>> +#define PCIE_MSI_CTRL_INT_N_MASK_OFFS(n) (0xc + 0xc * (n))
>> +#define PCIE_MSI_CTRL_INT_N_STATUS_OFFS(n) (0x10 + 0xc * (n))
>> +
>> +#define MSI_DB_ADDR 0xa0000000
>> +#define MSI_IRQ_PER_GRP (32)
>> +
>> +/**
>> + * struct qcom_msi_irq - MSI IRQ information
>> + * @client: pointer to MSI client struct
>> + * @grp: group the irq belongs to
>> + * @grp_index: index in group
>> + * @hwirq: hwirq number
>> + * @virq: virq number
>> + * @pos: position in MSI bitmap
>> + */
>> +struct qcom_msi_irq {
>> + struct qcom_msi_client *client;
>> + struct qcom_msi_grp *grp;
>> + unsigned int grp_index;
>> + unsigned int hwirq;
>> + unsigned int virq;
>> + u32 pos;
>> +};
>> +
>> +/**
>> + * struct qcom_msi_grp - MSI group information
>> + * @int_en_reg: memory-mapped interrupt enable register address
>> + * @int_mask_reg: memory-mapped interrupt mask register address
>> + * @int_status_reg: memory-mapped interrupt status register address
>> + * @mask: tracks masked/unmasked MSI
>> + * @irqs: structure to MSI IRQ information
>> + */
>> +struct qcom_msi_grp {
>> + void __iomem *int_en_reg;
>> + void __iomem *int_mask_reg;
>> + void __iomem *int_status_reg;
>> + u32 mask;
>> + struct qcom_msi_irq irqs[MSI_IRQ_PER_GRP];
>> +};
>> +
>> +/**
>> + * struct qcom_msi - PCIe controller based MSI controller information
>> + * @clients: list for tracking clients
>> + * @dev: platform device node
>> + * @nr_hwirqs: total number of hardware IRQs
>> + * @nr_virqs: total number of virqs
>> + * @nr_grps: total number of groups
>> + * @grps: pointer to all groups information
>> + * @bitmap: tracks used/unused MSI
>> + * @mutex: for modifying MSI client list and bitmap
>> + * @inner_domain: parent domain; gen irq related
>> + * @msi_domain: child domain; pcie related
>> + * @msi_db_addr: MSI doorbell address
>> + * @cfg_lock: lock for configuring MSI controller registers
>> + * @pcie_msi_cfg: memory-mapped MSI controller register space
>> + */
>> +struct qcom_msi {
>> + struct list_head clients;
>> + struct device *dev;
>> + int nr_hwirqs;
>> + int nr_virqs;
>> + int nr_grps;
>> + struct qcom_msi_grp *grps;
>> + unsigned long *bitmap;
>> + struct mutex mutex;
>> + struct irq_domain *inner_domain;
>> + struct irq_domain *msi_domain;
>> + phys_addr_t msi_db_addr;
>> + spinlock_t cfg_lock;
>> + void __iomem *pcie_msi_cfg;
>> +};
>> +
>> +/**
>> + * struct qcom_msi_client - structure for each client of MSI controller
>> + * @node: list to track number of MSI clients
>> + * @msi: client specific MSI controller based resource pointer
>> + * @dev: client's dev of pci_dev
>> + * @nr_irqs: number of irqs allocated for client
>> + * @msi_addr: MSI doorbell address
>> + */
>> +struct qcom_msi_client {
>> + struct list_head node;
>> + struct qcom_msi *msi;
>> + struct device *dev;
>> + unsigned int nr_irqs;
>> + phys_addr_t msi_addr;
>> +};
>> +
>> +static void qcom_msi_handler(struct irq_desc *desc)
>> +{
>> + struct irq_chip *chip = irq_desc_get_chip(desc);
>> + struct qcom_msi_grp *msi_grp;
>> + u32 status;
>> + int i;
>> +
>> + chained_irq_enter(chip, desc);
>> +
>> + msi_grp = irq_desc_get_handler_data(desc);
>> + status = readl_relaxed(msi_grp->int_status_reg);
>> + status ^= (msi_grp->mask & status);
>> + writel(status, msi_grp->int_status_reg);
>> +
>> + for (i = 0; status; i++, status >>= 1)
>> + if (status & 0x1)
>> + generic_handle_irq(msi_grp->irqs[i].virq);
>> +
>> + chained_irq_exit(chip, desc);
>> +}
>> +
>> +static void qcom_msi_mask_irq(struct irq_data *data)
>> +{
>> + struct irq_data *parent_data;
>> + struct qcom_msi_irq *msi_irq;
>> + struct qcom_msi_grp *msi_grp;
>> + struct qcom_msi *msi;
>> + unsigned long flags;
>> +
>> + parent_data = data->parent_data;
>> + if (!parent_data)
>> + return;
>> +
>> + msi_irq = irq_data_get_irq_chip_data(parent_data);
>> + msi = msi_irq->client->msi;
>> + msi_grp = msi_irq->grp;
>> +
>> + spin_lock_irqsave(&msi->cfg_lock, flags);
>> + pci_msi_mask_irq(data);
>> + msi_grp->mask |= BIT(msi_irq->grp_index);
>> + writel(msi_grp->mask, msi_grp->int_mask_reg);
>> + spin_unlock_irqrestore(&msi->cfg_lock, flags);
>> +}
>> +
>> +static void qcom_msi_unmask_irq(struct irq_data *data)
>> +{
>> + struct irq_data *parent_data;
>> + struct qcom_msi_irq *msi_irq;
>> + struct qcom_msi_grp *msi_grp;
>> + struct qcom_msi *msi;
>> + unsigned long flags;
>> +
>> + parent_data = data->parent_data;
>> + if (!parent_data)
>> + return;
>> +
>> + msi_irq = irq_data_get_irq_chip_data(parent_data);
>> + msi = msi_irq->client->msi;
>> + msi_grp = msi_irq->grp;
>> +
>> + spin_lock_irqsave(&msi->cfg_lock, flags);
>> + msi_grp->mask &= ~BIT(msi_irq->grp_index);
>> + writel(msi_grp->mask, msi_grp->int_mask_reg);
>> + pci_msi_unmask_irq(data);
>> + spin_unlock_irqrestore(&msi->cfg_lock, flags);
>> +}
>> +
>> +static struct irq_chip qcom_msi_irq_chip = {
>> + .name = "qcom_pci_msi",
>> + .irq_enable = qcom_msi_unmask_irq,
>> + .irq_disable = qcom_msi_mask_irq,
>> + .irq_mask = qcom_msi_mask_irq,
>> + .irq_unmask = qcom_msi_unmask_irq,
>> +};
>> +
>> +static int qcom_msi_domain_prepare(struct irq_domain *domain, struct device *dev,
>> + int nvec, msi_alloc_info_t *arg)
>> +{
>> + struct qcom_msi *msi = domain->parent->host_data;
>> + struct qcom_msi_client *client;
>> +
>> + client = kzalloc(sizeof(*client), GFP_KERNEL);
>> + if (!client)
>> + return -ENOMEM;
>> +
>> + client->msi = msi;
>> + client->dev = dev;
>> + client->msi_addr = msi->msi_db_addr;
>> + mutex_lock(&msi->mutex);
>> + list_add_tail(&client->node, &msi->clients);
>> + mutex_unlock(&msi->mutex);
>> +
>> + /* zero out struct for pcie msi framework */
>> + memset(arg, 0, sizeof(*arg));
>> + return 0;
>> +}
>> +
>> +static struct msi_domain_ops qcom_msi_domain_ops = {
>> + .msi_prepare = qcom_msi_domain_prepare,
>> +};
>> +
>> +static struct msi_domain_info qcom_msi_domain_info = {
>> + .flags = (MSI_FLAG_USE_DEF_DOM_OPS | MSI_FLAG_USE_DEF_CHIP_OPS |
>> + MSI_FLAG_MULTI_PCI_MSI | MSI_FLAG_PCI_MSIX),
>> + .ops = &qcom_msi_domain_ops,
>> + .chip = &qcom_msi_irq_chip,
>> +};
>> +
>> +static int qcom_msi_irq_set_affinity(struct irq_data *data,
>> + const struct cpumask *mask, bool force)
>> +{
>> + struct irq_data *parent_data = irq_get_irq_data(irqd_to_hwirq(data));
>> + int ret = 0;
>> +
>> + if (!parent_data)
>> + return -ENODEV;
>> +
>> + /* set affinity for MSI HW IRQ */
>> + if (parent_data->chip->irq_set_affinity)
>> + ret = parent_data->chip->irq_set_affinity(parent_data, mask, force);
>> +
>> + return ret;
>> +}
>> +
>> +static void qcom_msi_irq_compose_msi_msg(struct irq_data *data, struct msi_msg *msg)
>> +{
>> + struct irq_data *parent_data = irq_get_irq_data(irqd_to_hwirq(data));
>> + struct qcom_msi_irq *msi_irq = irq_data_get_irq_chip_data(data);
>> + struct qcom_msi_client *client = msi_irq->client;
>> +
>> + if (!parent_data)
>> + return;
>> +
>> + msg->address_lo = lower_32_bits(client->msi_addr);
>> + msg->address_hi = upper_32_bits(client->msi_addr);
>> + msg->data = msi_irq->pos;
>> +}
>> +
>> +static struct irq_chip qcom_msi_bottom_irq_chip = {
>> + .name = "qcom_msi",
>> + .irq_set_affinity = qcom_msi_irq_set_affinity,
>> + .irq_compose_msi_msg = qcom_msi_irq_compose_msi_msg,
>> +};
>> +
>> +static int qcom_msi_irq_domain_alloc(struct irq_domain *domain, unsigned int virq,
>> + unsigned int nr_irqs, void *args)
>> +{
>> + struct device *dev = ((msi_alloc_info_t *)args)->desc->dev;
>> + struct qcom_msi_client *tmp, *client = NULL;
>> + struct qcom_msi *msi = domain->host_data;
>> + int i, ret = 0;
>> + int pos;
>> +
>> + mutex_lock(&msi->mutex);
>> + list_for_each_entry(tmp, &msi->clients, node) {
>> + if (tmp->dev == dev) {
>> + client = tmp;
>> + break;
>> + }
>> + }
>> +
>> + if (!client) {
>> + dev_err(msi->dev, "failed to find MSI client dev\n");
>> + ret = -ENODEV;
>> + goto out;
>> + }
>> +
>> + pos = bitmap_find_next_zero_area(msi->bitmap, msi->nr_virqs, 0,
>> + nr_irqs, nr_irqs - 1);
>> + if (pos >= msi->nr_virqs) {
>> + ret = -ENOSPC;
>> + goto out;
>> + }
>> +
>> + bitmap_set(msi->bitmap, pos, nr_irqs);
>> + for (i = 0; i < nr_irqs; i++) {
>> + u32 grp = pos / MSI_IRQ_PER_GRP;
>> + u32 index = pos % MSI_IRQ_PER_GRP;
>> + struct qcom_msi_irq *msi_irq = &msi->grps[grp].irqs[index];
>> +
>> + msi_irq->virq = virq + i;
>> + msi_irq->client = client;
>> + irq_domain_set_info(domain, msi_irq->virq,
>> + msi_irq->hwirq,
>> + &qcom_msi_bottom_irq_chip, msi_irq,
>> + handle_simple_irq, NULL, NULL);
>> + client->nr_irqs++;
>> + pos++;
>> + }
>> +out:
>> + mutex_unlock(&msi->mutex);
>> + return ret;
>> +}
>> +
>> +static void qcom_msi_irq_domain_free(struct irq_domain *domain, unsigned int virq,
>> + unsigned int nr_irqs)
>> +{
>> + struct irq_data *data = irq_domain_get_irq_data(domain, virq);
>> + struct qcom_msi_client *client;
>> + struct qcom_msi_irq *msi_irq;
>> + struct qcom_msi *msi;
>> +
>> + if (!data)
>> + return;
>> +
>> + msi_irq = irq_data_get_irq_chip_data(data);
>> + client = msi_irq->client;
>> + msi = client->msi;
>> +
>> + mutex_lock(&msi->mutex);
>> + bitmap_clear(msi->bitmap, msi_irq->pos, nr_irqs);
>> +
>> + client->nr_irqs -= nr_irqs;
>> + if (!client->nr_irqs) {
>> + list_del(&client->node);
>> + kfree(client);
>> + }
>> + mutex_unlock(&msi->mutex);
>> +
>> + irq_domain_free_irqs_parent(domain, virq, nr_irqs);
>> +}
>> +
>> +static const struct irq_domain_ops msi_domain_ops = {
>> + .alloc = qcom_msi_irq_domain_alloc,
>> + .free = qcom_msi_irq_domain_free,
>> +};
>> +
>> +static int qcom_msi_alloc_domains(struct qcom_msi *msi)
>> +{
>> + msi->inner_domain = irq_domain_add_linear(NULL, msi->nr_virqs,
>> + &msi_domain_ops, msi);
>> + if (!msi->inner_domain) {
>> + dev_err(msi->dev, "failed to create IRQ inner domain\n");
>> + return -ENOMEM;
>> + }
>> +
>> + msi->msi_domain = pci_msi_create_irq_domain(of_node_to_fwnode(msi->dev->of_node),
>> + &qcom_msi_domain_info, msi->inner_domain);
>> + if (!msi->msi_domain) {
>> + dev_err(msi->dev, "failed to create MSI domain\n");
>> + irq_domain_remove(msi->inner_domain);
>> + return -ENOMEM;
>> + }
>> +
>> + return 0;
>> +}
>> +
>> +static int qcom_msi_irq_setup(struct qcom_msi *msi)
>> +{
>> + struct qcom_msi_grp *msi_grp;
>> + struct qcom_msi_irq *msi_irq;
>> + int i, index, ret;
>> + unsigned int irq;
>> +
>> + /* setup each MSI group. nr_hwirqs == nr_grps */
>> + for (i = 0; i < msi->nr_hwirqs; i++) {
>> + irq = irq_of_parse_and_map(msi->dev->of_node, i);
>> + if (!irq) {
>> + dev_err(msi->dev,
>> + "MSI: failed to parse/map interrupt\n");
>> + ret = -ENODEV;
>> + goto free_irqs;
>> + }
>> +
>> + msi_grp = &msi->grps[i];
>> + msi_grp->int_en_reg = msi->pcie_msi_cfg +
>> + PCIE_MSI_CTRL_INT_N_EN_OFFS(i);
>> + msi_grp->int_mask_reg = msi->pcie_msi_cfg +
>> + PCIE_MSI_CTRL_INT_N_MASK_OFFS(i);
>> + msi_grp->int_status_reg = msi->pcie_msi_cfg +
>> + PCIE_MSI_CTRL_INT_N_STATUS_OFFS(i);
>> +
>> + for (index = 0; index < MSI_IRQ_PER_GRP; index++) {
>> + msi_irq = &msi_grp->irqs[index];
>> +
>> + msi_irq->grp = msi_grp;
>> + msi_irq->grp_index = index;
>> + msi_irq->pos = (i * MSI_IRQ_PER_GRP) + index;
>> + msi_irq->hwirq = irq;
>> + }
>> +
>> + irq_set_chained_handler_and_data(irq, qcom_msi_handler, msi_grp);
>> + }
>> +
>> + return 0;
>> +
>> +free_irqs:
>> + for (--i; i >= 0; i--) {
>> + irq = msi->grps[i].irqs[0].hwirq;
>> +
>> + irq_set_chained_handler_and_data(irq, NULL, NULL);
>> + irq_dispose_mapping(irq);
>> + }
>> +
>> + return ret;
>> +}
>> +
>> +static void qcom_msi_config(struct irq_domain *domain)
>> +{
>> + struct qcom_msi *msi;
>> + int i;
>> +
>> + msi = domain->parent->host_data;
>> +
>> + /* program termination address */
>> + writel(msi->msi_db_addr, msi->pcie_msi_cfg + PCIE_MSI_CTRL_ADDR_OFFS);
>> + writel(0, msi->pcie_msi_cfg + PCIE_MSI_CTRL_UPPER_ADDR_OFFS);
>> +
>> + /* restore mask and enable all interrupts for each group */
>> + for (i = 0; i < msi->nr_grps; i++) {
>> + struct qcom_msi_grp *msi_grp = &msi->grps[i];
>> +
>> + writel(msi_grp->mask, msi_grp->int_mask_reg);
>> + writel(~0, msi_grp->int_en_reg);
>> + }
>> +}
>> +
>> +static void qcom_msi_deinit(struct qcom_msi *msi)
>> +{
>> + irq_domain_remove(msi->msi_domain);
>> + irq_domain_remove(msi->inner_domain);
>> +}
>> +
>> +static struct qcom_msi *qcom_msi_init(struct device *dev)
>> +{
>> + struct qcom_msi *msi;
>> + u64 addr;
>> + int ret;
>> +
>> + msi = devm_kzalloc(dev, sizeof(*msi), GFP_KERNEL);
>> + if (!msi)
>> + return ERR_PTR(-ENOMEM);
>> +
>> + msi->dev = dev;
>> + mutex_init(&msi->mutex);
>> + spin_lock_init(&msi->cfg_lock);
>> + INIT_LIST_HEAD(&msi->clients);
>> +
>> + msi->msi_db_addr = MSI_DB_ADDR;
>> + msi->nr_hwirqs = of_irq_count(dev->of_node);
>> + if (!msi->nr_hwirqs) {
>> + dev_err(msi->dev, "no hwirqs found\n");
>> + return ERR_PTR(-ENODEV);
>> + }
>> +
>> + if (of_property_read_reg(dev->of_node, 0, &addr, NULL) < 0) {
>> + dev_err(msi->dev, "failed to get reg address\n");
>> + return ERR_PTR(-ENODEV);
>> + }
>> +
>> + dev_dbg(msi->dev, "hwirq:%d pcie_msi_cfg:%llx\n", msi->nr_hwirqs, addr);
>> + msi->pcie_msi_cfg = devm_ioremap(dev, addr + PCIE_MSI_CTRL_BASE, PCIE_MSI_CTRL_SIZE);
>> + if (!msi->pcie_msi_cfg)
>> + return ERR_PTR(-ENOMEM);
>> +
>> + msi->nr_virqs = msi->nr_hwirqs * MSI_IRQ_PER_GRP;
>> + msi->nr_grps = msi->nr_hwirqs;
>> + msi->grps = devm_kcalloc(dev, msi->nr_grps, sizeof(*msi->grps), GFP_KERNEL);
>> + if (!msi->grps)
>> + return ERR_PTR(-ENOMEM);
>> +
>> + msi->bitmap = devm_kcalloc(dev, BITS_TO_LONGS(msi->nr_virqs),
>> + sizeof(*msi->bitmap), GFP_KERNEL);
>> + if (!msi->bitmap)
>> + return ERR_PTR(-ENOMEM);
>> +
>> + ret = qcom_msi_alloc_domains(msi);
>> + if (ret)
>> + return ERR_PTR(ret);
>> +
>> + ret = qcom_msi_irq_setup(msi);
>> + if (ret) {
>> + qcom_msi_deinit(msi);
>> + return ERR_PTR(ret);
>> + }
>> +
>> + qcom_msi_config(msi->msi_domain);
>> + return msi;
>> +}
>> +
>> +static int qcom_pcie_ecam_suspend_noirq(struct device *dev)
>> +{
>> + return pm_runtime_put_sync(dev);
>> +}
>> +
>> +static int qcom_pcie_ecam_resume_noirq(struct device *dev)
>> +{
>> + return pm_runtime_get_sync(dev);
>> +}
>> +
>> +static int qcom_pcie_ecam_probe(struct platform_device *pdev)
>> +{
>> + struct device *dev = &pdev->dev;
>> + struct qcom_msi *msi;
>> + int ret;
>> +
>> + ret = devm_pm_runtime_enable(dev);
>> + if (ret)
>> + return ret;
>> +
>> + ret = pm_runtime_resume_and_get(dev);
>> + if (ret < 0) {
>> + dev_err(dev, "fail to enable pcie controller: %d\n", ret);
>> + return ret;
>> + }
>> +
>> + msi = qcom_msi_init(dev);
>> + if (IS_ERR(msi)) {
>> + pm_runtime_put_sync(dev);
>> + return PTR_ERR(msi);
>> + }
>> +
>> + ret = pci_host_common_probe(pdev);
>> + if (ret) {
>> + dev_err(dev, "pci_host_common_probe() failed:%d\n", ret);
>> + qcom_msi_deinit(msi);
>> + pm_runtime_put_sync(dev);
>> + }
>> +
>> + return ret;
>> +}
>> +
>> +static const struct dev_pm_ops qcom_pcie_ecam_pm_ops = {
>> + NOIRQ_SYSTEM_SLEEP_PM_OPS(qcom_pcie_ecam_suspend_noirq,
>> + qcom_pcie_ecam_resume_noirq)
>> +};
>> +
>> +static const struct pci_ecam_ops qcom_pcie_ecam_ops = {
>> + .pci_ops = {
>> + .map_bus = pci_ecam_map_bus,
>> + .read = pci_generic_config_read,
>> + .write = pci_generic_config_write,
>> + }
>> +};
>> +
>> +static const struct of_device_id qcom_pcie_ecam_of_match[] = {
>> + {
>> + .compatible = "qcom,pcie-ecam-rc",
>> + .data = &qcom_pcie_ecam_ops,
>> + },
>> + { },
>> +};
>> +MODULE_DEVICE_TABLE(of, qcom_pcie_ecam_of_match);
>> +
>> +static struct platform_driver qcom_pcie_ecam_driver = {
>> + .probe = qcom_pcie_ecam_probe,
>> + .driver = {
>> + .name = "qcom-pcie-ecam-rc",
>> + .suppress_bind_attrs = true,
>> + .of_match_table = qcom_pcie_ecam_of_match,
>> + .probe_type = PROBE_PREFER_ASYNCHRONOUS,
>> + .pm = &qcom_pcie_ecam_pm_ops,
>> + },
>> +};
>> +module_platform_driver(qcom_pcie_ecam_driver);
>> +
>> +MODULE_DESCRIPTION("Qualcomm PCIe ECAM root complex driver");
>> +MODULE_LICENSE("GPL");
>> --
>> 2.7.4
>>
>
^ permalink raw reply [flat|nested] 27+ messages in thread
* Re: [RFC PATCH 0/2] Add Qualcomm PCIe ECAM root complex driver
2024-04-05 6:50 ` Krzysztof Kozlowski
@ 2024-04-05 17:45 ` Mayank Rana
0 siblings, 0 replies; 27+ messages in thread
From: Mayank Rana @ 2024-04-05 17:45 UTC (permalink / raw)
To: Krzysztof Kozlowski, linux-pci, lpieralisi, kw, robh, bhelgaas,
andersson, manivannan.sadhasivam, krzysztof.kozlowski+dt,
conor+dt, devicetree
Cc: linux-arm-msm, quic_ramkri, quic_nkela, quic_shazhuss,
quic_msarkar, quic_nitegupt
Hi Krzysztof
On 4/4/2024 11:50 PM, Krzysztof Kozlowski wrote:
> On 05/04/2024 01:02, Mayank Rana wrote:
>> Hi Krzysztof
>>
>> On 4/4/2024 12:33 PM, Krzysztof Kozlowski wrote:
>>> On 04/04/2024 21:11, Mayank Rana wrote:
>>>> On some of Qualcomm platform, firmware takes care of system resources
>>>> related to PCIe PHY and controller as well bringing up PCIe link and
>>>> having static iATU configuration for PCIe controller to work into
>>>> ECAM compliant mode. Hence add Qualcomm PCIe ECAM root complex driver.
>>>>
>>>> Tested:
>>>> - Validated NVME functionality with PCIe0 and PCIe1 on SA877p-ride platform
>>>>
>>>
>>> RFC means code is not ready, right? Please get internal review done and
>>> send it when it is ready. I am not sure if you expect any reviews. Some
>>> people send RFC and do not expect reviews. Some expect. I have no clue
>>> and I do not want to waste my time. Please clarify what you expect from
>>> maintainers regarding this contribution.
>>>
>>> Best regards,
>>> Krzysztof
>>>
>> Thanks for the initial comments.
>> Yes, this is work in progress. There are still more functionalities
>> planned to be added as part of this driver. However, the purpose of
>> sending the initial change here is to get feedback and review comments
>> on the usage of a generic Qualcomm PCIe ECAM driver, and on the usage
>> of MSI functionality with it. I missed mentioning this in the cover
>> letter. So please help to review and provide feedback.
>
> Thanks for explanation. Work in progress as not ready to be merged? Then
> I am sorry, I am not going to provide review of unfinished work. I have
> many more *finished* patches to review first. You can help with these
> too....
>
> Best regards,
> Krzysztof
OK. I look forward to sending the finished work on this once it is ready.
Thank you.
Regards,
Mayank
^ permalink raw reply [flat|nested] 27+ messages in thread
* Re: [RFC PATCH 2/2] PCI: Add Qualcomm PCIe ECAM root complex driver
2024-04-04 19:11 ` [RFC PATCH 2/2] PCI: Add Qualcomm PCIe ECAM root complex driver Mayank Rana
2024-04-04 19:33 ` Krzysztof Kozlowski
2024-04-05 5:30 ` Manivannan Sadhasivam
@ 2024-04-05 18:30 ` Bjorn Helgaas
2024-04-06 0:43 ` Mayank Rana
2 siblings, 1 reply; 27+ messages in thread
From: Bjorn Helgaas @ 2024-04-05 18:30 UTC (permalink / raw)
To: Mayank Rana
Cc: linux-pci, lpieralisi, kw, robh, bhelgaas, andersson,
manivannan.sadhasivam, krzysztof.kozlowski+dt, conor+dt,
devicetree, linux-arm-msm, quic_ramkri, quic_nkela, quic_shazhuss,
quic_msarkar, quic_nitegupt
On Thu, Apr 04, 2024 at 12:11:24PM -0700, Mayank Rana wrote:
> On some of Qualcomm platform, firmware configures PCIe controller into
> ECAM mode allowing static memory allocation for configuration space of
> supported bus range. Firmware also takes care of bringing up PCIe PHY
> and performing required operation to bring PCIe link into D0. Firmware
I think link state would be L0, not D0.
> also manages system resources (e.g. clocks/regulators/resets/ bus voting).
> Hence add Qualcomm PCIe ECAM root complex driver which enumerates PCIe
> root complex and connected PCIe devices. Firmware won't be enumerating
> or powering up PCIe root complex until this driver invokes power domain
> based notification to bring PCIe link into D0/D3cold mode.
Again.
> +config PCIE_QCOM_ECAM
> + tristate "QCOM PCIe ECAM host controller"
> + depends on ARCH_QCOM && PCI
> + depends on OF
> + select PCI_MSI
> + select PCI_HOST_COMMON
> + select IRQ_DOMAIN
> + help
> + Say 'Y' here if you want to use ECAM shift mode compatible Qualcomm
> + PCIe root host controller. The controller is programmed using firmware
> + to support ECAM compatible memory address space.
Instead of adding this at the end, place this entry so the entire list
remains sorted by vendor name.
Other related entries are "Qualcomm PCIe controller ..." (not "QCOM").
Use "ECAM PCIe controller (host mode)" (not "PCIe ECAM host
controller") so it matches similar entries.
> +#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
Does this actually work? I expected "#define dev_fmt" since you're
using dev_err(), etc below.
> +#include <linux/irqchip/chained_irq.h>
Can this be reworked so it doesn't use chained IRQs? I admit to not
being an IRQ expert, but I have the impression that it's better to
avoid the chained IRQ model when possible. See
https://lore.kernel.org/all/20231108153133.GA393726@bhelgaas/
> +#define MSI_DB_ADDR 0xa0000000
Where does this come from and why is it hard-coded here? Looks like a
magic address that maybe should come from DT?
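One possible shape (sketch only; the property name below is hypothetical,
not an existing binding) would be something like:

```dts
pcie@1c00000 {
	compatible = "qcom,pcie-ecam-rc";
	/* hypothetical property carrying the MSI doorbell address */
	qcom,msi-doorbell-addr = <0x0 0xa0000000>;
};
```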
> + * struct qcom_msi_irq - MSI IRQ information
> + * @client: pointer to MSI client struct
> + * @grp: group the irq belongs to
s/irq/IRQ/ in comments for consistency (other occurrences below).
Same for s/pcie/PCIe/ and s/msi/MSI/.
> +static void qcom_msi_mask_irq(struct irq_data *data)
> +{
> + struct irq_data *parent_data;
> + struct qcom_msi_irq *msi_irq;
> + struct qcom_msi_grp *msi_grp;
> + struct qcom_msi *msi;
> + unsigned long flags;
> +
> + parent_data = data->parent_data;
> + if (!parent_data)
> + return;
Drop this test; I think it only detects logic errors in the driver or
memory corruptions, and we want to find out about those.
> +static void qcom_msi_unmask_irq(struct irq_data *data)
> +{
> + struct irq_data *parent_data;
> + struct qcom_msi_irq *msi_irq;
> + struct qcom_msi_grp *msi_grp;
> + struct qcom_msi *msi;
> + unsigned long flags;
> +
> + parent_data = data->parent_data;
> + if (!parent_data)
> + return;
Drop.
> +static struct irq_chip qcom_msi_irq_chip = {
> + .name = "qcom_pci_msi",
> + .irq_enable = qcom_msi_unmask_irq,
> + .irq_disable = qcom_msi_mask_irq,
> + .irq_mask = qcom_msi_mask_irq,
> + .irq_unmask = qcom_msi_unmask_irq,
Name these so they match the struct member, e.g., the name should
contain "irq_mask" (not "mask_irq") so grep finds them easily.
> +static struct msi_domain_ops qcom_msi_domain_ops = {
> + .msi_prepare = qcom_msi_domain_prepare,
Rename so function name includes the struct member name.
> +static int qcom_msi_irq_set_affinity(struct irq_data *data,
> + const struct cpumask *mask, bool force)
> +{
> + struct irq_data *parent_data = irq_get_irq_data(irqd_to_hwirq(data));
> + int ret = 0;
> +
> + if (!parent_data)
> + return -ENODEV;
> +
> + /* set affinity for MSI HW IRQ */
Unnecessary comment.
> + if (parent_data->chip->irq_set_affinity)
> + ret = parent_data->chip->irq_set_affinity(parent_data, mask, force);
> +
> + return ret;
Drop "ret" and return directly, e.g.,

	if (parent_data->chip->irq_set_affinity)
		return parent_data->chip->irq_set_affinity(...);

	return 0;
> +static void qcom_msi_irq_compose_msi_msg(struct irq_data *data, struct msi_msg *msg)
> +{
> + struct irq_data *parent_data = irq_get_irq_data(irqd_to_hwirq(data));
> + struct qcom_msi_irq *msi_irq = irq_data_get_irq_chip_data(data);
> + struct qcom_msi_client *client = msi_irq->client;
> +
> + if (!parent_data)
> + return;
Drop.
> +static void qcom_msi_irq_domain_free(struct irq_domain *domain, unsigned int virq,
> + unsigned int nr_irqs)
> +{
> + struct irq_data *data = irq_domain_get_irq_data(domain, virq);
> + struct qcom_msi_client *client;
> + struct qcom_msi_irq *msi_irq;
> + struct qcom_msi *msi;
> +
> + if (!data)
> + return;
Drop.
> +static int qcom_msi_irq_setup(struct qcom_msi *msi)
> +{
> + struct qcom_msi_grp *msi_grp;
> + struct qcom_msi_irq *msi_irq;
> + int i, index, ret;
> + unsigned int irq;
> +
> + /* setup each MSI group. nr_hwirqs == nr_grps */
> + for (i = 0; i < msi->nr_hwirqs; i++) {
> + irq = irq_of_parse_and_map(msi->dev->of_node, i);
> + if (!irq) {
> + dev_err(msi->dev,
> + "MSI: failed to parse/map interrupt\n");
Possibly include "i" to identify the offending entry.
^ permalink raw reply [flat|nested] 27+ messages in thread
* Re: [RFC PATCH 2/2] PCI: Add Qualcomm PCIe ECAM root complex driver
2024-04-05 18:30 ` Bjorn Helgaas
@ 2024-04-06 0:43 ` Mayank Rana
0 siblings, 0 replies; 27+ messages in thread
From: Mayank Rana @ 2024-04-06 0:43 UTC (permalink / raw)
To: Bjorn Helgaas
Cc: linux-pci, lpieralisi, kw, robh, bhelgaas, andersson,
manivannan.sadhasivam, krzysztof.kozlowski+dt, conor+dt,
devicetree, linux-arm-msm, quic_ramkri, quic_nkela, quic_shazhuss,
quic_msarkar, quic_nitegupt
Hi Bjorn
Thanks for reviewing the change.
On 4/5/2024 11:30 AM, Bjorn Helgaas wrote:
> On Thu, Apr 04, 2024 at 12:11:24PM -0700, Mayank Rana wrote:
>> On some of Qualcomm platform, firmware configures PCIe controller into
>> ECAM mode allowing static memory allocation for configuration space of
>> supported bus range. Firmware also takes care of bringing up PCIe PHY
>> and performing required operation to bring PCIe link into D0. Firmware
>
> I think link state would be L0, not D0.
ACK
>> also manages system resources (e.g. clocks/regulators/resets/ bus voting).
>> Hence add Qualcomm PCIe ECAM root complex driver which enumerates PCIe
>> root complex and connected PCIe devices. Firmware won't be enumerating
>> or powering up PCIe root complex until this driver invokes power domain
>> based notification to bring PCIe link into D0/D3cold mode.
>
> Again.
ACK. Will rephrase it.
>> +config PCIE_QCOM_ECAM
>> + tristate "QCOM PCIe ECAM host controller"
>> + depends on ARCH_QCOM && PCI
>> + depends on OF
>> + select PCI_MSI
>> + select PCI_HOST_COMMON
>> + select IRQ_DOMAIN
>> + help
>> + Say 'Y' here if you want to use ECAM shift mode compatible Qualcomm
>> + PCIe root host controller. The controller is programmed using firmware
>> + to support ECAM compatible memory address space.
>
> Instead of adding this at the end, place this entry so the entire list
> remains sorted by vendor name.
>
> Other related entries are "Qualcomm PCIe controller ..." (not "QCOM").
>
> Use "ECAM PCIe controller (host mode)" (not "PCIe ECAM host
> controller") so it matches similar entries.
OK. Will rephrase and move as suggested.
>> +#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
>
> Does this actually work? I expected "#define dev_fmt" since you're
> using dev_err(), etc below.
Yes. It looks like it is not needed anymore for this driver, as there is
very limited pr_* usage.
>> +#include <linux/irqchip/chained_irq.h>
>
> Can this be reworked so it doesn't use chained IRQs? I admit to not
> being an IRQ expert, but I have the impression that it's better to
> avoid the chained IRQ model when possible. See
> https://lore.kernel.org/all/20231108153133.GA393726@bhelgaas/
OK. I will review the shared information and try to rework this.
>> +#define MSI_DB_ADDR 0xa0000000
>
> Where does this come from and why is it hard-coded here? Looks like a
> magic address that maybe should come from DT?
Yes, it is the doorbell address used to generate MSIs, and it is not
directly tied to any hardware/platform. Hence it is hard-coded here.
>> + * struct qcom_msi_irq - MSI IRQ information
>> + * @client: pointer to MSI client struct
>> + * @grp: group the irq belongs to
>
> s/irq/IRQ/ in comments for consistency (other occurrences below).
> Same for s/pcie/PCIe/ and s/msi/MSI/.
ACK
>> +static void qcom_msi_mask_irq(struct irq_data *data)
>> +{
>> + struct irq_data *parent_data;
>> + struct qcom_msi_irq *msi_irq;
>> + struct qcom_msi_grp *msi_grp;
>> + struct qcom_msi *msi;
>> + unsigned long flags;
>> +
>> + parent_data = data->parent_data;
>> + if (!parent_data)
>> + return;
>
> Drop this test; I think it only detects logic errors in the driver or
> memory corruptions, and we want to find out about those.
ACK
>> +static void qcom_msi_unmask_irq(struct irq_data *data)
>> +{
>> + struct irq_data *parent_data;
>> + struct qcom_msi_irq *msi_irq;
>> + struct qcom_msi_grp *msi_grp;
>> + struct qcom_msi *msi;
>> + unsigned long flags;
>> +
>> + parent_data = data->parent_data;
>> + if (!parent_data)
>> + return;
>
> Drop.
ACK
>> +static struct irq_chip qcom_msi_irq_chip = {
>> + .name = "qcom_pci_msi",
>> + .irq_enable = qcom_msi_unmask_irq,
>> + .irq_disable = qcom_msi_mask_irq,
>> + .irq_mask = qcom_msi_mask_irq,
>> + .irq_unmask = qcom_msi_unmask_irq,
>
> Name these so they match the struct member, e.g., the name should
> contain "irq_mask" (not "mask_irq") so grep finds them easily.
ACK
>> +static struct msi_domain_ops qcom_msi_domain_ops = {
>> + .msi_prepare = qcom_msi_domain_prepare,
>
> Rename so function name includes the struct member name.
ACK
>> +static int qcom_msi_irq_set_affinity(struct irq_data *data,
>> + const struct cpumask *mask, bool force)
>> +{
>> + struct irq_data *parent_data = irq_get_irq_data(irqd_to_hwirq(data));
>> + int ret = 0;
>> +
>> + if (!parent_data)
>> + return -ENODEV;
>> +
>> + /* set affinity for MSI HW IRQ */
>
> Unnecessary comment.
ACK
>> + if (parent_data->chip->irq_set_affinity)
>> + ret = parent_data->chip->irq_set_affinity(parent_data, mask, force);
>> +
>> + return ret;
>
> Drop "ret" and return directly, e.g.,
>
> 	if (parent_data->chip->irq_set_affinity)
> 		return parent_data->chip->irq_set_affinity(...);
>
> 	return 0;
ACK
>> +static void qcom_msi_irq_compose_msi_msg(struct irq_data *data, struct msi_msg *msg)
>> +{
>> + struct irq_data *parent_data = irq_get_irq_data(irqd_to_hwirq(data));
>> + struct qcom_msi_irq *msi_irq = irq_data_get_irq_chip_data(data);
>> + struct qcom_msi_client *client = msi_irq->client;
>> +
>> + if (!parent_data)
>> + return;
>
> Drop.
ACK
>> +static void qcom_msi_irq_domain_free(struct irq_domain *domain, unsigned int virq,
>> + unsigned int nr_irqs)
>> +{
>> + struct irq_data *data = irq_domain_get_irq_data(domain, virq);
>> + struct qcom_msi_client *client;
>> + struct qcom_msi_irq *msi_irq;
>> + struct qcom_msi *msi;
>> +
>> + if (!data)
>> + return;
>
> Drop.
ACK
>> +static int qcom_msi_irq_setup(struct qcom_msi *msi)
>> +{
>> + struct qcom_msi_grp *msi_grp;
>> + struct qcom_msi_irq *msi_irq;
>> + int i, index, ret;
>> + unsigned int irq;
>> +
>> + /* setup each MSI group. nr_hwirqs == nr_grps */
>> + for (i = 0; i < msi->nr_hwirqs; i++) {
>> + irq = irq_of_parse_and_map(msi->dev->of_node, i);
>> + if (!irq) {
>> + dev_err(msi->dev,
>> + "MSI: failed to parse/map interrupt\n");
>
> Possibly include "i" to identify the offending entry.
ACK
Regards,
Mayank
^ permalink raw reply [flat|nested] 27+ messages in thread
* Re: [RFC PATCH 2/2] PCI: Add Qualcomm PCIe ECAM root complex driver
2024-04-05 17:41 ` Mayank Rana
@ 2024-04-06 4:17 ` Manivannan Sadhasivam
2024-04-08 18:57 ` Mayank Rana
0 siblings, 1 reply; 27+ messages in thread
From: Manivannan Sadhasivam @ 2024-04-06 4:17 UTC (permalink / raw)
To: Mayank Rana
Cc: linux-pci, lpieralisi, kw, robh, bhelgaas, andersson,
krzysztof.kozlowski+dt, conor+dt, devicetree, linux-arm-msm,
quic_ramkri, quic_nkela, quic_shazhuss, quic_msarkar,
quic_nitegupt
On Fri, Apr 05, 2024 at 10:41:15AM -0700, Mayank Rana wrote:
> Hi Mani
>
> On 4/4/2024 10:30 PM, Manivannan Sadhasivam wrote:
> > On Thu, Apr 04, 2024 at 12:11:24PM -0700, Mayank Rana wrote:
> > > On some of Qualcomm platform, firmware configures PCIe controller into
> > > ECAM mode allowing static memory allocation for configuration space of
> > > supported bus range. Firmware also takes care of bringing up PCIe PHY
> > > and performing required operation to bring PCIe link into D0. Firmware
> > > also manages system resources (e.g. clocks/regulators/resets/ bus voting).
> > > Hence add Qualcomm PCIe ECAM root complex driver which enumerates PCIe
> > > root complex and connected PCIe devices. Firmware won't be enumerating
> > > or powering up PCIe root complex until this driver invokes power domain
> > > based notification to bring PCIe link into D0/D3cold mode.
> > >
> >
> > Is this an in-house PCIe IP of Qualcomm or the same DWC IP that is used in other
> > SoCs?
> >
> > - Mani
> The driver is validated on the SA8775p-ride platform using the PCIe DWC IP
> for now. However, this driver doesn't need to know which PCIe controller
> and PHY IP are used, nor their programming sequence, as that is taken care
> of by firmware.
>
Ok, so it is the same IP but firmware is controlling the resources now. This
information should be present in the commit message.
Btw, there is an existing generic ECAM host controller driver:
drivers/pci/controller/pci-host-generic.c
This driver is already being used by several vendors as well. So we should try
to extend it for Qcom usecase also.
> > > This driver also support MSI functionality using PCIe controller based
> > > MSI controller as GIC ITS based MSI functionality is not available on
> > > some of platform.
> > >
So is this the same internal MSI controller in the DWC IP? If so, then we
already have the MSI implementation in
drivers/pci/controller/dwc/pcie-designware-host.c and that should be reused here
instead of duplicating the code.
- Mani
> > > Signed-off-by: Mayank Rana <quic_mrana@quicinc.com>
> > > ---
> > > drivers/pci/controller/Kconfig | 12 +
> > > drivers/pci/controller/Makefile | 1 +
> > > drivers/pci/controller/pcie-qcom-ecam.c | 575 ++++++++++++++++++++++++++++++++
> > > 3 files changed, 588 insertions(+)
> > > create mode 100644 drivers/pci/controller/pcie-qcom-ecam.c
> > >
> > > diff --git a/drivers/pci/controller/Kconfig b/drivers/pci/controller/Kconfig
> > > index e534c02..abbd9f2 100644
> > > --- a/drivers/pci/controller/Kconfig
> > > +++ b/drivers/pci/controller/Kconfig
> > > @@ -353,6 +353,18 @@ config PCIE_XILINX_CPM
> > > Say 'Y' here if you want kernel support for the
> > > Xilinx Versal CPM host bridge.
> > > +config PCIE_QCOM_ECAM
> > > + tristate "QCOM PCIe ECAM host controller"
> > > + depends on ARCH_QCOM && PCI
> > > + depends on OF
> > > + select PCI_MSI
> > > + select PCI_HOST_COMMON
> > > + select IRQ_DOMAIN
> > > + help
> > > + Say 'Y' here if you want to use ECAM shift mode compatible Qualcomm
> > > + PCIe root host controller. The controller is programmed using firmware
> > > + to support ECAM compatible memory address space.
> > > +
> > > source "drivers/pci/controller/cadence/Kconfig"
> > > source "drivers/pci/controller/dwc/Kconfig"
> > > source "drivers/pci/controller/mobiveil/Kconfig"
> > > diff --git a/drivers/pci/controller/Makefile b/drivers/pci/controller/Makefile
> > > index f2b19e6..2f1ee1e 100644
> > > --- a/drivers/pci/controller/Makefile
> > > +++ b/drivers/pci/controller/Makefile
> > > @@ -40,6 +40,7 @@ obj-$(CONFIG_PCI_LOONGSON) += pci-loongson.o
> > > obj-$(CONFIG_PCIE_HISI_ERR) += pcie-hisi-error.o
> > > obj-$(CONFIG_PCIE_APPLE) += pcie-apple.o
> > > obj-$(CONFIG_PCIE_MT7621) += pcie-mt7621.o
> > > +obj-$(CONFIG_PCIE_QCOM_ECAM) += pcie-qcom-ecam.o
> > > # pcie-hisi.o quirks are needed even without CONFIG_PCIE_DW
> > > obj-y += dwc/
> > > diff --git a/drivers/pci/controller/pcie-qcom-ecam.c b/drivers/pci/controller/pcie-qcom-ecam.c
> > > new file mode 100644
> > > index 00000000..5b4c68b
> > > --- /dev/null
> > > +++ b/drivers/pci/controller/pcie-qcom-ecam.c
> > > @@ -0,0 +1,575 @@
> > > +// SPDX-License-Identifier: GPL-2.0-only
> > > +/*
> > > + * Qualcomm PCIe ECAM root host controller driver
> > > + * Copyright (c) 2024 Qualcomm Innovation Center, Inc. All rights reserved.
> > > + */
> > > +#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
> > > +
> > > +#include <linux/irq.h>
> > > +#include <linux/irqchip/chained_irq.h>
> > > +#include <linux/irqdomain.h>
> > > +#include <linux/kernel.h>
> > > +#include <linux/module.h>
> > > +#include <linux/msi.h>
> > > +#include <linux/of_address.h>
> > > +#include <linux/of_irq.h>
> > > +#include <linux/of_pci.h>
> > > +#include <linux/pci.h>
> > > +#include <linux/pci-ecam.h>
> > > +#include <linux/platform_device.h>
> > > +#include <linux/pm_domain.h>
> > > +#include <linux/pm_runtime.h>
> > > +#include <linux/slab.h>
> > > +#include <linux/types.h>
> > > +
> > > +#define PCIE_MSI_CTRL_BASE (0x820)
> > > +#define PCIE_MSI_CTRL_SIZE (0x68)
> > > +#define PCIE_MSI_CTRL_ADDR_OFFS (0x0)
> > > +#define PCIE_MSI_CTRL_UPPER_ADDR_OFFS (0x4)
> > > +#define PCIE_MSI_CTRL_INT_N_EN_OFFS(n) (0x8 + 0xc * (n))
> > > +#define PCIE_MSI_CTRL_INT_N_MASK_OFFS(n) (0xc + 0xc * (n))
> > > +#define PCIE_MSI_CTRL_INT_N_STATUS_OFFS(n) (0x10 + 0xc * (n))
> > > +
> > > +#define MSI_DB_ADDR 0xa0000000
> > > +#define MSI_IRQ_PER_GRP (32)
> > > +
> > > +/**
> > > + * struct qcom_msi_irq - MSI IRQ information
> > > + * @client: pointer to MSI client struct
> > > + * @grp: group the irq belongs to
> > > + * @grp_index: index in group
> > > + * @hwirq: hwirq number
> > > + * @virq: virq number
> > > + * @pos: position in MSI bitmap
> > > + */
> > > +struct qcom_msi_irq {
> > > + struct qcom_msi_client *client;
> > > + struct qcom_msi_grp *grp;
> > > + unsigned int grp_index;
> > > + unsigned int hwirq;
> > > + unsigned int virq;
> > > + u32 pos;
> > > +};
> > > +
> > > +/**
> > > + * struct qcom_msi_grp - MSI group information
> > > + * @int_en_reg: memory-mapped interrupt enable register address
> > > + * @int_mask_reg: memory-mapped interrupt mask register address
> > > + * @int_status_reg: memory-mapped interrupt status register address
> > > + * @mask: tracks masked/unmasked MSI
> > > + * @irqs: structure to MSI IRQ information
> > > + */
> > > +struct qcom_msi_grp {
> > > + void __iomem *int_en_reg;
> > > + void __iomem *int_mask_reg;
> > > + void __iomem *int_status_reg;
> > > + u32 mask;
> > > + struct qcom_msi_irq irqs[MSI_IRQ_PER_GRP];
> > > +};
> > > +
> > > +/**
> > > + * struct qcom_msi - PCIe controller based MSI controller information
> > > + * @clients: list for tracking clients
> > > + * @dev: platform device node
> > > + * @nr_hwirqs: total number of hardware IRQs
> > > + * @nr_virqs: total number of virqs
> > > + * @nr_grps: total number of groups
> > > + * @grps: pointer to all groups information
> > > + * @bitmap: tracks used/unused MSI
> > > + * @mutex: for modifying MSI client list and bitmap
> > > + * @inner_domain: parent domain; gen irq related
> > > + * @msi_domain: child domain; pcie related
> > > + * @msi_db_addr: MSI doorbell address
> > > + * @cfg_lock: lock for configuring MSI controller registers
> > > + * @pcie_msi_cfg: memory-mapped MSI controller register space
> > > + */
> > > +struct qcom_msi {
> > > + struct list_head clients;
> > > + struct device *dev;
> > > + int nr_hwirqs;
> > > + int nr_virqs;
> > > + int nr_grps;
> > > + struct qcom_msi_grp *grps;
> > > + unsigned long *bitmap;
> > > + struct mutex mutex;
> > > + struct irq_domain *inner_domain;
> > > + struct irq_domain *msi_domain;
> > > + phys_addr_t msi_db_addr;
> > > + spinlock_t cfg_lock;
> > > + void __iomem *pcie_msi_cfg;
> > > +};
> > > +
> > > +/**
> > > + * struct qcom_msi_client - structure for each client of MSI controller
> > > + * @node: list to track number of MSI clients
> > > + * @msi: client specific MSI controller based resource pointer
> > > + * @dev: client's dev of pci_dev
> > > + * @nr_irqs: number of irqs allocated for client
> > > + * @msi_addr: MSI doorbell address
> > > + */
> > > +struct qcom_msi_client {
> > > + struct list_head node;
> > > + struct qcom_msi *msi;
> > > + struct device *dev;
> > > + unsigned int nr_irqs;
> > > + phys_addr_t msi_addr;
> > > +};
> > > +
> > > +static void qcom_msi_handler(struct irq_desc *desc)
> > > +{
> > > + struct irq_chip *chip = irq_desc_get_chip(desc);
> > > + struct qcom_msi_grp *msi_grp;
> > > + u32 status;
> > > + int i;
> > > +
> > > + chained_irq_enter(chip, desc);
> > > +
> > > + msi_grp = irq_desc_get_handler_data(desc);
> > > + status = readl_relaxed(msi_grp->int_status_reg);
> > > + status ^= (msi_grp->mask & status);
> > > + writel(status, msi_grp->int_status_reg);
> > > +
> > > + for (i = 0; status; i++, status >>= 1)
> > > + if (status & 0x1)
> > > + generic_handle_irq(msi_grp->irqs[i].virq);
> > > +
> > > + chained_irq_exit(chip, desc);
> > > +}
> > > +
> > > +static void qcom_msi_mask_irq(struct irq_data *data)
> > > +{
> > > + struct irq_data *parent_data;
> > > + struct qcom_msi_irq *msi_irq;
> > > + struct qcom_msi_grp *msi_grp;
> > > + struct qcom_msi *msi;
> > > + unsigned long flags;
> > > +
> > > + parent_data = data->parent_data;
> > > + if (!parent_data)
> > > + return;
> > > +
> > > + msi_irq = irq_data_get_irq_chip_data(parent_data);
> > > + msi = msi_irq->client->msi;
> > > + msi_grp = msi_irq->grp;
> > > +
> > > + spin_lock_irqsave(&msi->cfg_lock, flags);
> > > + pci_msi_mask_irq(data);
> > > + msi_grp->mask |= BIT(msi_irq->grp_index);
> > > + writel(msi_grp->mask, msi_grp->int_mask_reg);
> > > + spin_unlock_irqrestore(&msi->cfg_lock, flags);
> > > +}
> > > +
> > > +static void qcom_msi_unmask_irq(struct irq_data *data)
> > > +{
> > > + struct irq_data *parent_data;
> > > + struct qcom_msi_irq *msi_irq;
> > > + struct qcom_msi_grp *msi_grp;
> > > + struct qcom_msi *msi;
> > > + unsigned long flags;
> > > +
> > > + parent_data = data->parent_data;
> > > + if (!parent_data)
> > > + return;
> > > +
> > > + msi_irq = irq_data_get_irq_chip_data(parent_data);
> > > + msi = msi_irq->client->msi;
> > > + msi_grp = msi_irq->grp;
> > > +
> > > + spin_lock_irqsave(&msi->cfg_lock, flags);
> > > + msi_grp->mask &= ~BIT(msi_irq->grp_index);
> > > + writel(msi_grp->mask, msi_grp->int_mask_reg);
> > > + pci_msi_unmask_irq(data);
> > > + spin_unlock_irqrestore(&msi->cfg_lock, flags);
> > > +}
> > > +
> > > +static struct irq_chip qcom_msi_irq_chip = {
> > > + .name = "qcom_pci_msi",
> > > + .irq_enable = qcom_msi_unmask_irq,
> > > + .irq_disable = qcom_msi_mask_irq,
> > > + .irq_mask = qcom_msi_mask_irq,
> > > + .irq_unmask = qcom_msi_unmask_irq,
> > > +};
> > > +
> > > +static int qcom_msi_domain_prepare(struct irq_domain *domain, struct device *dev,
> > > + int nvec, msi_alloc_info_t *arg)
> > > +{
> > > + struct qcom_msi *msi = domain->parent->host_data;
> > > + struct qcom_msi_client *client;
> > > +
> > > + client = kzalloc(sizeof(*client), GFP_KERNEL);
> > > + if (!client)
> > > + return -ENOMEM;
> > > +
> > > + client->msi = msi;
> > > + client->dev = dev;
> > > + client->msi_addr = msi->msi_db_addr;
> > > + mutex_lock(&msi->mutex);
> > > + list_add_tail(&client->node, &msi->clients);
> > > + mutex_unlock(&msi->mutex);
> > > +
> > > + /* zero out struct for pcie msi framework */
> > > + memset(arg, 0, sizeof(*arg));
> > > + return 0;
> > > +}
> > > +
> > > +static struct msi_domain_ops qcom_msi_domain_ops = {
> > > + .msi_prepare = qcom_msi_domain_prepare,
> > > +};
> > > +
> > > +static struct msi_domain_info qcom_msi_domain_info = {
> > > + .flags = (MSI_FLAG_USE_DEF_DOM_OPS | MSI_FLAG_USE_DEF_CHIP_OPS |
> > > + MSI_FLAG_MULTI_PCI_MSI | MSI_FLAG_PCI_MSIX),
> > > + .ops = &qcom_msi_domain_ops,
> > > + .chip = &qcom_msi_irq_chip,
> > > +};
> > > +
> > > +static int qcom_msi_irq_set_affinity(struct irq_data *data,
> > > + const struct cpumask *mask, bool force)
> > > +{
> > > + struct irq_data *parent_data = irq_get_irq_data(irqd_to_hwirq(data));
> > > + int ret = 0;
> > > +
> > > + if (!parent_data)
> > > + return -ENODEV;
> > > +
> > > + /* set affinity for MSI HW IRQ */
> > > + if (parent_data->chip->irq_set_affinity)
> > > + ret = parent_data->chip->irq_set_affinity(parent_data, mask, force);
> > > +
> > > + return ret;
> > > +}
> > > +
> > > +static void qcom_msi_irq_compose_msi_msg(struct irq_data *data, struct msi_msg *msg)
> > > +{
> > > + struct irq_data *parent_data = irq_get_irq_data(irqd_to_hwirq(data));
> > > + struct qcom_msi_irq *msi_irq = irq_data_get_irq_chip_data(data);
> > > + struct qcom_msi_client *client = msi_irq->client;
> > > +
> > > + if (!parent_data)
> > > + return;
> > > +
> > > + msg->address_lo = lower_32_bits(client->msi_addr);
> > > + msg->address_hi = upper_32_bits(client->msi_addr);
> > > + msg->data = msi_irq->pos;
> > > +}
> > > +
> > > +static struct irq_chip qcom_msi_bottom_irq_chip = {
> > > + .name = "qcom_msi",
> > > + .irq_set_affinity = qcom_msi_irq_set_affinity,
> > > + .irq_compose_msi_msg = qcom_msi_irq_compose_msi_msg,
> > > +};
> > > +
> > > +static int qcom_msi_irq_domain_alloc(struct irq_domain *domain, unsigned int virq,
> > > + unsigned int nr_irqs, void *args)
> > > +{
> > > + struct device *dev = ((msi_alloc_info_t *)args)->desc->dev;
> > > + struct qcom_msi_client *tmp, *client = NULL;
> > > + struct qcom_msi *msi = domain->host_data;
> > > + int i, ret = 0;
> > > + int pos;
> > > +
> > > + mutex_lock(&msi->mutex);
> > > + list_for_each_entry(tmp, &msi->clients, node) {
> > > + if (tmp->dev == dev) {
> > > + client = tmp;
> > > + break;
> > > + }
> > > + }
> > > +
> > > + if (!client) {
> > > + dev_err(msi->dev, "failed to find MSI client dev\n");
> > > + ret = -ENODEV;
> > > + goto out;
> > > + }
> > > +
> > > + pos = bitmap_find_next_zero_area(msi->bitmap, msi->nr_virqs, 0,
> > > + nr_irqs, nr_irqs - 1);
> > > + if (pos > msi->nr_virqs) {
> > > + ret = -ENOSPC;
> > > + goto out;
> > > + }
> > > +
> > > + bitmap_set(msi->bitmap, pos, nr_irqs);
> > > + for (i = 0; i < nr_irqs; i++) {
> > > + u32 grp = pos / MSI_IRQ_PER_GRP;
> > > + u32 index = pos % MSI_IRQ_PER_GRP;
> > > + struct qcom_msi_irq *msi_irq = &msi->grps[grp].irqs[index];
> > > +
> > > + msi_irq->virq = virq + i;
> > > + msi_irq->client = client;
> > > + irq_domain_set_info(domain, msi_irq->virq,
> > > + msi_irq->hwirq,
> > > + &qcom_msi_bottom_irq_chip, msi_irq,
> > > + handle_simple_irq, NULL, NULL);
> > > + client->nr_irqs++;
> > > + pos++;
> > > + }
> > > +out:
> > > + mutex_unlock(&msi->mutex);
> > > + return ret;
> > > +}
> > > +
> > > +static void qcom_msi_irq_domain_free(struct irq_domain *domain, unsigned int virq,
> > > + unsigned int nr_irqs)
> > > +{
> > > + struct irq_data *data = irq_domain_get_irq_data(domain, virq);
> > > + struct qcom_msi_client *client;
> > > + struct qcom_msi_irq *msi_irq;
> > > + struct qcom_msi *msi;
> > > +
> > > + if (!data)
> > > + return;
> > > +
> > > + msi_irq = irq_data_get_irq_chip_data(data);
> > > + client = msi_irq->client;
> > > + msi = client->msi;
> > > +
> > > + mutex_lock(&msi->mutex);
> > > + bitmap_clear(msi->bitmap, msi_irq->pos, nr_irqs);
> > > +
> > > + client->nr_irqs -= nr_irqs;
> > > + if (!client->nr_irqs) {
> > > + list_del(&client->node);
> > > + kfree(client);
> > > + }
> > > + mutex_unlock(&msi->mutex);
> > > +
> > > + irq_domain_free_irqs_parent(domain, virq, nr_irqs);
> > > +}
> > > +
> > > +static const struct irq_domain_ops msi_domain_ops = {
> > > + .alloc = qcom_msi_irq_domain_alloc,
> > > + .free = qcom_msi_irq_domain_free,
> > > +};
> > > +
> > > +static int qcom_msi_alloc_domains(struct qcom_msi *msi)
> > > +{
> > > + msi->inner_domain = irq_domain_add_linear(NULL, msi->nr_virqs,
> > > + &msi_domain_ops, msi);
> > > + if (!msi->inner_domain) {
> > > + dev_err(msi->dev, "failed to create IRQ inner domain\n");
> > > + return -ENOMEM;
> > > + }
> > > +
> > > + msi->msi_domain = pci_msi_create_irq_domain(of_node_to_fwnode(msi->dev->of_node),
> > > + &qcom_msi_domain_info, msi->inner_domain);
> > > + if (!msi->msi_domain) {
> > > + dev_err(msi->dev, "failed to create MSI domain\n");
> > > + irq_domain_remove(msi->inner_domain);
> > > + return -ENOMEM;
> > > + }
> > > +
> > > + return 0;
> > > +}
> > > +
> > > +static int qcom_msi_irq_setup(struct qcom_msi *msi)
> > > +{
> > > + struct qcom_msi_grp *msi_grp;
> > > + struct qcom_msi_irq *msi_irq;
> > > + int i, index, ret;
> > > + unsigned int irq;
> > > +
> > > + /* setup each MSI group. nr_hwirqs == nr_grps */
> > > + for (i = 0; i < msi->nr_hwirqs; i++) {
> > > + irq = irq_of_parse_and_map(msi->dev->of_node, i);
> > > + if (!irq) {
> > > + dev_err(msi->dev,
> > > + "MSI: failed to parse/map interrupt\n");
> > > + ret = -ENODEV;
> > > + goto free_irqs;
> > > + }
> > > +
> > > + msi_grp = &msi->grps[i];
> > > + msi_grp->int_en_reg = msi->pcie_msi_cfg +
> > > + PCIE_MSI_CTRL_INT_N_EN_OFFS(i);
> > > + msi_grp->int_mask_reg = msi->pcie_msi_cfg +
> > > + PCIE_MSI_CTRL_INT_N_MASK_OFFS(i);
> > > + msi_grp->int_status_reg = msi->pcie_msi_cfg +
> > > + PCIE_MSI_CTRL_INT_N_STATUS_OFFS(i);
> > > +
> > > + for (index = 0; index < MSI_IRQ_PER_GRP; index++) {
> > > + msi_irq = &msi_grp->irqs[index];
> > > +
> > > + msi_irq->grp = msi_grp;
> > > + msi_irq->grp_index = index;
> > > + msi_irq->pos = (i * MSI_IRQ_PER_GRP) + index;
> > > + msi_irq->hwirq = irq;
> > > + }
> > > +
> > > + irq_set_chained_handler_and_data(irq, qcom_msi_handler, msi_grp);
> > > + }
> > > +
> > > + return 0;
> > > +
> > > +free_irqs:
> > > + for (--i; i >= 0; i--) {
> > > + irq = msi->grps[i].irqs[0].hwirq;
> > > +
> > > + irq_set_chained_handler_and_data(irq, NULL, NULL);
> > > + irq_dispose_mapping(irq);
> > > + }
> > > +
> > > + return ret;
> > > +}
> > > +
> > > +static void qcom_msi_config(struct irq_domain *domain)
> > > +{
> > > + struct qcom_msi *msi;
> > > + int i;
> > > +
> > > + msi = domain->parent->host_data;
> > > +
> > > + /* program termination address */
> > > + writel(msi->msi_db_addr, msi->pcie_msi_cfg + PCIE_MSI_CTRL_ADDR_OFFS);
> > > + writel(0, msi->pcie_msi_cfg + PCIE_MSI_CTRL_UPPER_ADDR_OFFS);
> > > +
> > > + /* restore mask and enable all interrupts for each group */
> > > + for (i = 0; i < msi->nr_grps; i++) {
> > > + struct qcom_msi_grp *msi_grp = &msi->grps[i];
> > > +
> > > + writel(msi_grp->mask, msi_grp->int_mask_reg);
> > > + writel(~0, msi_grp->int_en_reg);
> > > + }
> > > +}
> > > +
> > > +static void qcom_msi_deinit(struct qcom_msi *msi)
> > > +{
> > > + irq_domain_remove(msi->msi_domain);
> > > + irq_domain_remove(msi->inner_domain);
> > > +}
> > > +
> > > +static struct qcom_msi *qcom_msi_init(struct device *dev)
> > > +{
> > > + struct qcom_msi *msi;
> > > + u64 addr;
> > > + int ret;
> > > +
> > > + msi = devm_kzalloc(dev, sizeof(*msi), GFP_KERNEL);
> > > + if (!msi)
> > > + return ERR_PTR(-ENOMEM);
> > > +
> > > + msi->dev = dev;
> > > + mutex_init(&msi->mutex);
> > > + spin_lock_init(&msi->cfg_lock);
> > > + INIT_LIST_HEAD(&msi->clients);
> > > +
> > > + msi->msi_db_addr = MSI_DB_ADDR;
> > > + msi->nr_hwirqs = of_irq_count(dev->of_node);
> > > + if (!msi->nr_hwirqs) {
> > > + dev_err(msi->dev, "no hwirqs found\n");
> > > + return ERR_PTR(-ENODEV);
> > > + }
> > > +
> > > + if (of_property_read_reg(dev->of_node, 0, &addr, NULL) < 0) {
> > > + dev_err(msi->dev, "failed to get reg address\n");
> > > + return ERR_PTR(-ENODEV);
> > > + }
> > > +
> > > + dev_dbg(msi->dev, "hwirq:%d pcie_msi_cfg:%llx\n", msi->nr_hwirqs, addr);
> > > + msi->pcie_msi_cfg = devm_ioremap(dev, addr + PCIE_MSI_CTRL_BASE, PCIE_MSI_CTRL_SIZE);
> > > + if (!msi->pcie_msi_cfg)
> > > + return ERR_PTR(-ENOMEM);
> > > +
> > > + msi->nr_virqs = msi->nr_hwirqs * MSI_IRQ_PER_GRP;
> > > + msi->nr_grps = msi->nr_hwirqs;
> > > + msi->grps = devm_kcalloc(dev, msi->nr_grps, sizeof(*msi->grps), GFP_KERNEL);
> > > + if (!msi->grps)
> > > + return ERR_PTR(-ENOMEM);
> > > +
> > > + msi->bitmap = devm_kcalloc(dev, BITS_TO_LONGS(msi->nr_virqs),
> > > + sizeof(*msi->bitmap), GFP_KERNEL);
> > > + if (!msi->bitmap)
> > > + return ERR_PTR(-ENOMEM);
> > > +
> > > + ret = qcom_msi_alloc_domains(msi);
> > > + if (ret)
> > > + return ERR_PTR(ret);
> > > +
> > > + ret = qcom_msi_irq_setup(msi);
> > > + if (ret) {
> > > + qcom_msi_deinit(msi);
> > > + return ERR_PTR(ret);
> > > + }
> > > +
> > > + qcom_msi_config(msi->msi_domain);
> > > + return msi;
> > > +}
> > > +
> > > +static int qcom_pcie_ecam_suspend_noirq(struct device *dev)
> > > +{
> > > + return pm_runtime_put_sync(dev);
> > > +}
> > > +
> > > +static int qcom_pcie_ecam_resume_noirq(struct device *dev)
> > > +{
> > > + return pm_runtime_get_sync(dev);
> > > +}
> > > +
> > > +static int qcom_pcie_ecam_probe(struct platform_device *pdev)
> > > +{
> > > + struct device *dev = &pdev->dev;
> > > + struct qcom_msi *msi;
> > > + int ret;
> > > +
> > > + ret = devm_pm_runtime_enable(dev);
> > > + if (ret)
> > > + return ret;
> > > +
> > > + ret = pm_runtime_resume_and_get(dev);
> > > + if (ret < 0) {
> > > + dev_err(dev, "fail to enable pcie controller: %d\n", ret);
> > > + return ret;
> > > + }
> > > +
> > > + msi = qcom_msi_init(dev);
> > > + if (IS_ERR(msi)) {
> > > + pm_runtime_put_sync(dev);
> > > + return PTR_ERR(msi);
> > > + }
> > > +
> > > + ret = pci_host_common_probe(pdev);
> > > + if (ret) {
> > > + dev_err(dev, "pci_host_common_probe() failed:%d\n", ret);
> > > + qcom_msi_deinit(msi);
> > > + pm_runtime_put_sync(dev);
> > > + }
> > > +
> > > + return ret;
> > > +}
> > > +
> > > +static const struct dev_pm_ops qcom_pcie_ecam_pm_ops = {
> > > + NOIRQ_SYSTEM_SLEEP_PM_OPS(qcom_pcie_ecam_suspend_noirq,
> > > + qcom_pcie_ecam_resume_noirq)
> > > +};
> > > +
> > > +static const struct pci_ecam_ops qcom_pcie_ecam_ops = {
> > > + .pci_ops = {
> > > + .map_bus = pci_ecam_map_bus,
> > > + .read = pci_generic_config_read,
> > > + .write = pci_generic_config_write,
> > > + }
> > > +};
> > > +
> > > +static const struct of_device_id qcom_pcie_ecam_of_match[] = {
> > > + {
> > > + .compatible = "qcom,pcie-ecam-rc",
> > > + .data = &qcom_pcie_ecam_ops,
> > > + },
> > > + { },
> > > +};
> > > +MODULE_DEVICE_TABLE(of, qcom_pcie_ecam_of_match);
> > > +
> > > +static struct platform_driver qcom_pcie_ecam_driver = {
> > > + .probe = qcom_pcie_ecam_probe,
> > > + .driver = {
> > > + .name = "qcom-pcie-ecam-rc",
> > > + .suppress_bind_attrs = true,
> > > + .of_match_table = qcom_pcie_ecam_of_match,
> > > + .probe_type = PROBE_PREFER_ASYNCHRONOUS,
> > > + .pm = &qcom_pcie_ecam_pm_ops,
> > > + },
> > > +};
> > > +module_platform_driver(qcom_pcie_ecam_driver);
> > > +
> > > +MODULE_DESCRIPTION("Qualcomm PCIe ECAM root complex driver");
> > > +MODULE_LICENSE("GPL");
> > > --
> > > 2.7.4
> > >
> >
--
மணிவண்ணன் சதாசிவம்
* Re: [RFC PATCH 2/2] PCI: Add Qualcomm PCIe ECAM root complex driver
2024-04-06 4:17 ` Manivannan Sadhasivam
@ 2024-04-08 18:57 ` Mayank Rana
2024-04-10 6:26 ` Manivannan Sadhasivam
2024-04-10 16:58 ` Rob Herring
0 siblings, 2 replies; 27+ messages in thread
From: Mayank Rana @ 2024-04-08 18:57 UTC (permalink / raw)
To: Manivannan Sadhasivam
Cc: linux-pci, lpieralisi, kw, robh, bhelgaas, andersson,
krzysztof.kozlowski+dt, conor+dt, devicetree, linux-arm-msm,
quic_ramkri, quic_nkela, quic_shazhuss, quic_msarkar,
quic_nitegupt
Hi Mani
On 4/5/2024 9:17 PM, Manivannan Sadhasivam wrote:
> On Fri, Apr 05, 2024 at 10:41:15AM -0700, Mayank Rana wrote:
>> Hi Mani
>>
>> On 4/4/2024 10:30 PM, Manivannan Sadhasivam wrote:
>>> On Thu, Apr 04, 2024 at 12:11:24PM -0700, Mayank Rana wrote:
>>>> On some Qualcomm platforms, firmware configures the PCIe controller into
>>>> ECAM mode, allowing static memory allocation for the configuration space
>>>> of the supported bus range. Firmware also takes care of bringing up the
>>>> PCIe PHY and performing the operations required to bring the PCIe link
>>>> into D0, and manages system resources (e.g. clocks/regulators/resets/
>>>> bus voting). Hence add a Qualcomm PCIe ECAM root complex driver which
>>>> enumerates the PCIe root complex and connected PCIe devices. Firmware
>>>> won't enumerate or power up the PCIe root complex until this driver
>>>> invokes a power domain based notification to bring the PCIe link into
>>>> D0/D3cold mode.
>>>>
>>>
>>> Is this an in-house PCIe IP of Qualcomm or the same DWC IP that is used in other
>>> SoCs?
>>>
>>> - Mani
>> The driver is validated on the SA8775p-ride platform using the PCIe DWC IP
>> for now, although this driver doesn't need to know which PCIe controller
>> and PHY IP are used, nor the programming sequence, as that is taken care
>> of by firmware.
>>
>
> Ok, so it is the same IP but firmware is controlling the resources now. This
> information should be present in the commit message.
>
> Btw, there is an existing generic ECAM host controller driver:
> drivers/pci/controller/pci-host-generic.c
>
> This driver is already being used by several vendors as well, so we should
> try to extend it for the Qcom use case too.
I did review the pci-host-generic.c driver for usage, although there is
more functionality needed for our use case, as below:
1. MSI functionality
2. Suspend/resume
3. Wakeup functionality (not part of the current change, but would be
added later)
4. This driver also provides a way to virtualize the PCIe controller, so
VMs only talk to a generic ECAM device whereas the HW is directly
accessed only by the service VM.
5. Implementation for more automotive safety related use cases
Hence I would keep pci-host-generic.c as a generic driver where the
above functionality may not be needed. Also, here we may add more
functionality using PM runtime based GenPD/power domains with SCMI
communication to firmware.
>>>> This driver also supports MSI functionality using the PCIe controller
>>>> based MSI controller, as GIC ITS based MSI functionality is not
>>>> available on some platforms.
>>>>
>
> So is this the same internal MSI controller in the DWC IP? If so, then we
> already have the MSI implementation in
> drivers/pci/controller/dwc/pcie-designware-host.c and that should be reused here
> instead of duplicating the code.
If you are referring to just the MSI implementation as duplicated code,
then I agree with you.
The proposed new driver is, however, agnostic of the specific PCIe
controller IP. Currently we are using the PCIe DWC controller based MSI
controller for MSI functionality, using controller-specific SPIs.
I am also looking into an implementation where we can use free SPIs
(there are no free SPIs available on the SA8775p-ride platform) or
extended SPIs for MSI functionality, so we don't need to use the PCIe
controller based MSI controller. The extended-SPI based MSI work is in
progress and will eventually replace the MSI implementation in the
currently proposed solution. With that we would have a generic enough
implementation for MSI functionality using free/extended SPIs with this
new driver.
> - Mani
>
Regards,
Mayank
>>>> +static struct irq_chip qcom_msi_bottom_irq_chip = {
>>>> + .name = "qcom_msi",
>>>> + .irq_set_affinity = qcom_msi_irq_set_affinity,
>>>> + .irq_compose_msi_msg = qcom_msi_irq_compose_msi_msg,
>>>> +};
>>>> +
>>>> +static int qcom_msi_irq_domain_alloc(struct irq_domain *domain, unsigned int virq,
>>>> + unsigned int nr_irqs, void *args)
>>>> +{
>>>> + struct device *dev = ((msi_alloc_info_t *)args)->desc->dev;
>>>> + struct qcom_msi_client *tmp, *client = NULL;
>>>> + struct qcom_msi *msi = domain->host_data;
>>>> + int i, ret = 0;
>>>> + int pos;
>>>> +
>>>> + mutex_lock(&msi->mutex);
>>>> + list_for_each_entry(tmp, &msi->clients, node) {
>>>> + if (tmp->dev == dev) {
>>>> + client = tmp;
>>>> + break;
>>>> + }
>>>> + }
>>>> +
>>>> + if (!client) {
>>>> + dev_err(msi->dev, "failed to find MSI client dev\n");
>>>> + ret = -ENODEV;
>>>> + goto out;
>>>> + }
>>>> +
>>>> + pos = bitmap_find_next_zero_area(msi->bitmap, msi->nr_virqs, 0,
>>>> + nr_irqs, nr_irqs - 1);
>>>> + if (pos >= msi->nr_virqs) {
>>>> + ret = -ENOSPC;
>>>> + goto out;
>>>> + }
>>>> +
>>>> + bitmap_set(msi->bitmap, pos, nr_irqs);
>>>> + for (i = 0; i < nr_irqs; i++) {
>>>> + u32 grp = pos / MSI_IRQ_PER_GRP;
>>>> + u32 index = pos % MSI_IRQ_PER_GRP;
>>>> + struct qcom_msi_irq *msi_irq = &msi->grps[grp].irqs[index];
>>>> +
>>>> + msi_irq->virq = virq + i;
>>>> + msi_irq->client = client;
>>>> + irq_domain_set_info(domain, msi_irq->virq,
>>>> + msi_irq->hwirq,
>>>> + &qcom_msi_bottom_irq_chip, msi_irq,
>>>> + handle_simple_irq, NULL, NULL);
>>>> + client->nr_irqs++;
>>>> + pos++;
>>>> + }
>>>> +out:
>>>> + mutex_unlock(&msi->mutex);
>>>> + return ret;
>>>> +}
>>>> +
>>>> +static void qcom_msi_irq_domain_free(struct irq_domain *domain, unsigned int virq,
>>>> + unsigned int nr_irqs)
>>>> +{
>>>> + struct irq_data *data = irq_domain_get_irq_data(domain, virq);
>>>> + struct qcom_msi_client *client;
>>>> + struct qcom_msi_irq *msi_irq;
>>>> + struct qcom_msi *msi;
>>>> +
>>>> + if (!data)
>>>> + return;
>>>> +
>>>> + msi_irq = irq_data_get_irq_chip_data(data);
>>>> + client = msi_irq->client;
>>>> + msi = client->msi;
>>>> +
>>>> + mutex_lock(&msi->mutex);
>>>> + bitmap_clear(msi->bitmap, msi_irq->pos, nr_irqs);
>>>> +
>>>> + client->nr_irqs -= nr_irqs;
>>>> + if (!client->nr_irqs) {
>>>> + list_del(&client->node);
>>>> + kfree(client);
>>>> + }
>>>> + mutex_unlock(&msi->mutex);
>>>> +
>>>> + irq_domain_free_irqs_parent(domain, virq, nr_irqs);
>>>> +}
>>>> +
>>>> +static const struct irq_domain_ops msi_domain_ops = {
>>>> + .alloc = qcom_msi_irq_domain_alloc,
>>>> + .free = qcom_msi_irq_domain_free,
>>>> +};
>>>> +
>>>> +static int qcom_msi_alloc_domains(struct qcom_msi *msi)
>>>> +{
>>>> + msi->inner_domain = irq_domain_add_linear(NULL, msi->nr_virqs,
>>>> + &msi_domain_ops, msi);
>>>> + if (!msi->inner_domain) {
>>>> + dev_err(msi->dev, "failed to create IRQ inner domain\n");
>>>> + return -ENOMEM;
>>>> + }
>>>> +
>>>> + msi->msi_domain = pci_msi_create_irq_domain(of_node_to_fwnode(msi->dev->of_node),
>>>> + &qcom_msi_domain_info, msi->inner_domain);
>>>> + if (!msi->msi_domain) {
>>>> + dev_err(msi->dev, "failed to create MSI domain\n");
>>>> + irq_domain_remove(msi->inner_domain);
>>>> + return -ENOMEM;
>>>> + }
>>>> +
>>>> + return 0;
>>>> +}
>>>> +
>>>> +static int qcom_msi_irq_setup(struct qcom_msi *msi)
>>>> +{
>>>> + struct qcom_msi_grp *msi_grp;
>>>> + struct qcom_msi_irq *msi_irq;
>>>> + int i, index, ret;
>>>> + unsigned int irq;
>>>> +
>>>> + /* setup each MSI group. nr_hwirqs == nr_grps */
>>>> + for (i = 0; i < msi->nr_hwirqs; i++) {
>>>> + irq = irq_of_parse_and_map(msi->dev->of_node, i);
>>>> + if (!irq) {
>>>> + dev_err(msi->dev,
>>>> + "MSI: failed to parse/map interrupt\n");
>>>> + ret = -ENODEV;
>>>> + goto free_irqs;
>>>> + }
>>>> +
>>>> + msi_grp = &msi->grps[i];
>>>> + msi_grp->int_en_reg = msi->pcie_msi_cfg +
>>>> + PCIE_MSI_CTRL_INT_N_EN_OFFS(i);
>>>> + msi_grp->int_mask_reg = msi->pcie_msi_cfg +
>>>> + PCIE_MSI_CTRL_INT_N_MASK_OFFS(i);
>>>> + msi_grp->int_status_reg = msi->pcie_msi_cfg +
>>>> + PCIE_MSI_CTRL_INT_N_STATUS_OFFS(i);
>>>> +
>>>> + for (index = 0; index < MSI_IRQ_PER_GRP; index++) {
>>>> + msi_irq = &msi_grp->irqs[index];
>>>> +
>>>> + msi_irq->grp = msi_grp;
>>>> + msi_irq->grp_index = index;
>>>> + msi_irq->pos = (i * MSI_IRQ_PER_GRP) + index;
>>>> + msi_irq->hwirq = irq;
>>>> + }
>>>> +
>>>> + irq_set_chained_handler_and_data(irq, qcom_msi_handler, msi_grp);
>>>> + }
>>>> +
>>>> + return 0;
>>>> +
>>>> +free_irqs:
>>>> + for (--i; i >= 0; i--) {
>>>> + irq = msi->grps[i].irqs[0].hwirq;
>>>> +
>>>> + irq_set_chained_handler_and_data(irq, NULL, NULL);
>>>> + irq_dispose_mapping(irq);
>>>> + }
>>>> +
>>>> + return ret;
>>>> +}
>>>> +
>>>> +static void qcom_msi_config(struct irq_domain *domain)
>>>> +{
>>>> + struct qcom_msi *msi;
>>>> + int i;
>>>> +
>>>> + msi = domain->parent->host_data;
>>>> +
>>>> + /* program termination address */
>>>> + writel(msi->msi_db_addr, msi->pcie_msi_cfg + PCIE_MSI_CTRL_ADDR_OFFS);
>>>> + writel(0, msi->pcie_msi_cfg + PCIE_MSI_CTRL_UPPER_ADDR_OFFS);
>>>> +
>>>> + /* restore mask and enable all interrupts for each group */
>>>> + for (i = 0; i < msi->nr_grps; i++) {
>>>> + struct qcom_msi_grp *msi_grp = &msi->grps[i];
>>>> +
>>>> + writel(msi_grp->mask, msi_grp->int_mask_reg);
>>>> + writel(~0, msi_grp->int_en_reg);
>>>> + }
>>>> +}
>>>> +
>>>> +static void qcom_msi_deinit(struct qcom_msi *msi)
>>>> +{
>>>> + irq_domain_remove(msi->msi_domain);
>>>> + irq_domain_remove(msi->inner_domain);
>>>> +}
>>>> +
>>>> +static struct qcom_msi *qcom_msi_init(struct device *dev)
>>>> +{
>>>> + struct qcom_msi *msi;
>>>> + u64 addr;
>>>> + int ret;
>>>> +
>>>> + msi = devm_kzalloc(dev, sizeof(*msi), GFP_KERNEL);
>>>> + if (!msi)
>>>> + return ERR_PTR(-ENOMEM);
>>>> +
>>>> + msi->dev = dev;
>>>> + mutex_init(&msi->mutex);
>>>> + spin_lock_init(&msi->cfg_lock);
>>>> + INIT_LIST_HEAD(&msi->clients);
>>>> +
>>>> + msi->msi_db_addr = MSI_DB_ADDR;
>>>> + msi->nr_hwirqs = of_irq_count(dev->of_node);
>>>> + if (!msi->nr_hwirqs) {
>>>> + dev_err(msi->dev, "no hwirqs found\n");
>>>> + return ERR_PTR(-ENODEV);
>>>> + }
>>>> +
>>>> + if (of_property_read_reg(dev->of_node, 0, &addr, NULL) < 0) {
>>>> + dev_err(msi->dev, "failed to get reg address\n");
>>>> + return ERR_PTR(-ENODEV);
>>>> + }
>>>> +
>>>> + dev_dbg(msi->dev, "hwirq:%d pcie_msi_cfg:%llx\n", msi->nr_hwirqs, addr);
>>>> + msi->pcie_msi_cfg = devm_ioremap(dev, addr + PCIE_MSI_CTRL_BASE, PCIE_MSI_CTRL_SIZE);
>>>> + if (!msi->pcie_msi_cfg)
>>>> + return ERR_PTR(-ENOMEM);
>>>> +
>>>> + msi->nr_virqs = msi->nr_hwirqs * MSI_IRQ_PER_GRP;
>>>> + msi->nr_grps = msi->nr_hwirqs;
>>>> + msi->grps = devm_kcalloc(dev, msi->nr_grps, sizeof(*msi->grps), GFP_KERNEL);
>>>> + if (!msi->grps)
>>>> + return ERR_PTR(-ENOMEM);
>>>> +
>>>> + msi->bitmap = devm_kcalloc(dev, BITS_TO_LONGS(msi->nr_virqs),
>>>> + sizeof(*msi->bitmap), GFP_KERNEL);
>>>> + if (!msi->bitmap)
>>>> + return ERR_PTR(-ENOMEM);
>>>> +
>>>> + ret = qcom_msi_alloc_domains(msi);
>>>> + if (ret)
>>>> + return ERR_PTR(ret);
>>>> +
>>>> + ret = qcom_msi_irq_setup(msi);
>>>> + if (ret) {
>>>> + qcom_msi_deinit(msi);
>>>> + return ERR_PTR(ret);
>>>> + }
>>>> +
>>>> + qcom_msi_config(msi->msi_domain);
>>>> + return msi;
>>>> +}
>>>> +
>>>> +static int qcom_pcie_ecam_suspend_noirq(struct device *dev)
>>>> +{
>>>> + return pm_runtime_put_sync(dev);
>>>> +}
>>>> +
>>>> +static int qcom_pcie_ecam_resume_noirq(struct device *dev)
>>>> +{
>>>> + return pm_runtime_get_sync(dev);
>>>> +}
>>>> +
>>>> +static int qcom_pcie_ecam_probe(struct platform_device *pdev)
>>>> +{
>>>> + struct device *dev = &pdev->dev;
>>>> + struct qcom_msi *msi;
>>>> + int ret;
>>>> +
>>>> + ret = devm_pm_runtime_enable(dev);
>>>> + if (ret)
>>>> + return ret;
>>>> +
>>>> + ret = pm_runtime_resume_and_get(dev);
>>>> + if (ret < 0) {
>>>> + dev_err(dev, "fail to enable pcie controller: %d\n", ret);
>>>> + return ret;
>>>> + }
>>>> +
>>>> + msi = qcom_msi_init(dev);
>>>> + if (IS_ERR(msi)) {
>>>> + pm_runtime_put_sync(dev);
>>>> + return PTR_ERR(msi);
>>>> + }
>>>> +
>>>> + ret = pci_host_common_probe(pdev);
>>>> + if (ret) {
>>>> + dev_err(dev, "pci_host_common_probe() failed:%d\n", ret);
>>>> + qcom_msi_deinit(msi);
>>>> + pm_runtime_put_sync(dev);
>>>> + }
>>>> +
>>>> + return ret;
>>>> +}
>>>> +
>>>> +static const struct dev_pm_ops qcom_pcie_ecam_pm_ops = {
>>>> + NOIRQ_SYSTEM_SLEEP_PM_OPS(qcom_pcie_ecam_suspend_noirq,
>>>> + qcom_pcie_ecam_resume_noirq)
>>>> +};
>>>> +
>>>> +static const struct pci_ecam_ops qcom_pcie_ecam_ops = {
>>>> + .pci_ops = {
>>>> + .map_bus = pci_ecam_map_bus,
>>>> + .read = pci_generic_config_read,
>>>> + .write = pci_generic_config_write,
>>>> + }
>>>> +};
>>>> +
>>>> +static const struct of_device_id qcom_pcie_ecam_of_match[] = {
>>>> + {
>>>> + .compatible = "qcom,pcie-ecam-rc",
>>>> + .data = &qcom_pcie_ecam_ops,
>>>> + },
>>>> + { },
>>>> +};
>>>> +MODULE_DEVICE_TABLE(of, qcom_pcie_ecam_of_match);
>>>> +
>>>> +static struct platform_driver qcom_pcie_ecam_driver = {
>>>> + .probe = qcom_pcie_ecam_probe,
>>>> + .driver = {
>>>> + .name = "qcom-pcie-ecam-rc",
>>>> + .suppress_bind_attrs = true,
>>>> + .of_match_table = qcom_pcie_ecam_of_match,
>>>> + .probe_type = PROBE_PREFER_ASYNCHRONOUS,
>>>> + .pm = &qcom_pcie_ecam_pm_ops,
>>>> + },
>>>> +};
>>>> +module_platform_driver(qcom_pcie_ecam_driver);
>>>> +
>>>> +MODULE_DESCRIPTION("Qualcomm PCIe ECAM root complex driver");
>>>> +MODULE_LICENSE("GPL");
>>>> --
>>>> 2.7.4
>>>>
>>>
>
^ permalink raw reply [flat|nested] 27+ messages in thread
* Re: [RFC PATCH 1/2] dt-bindings: pcie: Document QCOM PCIE ECAM compatible root complex
2024-04-04 19:30 ` Krzysztof Kozlowski
@ 2024-04-08 19:09 ` Mayank Rana
2024-04-09 6:21 ` Krzysztof Kozlowski
0 siblings, 1 reply; 27+ messages in thread
From: Mayank Rana @ 2024-04-08 19:09 UTC (permalink / raw)
To: Krzysztof Kozlowski, linux-pci, lpieralisi, kw, robh, bhelgaas,
andersson, manivannan.sadhasivam, krzysztof.kozlowski+dt,
conor+dt, devicetree
Cc: linux-arm-msm, quic_ramkri, quic_nkela, quic_shazhuss,
quic_msarkar, quic_nitegupt
Hi Krzysztof
On 4/4/2024 12:30 PM, Krzysztof Kozlowski wrote:
> On 04/04/2024 21:11, Mayank Rana wrote:
>> On some of Qualcomm platform, firmware configures PCIe controller in RC
>
> On which?
>
> Your commit or binding must answer to all such questions.
>
>> mode with static iATU window mappings of configuration space for entire
>> supported bus range in ECAM compatible mode. Firmware also manages PCIe
>> PHY as well required system resources. Here document properties and
>> required configuration to power up QCOM PCIe ECAM compatible root complex
>> and PHY for PCIe functionality.
>>
>> Signed-off-by: Mayank Rana <quic_mrana@quicinc.com>
>> ---
>> .../devicetree/bindings/pci/qcom,pcie-ecam.yaml | 94 ++++++++++++++++++++++
>> 1 file changed, 94 insertions(+)
>> create mode 100644 Documentation/devicetree/bindings/pci/qcom,pcie-ecam.yaml
>>
>> diff --git a/Documentation/devicetree/bindings/pci/qcom,pcie-ecam.yaml b/Documentation/devicetree/bindings/pci/qcom,pcie-ecam.yaml
>> new file mode 100644
>> index 00000000..c209f12
>> --- /dev/null
>> +++ b/Documentation/devicetree/bindings/pci/qcom,pcie-ecam.yaml
>> @@ -0,0 +1,94 @@
>> +# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
>> +%YAML 1.2
>> +---
>> +$id: http://devicetree.org/schemas/pci/qcom,pcie-ecam.yaml#
>> +$schema: http://devicetree.org/meta-schemas/core.yaml#
>> +
>> +title: Qualcomm ECAM compliant PCI express root complex
>> +
>> +description: |
> Do not need '|' unless you need to preserve formatting.
ACK
>
>> + Qualcomm SOC based ECAM compatible PCIe root complex supporting MSI controller.
>
> Which SoC?
ACK
>> + Firmware configures PCIe controller in RC mode with static iATU window mappings
>> + of configuration space for entire supported bus range in ECAM compatible mode.
>> +
>> +maintainers:
>> + - Mayank Rana <quic_mrana@quicinc.com>
>> +
>> +allOf:
>> + - $ref: /schemas/pci/pci-bus.yaml#
>> + - $ref: /schemas/power-domain/power-domain-consumer.yaml
>> +
>> +properties:
>> + compatible:
>> + const: qcom,pcie-ecam-rc
>
> No, this must have SoC specific compatibles.
This driver is proposed to work with any ECAM-capable PCIe controller on
Qualcomm platforms where firmware running on another VM/processor controls
the PCIe PHY and controller for PCIe link-up.
Do you still suggest having SoC-specific compatibles here?
>> +
>> + reg:
>> + minItems: 1
>
> maxItems instead
>
>> + description: ECAM address space starting from root port till supported bus range
>> +
>> + interrupts:
>> + minItems: 1
>> + maxItems: 8
>
> This is way too unspecific.
will review and update.
>> +
>> + ranges:
>> + minItems: 2
>> + maxItems: 3
>
> Why variable?
It depends on how the ECAM is configured to support 32-bit and 64-bit
prefetchable address space. There are different combinations of
prefetchable (32-bit, 64-bit, or both), non-prefetchable (32-bit), and IO
address space available, hence it is kept variable based on the required
use case and address space availability.
>> +
>> + iommu-map:
>> + minItems: 1
>> + maxItems: 16
>
> Why variable?
>
> Open existing bindings and look how it is done.
ok. will review and update as needed.
>
>> +
>> + power-domains:
>> + maxItems: 1
>> + description: A phandle to node which is able support way to communicate with firmware
>> + for enabling PCIe controller and PHY as well managing all system resources needed to
>> + make both controller and PHY operational for PCIe functionality.
>
> This description does not tell me much. Say something specific. And drop
> redundant parts like phandle.
ok. will rephrase it.
>
>> +
>> + dma-coherent: true
>> +
>> +required:
>> + - compatible
>> + - reg
>> + - interrupts
>> + - ranges
>> + - power-domains
>> + - device_type
>> + - linux,pci-domain
>> + - bus-range
>> +
>> +unevaluatedProperties: false
>> +
>> +examples:
>> + - |
>> + #include <dt-bindings/interrupt-controller/arm-gic.h>
>> + soc {
>> + #address-cells = <2>;
>> + #size-cells = <2>;
>> + pcie0: pci@1c00000 {
>> + compatible = "qcom,pcie-ecam-rc";
>> + reg = <0x4 0x00000000 0 0x10000000>;
>> + device_type = "pci";
>> + #address-cells = <3>;
>> + #size-cells = <2>;
>> + ranges = <0x01000000 0x0 0x40000000 0x0 0x40000000 0x0 0x100000>,
>> + <0x02000000 0x0 0x40100000 0x0 0x40100000 0x0 0x1ff00000>,
>> + <0x43000000 0x4 0x10100000 0x4 0x10100000 0x0 0x100000>;
>
> Follow DTS coding style about placement and alignment.
>
> Best regards,
> Krzysztof
>
Regards,
Mayank
^ permalink raw reply [flat|nested] 27+ messages in thread
* Re: [RFC PATCH 1/2] dt-bindings: pcie: Document QCOM PCIE ECAM compatible root complex
2024-04-08 19:09 ` Mayank Rana
@ 2024-04-09 6:21 ` Krzysztof Kozlowski
2024-04-18 18:56 ` Mayank Rana
0 siblings, 1 reply; 27+ messages in thread
From: Krzysztof Kozlowski @ 2024-04-09 6:21 UTC (permalink / raw)
To: Mayank Rana, linux-pci, lpieralisi, kw, robh, bhelgaas, andersson,
manivannan.sadhasivam, krzysztof.kozlowski+dt, conor+dt,
devicetree
Cc: linux-arm-msm, quic_ramkri, quic_nkela, quic_shazhuss,
quic_msarkar, quic_nitegupt
On 08/04/2024 21:09, Mayank Rana wrote:
>>> + Firmware configures PCIe controller in RC mode with static iATU window mappings
>>> + of configuration space for entire supported bus range in ECAM compatible mode.
>>> +
>>> +maintainers:
>>> + - Mayank Rana <quic_mrana@quicinc.com>
>>> +
>>> +allOf:
>>> + - $ref: /schemas/pci/pci-bus.yaml#
>>> + - $ref: /schemas/power-domain/power-domain-consumer.yaml
>>> +
>>> +properties:
>>> + compatible:
>>> + const: qcom,pcie-ecam-rc
>>
>> No, this must have SoC specific compatibles.
> This driver is proposed to work with any PCIe controller supported ECAM
> functionality on Qualcomm platform
> where firmware running on other VM/processor is controlling PCIe PHY and
> controller for PCIe link up functionality.
> Do you still suggest to have SoC specific compatibles here ?
What does the writing-bindings document say? Why is this different from
all other bindings?
>>> +
>>> + reg:
>>> + minItems: 1
>>
>> maxItems instead
>>
>>> + description: ECAM address space starting from root port till supported bus range
>>> +
>>> + interrupts:
>>> + minItems: 1
>>> + maxItems: 8
>>
>> This is way too unspecific.
> will review and update.
>>> +
>>> + ranges:
>>> + minItems: 2
>>> + maxItems: 3
>>
>> Why variable?
> It depends on how the ECAM is configured to support 32-bit and 64-bit
> prefetchable address space. There are different combinations of
> prefetchable (32-bit, 64-bit, or both), non-prefetchable (32-bit), and IO
> address space available, hence it is kept variable based on the required
> use case and address space availability.
Really? So the same device has it configured once for 32-bit and once for
64-bit address space? Randomly?
Best regards,
Krzysztof
^ permalink raw reply [flat|nested] 27+ messages in thread
* Re: [RFC PATCH 2/2] PCI: Add Qualcomm PCIe ECAM root complex driver
2024-04-08 18:57 ` Mayank Rana
@ 2024-04-10 6:26 ` Manivannan Sadhasivam
2024-04-10 16:58 ` Rob Herring
1 sibling, 0 replies; 27+ messages in thread
From: Manivannan Sadhasivam @ 2024-04-10 6:26 UTC (permalink / raw)
To: Mayank Rana
Cc: linux-pci, lpieralisi, kw, robh, bhelgaas, andersson,
krzysztof.kozlowski+dt, conor+dt, devicetree, linux-arm-msm,
quic_ramkri, quic_nkela, quic_shazhuss, quic_msarkar,
quic_nitegupt
On Mon, Apr 08, 2024 at 11:57:58AM -0700, Mayank Rana wrote:
> Hi Mani
>
> On 4/5/2024 9:17 PM, Manivannan Sadhasivam wrote:
> > On Fri, Apr 05, 2024 at 10:41:15AM -0700, Mayank Rana wrote:
> > > Hi Mani
> > >
> > > On 4/4/2024 10:30 PM, Manivannan Sadhasivam wrote:
> > > > On Thu, Apr 04, 2024 at 12:11:24PM -0700, Mayank Rana wrote:
> > > > > On some of Qualcomm platform, firmware configures PCIe controller into
> > > > > ECAM mode allowing static memory allocation for configuration space of
> > > > > supported bus range. Firmware also takes care of bringing up PCIe PHY
> > > > > and performing required operation to bring PCIe link into D0. Firmware
> > > > > also manages system resources (e.g. clocks/regulators/resets/ bus voting).
> > > > > Hence add Qualcomm PCIe ECAM root complex driver which enumerates PCIe
> > > > > root complex and connected PCIe devices. Firmware won't be enumerating
> > > > > or powering up PCIe root complex until this driver invokes power domain
> > > > > based notification to bring PCIe link into D0/D3cold mode.
> > > > >
> > > >
> > > > Is this an in-house PCIe IP of Qualcomm or the same DWC IP that is used in other
> > > > SoCs?
> > > >
> > > > - Mani
> > > The driver is validated on the SA8775p-ride platform using the PCIe DWC IP
> > > for now, although this driver doesn't need to know which PCIe controller
> > > and PHY IP are used, or their programming sequence, as that is taken care
> > > of by firmware.
> > >
> >
> > Ok, so it is the same IP but firmware is controlling the resources now. This
> > information should be present in the commit message.
> >
> > Btw, there is an existing generic ECAM host controller driver:
> > drivers/pci/controller/pci-host-generic.c
> >
> > This driver is already being used by several vendors as well. So we should try
> > to extend it for Qcom usecase also.
> I did review the pci-host-generic.c driver for this usage, although more
> functionality is needed for our use case, as below:
> 1. MSI functionality
> 2. Suspend/resume
> 3. Wakeup functionality (not part of the current change, but would be added
> later)
> 4. This driver provides a way to virtualize the PCIe controller, so VMs only
> talk to a generic ECAM whereas the HW is directly accessed only by the
> service VM.
> 5. Adding more automotive safety use case related implementation
>
> Hence pci-host-generic.c is kept as a generic driver where the above
> functionality may not be needed. We may also add more functionality here
> using PM-runtime-based GenPD/power domains with SCMI communication with
> firmware.
>
Ok, I think it is fine for now. But why should this driver be called
'pcie-qcom-ecam'? For sure the driver is ECAM compatible, but that's not the
only factor, right? I think it is better to call it 'pcie-qcom-generic'. Even
though 'generic' may bring some ambiguity, I still think it is a better name.
> > > > > This driver also support MSI functionality using PCIe controller based
> > > > > MSI controller as GIC ITS based MSI functionality is not available on
> > > > > some of platform.
> > > > >
> >
> > So is this the same internal MSI controller in the DWC IP? If so, then we
> > already have the MSI implementation in
> > drivers/pci/controller/dwc/pcie-designware-host.c and that should be reused here
> > instead of duplicating the code.
> If you are referring to just the MSI implementation as duplicated code, then
> I agree with you, although the proposed new driver is agnostic to the
> specific PCIe controller IP. Currently we are using the PCIe DWC controller
> based MSI controller for MSI functionality, using controller-specific SPIs.
> I am also looking into an implementation where we can use free SPIs (there
> are no free SPIs available on the SA8775p-ride platform) or extended SPIs
> for MSI functionality, so we don't need to use the PCIe controller based
> MSI controller. The extended-SPI-based MSI work is in progress and will
> eventually replace the MSI implementation in the currently proposed
> solution. With that, we would have a generic enough MSI implementation
> using free/extended SPIs with this new driver.
Ok for keeping it in the driver.
But this driver depends on the SCMI firmware design, which is still being
discussed. There should be a note in the cover letter about it.
Also, the discussion on whether to use a new compatible or a property is not
yet concluded, AFAIR. So that should also be mentioned (more importantly, this
driver should not get merged until that discussion is concluded).
I'm planning to do the code review later this week.
- Mani
> > - Mani
> >
> Regards,
> Mayank
> > > > > Signed-off-by: Mayank Rana <quic_mrana@quicinc.com>
> > > > > ---
> > > > > drivers/pci/controller/Kconfig | 12 +
> > > > > drivers/pci/controller/Makefile | 1 +
> > > > > drivers/pci/controller/pcie-qcom-ecam.c | 575 ++++++++++++++++++++++++++++++++
> > > > > 3 files changed, 588 insertions(+)
> > > > > create mode 100644 drivers/pci/controller/pcie-qcom-ecam.c
> > > > >
> > > > > diff --git a/drivers/pci/controller/Kconfig b/drivers/pci/controller/Kconfig
> > > > > index e534c02..abbd9f2 100644
> > > > > --- a/drivers/pci/controller/Kconfig
> > > > > +++ b/drivers/pci/controller/Kconfig
> > > > > @@ -353,6 +353,18 @@ config PCIE_XILINX_CPM
> > > > > Say 'Y' here if you want kernel support for the
> > > > > Xilinx Versal CPM host bridge.
> > > > > +config PCIE_QCOM_ECAM
> > > > > + tristate "QCOM PCIe ECAM host controller"
> > > > > + depends on ARCH_QCOM && PCI
> > > > > + depends on OF
> > > > > + select PCI_MSI
> > > > > + select PCI_HOST_COMMON
> > > > > + select IRQ_DOMAIN
> > > > > + help
> > > > > + Say 'Y' here if you want to use ECAM shift mode compatible Qualcomm
> > > > > + PCIe root host controller. The controller is programmed using firmware
> > > > > + to support ECAM compatible memory address space.
> > > > > +
> > > > > source "drivers/pci/controller/cadence/Kconfig"
> > > > > source "drivers/pci/controller/dwc/Kconfig"
> > > > > source "drivers/pci/controller/mobiveil/Kconfig"
> > > > > diff --git a/drivers/pci/controller/Makefile b/drivers/pci/controller/Makefile
> > > > > index f2b19e6..2f1ee1e 100644
> > > > > --- a/drivers/pci/controller/Makefile
> > > > > +++ b/drivers/pci/controller/Makefile
> > > > > @@ -40,6 +40,7 @@ obj-$(CONFIG_PCI_LOONGSON) += pci-loongson.o
> > > > > obj-$(CONFIG_PCIE_HISI_ERR) += pcie-hisi-error.o
> > > > > obj-$(CONFIG_PCIE_APPLE) += pcie-apple.o
> > > > > obj-$(CONFIG_PCIE_MT7621) += pcie-mt7621.o
> > > > > +obj-$(CONFIG_PCIE_QCOM_ECAM) += pcie-qcom-ecam.o
> > > > > # pcie-hisi.o quirks are needed even without CONFIG_PCIE_DW
> > > > > obj-y += dwc/
> > > > > diff --git a/drivers/pci/controller/pcie-qcom-ecam.c b/drivers/pci/controller/pcie-qcom-ecam.c
> > > > > new file mode 100644
> > > > > index 00000000..5b4c68b
> > > > > --- /dev/null
> > > > > +++ b/drivers/pci/controller/pcie-qcom-ecam.c
> > > > > @@ -0,0 +1,575 @@
> > > > > +// SPDX-License-Identifier: GPL-2.0-only
> > > > > +/*
> > > > > + * Qualcomm PCIe ECAM root host controller driver
> > > > > + * Copyright (c) 2024 Qualcomm Innovation Center, Inc. All rights reserved.
> > > > > + */
> > > > > +#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
> > > > > +
> > > > > +#include <linux/irq.h>
> > > > > +#include <linux/irqchip/chained_irq.h>
> > > > > +#include <linux/irqdomain.h>
> > > > > +#include <linux/kernel.h>
> > > > > +#include <linux/module.h>
> > > > > +#include <linux/msi.h>
> > > > > +#include <linux/of_address.h>
> > > > > +#include <linux/of_irq.h>
> > > > > +#include <linux/of_pci.h>
> > > > > +#include <linux/pci.h>
> > > > > +#include <linux/pci-ecam.h>
> > > > > +#include <linux/platform_device.h>
> > > > > +#include <linux/pm_domain.h>
> > > > > +#include <linux/pm_runtime.h>
> > > > > +#include <linux/slab.h>
> > > > > +#include <linux/types.h>
> > > > > +
> > > > > +#define PCIE_MSI_CTRL_BASE (0x820)
> > > > > +#define PCIE_MSI_CTRL_SIZE (0x68)
> > > > > +#define PCIE_MSI_CTRL_ADDR_OFFS (0x0)
> > > > > +#define PCIE_MSI_CTRL_UPPER_ADDR_OFFS (0x4)
> > > > > +#define PCIE_MSI_CTRL_INT_N_EN_OFFS(n) (0x8 + 0xc * (n))
> > > > > +#define PCIE_MSI_CTRL_INT_N_MASK_OFFS(n) (0xc + 0xc * (n))
> > > > > +#define PCIE_MSI_CTRL_INT_N_STATUS_OFFS(n) (0x10 + 0xc * (n))
> > > > > +
> > > > > +#define MSI_DB_ADDR 0xa0000000
> > > > > +#define MSI_IRQ_PER_GRP (32)
> > > > > +
> > > > > +/**
> > > > > + * struct qcom_msi_irq - MSI IRQ information
> > > > > + * @client: pointer to MSI client struct
> > > > > + * @grp: group the irq belongs to
> > > > > + * @grp_index: index in group
> > > > > + * @hwirq: hwirq number
> > > > > + * @virq: virq number
> > > > > + * @pos: position in MSI bitmap
> > > > > + */
> > > > > +struct qcom_msi_irq {
> > > > > + struct qcom_msi_client *client;
> > > > > + struct qcom_msi_grp *grp;
> > > > > + unsigned int grp_index;
> > > > > + unsigned int hwirq;
> > > > > + unsigned int virq;
> > > > > + u32 pos;
> > > > > +};
> > > > > +
> > > > > +/**
> > > > > + * struct qcom_msi_grp - MSI group information
> > > > > + * @int_en_reg: memory-mapped interrupt enable register address
> > > > > + * @int_mask_reg: memory-mapped interrupt mask register address
> > > > > + * @int_status_reg: memory-mapped interrupt status register address
> > > > > + * @mask: tracks masked/unmasked MSI
> > > > > + * @irqs: structure to MSI IRQ information
> > > > > + */
> > > > > +struct qcom_msi_grp {
> > > > > + void __iomem *int_en_reg;
> > > > > + void __iomem *int_mask_reg;
> > > > > + void __iomem *int_status_reg;
> > > > > + u32 mask;
> > > > > + struct qcom_msi_irq irqs[MSI_IRQ_PER_GRP];
> > > > > +};
> > > > > +
> > > > > +/**
> > > > > + * struct qcom_msi - PCIe controller based MSI controller information
> > > > > + * @clients: list for tracking clients
> > > > > + * @dev: platform device node
> > > > > + * @nr_hwirqs: total number of hardware IRQs
> > > > > + * @nr_virqs: total number of virqs
> > > > > + * @nr_grps: total number of groups
> > > > > + * @grps: pointer to all groups information
> > > > > + * @bitmap: tracks used/unused MSI
> > > > > + * @mutex: for modifying MSI client list and bitmap
> > > > > + * @inner_domain: parent domain; gen irq related
> > > > > + * @msi_domain: child domain; pcie related
> > > > > + * @msi_db_addr: MSI doorbell address
> > > > > + * @cfg_lock: lock for configuring MSI controller registers
> > > > > + * @pcie_msi_cfg: memory-mapped MSI controller register space
> > > > > + */
> > > > > +struct qcom_msi {
> > > > > + struct list_head clients;
> > > > > + struct device *dev;
> > > > > + int nr_hwirqs;
> > > > > + int nr_virqs;
> > > > > + int nr_grps;
> > > > > + struct qcom_msi_grp *grps;
> > > > > + unsigned long *bitmap;
> > > > > + struct mutex mutex;
> > > > > + struct irq_domain *inner_domain;
> > > > > + struct irq_domain *msi_domain;
> > > > > + phys_addr_t msi_db_addr;
> > > > > + spinlock_t cfg_lock;
> > > > > + void __iomem *pcie_msi_cfg;
> > > > > +};
> > > > > +
> > > > > +/**
> > > > > + * struct qcom_msi_client - structure for each client of MSI controller
> > > > > + * @node: list to track number of MSI clients
> > > > > + * @msi: client specific MSI controller based resource pointer
> > > > > + * @dev: client's dev of pci_dev
> > > > > + * @nr_irqs: number of irqs allocated for client
> > > > > + * @msi_addr: MSI doorbell address
> > > > > + */
> > > > > +struct qcom_msi_client {
> > > > > + struct list_head node;
> > > > > + struct qcom_msi *msi;
> > > > > + struct device *dev;
> > > > > + unsigned int nr_irqs;
> > > > > + phys_addr_t msi_addr;
> > > > > +};
> > > > > +
> > > > > +static void qcom_msi_handler(struct irq_desc *desc)
> > > > > +{
> > > > > + struct irq_chip *chip = irq_desc_get_chip(desc);
> > > > > + struct qcom_msi_grp *msi_grp;
> > > > > + u32 status;
> > > > > + int i;
> > > > > +
> > > > > + chained_irq_enter(chip, desc);
> > > > > +
> > > > > + msi_grp = irq_desc_get_handler_data(desc);
> > > > > + status = readl_relaxed(msi_grp->int_status_reg);
> > > > > + status ^= (msi_grp->mask & status);
> > > > > + writel(status, msi_grp->int_status_reg);
> > > > > +
> > > > > + for (i = 0; status; i++, status >>= 1)
> > > > > + if (status & 0x1)
> > > > > + generic_handle_irq(msi_grp->irqs[i].virq);
> > > > > +
> > > > > + chained_irq_exit(chip, desc);
> > > > > +}
> > > > > +
> > > > > +static void qcom_msi_mask_irq(struct irq_data *data)
> > > > > +{
> > > > > + struct irq_data *parent_data;
> > > > > + struct qcom_msi_irq *msi_irq;
> > > > > + struct qcom_msi_grp *msi_grp;
> > > > > + struct qcom_msi *msi;
> > > > > + unsigned long flags;
> > > > > +
> > > > > + parent_data = data->parent_data;
> > > > > + if (!parent_data)
> > > > > + return;
> > > > > +
> > > > > + msi_irq = irq_data_get_irq_chip_data(parent_data);
> > > > > + msi = msi_irq->client->msi;
> > > > > + msi_grp = msi_irq->grp;
> > > > > +
> > > > > + spin_lock_irqsave(&msi->cfg_lock, flags);
> > > > > + pci_msi_mask_irq(data);
> > > > > + msi_grp->mask |= BIT(msi_irq->grp_index);
> > > > > + writel(msi_grp->mask, msi_grp->int_mask_reg);
> > > > > + spin_unlock_irqrestore(&msi->cfg_lock, flags);
> > > > > +}
> > > > > +
> > > > > +static void qcom_msi_unmask_irq(struct irq_data *data)
> > > > > +{
> > > > > + struct irq_data *parent_data;
> > > > > + struct qcom_msi_irq *msi_irq;
> > > > > + struct qcom_msi_grp *msi_grp;
> > > > > + struct qcom_msi *msi;
> > > > > + unsigned long flags;
> > > > > +
> > > > > + parent_data = data->parent_data;
> > > > > + if (!parent_data)
> > > > > + return;
> > > > > +
> > > > > + msi_irq = irq_data_get_irq_chip_data(parent_data);
> > > > > + msi = msi_irq->client->msi;
> > > > > + msi_grp = msi_irq->grp;
> > > > > +
> > > > > + spin_lock_irqsave(&msi->cfg_lock, flags);
> > > > > + msi_grp->mask &= ~BIT(msi_irq->grp_index);
> > > > > + writel(msi_grp->mask, msi_grp->int_mask_reg);
> > > > > + pci_msi_unmask_irq(data);
> > > > > + spin_unlock_irqrestore(&msi->cfg_lock, flags);
> > > > > +}
> > > > > +
> > > > > +static struct irq_chip qcom_msi_irq_chip = {
> > > > > + .name = "qcom_pci_msi",
> > > > > + .irq_enable = qcom_msi_unmask_irq,
> > > > > + .irq_disable = qcom_msi_mask_irq,
> > > > > + .irq_mask = qcom_msi_mask_irq,
> > > > > + .irq_unmask = qcom_msi_unmask_irq,
> > > > > +};
> > > > > +
> > > > > +static int qcom_msi_domain_prepare(struct irq_domain *domain, struct device *dev,
> > > > > + int nvec, msi_alloc_info_t *arg)
> > > > > +{
> > > > > + struct qcom_msi *msi = domain->parent->host_data;
> > > > > + struct qcom_msi_client *client;
> > > > > +
> > > > > + client = kzalloc(sizeof(*client), GFP_KERNEL);
> > > > > + if (!client)
> > > > > + return -ENOMEM;
> > > > > +
> > > > > + client->msi = msi;
> > > > > + client->dev = dev;
> > > > > + client->msi_addr = msi->msi_db_addr;
> > > > > + mutex_lock(&msi->mutex);
> > > > > + list_add_tail(&client->node, &msi->clients);
> > > > > + mutex_unlock(&msi->mutex);
> > > > > +
> > > > > + /* zero out struct for pcie msi framework */
> > > > > + memset(arg, 0, sizeof(*arg));
> > > > > + return 0;
> > > > > +}
> > > > > +
> > > > > +static struct msi_domain_ops qcom_msi_domain_ops = {
> > > > > + .msi_prepare = qcom_msi_domain_prepare,
> > > > > +};
> > > > > +
> > > > > +static struct msi_domain_info qcom_msi_domain_info = {
> > > > > + .flags = (MSI_FLAG_USE_DEF_DOM_OPS | MSI_FLAG_USE_DEF_CHIP_OPS |
> > > > > + MSI_FLAG_MULTI_PCI_MSI | MSI_FLAG_PCI_MSIX),
> > > > > + .ops = &qcom_msi_domain_ops,
> > > > > + .chip = &qcom_msi_irq_chip,
> > > > > +};
> > > > > +
> > > > > +static int qcom_msi_irq_set_affinity(struct irq_data *data,
> > > > > + const struct cpumask *mask, bool force)
> > > > > +{
> > > > > + struct irq_data *parent_data = irq_get_irq_data(irqd_to_hwirq(data));
> > > > > + int ret = 0;
> > > > > +
> > > > > + if (!parent_data)
> > > > > + return -ENODEV;
> > > > > +
> > > > > + /* set affinity for MSI HW IRQ */
> > > > > + if (parent_data->chip->irq_set_affinity)
> > > > > + ret = parent_data->chip->irq_set_affinity(parent_data, mask, force);
> > > > > +
> > > > > + return ret;
> > > > > +}
> > > > > +
> > > > > +static void qcom_msi_irq_compose_msi_msg(struct irq_data *data, struct msi_msg *msg)
> > > > > +{
> > > > > + struct irq_data *parent_data = irq_get_irq_data(irqd_to_hwirq(data));
> > > > > + struct qcom_msi_irq *msi_irq = irq_data_get_irq_chip_data(data);
> > > > > + struct qcom_msi_client *client = msi_irq->client;
> > > > > +
> > > > > + if (!parent_data)
> > > > > + return;
> > > > > +
> > > > > + msg->address_lo = lower_32_bits(client->msi_addr);
> > > > > + msg->address_hi = upper_32_bits(client->msi_addr);
> > > > > + msg->data = msi_irq->pos;
> > > > > +}
> > > > > +
> > > > > +static struct irq_chip qcom_msi_bottom_irq_chip = {
> > > > > + .name = "qcom_msi",
> > > > > + .irq_set_affinity = qcom_msi_irq_set_affinity,
> > > > > + .irq_compose_msi_msg = qcom_msi_irq_compose_msi_msg,
> > > > > +};
> > > > > +
> > > > > +static int qcom_msi_irq_domain_alloc(struct irq_domain *domain, unsigned int virq,
> > > > > + unsigned int nr_irqs, void *args)
> > > > > +{
> > > > > + struct device *dev = ((msi_alloc_info_t *)args)->desc->dev;
> > > > > + struct qcom_msi_client *tmp, *client = NULL;
> > > > > + struct qcom_msi *msi = domain->host_data;
> > > > > + int i, ret = 0;
> > > > > + int pos;
> > > > > +
> > > > > + mutex_lock(&msi->mutex);
> > > > > + list_for_each_entry(tmp, &msi->clients, node) {
> > > > > + if (tmp->dev == dev) {
> > > > > + client = tmp;
> > > > > + break;
> > > > > + }
> > > > > + }
> > > > > +
> > > > > + if (!client) {
> > > > > + dev_err(msi->dev, "failed to find MSI client dev\n");
> > > > > + ret = -ENODEV;
> > > > > + goto out;
> > > > > + }
> > > > > +
> > > > > + pos = bitmap_find_next_zero_area(msi->bitmap, msi->nr_virqs, 0,
> > > > > + nr_irqs, nr_irqs - 1);
> > > > > + if (pos > msi->nr_virqs) {
> > > > > + ret = -ENOSPC;
> > > > > + goto out;
> > > > > + }
> > > > > +
> > > > > + bitmap_set(msi->bitmap, pos, nr_irqs);
> > > > > + for (i = 0; i < nr_irqs; i++) {
> > > > > + u32 grp = pos / MSI_IRQ_PER_GRP;
> > > > > + u32 index = pos % MSI_IRQ_PER_GRP;
> > > > > + struct qcom_msi_irq *msi_irq = &msi->grps[grp].irqs[index];
> > > > > +
> > > > > + msi_irq->virq = virq + i;
> > > > > + msi_irq->client = client;
> > > > > + irq_domain_set_info(domain, msi_irq->virq,
> > > > > + msi_irq->hwirq,
> > > > > + &qcom_msi_bottom_irq_chip, msi_irq,
> > > > > + handle_simple_irq, NULL, NULL);
> > > > > + client->nr_irqs++;
> > > > > + pos++;
> > > > > + }
> > > > > +out:
> > > > > + mutex_unlock(&msi->mutex);
> > > > > + return ret;
> > > > > +}
> > > > > +
> > > > > +static void qcom_msi_irq_domain_free(struct irq_domain *domain, unsigned int virq,
> > > > > + unsigned int nr_irqs)
> > > > > +{
> > > > > + struct irq_data *data = irq_domain_get_irq_data(domain, virq);
> > > > > + struct qcom_msi_client *client;
> > > > > + struct qcom_msi_irq *msi_irq;
> > > > > + struct qcom_msi *msi;
> > > > > +
> > > > > + if (!data)
> > > > > + return;
> > > > > +
> > > > > + msi_irq = irq_data_get_irq_chip_data(data);
> > > > > + client = msi_irq->client;
> > > > > + msi = client->msi;
> > > > > +
> > > > > + mutex_lock(&msi->mutex);
> > > > > + bitmap_clear(msi->bitmap, msi_irq->pos, nr_irqs);
> > > > > +
> > > > > + client->nr_irqs -= nr_irqs;
> > > > > + if (!client->nr_irqs) {
> > > > > + list_del(&client->node);
> > > > > + kfree(client);
> > > > > + }
> > > > > + mutex_unlock(&msi->mutex);
> > > > > +
> > > > > + irq_domain_free_irqs_parent(domain, virq, nr_irqs);
> > > > > +}
> > > > > +
> > > > > +static const struct irq_domain_ops msi_domain_ops = {
> > > > > + .alloc = qcom_msi_irq_domain_alloc,
> > > > > + .free = qcom_msi_irq_domain_free,
> > > > > +};
> > > > > +
> > > > > +static int qcom_msi_alloc_domains(struct qcom_msi *msi)
> > > > > +{
> > > > > + msi->inner_domain = irq_domain_add_linear(NULL, msi->nr_virqs,
> > > > > + &msi_domain_ops, msi);
> > > > > + if (!msi->inner_domain) {
> > > > > + dev_err(msi->dev, "failed to create IRQ inner domain\n");
> > > > > + return -ENOMEM;
> > > > > + }
> > > > > +
> > > > > + msi->msi_domain = pci_msi_create_irq_domain(of_node_to_fwnode(msi->dev->of_node),
> > > > > + &qcom_msi_domain_info, msi->inner_domain);
> > > > > + if (!msi->msi_domain) {
> > > > > + dev_err(msi->dev, "failed to create MSI domain\n");
> > > > > + irq_domain_remove(msi->inner_domain);
> > > > > + return -ENOMEM;
> > > > > + }
> > > > > +
> > > > > + return 0;
> > > > > +}
> > > > > +
> > > > > +static int qcom_msi_irq_setup(struct qcom_msi *msi)
> > > > > +{
> > > > > + struct qcom_msi_grp *msi_grp;
> > > > > + struct qcom_msi_irq *msi_irq;
> > > > > + int i, index, ret;
> > > > > + unsigned int irq;
> > > > > +
> > > > > + /* setup each MSI group. nr_hwirqs == nr_grps */
> > > > > + for (i = 0; i < msi->nr_hwirqs; i++) {
> > > > > + irq = irq_of_parse_and_map(msi->dev->of_node, i);
> > > > > + if (!irq) {
> > > > > + dev_err(msi->dev,
> > > > > + "MSI: failed to parse/map interrupt\n");
> > > > > + ret = -ENODEV;
> > > > > + goto free_irqs;
> > > > > + }
> > > > > +
> > > > > + msi_grp = &msi->grps[i];
> > > > > + msi_grp->int_en_reg = msi->pcie_msi_cfg +
> > > > > + PCIE_MSI_CTRL_INT_N_EN_OFFS(i);
> > > > > + msi_grp->int_mask_reg = msi->pcie_msi_cfg +
> > > > > + PCIE_MSI_CTRL_INT_N_MASK_OFFS(i);
> > > > > + msi_grp->int_status_reg = msi->pcie_msi_cfg +
> > > > > + PCIE_MSI_CTRL_INT_N_STATUS_OFFS(i);
> > > > > +
> > > > > + for (index = 0; index < MSI_IRQ_PER_GRP; index++) {
> > > > > + msi_irq = &msi_grp->irqs[index];
> > > > > +
> > > > > + msi_irq->grp = msi_grp;
> > > > > + msi_irq->grp_index = index;
> > > > > + msi_irq->pos = (i * MSI_IRQ_PER_GRP) + index;
> > > > > + msi_irq->hwirq = irq;
> > > > > + }
> > > > > +
> > > > > + irq_set_chained_handler_and_data(irq, qcom_msi_handler, msi_grp);
> > > > > + }
> > > > > +
> > > > > + return 0;
> > > > > +
> > > > > +free_irqs:
> > > > > + for (--i; i >= 0; i--) {
> > > > > + irq = msi->grps[i].irqs[0].hwirq;
> > > > > +
> > > > > + irq_set_chained_handler_and_data(irq, NULL, NULL);
> > > > > + irq_dispose_mapping(irq);
> > > > > + }
> > > > > +
> > > > > + return ret;
> > > > > +}
> > > > > +
> > > > > +static void qcom_msi_config(struct irq_domain *domain)
> > > > > +{
> > > > > + struct qcom_msi *msi;
> > > > > + int i;
> > > > > +
> > > > > + msi = domain->parent->host_data;
> > > > > +
> > > > > + /* program termination address */
> > > > > + writel(msi->msi_db_addr, msi->pcie_msi_cfg + PCIE_MSI_CTRL_ADDR_OFFS);
> > > > > + writel(0, msi->pcie_msi_cfg + PCIE_MSI_CTRL_UPPER_ADDR_OFFS);
> > > > > +
> > > > > + /* restore mask and enable all interrupts for each group */
> > > > > + for (i = 0; i < msi->nr_grps; i++) {
> > > > > + struct qcom_msi_grp *msi_grp = &msi->grps[i];
> > > > > +
> > > > > + writel(msi_grp->mask, msi_grp->int_mask_reg);
> > > > > + writel(~0, msi_grp->int_en_reg);
> > > > > + }
> > > > > +}
> > > > > +
> > > > > +static void qcom_msi_deinit(struct qcom_msi *msi)
> > > > > +{
> > > > > + irq_domain_remove(msi->msi_domain);
> > > > > + irq_domain_remove(msi->inner_domain);
> > > > > +}
> > > > > +
> > > > > +static struct qcom_msi *qcom_msi_init(struct device *dev)
> > > > > +{
> > > > > + struct qcom_msi *msi;
> > > > > + u64 addr;
> > > > > + int ret;
> > > > > +
> > > > > + msi = devm_kzalloc(dev, sizeof(*msi), GFP_KERNEL);
> > > > > + if (!msi)
> > > > > + return ERR_PTR(-ENOMEM);
> > > > > +
> > > > > + msi->dev = dev;
> > > > > + mutex_init(&msi->mutex);
> > > > > + spin_lock_init(&msi->cfg_lock);
> > > > > + INIT_LIST_HEAD(&msi->clients);
> > > > > +
> > > > > + msi->msi_db_addr = MSI_DB_ADDR;
> > > > > + msi->nr_hwirqs = of_irq_count(dev->of_node);
> > > > > + if (!msi->nr_hwirqs) {
> > > > > + dev_err(msi->dev, "no hwirqs found\n");
> > > > > + return ERR_PTR(-ENODEV);
> > > > > + }
> > > > > +
> > > > > + if (of_property_read_reg(dev->of_node, 0, &addr, NULL) < 0) {
> > > > > + dev_err(msi->dev, "failed to get reg address\n");
> > > > > + return ERR_PTR(-ENODEV);
> > > > > + }
> > > > > +
> > > > > + dev_dbg(msi->dev, "hwirq:%d pcie_msi_cfg:%llx\n", msi->nr_hwirqs, addr);
> > > > > + msi->pcie_msi_cfg = devm_ioremap(dev, addr + PCIE_MSI_CTRL_BASE, PCIE_MSI_CTRL_SIZE);
> > > > > + if (!msi->pcie_msi_cfg)
> > > > > + return ERR_PTR(-ENOMEM);
> > > > > +
> > > > > + msi->nr_virqs = msi->nr_hwirqs * MSI_IRQ_PER_GRP;
> > > > > + msi->nr_grps = msi->nr_hwirqs;
> > > > > + msi->grps = devm_kcalloc(dev, msi->nr_grps, sizeof(*msi->grps), GFP_KERNEL);
> > > > > + if (!msi->grps)
> > > > > + return ERR_PTR(-ENOMEM);
> > > > > +
> > > > > + msi->bitmap = devm_kcalloc(dev, BITS_TO_LONGS(msi->nr_virqs),
> > > > > + sizeof(*msi->bitmap), GFP_KERNEL);
> > > > > + if (!msi->bitmap)
> > > > > + return ERR_PTR(-ENOMEM);
> > > > > +
> > > > > + ret = qcom_msi_alloc_domains(msi);
> > > > > + if (ret)
> > > > > + return ERR_PTR(ret);
> > > > > +
> > > > > + ret = qcom_msi_irq_setup(msi);
> > > > > + if (ret) {
> > > > > + qcom_msi_deinit(msi);
> > > > > + return ERR_PTR(ret);
> > > > > + }
> > > > > +
> > > > > + qcom_msi_config(msi->msi_domain);
> > > > > + return msi;
> > > > > +}
> > > > > +
> > > > > +static int qcom_pcie_ecam_suspend_noirq(struct device *dev)
> > > > > +{
> > > > > + return pm_runtime_put_sync(dev);
> > > > > +}
> > > > > +
> > > > > +static int qcom_pcie_ecam_resume_noirq(struct device *dev)
> > > > > +{
> > > > > + return pm_runtime_get_sync(dev);
> > > > > +}
> > > > > +
> > > > > +static int qcom_pcie_ecam_probe(struct platform_device *pdev)
> > > > > +{
> > > > > + struct device *dev = &pdev->dev;
> > > > > + struct qcom_msi *msi;
> > > > > + int ret;
> > > > > +
> > > > > + ret = devm_pm_runtime_enable(dev);
> > > > > + if (ret)
> > > > > + return ret;
> > > > > +
> > > > > + ret = pm_runtime_resume_and_get(dev);
> > > > > + if (ret < 0) {
> > > > > + dev_err(dev, "fail to enable pcie controller: %d\n", ret);
> > > > > + return ret;
> > > > > + }
> > > > > +
> > > > > + msi = qcom_msi_init(dev);
> > > > > + if (IS_ERR(msi)) {
> > > > > + pm_runtime_put_sync(dev);
> > > > > + return PTR_ERR(msi);
> > > > > + }
> > > > > +
> > > > > + ret = pci_host_common_probe(pdev);
> > > > > + if (ret) {
> > > > > + dev_err(dev, "pci_host_common_probe() failed:%d\n", ret);
> > > > > + qcom_msi_deinit(msi);
> > > > > + pm_runtime_put_sync(dev);
> > > > > + }
> > > > > +
> > > > > + return ret;
> > > > > +}
> > > > > +
> > > > > +static const struct dev_pm_ops qcom_pcie_ecam_pm_ops = {
> > > > > + NOIRQ_SYSTEM_SLEEP_PM_OPS(qcom_pcie_ecam_suspend_noirq,
> > > > > + qcom_pcie_ecam_resume_noirq)
> > > > > +};
> > > > > +
> > > > > +static const struct pci_ecam_ops qcom_pcie_ecam_ops = {
> > > > > + .pci_ops = {
> > > > > + .map_bus = pci_ecam_map_bus,
> > > > > + .read = pci_generic_config_read,
> > > > > + .write = pci_generic_config_write,
> > > > > + }
> > > > > +};
> > > > > +
> > > > > +static const struct of_device_id qcom_pcie_ecam_of_match[] = {
> > > > > + {
> > > > > + .compatible = "qcom,pcie-ecam-rc",
> > > > > + .data = &qcom_pcie_ecam_ops,
> > > > > + },
> > > > > + { },
> > > > > +};
> > > > > +MODULE_DEVICE_TABLE(of, qcom_pcie_ecam_of_match);
> > > > > +
> > > > > +static struct platform_driver qcom_pcie_ecam_driver = {
> > > > > + .probe = qcom_pcie_ecam_probe,
> > > > > + .driver = {
> > > > > + .name = "qcom-pcie-ecam-rc",
> > > > > + .suppress_bind_attrs = true,
> > > > > + .of_match_table = qcom_pcie_ecam_of_match,
> > > > > + .probe_type = PROBE_PREFER_ASYNCHRONOUS,
> > > > > + .pm = &qcom_pcie_ecam_pm_ops,
> > > > > + },
> > > > > +};
> > > > > +module_platform_driver(qcom_pcie_ecam_driver);
> > > > > +
> > > > > +MODULE_DESCRIPTION("Qualcomm PCIe ECAM root complex driver");
> > > > > +MODULE_LICENSE("GPL");
> > > > > --
> > > > > 2.7.4
> > > > >
> > > >
> >
--
மணிவண்ணன் சதாசிவம்
^ permalink raw reply [flat|nested] 27+ messages in thread
* Re: [RFC PATCH 2/2] PCI: Add Qualcomm PCIe ECAM root complex driver
2024-04-08 18:57 ` Mayank Rana
2024-04-10 6:26 ` Manivannan Sadhasivam
@ 2024-04-10 16:58 ` Rob Herring
2024-04-15 23:30 ` Mayank Rana
1 sibling, 1 reply; 27+ messages in thread
From: Rob Herring @ 2024-04-10 16:58 UTC (permalink / raw)
To: Mayank Rana
Cc: Manivannan Sadhasivam, linux-pci, lpieralisi, kw, bhelgaas,
andersson, krzysztof.kozlowski+dt, conor+dt, devicetree,
linux-arm-msm, quic_ramkri, quic_nkela, quic_shazhuss,
quic_msarkar, quic_nitegupt
On Mon, Apr 08, 2024 at 11:57:58AM -0700, Mayank Rana wrote:
> Hi Mani
>
> On 4/5/2024 9:17 PM, Manivannan Sadhasivam wrote:
> > On Fri, Apr 05, 2024 at 10:41:15AM -0700, Mayank Rana wrote:
> > > Hi Mani
> > >
> > > On 4/4/2024 10:30 PM, Manivannan Sadhasivam wrote:
> > > > On Thu, Apr 04, 2024 at 12:11:24PM -0700, Mayank Rana wrote:
> > > > > On some of Qualcomm platform, firmware configures PCIe controller into
> > > > > ECAM mode allowing static memory allocation for configuration space of
> > > > > supported bus range. Firmware also takes care of bringing up PCIe PHY
> > > > > and performing required operation to bring PCIe link into D0. Firmware
> > > > > also manages system resources (e.g. clocks/regulators/resets/ bus voting).
> > > > > Hence add Qualcomm PCIe ECAM root complex driver which enumerates PCIe
> > > > > root complex and connected PCIe devices. Firmware won't be enumerating
> > > > > or powering up PCIe root complex until this driver invokes power domain
> > > > > based notification to bring PCIe link into D0/D3cold mode.
> > > > >
> > > >
> > > > Is this an in-house PCIe IP of Qualcomm or the same DWC IP that is used in other
> > > > SoCs?
> > > >
> > > > - Mani
> > > Driver is validated on SA8775p-ride platform using PCIe DWC IP for
> > > now.Although this driver doesn't need to know used PCIe controller and PHY
> > > IP as well programming sequence as that would be taken care by firmware.
> > >
> >
> > Ok, so it is the same IP but firmware is controlling the resources now. This
> > information should be present in the commit message.
> >
> > Btw, there is an existing generic ECAM host controller driver:
> > drivers/pci/controller/pci-host-generic.c
> >
> > This driver is already being used by several vendors as well. So we should try
> > to extend it for Qcom usecase also.
I would take it a bit further and say if you need your own driver, then
just use the default QCom driver. Perhaps extend it to support ECAM.
Better yet, copy your firmware setup and always configure the QCom h/w
to use ECAM.
If you want to extend the generic driver, that's fine, but we don't need
a 3rd.
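[Editorial note: the "ECAM compliant mode" both drivers rely on maps configuration space as one flat memory window indexed by bus/device/function/register. A minimal userspace sketch of that standard offset computation follows — illustrative only; in the kernel this job is done by pci_ecam_map_bus():]

```c
#include <assert.h>
#include <stdint.h>

/*
 * Standard ECAM offset layout: bus[27:20], device[19:15],
 * function[14:12], register[11:0].
 */
uint64_t ecam_offset(uint8_t bus, uint8_t dev, uint8_t fn, uint16_t reg)
{
	return ((uint64_t)bus << 20) | ((uint64_t)(dev & 0x1f) << 15) |
	       ((uint64_t)(fn & 0x7) << 12) | (reg & 0xfff);
}
```

A full 256-bus window is therefore 256 << 20 bytes = 256 MiB, which is why the firmware's static iATU configuration has to cover the entire supported bus range up front.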
> I did review pci-host-generic.c driver for usage. although there are more
> functionality needed for use case purpose as below:
> 1. MSI functionality
Pretty sure the generic driver already supports that.
> 2. Suspend/Resume
Others might want that to work as well.
> 3. Wakeup Functionality (not part of current change, but would be added
> later)
Others might want that to work as well.
> 4. Here this driver provides way to virtualized PCIe controller. So VMs only
> talk to a generic ECAM whereas HW is only directed accessed by service VM.
That's the existing driver. If it doesn't work for a VM, fix the VM.
> 5. Adding more Auto based safety use cases related implementation
Now that's just hand waving.
> Hence keeping pci-host-generic.c as generic driver where above functionality
> may not be needed.
Duplicating things to avoid touching existing drivers is not how kernel
development works.
Rob
^ permalink raw reply [flat|nested] 27+ messages in thread
* Re: [RFC PATCH 2/2] PCI: Add Qualcomm PCIe ECAM root complex driver
2024-04-10 16:58 ` Rob Herring
@ 2024-04-15 23:30 ` Mayank Rana
2024-05-31 22:47 ` Mayank Rana
0 siblings, 1 reply; 27+ messages in thread
From: Mayank Rana @ 2024-04-15 23:30 UTC (permalink / raw)
To: Rob Herring
Cc: Manivannan Sadhasivam, linux-pci, lpieralisi, kw, bhelgaas,
andersson, krzysztof.kozlowski+dt, conor+dt, devicetree,
linux-arm-msm, quic_ramkri, quic_nkela, quic_shazhuss,
quic_msarkar, quic_nitegupt
Hi Rob
Excuse my late response on this (I was OOO).
On 4/10/2024 9:58 AM, Rob Herring wrote:
> On Mon, Apr 08, 2024 at 11:57:58AM -0700, Mayank Rana wrote:
>> Hi Mani
>>
>> On 4/5/2024 9:17 PM, Manivannan Sadhasivam wrote:
>>> On Fri, Apr 05, 2024 at 10:41:15AM -0700, Mayank Rana wrote:
>>>> Hi Mani
>>>>
>>>> On 4/4/2024 10:30 PM, Manivannan Sadhasivam wrote:
>>>>> On Thu, Apr 04, 2024 at 12:11:24PM -0700, Mayank Rana wrote:
>>>>>> On some of Qualcomm platform, firmware configures PCIe controller into
>>>>>> ECAM mode allowing static memory allocation for configuration space of
>>>>>> supported bus range. Firmware also takes care of bringing up PCIe PHY
>>>>>> and performing required operation to bring PCIe link into D0. Firmware
>>>>>> also manages system resources (e.g. clocks/regulators/resets/ bus voting).
>>>>>> Hence add Qualcomm PCIe ECAM root complex driver which enumerates PCIe
>>>>>> root complex and connected PCIe devices. Firmware won't be enumerating
>>>>>> or powering up PCIe root complex until this driver invokes power domain
>>>>>> based notification to bring PCIe link into D0/D3cold mode.
>>>>>>
>>>>>
>>>>> Is this an in-house PCIe IP of Qualcomm or the same DWC IP that is used in other
>>>>> SoCs?
>>>>>
>>>>> - Mani
>>>> Driver is validated on SA8775p-ride platform using PCIe DWC IP for
>>>> now.Although this driver doesn't need to know used PCIe controller and PHY
>>>> IP as well programming sequence as that would be taken care by firmware.
>>>>
>>>
>>> Ok, so it is the same IP but firmware is controlling the resources now. This
>>> information should be present in the commit message.
>>>
>>> Btw, there is an existing generic ECAM host controller driver:
>>> drivers/pci/controller/pci-host-generic.c
>>>
>>> This driver is already being used by several vendors as well. So we should try
>>> to extend it for Qcom usecase also.
>
> I would take it a bit further and say if you need your own driver, then
> just use the default QCom driver. Perhaps extend it to support ECAM.
> Better yet, copy your firmware setup and always configure the QCom h/w
> to use ECAM.
Good suggestion, although here we have two sets of requirements:
1. ECAM configuration
2. Managing PCIe controller and PHY resources and programming from
firmware as well
Hence it is not feasible to use the default QCOM driver.
> If you want to extend the generic driver, that's fine, but we don't need
> a 3rd.
I did consider this before coming up with a new driver, although I felt
that the below-mentioned functionality may not be generic enough to be
part of the pci-host-generic.c driver.
>> I did review pci-host-generic.c driver for usage. although there are more
>> functionality needed for use case purpose as below:
>> 1. MSI functionality
>
> Pretty sure the generic driver already supports that.
I don't find any MSI support in the pci-host-generic.c driver.
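[Editorial note: the MSI scheme this RFC adds on top of ECAM hands out vectors in groups of 32, one chained hwirq per group; the chained handler drops masked status bits and dispatches the rest. A simplified userspace mock of that per-group logic — names are illustrative, not the patch code:]

```c
#include <assert.h>
#include <stdint.h>

#define MSI_IRQ_PER_GRP 32

/*
 * Mirror of the chained handler's loop: drop status bits that are
 * masked, then dispatch each remaining set bit as MSI vector
 * grp * 32 + bit.  Returns the number of vectors written to out[].
 */
int msi_grp_dispatch(uint32_t status, uint32_t mask, int grp, int *out)
{
	int i, n = 0;

	status ^= (mask & status);	/* same effect as status &= ~mask */
	for (i = 0; status; i++, status >>= 1)
		if (status & 0x1)
			out[n++] = grp * MSI_IRQ_PER_GRP + i;
	return n;
}
```

For example, a status of 0xb with bit 1 masked in group 1 dispatches vectors 32 and 35 only.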
>> 2. Suspend/Resume
>
> Others might want that to work as well.
Other firmware won't have a way to handle the D3cold and D0 transitions
needed here to support suspend/resume, as I don't find any interface in
the pci-host-generic.c driver to notify firmware. Here we have a way to
talk to firmware through GenPD-based power domains.
>> 3. Wakeup Functionality (not part of current change, but would be added
>> later)
>
> Others might want that to work as well.
Possible, if suspend/resume support is available or used.
>> 4. Here this driver provides way to virtualized PCIe controller. So VMs only
>> talk to a generic ECAM whereas HW is only directed accessed by service VM.
>
> That's the existing driver. If it doesn't work for a VM, fix the VM.
Correct.
>> 5. Adding more Auto based safety use cases related implementation
>
> Now that's just hand waving.
Here I am trying to describe the new set of changes planned to be added
as part of the required functionality.
>> Hence keeping pci-host-generic.c as generic driver where above functionality
>> may not be needed.
>
> Duplicating things to avoid touching existing drivers is not how kernel
> development works.
I shall try your suggestion and see how it looks in terms of code
changes. Perhaps then we can have more clarity on adding more
functionality into the generic driver versus keeping a separate driver.
> Rob
Regards,
Mayank
^ permalink raw reply [flat|nested] 27+ messages in thread
* Re: [RFC PATCH 1/2] dt-bindings: pcie: Document QCOM PCIE ECAM compatible root complex
2024-04-09 6:21 ` Krzysztof Kozlowski
@ 2024-04-18 18:56 ` Mayank Rana
2024-04-18 20:53 ` Krzysztof Kozlowski
0 siblings, 1 reply; 27+ messages in thread
From: Mayank Rana @ 2024-04-18 18:56 UTC (permalink / raw)
To: Krzysztof Kozlowski, linux-pci, lpieralisi, kw, robh, bhelgaas,
andersson, manivannan.sadhasivam, krzysztof.kozlowski+dt,
conor+dt, devicetree
Cc: linux-arm-msm, quic_ramkri, quic_nkela, quic_shazhuss,
quic_msarkar, quic_nitegupt
On 4/8/2024 11:21 PM, Krzysztof Kozlowski wrote:
> On 08/04/2024 21:09, Mayank Rana wrote:
>>>> + Firmware configures PCIe controller in RC mode with static iATU window mappings
>>>> + of configuration space for entire supported bus range in ECAM compatible mode.
>>>> +
>>>> +maintainers:
>>>> + - Mayank Rana <quic_mrana@quicinc.com>
>>>> +
>>>> +allOf:
>>>> + - $ref: /schemas/pci/pci-bus.yaml#
>>>> + - $ref: /schemas/power-domain/power-domain-consumer.yaml
>>>> +
>>>> +properties:
>>>> + compatible:
>>>> + const: qcom,pcie-ecam-rc
>>>
>>> No, this must have SoC specific compatibles.
>> This driver is proposed to work with any PCIe controller supported ECAM
>> functionality on Qualcomm platform
>> where firmware running on other VM/processor is controlling PCIe PHY and
>> controller for PCIe link up functionality.
>> Do you still suggest to have SoC specific compatibles here ?
>
> What does the writing-bindings document say? Why this is different than
> all other bindings?
Thank you for all your review comments and suggestions.
If it is a must to have the SoC name, then I am not sure how the
pci-host-generic.c driver gets away with a non-SoC prefix for the
standard ECAM driver. I am saying this is a QCOM vendor-specific
generic ECAM driver. That said, it seems I would now be updating the
pci-host-generic.c driver to add the generic functionality, as Rob
suggested in his review comments. With that, I see two possible
options: continue using the default generic compatible
"pcie-host-ecam-generic", or use a new one,
"qcom,pcie-host-ecam-generic". Will this work?
>>>> +
>>>> + reg:
>>>> + minItems: 1
>>>
>>> maxItems instead
>>>
>>>> + description: ECAM address space starting from root port till supported bus range
>>>> +
>>>> + interrupts:
>>>> + minItems: 1
>>>> + maxItems: 8
>>>
>>> This is way too unspecific.
>> will review and update.
>>>> +
>>>> + ranges:
>>>> + minItems: 2
>>>> + maxItems: 3
>>>
>>> Why variable?
>> It depends on how ECAM configured to support 32-bit and 64-bit based
>> prefetch address space.
>> So there are different combination of prefetch (32-bit or 64-bit or
>> both) and non-prefetch (32-bit), and IO address space available. hence
>> kept it as variable with based on required use case and address space
>> availability.
>
> Really? So same device has it configured once for 32 once for 64-bit
> address space? Randomly?
No, as the binding does not target any specific SoC. It depends on the
memory map of the particular SoC and how PCIe address space is
available; based on that, this would change per SoC variant.
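[Editorial note: the 32-bit/64-bit and prefetchable distinctions under discussion are encoded in the first (phys.hi) cell of each ranges entry by the standard PCI devicetree binding, which is what lets a schema constrain them per SoC. An illustrative decoder of that cell — not code from the patch:]

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/*
 * Decode the phys.hi cell of a PCI "ranges" entry.  Per the
 * IEEE 1275 PCI bus binding: bit 30 = prefetchable, bits 25:24 = space.
 */
const char *pci_range_space(uint32_t phys_hi)
{
	int prefetch = !!(phys_hi & 0x40000000);

	switch ((phys_hi >> 24) & 0x3) {
	case 0x1:
		return "io";
	case 0x2:
		return prefetch ? "mem32-prefetchable" : "mem32";
	case 0x3:
		return prefetch ? "mem64-prefetchable" : "mem64";
	default:
		return "config";
	}
}
```

For example, 0x82000000 decodes as 32-bit non-prefetchable memory and 0x43000000 as 64-bit prefetchable memory — exactly the per-SoC variation behind the minItems/maxItems question.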
> Best regards,
> Krzysztof
>
^ permalink raw reply [flat|nested] 27+ messages in thread
* Re: [RFC PATCH 1/2] dt-bindings: pcie: Document QCOM PCIE ECAM compatible root complex
2024-04-18 18:56 ` Mayank Rana
@ 2024-04-18 20:53 ` Krzysztof Kozlowski
0 siblings, 0 replies; 27+ messages in thread
From: Krzysztof Kozlowski @ 2024-04-18 20:53 UTC (permalink / raw)
To: Mayank Rana, linux-pci, lpieralisi, kw, robh, bhelgaas, andersson,
manivannan.sadhasivam, krzysztof.kozlowski+dt, conor+dt,
devicetree
Cc: linux-arm-msm, quic_ramkri, quic_nkela, quic_shazhuss,
quic_msarkar, quic_nitegupt
On 18/04/2024 20:56, Mayank Rana wrote:
>
>
> On 4/8/2024 11:21 PM, Krzysztof Kozlowski wrote:
>> On 08/04/2024 21:09, Mayank Rana wrote:
>>>>> + Firmware configures PCIe controller in RC mode with static iATU window mappings
>>>>> + of configuration space for entire supported bus range in ECAM compatible mode.
>>>>> +
>>>>> +maintainers:
>>>>> + - Mayank Rana <quic_mrana@quicinc.com>
>>>>> +
>>>>> +allOf:
>>>>> + - $ref: /schemas/pci/pci-bus.yaml#
>>>>> + - $ref: /schemas/power-domain/power-domain-consumer.yaml
>>>>> +
>>>>> +properties:
>>>>> + compatible:
>>>>> + const: qcom,pcie-ecam-rc
>>>>
>>>> No, this must have SoC specific compatibles.
>>> This driver is proposed to work with any PCIe controller supported ECAM
>>> functionality on Qualcomm platform
>>> where firmware running on other VM/processor is controlling PCIe PHY and
>>> controller for PCIe link up functionality.
>>> Do you still suggest to have SoC specific compatibles here ?
>>
>> What does the writing-bindings document say? Why this is different than
>> all other bindings?
> Thank you for all your review comment and suggestions.
>
> If it is must to have SOC name, then I am not sure how
> pci-host-generic.c driver having non SOC prefix for standard ECAM
> driver. I am here saying this is QCOM vendor specific generic ECAM
> driver. saying that it seems that I would be updating now
> pci-host-generic.c driver to add generic functionality as Rob suggested
I don't see any problem here. I am talking about bindings, not the
driver. You can also have a fallback, so how is it different from the
existing code?
> part of review comment. With
> that I am seeing possible options as i.e. continue using default generic
> compatible as "pcie-host-ecam-generic" OR use new as
> "qcom,pcie-host-ecam-generic". will this work ?>>>> +
Compatibles and bindings focus on the hardware, so just write them
describing the hardware. You keep asking in the context of the driver,
but I would say it does not matter. Is this a generic hardware/firmware
implementation or not?
>>>>> + reg:
>>>>> + minItems: 1
>>>>
>>>> maxItems instead
>>>>
>>>>> + description: ECAM address space starting from root port till supported bus range
>>>>> +
>>>>> + interrupts:
>>>>> + minItems: 1
>>>>> + maxItems: 8
>>>>
>>>> This is way too unspecific.
>>> will review and update.
>>>>> +
>>>>> + ranges:
>>>>> + minItems: 2
>>>>> + maxItems: 3
>>>>
>>>> Why variable?
>>> It depends on how ECAM configured to support 32-bit and 64-bit based
>>> prefetch address space.
>>> So there are different combination of prefetch (32-bit or 64-bit or
>>> both) and non-prefetch (32-bit), and IO address space available. hence
>>> kept it as variable with based on required use case and address space
>>> availability.
>>
>> Really? So same device has it configured once for 32 once for 64-bit
>> address space? Randomly?
> no. as binding is not saying for any specific SOC. Depends on memory map
> on particular SOC, how PCIe address space available based on that this
So specific to the SoC, so this is not variable.
> would change for particular SOC variant.
So this is not variable and you did not provide sufficient
argumentation. You basically did not provide any argument, just
disagreed with me. Bindings must be specific and all fields should be
constrained, when reasonable.
Best regards,
Krzysztof
^ permalink raw reply [flat|nested] 27+ messages in thread
* Re: [RFC PATCH 2/2] PCI: Add Qualcomm PCIe ECAM root complex driver
2024-04-15 23:30 ` Mayank Rana
@ 2024-05-31 22:47 ` Mayank Rana
2024-06-06 2:39 ` Manivannan Sadhasivam
0 siblings, 1 reply; 27+ messages in thread
From: Mayank Rana @ 2024-05-31 22:47 UTC (permalink / raw)
To: Rob Herring
Cc: Manivannan Sadhasivam, linux-pci, lpieralisi, kw, bhelgaas,
andersson, krzysztof.kozlowski+dt, conor+dt, devicetree,
linux-arm-msm, quic_ramkri, quic_nkela, quic_shazhuss,
quic_msarkar, quic_nitegupt
Hi Rob / Mani
On 4/15/2024 4:30 PM, Mayank Rana wrote:
> Hi Rob
>
> Excuse my late response on this (I was OOO).
> On 4/10/2024 9:58 AM, Rob Herring wrote:
>> On Mon, Apr 08, 2024 at 11:57:58AM -0700, Mayank Rana wrote:
>>> Hi Mani
>>>
>>> On 4/5/2024 9:17 PM, Manivannan Sadhasivam wrote:
>>>> On Fri, Apr 05, 2024 at 10:41:15AM -0700, Mayank Rana wrote:
>>>>> Hi Mani
>>>>>
>>>>> On 4/4/2024 10:30 PM, Manivannan Sadhasivam wrote:
>>>>>> On Thu, Apr 04, 2024 at 12:11:24PM -0700, Mayank Rana wrote:
>>>>>>> On some Qualcomm platforms, firmware configures the PCIe controller
>>>>>>> into ECAM mode, allowing static memory allocation for the
>>>>>>> configuration space of the supported bus range. Firmware also takes
>>>>>>> care of bringing up the PCIe PHY and performing the operations
>>>>>>> required to bring the PCIe link into D0. Firmware also manages
>>>>>>> system resources (e.g. clocks, regulators, resets, bus voting).
>>>>>>> Hence add a Qualcomm PCIe ECAM root complex driver which enumerates
>>>>>>> the PCIe root complex and connected PCIe devices. Firmware won't
>>>>>>> enumerate or power up the PCIe root complex until this driver
>>>>>>> invokes a power-domain based notification to bring the PCIe link
>>>>>>> into D0/D3cold mode.
>>>>>>>
>>>>>>
>>>>>> Is this an in-house PCIe IP of Qualcomm or the same DWC IP that is
>>>>>> used in other
>>>>>> SoCs?
>>>>>>
>>>>>> - Mani
>>>>> The driver is validated on the SA8775p-ride platform using the PCIe
>>>>> DWC IP for now. Although, this driver doesn't need to know which PCIe
>>>>> controller and PHY IP are used, or their programming sequence, as that
>>>>> is taken care of by firmware.
>>>>>
>>>>
>>>> Ok, so it is the same IP but firmware is controlling the resources
>>>> now. This
>>>> information should be present in the commit message.
>>>>
>>>> Btw, there is an existing generic ECAM host controller driver:
>>>> drivers/pci/controller/pci-host-generic.c
>>>>
>>>> This driver is already being used by several vendors as well. So we
>>>> should try
>>>> to extend it for Qcom usecase also.
>>
>> I would take it a bit further and say if you need your own driver, then
>> just use the default QCom driver. Perhaps extend it to support ECAM.
>> Better yet, copy your firmware setup and always configure the QCom h/w
>> to use ECAM.
> Good suggestion. Although here we have two sets of requirements:
> 1. ECAM configuration
> 2. Managing PCIe controller and PHY resources and programming from
> firmware as well
> Hence it is not feasible to use the default QCOM driver.
>> If you want to extend the generic driver, that's fine, but we don't need
>> a 3rd.
> I did consider this before coming up with a new driver, although I felt
> that the functionality mentioned below may not be generic enough to be
> part of the pci-host-generic.c driver.
>>> I did review the pci-host-generic.c driver for usage, although more
>>> functionality is needed for our use case, as below:
>>> 1. MSI functionality
>>
>> Pretty sure the generic driver already supports that.
> I don't find any MSI support with pci-host-generic.c driver.
>>> 2. Suspend/Resume
>>
>> Others might want that to work as well.
> Other platforms' firmware won't have a way to handle the D3cold and D0
> transitions needed here to support suspend/resume, and I don't find any
> interface for the pci-host-generic.c driver to notify firmware. Here we
> have a way to talk to firmware through GenPD-based power domain usage.
>
>>> 3. Wakeup Functionality (not part of current change, but would be added
>>> later)
>>
>> Others might want that to work as well.
> possible if suspend/resume support is available or used.
>>> 4. This driver provides a way to virtualize the PCIe controller, so
>>> VMs only talk to a generic ECAM whereas the HW is directly accessed
>>> only by the service VM.
>>
>> That's the existing driver. If it doesn't work for a VM, fix the VM.
> Correct.
>>> 5. Adding more automotive safety use-case related implementation
>>
>> Now that's just hand waving.
> Here I am trying to outline the new set of changes planned to be added
> as part of the required functionality.
>
>>> Hence keeping pci-host-generic.c as a generic driver, where the above
>>> functionality may not be needed.
>>
>> Duplicating things to avoid touching existing drivers is not how kernel
>> development works.
> I shall try your suggestion and see how it looks in terms of code
> changes. Perhaps then we will have more clarity on adding the extra
> functionality to the generic driver versus keeping a separate one.
I just learned that a DWC-specific PCIe ECAM driver and MSI controller
driver were previously attempted:
https://lore.kernel.org/linux-pci/20170821192907.8695-1-ard.biesheuvel@linaro.org/
There were a few concerns at that time. Due to that, the DWC-specific
MSI controller driver was dropped, and pci-host-generic.c was instead
updated with the dwc/snps-specific ECAM operations.
In the current discussion, it seems we are converging on an identical
approach here.
At least on the Qualcomm SA8775p platform, I don't have any other way to
support MSI functionality; extended SPI, ITS/LPI based MSI, and GICv2m
are all unsupported.
So I don't see any approach other than an MSI implementation within the
pci-host-generic.c driver for the dwc/snps based MSI controller.
Do you have any suggestion on this?
Regards,
Mayank
^ permalink raw reply [flat|nested] 27+ messages in thread
* Re: [RFC PATCH 2/2] PCI: Add Qualcomm PCIe ECAM root complex driver
2024-05-31 22:47 ` Mayank Rana
@ 2024-06-06 2:39 ` Manivannan Sadhasivam
2024-06-10 17:17 ` Mayank Rana
0 siblings, 1 reply; 27+ messages in thread
From: Manivannan Sadhasivam @ 2024-06-06 2:39 UTC (permalink / raw)
To: Mayank Rana
Cc: Rob Herring, linux-pci, lpieralisi, kw, bhelgaas, andersson,
krzysztof.kozlowski+dt, conor+dt, devicetree, linux-arm-msm,
quic_ramkri, quic_nkela, quic_shazhuss, quic_msarkar,
quic_nitegupt
On Fri, May 31, 2024 at 03:47:24PM -0700, Mayank Rana wrote:
> Hi Rob / Mani
>
> On 4/15/2024 4:30 PM, Mayank Rana wrote:
> > Hi Rob
> >
> > Excuse me for late response on this (was OOO).
> > On 4/10/2024 9:58 AM, Rob Herring wrote:
> > > On Mon, Apr 08, 2024 at 11:57:58AM -0700, Mayank Rana wrote:
> > > > Hi Mani
> > > >
> > > > On 4/5/2024 9:17 PM, Manivannan Sadhasivam wrote:
> > > > > On Fri, Apr 05, 2024 at 10:41:15AM -0700, Mayank Rana wrote:
> > > > > > Hi Mani
> > > > > >
> > > > > > On 4/4/2024 10:30 PM, Manivannan Sadhasivam wrote:
> > > > > > > On Thu, Apr 04, 2024 at 12:11:24PM -0700, Mayank Rana wrote:
> > > > > > > > On some of Qualcomm platform, firmware
> > > > > > > > configures PCIe controller into
> > > > > > > > ECAM mode allowing static memory allocation for
> > > > > > > > configuration space of
> > > > > > > > supported bus range. Firmware also takes care of
> > > > > > > > bringing up PCIe PHY
> > > > > > > > and performing required operation to bring PCIe
> > > > > > > > link into D0. Firmware
> > > > > > > > also manages system resources (e.g.
> > > > > > > > clocks/regulators/resets/ bus voting).
> > > > > > > > Hence add Qualcomm PCIe ECAM root complex driver
> > > > > > > > which enumerates PCIe
> > > > > > > > root complex and connected PCIe devices.
> > > > > > > > Firmware won't be enumerating
> > > > > > > > or powering up PCIe root complex until this
> > > > > > > > driver invokes power domain
> > > > > > > > based notification to bring PCIe link into D0/D3cold mode.
> > > > > > > >
> > > > > > >
> > > > > > > Is this an in-house PCIe IP of Qualcomm or the same
> > > > > > > DWC IP that is used in other
> > > > > > > SoCs?
> > > > > > >
> > > > > > > - Mani
> > > > > > Driver is validated on SA8775p-ride platform using PCIe DWC IP for
> > > > > > now. Although this driver doesn't need to know the used PCIe
> > > > > > controller and PHY
> > > > > > IP as well programming sequence as that would be taken
> > > > > > care by firmware.
> > > > > >
> > > > >
> > > > > Ok, so it is the same IP but firmware is controlling the
> > > > > resources now. This
> > > > > information should be present in the commit message.
> > > > >
> > > > > Btw, there is an existing generic ECAM host controller driver:
> > > > > drivers/pci/controller/pci-host-generic.c
> > > > >
> > > > > This driver is already being used by several vendors as
> > > > > well. So we should try
> > > > > to extend it for Qcom usecase also.
> > >
> > > I would take it a bit further and say if you need your own driver, then
> > > just use the default QCom driver. Perhaps extend it to support ECAM.
> > > Better yet, copy your firmware setup and always configure the QCom h/w
> > > to use ECAM.
> > Good suggestion. Although here we are having 2 set of requirements:
> > 1. ECAM configuration
> > 2. Managing PCIe controller and PHY resources and programming from
> > firmware as well
> > Hence it is not feasible to use default QCOM driver.
> > > If you want to extend the generic driver, that's fine, but we don't need
> > > a 3rd.
> > I did consider this part before coming up with new driver. Although I
> > felt that
> > below mentioned functionality may not look more generic to be part of
> > pci-host-generic.c driver.
> > > > I did review pci-host-generic.c driver for usage. although there
> > > > are more
> > > > functionality needed for use case purpose as below:
> > > > 1. MSI functionality
> > >
> > > Pretty sure the generic driver already supports that.
> > I don't find any MSI support with pci-host-generic.c driver.
> > > > 2. Suspend/Resume
> > >
> > > Others might want that to work as well.
> > Others firmware won't have way to handle D3cold and D0 functionality
> > handling as
> > needed here for supporting suspend/resume as I don't find any interface
> > for pci-host-generic.c driver to notify firmware. here we are having way
> > to talk to firmware using GenPD based power domain usage to communicate
> > with firmware.
> >
> > > > 3. Wakeup Functionality (not part of current change, but would be added
> > > > later)
> > >
> > > Others might want that to work as well.
> > possible if suspend/resume support is available or used.
> > > > 4. Here this driver provides way to virtualized PCIe controller.
> > > > So VMs only
> > > > talk to a generic ECAM whereas HW is only directed accessed by
> > > > service VM.
> > >
> > > That's the existing driver. If it doesn't work for a VM, fix the VM.
> > Correct.
> > > > 5. Adding more Auto based safety use cases related implementation
> > >
> > > Now that's just hand waving.
> > Here I am trying to provide new set of changes plan to be added as part
> > of required functionality.
> >
> > > > Hence keeping pci-host-generic.c as generic driver where above
> > > > functionality
> > > > may not be needed.
> > >
> > > Duplicating things to avoid touching existing drivers is not how kernel
> > > development works.
> > I shall try your suggestion and see how it looks in terms of code
> > changes. Perhaps then we can have more clarity in terms of adding more
> > functionality into generic or having separate driver.
> I just learnt that previously dwc related PCIe ECAM driver and MSI
> controller driver tried out as:
>
> https://lore.kernel.org/linux-pci/20170821192907.8695-1-ard.biesheuvel@linaro.org/
>
> Although there were few concerns at that time. Due to that having dwc
> specific MSI functionality based driver was dropped, and pci-host-generic.c
> driver is being updated using with dwc/snps specific ECAM operation.
>
> In current discussion, it seems that we are discussing to have identical
> approach here.
>
> At least on the Qualcomm SA8775p platform, I don't have any other way to support
> MSI functionality i.e. extended SPI or ITS/LPI based MSI or using GICv2m
> functionality are not supported.
>
> I don't see any other approach other than MSI based implementation within
> pci-host-generic.c driver for dwc/snps based MSI controller.
>
> Do you have any suggestion on this ?
>
Since this ECAM driver is going to be used in newer Qcom SoCs, why can't you use
GICv3 for MSI handling?
- Mani
--
மணிவண்ணன் சதாசிவம்
^ permalink raw reply [flat|nested] 27+ messages in thread
* Re: [RFC PATCH 2/2] PCI: Add Qualcomm PCIe ECAM root complex driver
2024-06-06 2:39 ` Manivannan Sadhasivam
@ 2024-06-10 17:17 ` Mayank Rana
2024-06-12 6:14 ` Manivannan Sadhasivam
0 siblings, 1 reply; 27+ messages in thread
From: Mayank Rana @ 2024-06-10 17:17 UTC (permalink / raw)
To: Manivannan Sadhasivam
Cc: Rob Herring, linux-pci, lpieralisi, kw, bhelgaas, andersson,
krzysztof.kozlowski+dt, conor+dt, devicetree, linux-arm-msm,
quic_ramkri, quic_nkela, quic_shazhuss, quic_msarkar,
quic_nitegupt
On 6/5/2024 7:39 PM, Manivannan Sadhasivam wrote:
> On Fri, May 31, 2024 at 03:47:24PM -0700, Mayank Rana wrote:
>> Hi Rob / Mani
>>
>> On 4/15/2024 4:30 PM, Mayank Rana wrote:
>>> Hi Rob
>>>
>>> Excuse me for late response on this (was OOO).
>>> On 4/10/2024 9:58 AM, Rob Herring wrote:
>>>> On Mon, Apr 08, 2024 at 11:57:58AM -0700, Mayank Rana wrote:
>>>>> Hi Mani
>>>>>
>>>>> On 4/5/2024 9:17 PM, Manivannan Sadhasivam wrote:
>>>>>> On Fri, Apr 05, 2024 at 10:41:15AM -0700, Mayank Rana wrote:
>>>>>>> Hi Mani
>>>>>>>
>>>>>>> On 4/4/2024 10:30 PM, Manivannan Sadhasivam wrote:
>>>>>>>> On Thu, Apr 04, 2024 at 12:11:24PM -0700, Mayank Rana wrote:
>>>>>>>>> On some of Qualcomm platform, firmware
>>>>>>>>> configures PCIe controller into
>>>>>>>>> ECAM mode allowing static memory allocation for
>>>>>>>>> configuration space of
>>>>>>>>> supported bus range. Firmware also takes care of
>>>>>>>>> bringing up PCIe PHY
>>>>>>>>> and performing required operation to bring PCIe
>>>>>>>>> link into D0. Firmware
>>>>>>>>> also manages system resources (e.g.
>>>>>>>>> clocks/regulators/resets/ bus voting).
>>>>>>>>> Hence add Qualcomm PCIe ECAM root complex driver
>>>>>>>>> which enumerates PCIe
>>>>>>>>> root complex and connected PCIe devices.
>>>>>>>>> Firmware won't be enumerating
>>>>>>>>> or powering up PCIe root complex until this
>>>>>>>>> driver invokes power domain
>>>>>>>>> based notification to bring PCIe link into D0/D3cold mode.
>>>>>>>>>
>>>>>>>>
>>>>>>>> Is this an in-house PCIe IP of Qualcomm or the same
>>>>>>>> DWC IP that is used in other
>>>>>>>> SoCs?
>>>>>>>>
>>>>>>>> - Mani
>>>>>>> Driver is validated on SA8775p-ride platform using PCIe DWC IP for
>>>>>>> now. Although this driver doesn't need to know the used PCIe
>>>>>>> controller and PHY
>>>>>>> IP as well programming sequence as that would be taken
>>>>>>> care by firmware.
>>>>>>>
>>>>>>
>>>>>> Ok, so it is the same IP but firmware is controlling the
>>>>>> resources now. This
>>>>>> information should be present in the commit message.
>>>>>>
>>>>>> Btw, there is an existing generic ECAM host controller driver:
>>>>>> drivers/pci/controller/pci-host-generic.c
>>>>>>
>>>>>> This driver is already being used by several vendors as
>>>>>> well. So we should try
>>>>>> to extend it for Qcom usecase also.
>>>>
>>>> I would take it a bit further and say if you need your own driver, then
>>>> just use the default QCom driver. Perhaps extend it to support ECAM.
>>>> Better yet, copy your firmware setup and always configure the QCom h/w
>>>> to use ECAM.
>>> Good suggestion. Although here we are having 2 set of requirements:
>>> 1. ECAM configuration
>>> 2. Managing PCIe controller and PHY resources and programming from
>>> firmware as well
>>> Hence it is not feasible to use default QCOM driver.
>>>> If you want to extend the generic driver, that's fine, but we don't need
>>>> a 3rd.
>>> I did consider this part before coming up with new driver. Although I
>>> felt that
>>> below mentioned functionality may not look more generic to be part of
>>> pci-host-generic.c driver.
>>>>> I did review pci-host-generic.c driver for usage. although there
>>>>> are more
>>>>> functionality needed for use case purpose as below:
>>>>> 1. MSI functionality
>>>>
>>>> Pretty sure the generic driver already supports that.
>>> I don't find any MSI support with pci-host-generic.c driver.
>>>>> 2. Suspend/Resume
>>>>
>>>> Others might want that to work as well.
>>> Others firmware won't have way to handle D3cold and D0 functionality
>>> handling as
>>> needed here for supporting suspend/resume as I don't find any interface
>>> for pci-host-generic.c driver to notify firmware. here we are having way
>>> to talk to firmware using GenPD based power domain usage to communicate
>>> with firmware.
>>>
>>>>> 3. Wakeup Functionality (not part of current change, but would be added
>>>>> later)
>>>>
>>>> Others might want that to work as well.
>>> possible if suspend/resume support is available or used.
>>>>> 4. Here this driver provides way to virtualized PCIe controller.
>>>>> So VMs only
>>>>> talk to a generic ECAM whereas HW is only directed accessed by
>>>>> service VM.
>>>>
>>>> That's the existing driver. If it doesn't work for a VM, fix the VM.
>>> Correct.
>>>>> 5. Adding more Auto based safety use cases related implementation
>>>>
>>>> Now that's just hand waving.
>>> Here I am trying to provide new set of changes plan to be added as part
>>> of required functionality.
>>>
>>>>> Hence keeping pci-host-generic.c as generic driver where above
>>>>> functionality
>>>>> may not be needed.
>>>>
>>>> Duplicating things to avoid touching existing drivers is not how kernel
>>>> development works.
>>> I shall try your suggestion and see how it looks in terms of code
>>> changes. Perhaps then we can have more clarity in terms of adding more
>>> functionality into generic or having separate driver.
>> I just learnt that previously dwc related PCIe ECAM driver and MSI
>> controller driver tried out as:
>>
>> https://lore.kernel.org/linux-pci/20170821192907.8695-1-ard.biesheuvel@linaro.org/
>>
>> Although there were few concerns at that time. Due to that having dwc
>> specific MSI functionality based driver was dropped, and pci-host-generic.c
>> driver is being updated using with dwc/snps specific ECAM operation.
>>
>> In current discussion, it seems that we are discussing to have identical
>> approach here.
>>
>> At least on the Qualcomm SA8775p platform, I don't have any other way to support
>> MSI functionality i.e. extended SPI or ITS/LPI based MSI or using GICv2m
>> functionality are not supported.
>>
>> I don't see any other approach other than MSI based implementation within
>> pci-host-generic.c driver for dwc/snps based MSI controller.
>>
>> Do you have any suggestion on this ?
>>
>
> Since this ECAM driver is going to be used in newer Qcom SoCs, why can't you use
> GICv3 for MSI handling?
Yes, that is the plan going forward, as it looks like the limitation is
only on SA8775p. So I see two options here:
1. Update pci-host-generic.c without MSI functionality, and handle MSI
differently on SA8775p.
2. Split the pcie-designware-host.c MSI functionality out into a
separate driver, and use it with the pci-host-generic.c driver. That way
we would still reuse the existing MSI code base and be able to use it
with the ECAM driver.
Do you see option 2 above as a good way to allow SNPS/DWC based MSI
controller functionality with both the ECAM and non-ECAM drivers?
Regards,
Mayank
^ permalink raw reply [flat|nested] 27+ messages in thread
* Re: [RFC PATCH 2/2] PCI: Add Qualcomm PCIe ECAM root complex driver
2024-06-10 17:17 ` Mayank Rana
@ 2024-06-12 6:14 ` Manivannan Sadhasivam
2024-06-17 18:09 ` Mayank Rana
0 siblings, 1 reply; 27+ messages in thread
From: Manivannan Sadhasivam @ 2024-06-12 6:14 UTC (permalink / raw)
To: Mayank Rana
Cc: Rob Herring, linux-pci, lpieralisi, kw, bhelgaas, andersson,
krzysztof.kozlowski+dt, conor+dt, devicetree, linux-arm-msm,
quic_ramkri, quic_nkela, quic_shazhuss, quic_msarkar,
quic_nitegupt
On Mon, Jun 10, 2024 at 10:17:31AM -0700, Mayank Rana wrote:
>
> On 6/5/2024 7:39 PM, Manivannan Sadhasivam wrote:
> > On Fri, May 31, 2024 at 03:47:24PM -0700, Mayank Rana wrote:
> > > Hi Rob / Mani
> > >
> > > On 4/15/2024 4:30 PM, Mayank Rana wrote:
> > > > Hi Rob
> > > >
> > > > Excuse me for late response on this (was OOO).
> > > > On 4/10/2024 9:58 AM, Rob Herring wrote:
> > > > > On Mon, Apr 08, 2024 at 11:57:58AM -0700, Mayank Rana wrote:
> > > > > > Hi Mani
> > > > > >
> > > > > > On 4/5/2024 9:17 PM, Manivannan Sadhasivam wrote:
> > > > > > > On Fri, Apr 05, 2024 at 10:41:15AM -0700, Mayank Rana wrote:
> > > > > > > > Hi Mani
> > > > > > > >
> > > > > > > > On 4/4/2024 10:30 PM, Manivannan Sadhasivam wrote:
> > > > > > > > > On Thu, Apr 04, 2024 at 12:11:24PM -0700, Mayank Rana wrote:
> > > > > > > > > > On some of Qualcomm platform, firmware
> > > > > > > > > > configures PCIe controller into
> > > > > > > > > > ECAM mode allowing static memory allocation for
> > > > > > > > > > configuration space of
> > > > > > > > > > supported bus range. Firmware also takes care of
> > > > > > > > > > bringing up PCIe PHY
> > > > > > > > > > and performing required operation to bring PCIe
> > > > > > > > > > link into D0. Firmware
> > > > > > > > > > also manages system resources (e.g.
> > > > > > > > > > clocks/regulators/resets/ bus voting).
> > > > > > > > > > Hence add Qualcomm PCIe ECAM root complex driver
> > > > > > > > > > which enumerates PCIe
> > > > > > > > > > root complex and connected PCIe devices.
> > > > > > > > > > Firmware won't be enumerating
> > > > > > > > > > or powering up PCIe root complex until this
> > > > > > > > > > driver invokes power domain
> > > > > > > > > > based notification to bring PCIe link into D0/D3cold mode.
> > > > > > > > > >
> > > > > > > > >
> > > > > > > > > Is this an in-house PCIe IP of Qualcomm or the same
> > > > > > > > > DWC IP that is used in other
> > > > > > > > > SoCs?
> > > > > > > > >
> > > > > > > > > - Mani
> > > > > > > > Driver is validated on SA8775p-ride platform using PCIe DWC IP for
> > > > > > > > now. Although this driver doesn't need to know the used PCIe
> > > > > > > > controller and PHY
> > > > > > > > IP as well programming sequence as that would be taken
> > > > > > > > care by firmware.
> > > > > > > >
> > > > > > >
> > > > > > > Ok, so it is the same IP but firmware is controlling the
> > > > > > > resources now. This
> > > > > > > information should be present in the commit message.
> > > > > > >
> > > > > > > Btw, there is an existing generic ECAM host controller driver:
> > > > > > > drivers/pci/controller/pci-host-generic.c
> > > > > > >
> > > > > > > This driver is already being used by several vendors as
> > > > > > > well. So we should try
> > > > > > > to extend it for Qcom usecase also.
> > > > >
> > > > > I would take it a bit further and say if you need your own driver, then
> > > > > just use the default QCom driver. Perhaps extend it to support ECAM.
> > > > > Better yet, copy your firmware setup and always configure the QCom h/w
> > > > > to use ECAM.
> > > > Good suggestion. Although here we are having 2 set of requirements:
> > > > 1. ECAM configuration
> > > > 2. Managing PCIe controller and PHY resources and programming from
> > > > firmware as well
> > > > Hence it is not feasible to use default QCOM driver.
> > > > > If you want to extend the generic driver, that's fine, but we don't need
> > > > > a 3rd.
> > > > I did consider this part before coming up with new driver. Although I
> > > > felt that
> > > > below mentioned functionality may not look more generic to be part of
> > > > pci-host-generic.c driver.
> > > > > > I did review pci-host-generic.c driver for usage. although there
> > > > > > are more
> > > > > > functionality needed for use case purpose as below:
> > > > > > 1. MSI functionality
> > > > >
> > > > > Pretty sure the generic driver already supports that.
> > > > I don't find any MSI support with pci-host-generic.c driver.
> > > > > > 2. Suspend/Resume
> > > > >
> > > > > Others might want that to work as well.
> > > > Others firmware won't have way to handle D3cold and D0 functionality
> > > > handling as
> > > > needed here for supporting suspend/resume as I don't find any interface
> > > > for pci-host-generic.c driver to notify firmware. here we are having way
> > > > to talk to firmware using GenPD based power domain usage to communicate
> > > > with firmware.
> > > >
> > > > > > 3. Wakeup Functionality (not part of current change, but would be added
> > > > > > later)
> > > > >
> > > > > Others might want that to work as well.
> > > > possible if suspend/resume support is available or used.
> > > > > > 4. Here this driver provides way to virtualized PCIe controller.
> > > > > > So VMs only
> > > > > > talk to a generic ECAM whereas HW is only directed accessed by
> > > > > > service VM.
> > > > >
> > > > > That's the existing driver. If it doesn't work for a VM, fix the VM.
> > > > Correct.
> > > > > > 5. Adding more Auto based safety use cases related implementation
> > > > >
> > > > > Now that's just hand waving.
> > > > Here I am trying to provide new set of changes plan to be added as part
> > > > of required functionality.
> > > >
> > > > > > Hence keeping pci-host-generic.c as generic driver where above
> > > > > > functionality
> > > > > > may not be needed.
> > > > >
> > > > > Duplicating things to avoid touching existing drivers is not how kernel
> > > > > development works.
> > > > I shall try your suggestion and see how it looks in terms of code
> > > > changes. Perhaps then we can have more clarity in terms of adding more
> > > > functionality into generic or having separate driver.
> > > I just learnt that previously dwc related PCIe ECAM driver and MSI
> > > controller driver tried out as:
> > >
> > > https://lore.kernel.org/linux-pci/20170821192907.8695-1-ard.biesheuvel@linaro.org/
> > >
> > > Although there were few concerns at that time. Due to that having dwc
> > > specific MSI functionality based driver was dropped, and pci-host-generic.c
> > > driver is being updated using with dwc/snps specific ECAM operation.
> > >
> > > In current discussion, it seems that we are discussing to have identical
> > > approach here.
> > >
> > > At least on the Qualcomm SA8775p platform, I don't have any other way to support
> > > MSI functionality i.e. extended SPI or ITS/LPI based MSI or using GICv2m
> > > functionality are not supported.
> > >
> > > I don't see any other approach other than MSI based implementation within
> > > pci-host-generic.c driver for dwc/snps based MSI controller.
> > >
> > > Do you have any suggestion on this ?
> > >
> >
> > Since this ECAM driver is going to be used in newer Qcom SoCs, why can't you use
> > GICv3 for MSI handling?
> Yes, that is plan further as look like we have limitation on just SA8775.
> So I see two options here:
> 1. Update pcie-host-generic.c without MSI based functionality, and leave
> with MSI functionality differently on SA8775
> 2. Also possible to make pcie-host-designware.c based MSI functionality as
> separate driver, and try to use with pcie-host-generic.c driver. That way we
> would still use existing MSI related code base, and able to use with ECAM
> driver.
>
> Do you see using above option 2 as good way to allow SNPS/DWC based MSI
> controller functionality with ECAM and Non-ECAM driver ?
>
IMO, it is not worth splitting the code just for one platform since you said the
future ECAM based platforms will not require DWC MSI.
But if you have a strong requirement to use upstream DWC MSI for SA8775, then
you can do the split.
- Mani
--
மணிவண்ணன் சதாசிவம்
^ permalink raw reply [flat|nested] 27+ messages in thread
* Re: [RFC PATCH 2/2] PCI: Add Qualcomm PCIe ECAM root complex driver
2024-06-12 6:14 ` Manivannan Sadhasivam
@ 2024-06-17 18:09 ` Mayank Rana
0 siblings, 0 replies; 27+ messages in thread
From: Mayank Rana @ 2024-06-17 18:09 UTC (permalink / raw)
To: Manivannan Sadhasivam
Cc: Rob Herring, linux-pci, lpieralisi, kw, bhelgaas, andersson,
krzysztof.kozlowski+dt, conor+dt, devicetree, linux-arm-msm,
quic_ramkri, quic_nkela, quic_shazhuss, quic_msarkar,
quic_nitegupt
On 6/11/2024 11:14 PM, Manivannan Sadhasivam wrote:
> On Mon, Jun 10, 2024 at 10:17:31AM -0700, Mayank Rana wrote:
>>
>> On 6/5/2024 7:39 PM, Manivannan Sadhasivam wrote:
>>> On Fri, May 31, 2024 at 03:47:24PM -0700, Mayank Rana wrote:
>>>> Hi Rob / Mani
>>>>
>>>> On 4/15/2024 4:30 PM, Mayank Rana wrote:
>>>>> Hi Rob
>>>>>
>>>>> Excuse me for late response on this (was OOO).
>>>>> On 4/10/2024 9:58 AM, Rob Herring wrote:
>>>>>> On Mon, Apr 08, 2024 at 11:57:58AM -0700, Mayank Rana wrote:
>>>>>>> Hi Mani
>>>>>>>
>>>>>>> On 4/5/2024 9:17 PM, Manivannan Sadhasivam wrote:
>>>>>>>> On Fri, Apr 05, 2024 at 10:41:15AM -0700, Mayank Rana wrote:
>>>>>>>>> Hi Mani
>>>>>>>>>
>>>>>>>>> On 4/4/2024 10:30 PM, Manivannan Sadhasivam wrote:
>>>>>>>>>> On Thu, Apr 04, 2024 at 12:11:24PM -0700, Mayank Rana wrote:
>>>>>>>>>>> On some of Qualcomm platform, firmware
>>>>>>>>>>> configures PCIe controller into
>>>>>>>>>>> ECAM mode allowing static memory allocation for
>>>>>>>>>>> configuration space of
>>>>>>>>>>> supported bus range. Firmware also takes care of
>>>>>>>>>>> bringing up PCIe PHY
>>>>>>>>>>> and performing required operation to bring PCIe
>>>>>>>>>>> link into D0. Firmware
>>>>>>>>>>> also manages system resources (e.g.
>>>>>>>>>>> clocks/regulators/resets/ bus voting).
>>>>>>>>>>> Hence add Qualcomm PCIe ECAM root complex driver
>>>>>>>>>>> which enumerates PCIe
>>>>>>>>>>> root complex and connected PCIe devices.
>>>>>>>>>>> Firmware won't be enumerating
>>>>>>>>>>> or powering up PCIe root complex until this
>>>>>>>>>>> driver invokes power domain
>>>>>>>>>>> based notification to bring PCIe link into D0/D3cold mode.
>>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> Is this an in-house PCIe IP of Qualcomm or the same
>>>>>>>>>> DWC IP that is used in other
>>>>>>>>>> SoCs?
>>>>>>>>>>
>>>>>>>>>> - Mani
>>>>>>>>> Driver is validated on SA8775p-ride platform using PCIe DWC IP for
>>>>>>>>> now. Although this driver doesn't need to know the used PCIe
>>>>>>>>> controller and PHY
>>>>>>>>> IP as well programming sequence as that would be taken
>>>>>>>>> care by firmware.
>>>>>>>>>
>>>>>>>>
>>>>>>>> Ok, so it is the same IP but firmware is controlling the
>>>>>>>> resources now. This
>>>>>>>> information should be present in the commit message.
>>>>>>>>
>>>>>>>> Btw, there is an existing generic ECAM host controller driver:
>>>>>>>> drivers/pci/controller/pci-host-generic.c
>>>>>>>>
>>>>>>>> This driver is already being used by several vendors as
>>>>>>>> well. So we should try
>>>>>>>> to extend it for Qcom usecase also.
>>>>>>
>>>>>> I would take it a bit further and say if you need your own driver, then
>>>>>> just use the default QCom driver. Perhaps extend it to support ECAM.
>>>>>> Better yet, copy your firmware setup and always configure the QCom h/w
>>>>>> to use ECAM.
>>>>> Good suggestion. Although here we are having 2 set of requirements:
>>>>> 1. ECAM configuration
>>>>> 2. Managing PCIe controller and PHY resources and programming from
>>>>> firmware as well
>>>>> Hence it is not feasible to use default QCOM driver.
>>>>>> If you want to extend the generic driver, that's fine, but we don't need
>>>>>> a 3rd.
>>>>> I did consider this part before coming up with new driver. Although I
>>>>> felt that
>>>>> below mentioned functionality may not look more generic to be part of
>>>>> pci-host-generic.c driver.
>>>>>>> I did review pci-host-generic.c driver for usage. although there
>>>>>>> are more
>>>>>>> functionality needed for use case purpose as below:
>>>>>>> 1. MSI functionality
>>>>>>
>>>>>> Pretty sure the generic driver already supports that.
>>>>> I don't find any MSI support with pci-host-generic.c driver.
>>>>>>> 2. Suspend/Resume
>>>>>>
>>>>>> Others might want that to work as well.
>>>>> Others firmware won't have way to handle D3cold and D0 functionality
>>>>> handling as
>>>>> needed here for supporting suspend/resume as I don't find any interface
>>>>> for pci-host-generic.c driver to notify firmware. here we are having way
>>>>> to talk to firmware using GenPD based power domain usage to communicate
>>>>> with firmware.
>>>>>
>>>>>>> 3. Wakeup Functionality (not part of current change, but would be added
>>>>>>> later)
>>>>>>
>>>>>> Others might want that to work as well.
>>>>> possible if suspend/resume support is available or used.
>>>>>>> 4. Here this driver provides way to virtualized PCIe controller.
>>>>>>> So VMs only
>>>>>>> talk to a generic ECAM whereas HW is only directed accessed by
>>>>>>> service VM.
>>>>>>
>>>>>> That's the existing driver. If it doesn't work for a VM, fix the VM.
>>>>> Correct.
>>>>>>> 5. Adding more Auto based safety use cases related implementation
>>>>>>
>>>>>> Now that's just hand waving.
>>>>> Here I am trying to provide new set of changes plan to be added as part
>>>>> of required functionality.
>>>>>
>>>>>>> Hence my thought was to keep pci-host-generic.c a generic driver
>>>>>>> where the above functionality may not be needed.
>>>>>>
>>>>>> Duplicating things to avoid touching existing drivers is not how kernel
>>>>>> development works.
>>>>> I shall try your suggestion and see how it looks in terms of code
>>>>> changes. Perhaps then we will have more clarity on whether to add
>>>>> this functionality to the generic driver or keep a separate one.
>>>> I just learnt that a DWC based PCIe ECAM driver and an MSI controller
>>>> driver were previously attempted:
>>>>
>>>> https://lore.kernel.org/linux-pci/20170821192907.8695-1-ard.biesheuvel@linaro.org/
>>>>
>>>> There were a few concerns at that time, so the DWC specific MSI
>>>> driver was dropped, and the pci-host-generic.c driver is instead
>>>> being updated with DWC/SNPS specific ECAM operations.
>>>>
>>>> It seems we are discussing an identical approach in the current
>>>> thread.
>>>>
>>>> At least on the Qualcomm SA8775p platform, I don't have any other way
>>>> to support MSI functionality; extended SPI, ITS/LPI based MSI, and
>>>> GICv2m are not supported.
>>>>
>>>> I don't see any approach other than implementing support for the
>>>> DWC/SNPS MSI controller within the pci-host-generic.c driver.
>>>>
>>>> Do you have any suggestion on this?
>>>>
>>>
>>> Since this ECAM driver is going to be used in newer Qcom SoCs, why can't you use
>>> GICv3 for MSI handling?
>> Yes, that is the plan going forward, as it looks like the limitation is
>> only on SA8775p. So I see two options here:
>> 1. Update pci-host-generic.c without the MSI functionality, and handle
>> MSI differently on SA8775p.
>> 2. Split the pcie-designware-host.c MSI functionality into a separate
>> driver and use it with the pci-host-generic.c driver. That way we would
>> still reuse the existing MSI code base and be able to use it with the
>> ECAM driver.
>>
>> Do you see option 2 as a good way to share the SNPS/DWC MSI controller
>> functionality between the ECAM and non-ECAM drivers?
>>
>>
>
> IMO, it is not worth splitting the code just for one platform since you said the
> future ECAM based platforms will not require DWC MSI.
>
> But if you have a strong requirement to use upstream DWC MSI for SA8775, then
> you can do the split.
I feel it is better to have the DWC MSI mechanism available in split-out
form so that other drivers, like this ECAM driver, can use it. I will
update the patchset here for review.
> - Mani
>
end of thread, other threads:[~2024-06-17 18:09 UTC | newest]
Thread overview: 27+ messages
2024-04-04 19:11 [RFC PATCH 0/2] Add Qualcomm PCIe ECAM root complex driver Mayank Rana
2024-04-04 19:11 ` [RFC PATCH 1/2] dt-bindings: pcie: Document QCOM PCIE ECAM compatible root complex Mayank Rana
2024-04-04 19:30 ` Krzysztof Kozlowski
2024-04-08 19:09 ` Mayank Rana
2024-04-09 6:21 ` Krzysztof Kozlowski
2024-04-18 18:56 ` Mayank Rana
2024-04-18 20:53 ` Krzysztof Kozlowski
2024-04-04 19:11 ` [RFC PATCH 2/2] PCI: Add Qualcomm PCIe ECAM root complex driver Mayank Rana
2024-04-04 19:33 ` Krzysztof Kozlowski
2024-04-05 5:30 ` Manivannan Sadhasivam
2024-04-05 17:41 ` Mayank Rana
2024-04-06 4:17 ` Manivannan Sadhasivam
2024-04-08 18:57 ` Mayank Rana
2024-04-10 6:26 ` Manivannan Sadhasivam
2024-04-10 16:58 ` Rob Herring
2024-04-15 23:30 ` Mayank Rana
2024-05-31 22:47 ` Mayank Rana
2024-06-06 2:39 ` Manivannan Sadhasivam
2024-06-10 17:17 ` Mayank Rana
2024-06-12 6:14 ` Manivannan Sadhasivam
2024-06-17 18:09 ` Mayank Rana
2024-04-05 18:30 ` Bjorn Helgaas
2024-04-06 0:43 ` Mayank Rana
2024-04-04 19:33 ` [RFC PATCH 0/2] " Krzysztof Kozlowski
2024-04-04 23:02 ` Mayank Rana
2024-04-05 6:50 ` Krzysztof Kozlowski
2024-04-05 17:45 ` Mayank Rana