devicetree.vger.kernel.org archive mirror
* [PATCH v2 0/5] Add PCIe support to Sophgo SG2042 SoC
@ 2024-12-09  7:19 Chen Wang
  2024-12-09  7:19 ` [PATCH v2 1/5] dt-bindings: pci: Add Sophgo SG2042 PCIe host Chen Wang
                   ` (4 more replies)
  0 siblings, 5 replies; 21+ messages in thread
From: Chen Wang @ 2024-12-09  7:19 UTC (permalink / raw)
  To: kw, u.kleine-koenig, aou, arnd, bhelgaas, unicorn_wang, conor+dt,
	guoren, inochiama, krzk+dt, lee, lpieralisi,
	manivannan.sadhasivam, palmer, paul.walmsley, pbrobinson, robh,
	devicetree, linux-kernel, linux-pci, linux-riscv, chao.wei,
	xiaoguang.xing, fengchun.li, helgaas

From: Chen Wang <unicorn_wang@outlook.com>

Sophgo's SG2042 SoC uses the Cadence PCIe core, operating in Root Complex (RC) mode.

The SG2042 PCIe controller supports two ways to report MSI:

Method A: the PCIe controller implements an MSI interrupt controller
internally and connects upward to the PLIC through one interrupt line.
It provides a memory-mapped MSI address, and by programming the upper
32 bits of that address to zero it remains compatible with old PCIe
devices that only support a 32-bit MSI address.

Method B: the PCIe controller connects upward to the PLIC through an
independent MSI controller ("sophgo,sg2042-msi") on the SoC. That MSI
controller provides multiple (up to 32) interrupt sources to the PLIC.
Compared with Method A, the advantage is that the interrupt sources are
expanded, but because the MSI address provided by the MSI controller on
SG2042 is fixed and only supports a 64-bit address (> 2^32), it is not
compatible with old PCIe devices that only support a 32-bit MSI
address.
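
As a rough sketch of how the two methods show up in the DTS (the node
names, addresses and interrupt number below just follow the binding
example and sg2042.dtsi changes later in this series; treat them as
placeholders), the difference is mainly in what the host node's
msi-parent points to:

    /* Method A: MSI interrupt controller inside the PCIe controller */
    pcie_rc1: pcie@7062000000 {
            compatible = "sophgo,sg2042-pcie-host";
            msi-parent = <&msi_pcie>;
            /* ... */
            msi_pcie: msi {
                    compatible = "sophgo,sg2042-pcie-msi";
                    msi-controller;
                    interrupt-parent = <&intc>;
                    interrupts = <123 IRQ_TYPE_LEVEL_HIGH>;
                    interrupt-names = "msi";
            };
    };

    /* Method B: SoC-level "sophgo,sg2042-msi" controller from [msi] */
    pcie_rc0: pcie@7060000000 {
            compatible = "sophgo,sg2042-pcie-host";
            msi-parent = <&msi>;
            /* ... */
    };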

This patchset depends on another patchset for the SG2042 MSI controller [msi].
If you want to test the DTS part, you also need to apply that patchset.

Link: https://lore.kernel.org/linux-riscv/cover.1733726057.git.unicorn_wang@outlook.com/ [msi]

Thanks,
Chen

---

Changes in v2:
  The patch series is based on v6.13-rc2.

  Fixed the following issues per comments from Rob Herring and Bjorn Helgaas, thanks.

  - Improve the DT binding description
    - Define a new embedded object property "msi" to replace "sophgo,internal-msi".
    - Rename "sophgo,link-id" to "sophgo,pcie-port" per Bjorn's suggestion,
      and add more explanation for this property.
    - Use msi-parent.
  - Improve driver code:
    - Improve coding style.
    - Fix a bug and make sure num_applied_vecs updated with the max value.
    - Use the MSI parent model.
    - Remove .cpu_addr_fixup.
    - Reorder Kconfig menu item to keep them in alphabetical order by vendor.

Changes in v1:
  The patch series is based on v6.12-rc7. You can simply review or test the
  patches at the link [1].

Link: https://lore.kernel.org/linux-riscv/cover.1731303328.git.unicorn_wang@outlook.com/ [1]
---

Chen Wang (5):
  dt-bindings: pci: Add Sophgo SG2042 PCIe host
  PCI: sg2042: Add Sophgo SG2042 PCIe driver
  dt-bindings: mfd: syscon: Add sg2042 pcie ctrl compatible
  riscv: sophgo: dts: add pcie controllers for SG2042
  riscv: sophgo: dts: enable pcie for PioneerBox

 .../devicetree/bindings/mfd/syscon.yaml       |   2 +
 .../bindings/pci/sophgo,sg2042-pcie-host.yaml | 141 +++++
 .../boot/dts/sophgo/sg2042-milkv-pioneer.dts  |  12 +
 arch/riscv/boot/dts/sophgo/sg2042.dtsi        |  89 +++
 drivers/pci/controller/cadence/Kconfig        |  11 +
 drivers/pci/controller/cadence/Makefile       |   1 +
 drivers/pci/controller/cadence/pcie-sg2042.c  | 534 ++++++++++++++++++
 7 files changed, 790 insertions(+)
 create mode 100644 Documentation/devicetree/bindings/pci/sophgo,sg2042-pcie-host.yaml
 create mode 100644 drivers/pci/controller/cadence/pcie-sg2042.c


base-commit: fac04efc5c793dccbd07e2d59af9f90b7fc0dca4
-- 
2.34.1


^ permalink raw reply	[flat|nested] 21+ messages in thread

* [PATCH v2 1/5] dt-bindings: pci: Add Sophgo SG2042 PCIe host
  2024-12-09  7:19 [PATCH v2 0/5] Add PCIe support to Sophgo SG2042 SoC Chen Wang
@ 2024-12-09  7:19 ` Chen Wang
  2024-12-10 17:33   ` Bjorn Helgaas
  2024-12-09  7:19 ` [PATCH v2 2/5] PCI: sg2042: Add Sophgo SG2042 PCIe driver Chen Wang
                   ` (3 subsequent siblings)
  4 siblings, 1 reply; 21+ messages in thread
From: Chen Wang @ 2024-12-09  7:19 UTC (permalink / raw)
  To: kw, u.kleine-koenig, aou, arnd, bhelgaas, unicorn_wang, conor+dt,
	guoren, inochiama, krzk+dt, lee, lpieralisi,
	manivannan.sadhasivam, palmer, paul.walmsley, pbrobinson, robh,
	devicetree, linux-kernel, linux-pci, linux-riscv, chao.wei,
	xiaoguang.xing, fengchun.li, helgaas

From: Chen Wang <unicorn_wang@outlook.com>

Add binding for Sophgo SG2042 PCIe host controller.

Signed-off-by: Chen Wang <unicorn_wang@outlook.com>
---
 .../bindings/pci/sophgo,sg2042-pcie-host.yaml | 141 ++++++++++++++++++
 1 file changed, 141 insertions(+)
 create mode 100644 Documentation/devicetree/bindings/pci/sophgo,sg2042-pcie-host.yaml

diff --git a/Documentation/devicetree/bindings/pci/sophgo,sg2042-pcie-host.yaml b/Documentation/devicetree/bindings/pci/sophgo,sg2042-pcie-host.yaml
new file mode 100644
index 000000000000..aec31ec97092
--- /dev/null
+++ b/Documentation/devicetree/bindings/pci/sophgo,sg2042-pcie-host.yaml
@@ -0,0 +1,141 @@
+# SPDX-License-Identifier: (GPL-2.0 OR BSD-2-Clause)
+%YAML 1.2
+---
+$id: http://devicetree.org/schemas/pci/sophgo,sg2042-pcie-host.yaml#
+$schema: http://devicetree.org/meta-schemas/core.yaml#
+
+title: Sophgo SG2042 PCIe Host (Cadence PCIe Wrapper)
+
+description:
+  Sophgo SG2042 PCIe host controller is based on the Cadence PCIe core.
+
+maintainers:
+  - Chen Wang <unicorn_wang@outlook.com>
+
+properties:
+  compatible:
+    const: sophgo,sg2042-pcie-host
+
+  reg:
+    maxItems: 2
+
+  reg-names:
+    items:
+      - const: reg
+      - const: cfg
+
+  vendor-id:
+    const: 0x1f1c
+
+  device-id:
+    const: 0x2042
+
+  msi:
+    type: object
+    $ref: /schemas/interrupt-controller/msi-controller.yaml#
+    unevaluatedProperties: false
+
+    properties:
+      compatible:
+        items:
+          - const: sophgo,sg2042-pcie-msi
+
+      interrupts:
+        maxItems: 1
+
+      interrupt-names:
+        const: msi
+
+  msi-parent: true
+
+  sophgo,pcie-port:
+    $ref: /schemas/types.yaml#/definitions/uint32
+    description: |
+      SG2042 uses Cadence IP, every IP is composed of 2 cores(called link0
+      & link1 as Cadence's term). "sophgo,pcie-port" is used to identify which
+      core/link the pcie host controller node corresponds to.
+
+      The Cadence IP has two modes of operation, selected by a strap pin.
+
+      In the single-link mode, the Cadence PCIe core instance associated
+      with Link0 is connected to all the lanes and the Cadence PCIe core
+      instance associated with Link1 is inactive.
+
+      In the dual-link mode, the Cadence PCIe core instance associated
+      with Link0 is connected to the lower half of the lanes and the
+      Cadence PCIe core instance associated with Link1 is connected to
+      the upper half of the lanes.
+
+      SG2042 contains 2 Cadence IPs and configures the Cores as below:
+
+                     +-- Core(Link0) <---> pcie_rc0   +-----------------+
+                     |                                |                 |
+      Cadence IP 1 --+                                | cdns_pcie0_ctrl |
+                     |                                |                 |
+                     +-- Core(Link1) <---> disabled   +-----------------+
+
+                     +-- Core(Link0) <---> pcie_rc1   +-----------------+
+                     |                                |                 |
+      Cadence IP 2 --+                                | cdns_pcie1_ctrl |
+                     |                                |                 |
+                     +-- Core(Link1) <---> pcie_rc2   +-----------------+
+
+      pcie_rcX is pcie node ("sophgo,sg2042-pcie-host") defined in DTS.
+      cdns_pcie0_ctrl is syscon node ("sophgo,sg2042-pcie-ctrl") defined in DTS
+
+      cdns_pcieX_ctrl contains some registers shared by pcie_rcX, even two
+      RC(Link)s may share different bits of the same register. For example,
+      cdns_pcie1_ctrl contains registers shared by link0 & link1 for Cadence IP 2.
+
+      "sophgo,pcie-port" is defined to flag which core(link) the rc maps to, with
+      this we can know what registers(bits) we should use.
+
+  sophgo,syscon-pcie-ctrl:
+    $ref: /schemas/types.yaml#/definitions/phandle
+    description:
+      Phandle to the PCIe System Controller DT node. It's required to
+      access some MSI operation registers shared by PCIe RCs.
+
+allOf:
+  - $ref: cdns-pcie-host.yaml#
+
+required:
+  - compatible
+  - reg
+  - reg-names
+  - vendor-id
+  - device-id
+  - sophgo,syscon-pcie-ctrl
+  - sophgo,pcie-port
+
+additionalProperties: true
+
+examples:
+  - |
+    #include <dt-bindings/interrupt-controller/irq.h>
+
+    pcie@62000000 {
+      compatible = "sophgo,sg2042-pcie-host";
+      device_type = "pci";
+      reg = <0x62000000  0x00800000>,
+            <0x48000000  0x00001000>;
+      reg-names = "reg", "cfg";
+      #address-cells = <3>;
+      #size-cells = <2>;
+      ranges = <0x81000000 0 0x00000000 0xde000000 0 0x00010000>,
+               <0x82000000 0 0xd0400000 0xd0400000 0 0x0d000000>;
+      bus-range = <0x80 0xbf>;
+      vendor-id = <0x1f1c>;
+      device-id = <0x2042>;
+      cdns,no-bar-match-nbits = <48>;
+      sophgo,pcie-port = <0>;
+      sophgo,syscon-pcie-ctrl = <&cdns_pcie1_ctrl>;
+      msi-parent = <&msi_pcie>;
+      msi_pcie: msi {
+        compatible = "sophgo,sg2042-pcie-msi";
+        msi-controller;
+        interrupt-parent = <&intc>;
+        interrupts = <123 IRQ_TYPE_LEVEL_HIGH>;
+        interrupt-names = "msi";
+      };
+    };
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 21+ messages in thread

* [PATCH v2 2/5] PCI: sg2042: Add Sophgo SG2042 PCIe driver
  2024-12-09  7:19 [PATCH v2 0/5] Add PCIe support to Sophgo SG2042 SoC Chen Wang
  2024-12-09  7:19 ` [PATCH v2 1/5] dt-bindings: pci: Add Sophgo SG2042 PCIe host Chen Wang
@ 2024-12-09  7:19 ` Chen Wang
  2024-12-10 17:31   ` Bjorn Helgaas
                     ` (2 more replies)
  2024-12-09  7:20 ` [PATCH v2 3/5] dt-bindings: mfd: syscon: Add sg2042 pcie ctrl compatible Chen Wang
                   ` (2 subsequent siblings)
  4 siblings, 3 replies; 21+ messages in thread
From: Chen Wang @ 2024-12-09  7:19 UTC (permalink / raw)
  To: kw, u.kleine-koenig, aou, arnd, bhelgaas, unicorn_wang, conor+dt,
	guoren, inochiama, krzk+dt, lee, lpieralisi,
	manivannan.sadhasivam, palmer, paul.walmsley, pbrobinson, robh,
	devicetree, linux-kernel, linux-pci, linux-riscv, chao.wei,
	xiaoguang.xing, fengchun.li, helgaas

From: Chen Wang <unicorn_wang@outlook.com>

Add support for PCIe controller in SG2042 SoC. The controller
uses the Cadence PCIe core programmed by pcie-cadence*.c. The
PCIe controller will work in host mode only.

Signed-off-by: Chen Wang <unicorn_wang@outlook.com>
---
 drivers/pci/controller/cadence/Kconfig       |  11 +
 drivers/pci/controller/cadence/Makefile      |   1 +
 drivers/pci/controller/cadence/pcie-sg2042.c | 534 +++++++++++++++++++
 3 files changed, 546 insertions(+)
 create mode 100644 drivers/pci/controller/cadence/pcie-sg2042.c

diff --git a/drivers/pci/controller/cadence/Kconfig b/drivers/pci/controller/cadence/Kconfig
index 8a0044bb3989..ddf86bbe687d 100644
--- a/drivers/pci/controller/cadence/Kconfig
+++ b/drivers/pci/controller/cadence/Kconfig
@@ -42,6 +42,16 @@ config PCIE_CADENCE_PLAT_EP
 	  endpoint mode. This PCIe controller may be embedded into many
 	  different vendors SoCs.
 
+config PCIE_SG2042
+	bool "Sophgo SG2042 PCIe controller (host mode)"
+	depends on ARCH_SOPHGO || COMPILE_TEST
+	depends on OF
+	select PCIE_CADENCE_HOST
+	help
+	  Say Y here if you want to support the Sophgo SG2042 PCIe platform
+	  controller in host mode. Sophgo SG2042 PCIe controller uses Cadence
+	  PCIe core.
+
 config PCI_J721E
 	bool
 
@@ -67,4 +77,5 @@ config PCI_J721E_EP
 	  Say Y here if you want to support the TI J721E PCIe platform
 	  controller in endpoint mode. TI J721E PCIe controller uses Cadence PCIe
 	  core.
+
 endmenu
diff --git a/drivers/pci/controller/cadence/Makefile b/drivers/pci/controller/cadence/Makefile
index 9bac5fb2f13d..89aa316f54ac 100644
--- a/drivers/pci/controller/cadence/Makefile
+++ b/drivers/pci/controller/cadence/Makefile
@@ -4,3 +4,4 @@ obj-$(CONFIG_PCIE_CADENCE_HOST) += pcie-cadence-host.o
 obj-$(CONFIG_PCIE_CADENCE_EP) += pcie-cadence-ep.o
 obj-$(CONFIG_PCIE_CADENCE_PLAT) += pcie-cadence-plat.o
 obj-$(CONFIG_PCI_J721E) += pci-j721e.o
+obj-$(CONFIG_PCIE_SG2042) += pcie-sg2042.o
\ No newline at end of file
diff --git a/drivers/pci/controller/cadence/pcie-sg2042.c b/drivers/pci/controller/cadence/pcie-sg2042.c
new file mode 100644
index 000000000000..78893d3b5c47
--- /dev/null
+++ b/drivers/pci/controller/cadence/pcie-sg2042.c
@@ -0,0 +1,534 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * pcie-sg2042 - PCIe controller driver for Sophgo SG2042 SoC
+ *
+ * Copyright (C) 2024 Sophgo Technology Inc.
+ * Copyright (C) 2024 Chen Wang <unicorn_wang@outlook.com>
+ */
+
+#include <linux/bits.h>
+#include <linux/irq.h>
+#include <linux/irqchip/chained_irq.h>
+#include <linux/irqdomain.h>
+#include <linux/kernel.h>
+#include <linux/mfd/syscon.h>
+#include <linux/msi.h>
+#include <linux/of_address.h>
+#include <linux/of_irq.h>
+#include <linux/of_pci.h>
+#include <linux/platform_device.h>
+#include <linux/pm_runtime.h>
+#include <linux/regmap.h>
+
+#include "../../../irqchip/irq-msi-lib.h"
+
+#include "pcie-cadence.h"
+
+/*
+ * SG2042 PCIe controller supports two ways to report MSI:
+ *
+ * - Method A, the PCIe controller implements an MSI interrupt controller
+ *   inside, and connect to PLIC upward through one interrupt line.
+ *   Provides memory-mapped MSI address, and by programming the upper 32
+ *   bits of the address to zero, it can be compatible with old PCIe devices
+ *   that only support 32-bit MSI address.
+ *
+ * - Method B, the PCIe controller connects to PLIC upward through an
+ *   independent MSI controller "sophgo,sg2042-msi" on the SOC. The MSI
+ *   controller provides multiple(up to 32) interrupt sources to PLIC.
+ *   Compared with the first method, the advantage is that the interrupt
+ *   source is expanded, but because for SG2042, the MSI address provided by
+ *   the MSI controller is fixed and only supports 64-bit address(> 2^32),
+ *   it is not compatible with old PCIe devices that only support 32-bit MSI
+ *   address.
+ *
+ * Method A & B can be configured in DTS, default is Method B.
+ */
+
+#define MAX_MSI_IRQS		8
+#define MAX_MSI_IRQS_PER_CTRL	1
+#define MAX_MSI_CTRLS		(MAX_MSI_IRQS / MAX_MSI_IRQS_PER_CTRL)
+#define MSI_DEF_NUM_VECTORS	MAX_MSI_IRQS
+#define BYTE_NUM_PER_MSI_VEC	4
+
+#define REG_CLEAR		0x0804
+#define REG_STATUS		0x0810
+#define REG_LINK0_MSI_ADDR_SIZE	0x085C
+#define REG_LINK1_MSI_ADDR_SIZE	0x080C
+#define REG_LINK0_MSI_ADDR_LOW	0x0860
+#define REG_LINK0_MSI_ADDR_HIGH	0x0864
+#define REG_LINK1_MSI_ADDR_LOW	0x0868
+#define REG_LINK1_MSI_ADDR_HIGH	0x086C
+
+#define REG_CLEAR_LINK0_BIT	2
+#define REG_CLEAR_LINK1_BIT	3
+#define REG_STATUS_LINK0_BIT	2
+#define REG_STATUS_LINK1_BIT	3
+
+#define REG_LINK0_MSI_ADDR_SIZE_MASK	GENMASK(15, 0)
+#define REG_LINK1_MSI_ADDR_SIZE_MASK	GENMASK(31, 16)
+
+struct sg2042_pcie {
+	struct cdns_pcie	*cdns_pcie;
+
+	struct regmap		*syscon;
+
+	u32			port;
+
+	struct irq_domain	*msi_domain;
+
+	int			msi_irq;
+
+	dma_addr_t		msi_phys;
+	void			*msi_virt;
+
+	u32			num_applied_vecs; /* used to speed up ISR */
+
+	raw_spinlock_t		msi_lock;
+	DECLARE_BITMAP(msi_irq_in_use, MAX_MSI_IRQS);
+};
+
+static void sg2042_pcie_msi_clear_status(struct sg2042_pcie *pcie)
+{
+	u32 status, clr_msi_in_bit;
+
+	if (pcie->port == 1)
+		clr_msi_in_bit = BIT(REG_CLEAR_LINK1_BIT);
+	else
+		clr_msi_in_bit = BIT(REG_CLEAR_LINK0_BIT);
+
+	regmap_read(pcie->syscon, REG_CLEAR, &status);
+	status |= clr_msi_in_bit;
+	regmap_write(pcie->syscon, REG_CLEAR, status);
+
+	/* need write 0 to reset, hardware can not reset automatically */
+	status &= ~clr_msi_in_bit;
+	regmap_write(pcie->syscon, REG_CLEAR, status);
+}
+
+#ifdef CONFIG_SMP
+static int sg2042_pcie_msi_irq_set_affinity(struct irq_data *d,
+					    const struct cpumask *mask,
+					    bool force)
+{
+	if (d->parent_data)
+		return irq_chip_set_affinity_parent(d, mask, force);
+
+	return -EINVAL;
+}
+#endif /* CONFIG_SMP */
+
+static void sg2042_pcie_msi_irq_compose_msi_msg(struct irq_data *d,
+						struct msi_msg *msg)
+{
+	struct sg2042_pcie *pcie = irq_data_get_irq_chip_data(d);
+	struct device *dev = pcie->cdns_pcie->dev;
+
+	msg->address_lo = lower_32_bits(pcie->msi_phys) + BYTE_NUM_PER_MSI_VEC * d->hwirq;
+	msg->address_hi = upper_32_bits(pcie->msi_phys);
+	msg->data = 1;
+
+	if (d->hwirq > pcie->num_applied_vecs)
+		pcie->num_applied_vecs = d->hwirq;
+
+	dev_dbg(dev, "compose MSI msg hwirq[%ld] address_hi[%#x] address_lo[%#x]\n",
+		d->hwirq, msg->address_hi, msg->address_lo);
+}
+
+static void sg2042_pcie_msi_irq_ack(struct irq_data *d)
+{
+	struct sg2042_pcie *pcie = irq_data_get_irq_chip_data(d);
+
+	sg2042_pcie_msi_clear_status(pcie);
+}
+
+static struct irq_chip sg2042_pcie_msi_bottom_chip = {
+	.name			= "SG2042 PCIe PLIC-MSI translator",
+	.irq_ack		= sg2042_pcie_msi_irq_ack,
+	.irq_compose_msi_msg	= sg2042_pcie_msi_irq_compose_msi_msg,
+#ifdef CONFIG_SMP
+	.irq_set_affinity	= sg2042_pcie_msi_irq_set_affinity,
+#endif
+};
+
+static int sg2042_pcie_irq_domain_alloc(struct irq_domain *domain,
+					unsigned int virq, unsigned int nr_irqs,
+					void *args)
+{
+	struct sg2042_pcie *pcie = domain->host_data;
+	unsigned long flags;
+	u32 i;
+	int bit;
+
+	raw_spin_lock_irqsave(&pcie->msi_lock, flags);
+
+	bit = bitmap_find_free_region(pcie->msi_irq_in_use, MSI_DEF_NUM_VECTORS,
+				      order_base_2(nr_irqs));
+
+	raw_spin_unlock_irqrestore(&pcie->msi_lock, flags);
+
+	if (bit < 0)
+		return -ENOSPC;
+
+	for (i = 0; i < nr_irqs; i++)
+		irq_domain_set_info(domain, virq + i, bit + i,
+				    &sg2042_pcie_msi_bottom_chip,
+				    pcie, handle_edge_irq,
+				    NULL, NULL);
+
+	return 0;
+}
+
+static void sg2042_pcie_irq_domain_free(struct irq_domain *domain,
+					unsigned int virq, unsigned int nr_irqs)
+{
+	struct irq_data *d = irq_domain_get_irq_data(domain, virq);
+	struct sg2042_pcie *pcie = irq_data_get_irq_chip_data(d);
+	unsigned long flags;
+
+	raw_spin_lock_irqsave(&pcie->msi_lock, flags);
+
+	bitmap_release_region(pcie->msi_irq_in_use, d->hwirq,
+			      order_base_2(nr_irqs));
+
+	raw_spin_unlock_irqrestore(&pcie->msi_lock, flags);
+}
+
+static const struct irq_domain_ops sg2042_pcie_msi_domain_ops = {
+	.alloc	= sg2042_pcie_irq_domain_alloc,
+	.free	= sg2042_pcie_irq_domain_free,
+};
+
+static int sg2042_pcie_init_msi_data(struct sg2042_pcie *pcie)
+{
+	struct device *dev = pcie->cdns_pcie->dev;
+	u32 value;
+	int ret;
+
+	raw_spin_lock_init(&pcie->msi_lock);
+
+	/*
+	 * Though the PCIe controller can address >32-bit address space, to
+	 * facilitate endpoints that support only 32-bit MSI target address,
+	 * the mask is set to 32-bit to make sure that MSI target address is
+	 * always a 32-bit address
+	 */
+	ret = dma_set_coherent_mask(dev, DMA_BIT_MASK(32));
+	if (ret < 0)
+		return ret;
+
+	pcie->msi_virt = dma_alloc_coherent(dev, BYTE_NUM_PER_MSI_VEC * MAX_MSI_IRQS,
+					    &pcie->msi_phys, GFP_KERNEL);
+	if (!pcie->msi_virt)
+		return -ENOMEM;
+
+	/* Program the MSI address and size */
+	if (pcie->port == 1) {
+		regmap_write(pcie->syscon, REG_LINK1_MSI_ADDR_LOW,
+			     lower_32_bits(pcie->msi_phys));
+		regmap_write(pcie->syscon, REG_LINK1_MSI_ADDR_HIGH,
+			     upper_32_bits(pcie->msi_phys));
+
+		regmap_read(pcie->syscon, REG_LINK1_MSI_ADDR_SIZE, &value);
+		value = (value & REG_LINK1_MSI_ADDR_SIZE_MASK) | MAX_MSI_IRQS;
+		regmap_write(pcie->syscon, REG_LINK1_MSI_ADDR_SIZE, value);
+	} else {
+		regmap_write(pcie->syscon, REG_LINK0_MSI_ADDR_LOW,
+			     lower_32_bits(pcie->msi_phys));
+		regmap_write(pcie->syscon, REG_LINK0_MSI_ADDR_HIGH,
+			     upper_32_bits(pcie->msi_phys));
+
+		regmap_read(pcie->syscon, REG_LINK0_MSI_ADDR_SIZE, &value);
+		value = (value & REG_LINK0_MSI_ADDR_SIZE_MASK) | (MAX_MSI_IRQS << 16);
+		regmap_write(pcie->syscon, REG_LINK0_MSI_ADDR_SIZE, value);
+	}
+
+	return 0;
+}
+
+static irqreturn_t sg2042_pcie_msi_handle_irq(struct sg2042_pcie *pcie)
+{
+	u32 i, pos;
+	unsigned long val;
+	u32 status, num_vectors;
+	irqreturn_t ret = IRQ_NONE;
+
+	num_vectors = pcie->num_applied_vecs;
+	for (i = 0; i <= num_vectors; i++) {
+		status = readl((void *)(pcie->msi_virt + i * BYTE_NUM_PER_MSI_VEC));
+		if (!status)
+			continue;
+
+		ret = IRQ_HANDLED;
+		val = status;
+		pos = 0;
+		while ((pos = find_next_bit(&val, MAX_MSI_IRQS_PER_CTRL,
+					    pos)) != MAX_MSI_IRQS_PER_CTRL) {
+			generic_handle_domain_irq(pcie->msi_domain,
+						  (i * MAX_MSI_IRQS_PER_CTRL) +
+						  pos);
+			pos++;
+		}
+		writel(0, ((void *)(pcie->msi_virt) + i * BYTE_NUM_PER_MSI_VEC));
+	}
+	return ret;
+}
+
+static void sg2042_pcie_msi_chained_isr(struct irq_desc *desc)
+{
+	struct irq_chip *chip = irq_desc_get_chip(desc);
+	u32 status, st_msi_in_bit;
+	struct sg2042_pcie *pcie;
+
+	chained_irq_enter(chip, desc);
+
+	pcie = irq_desc_get_handler_data(desc);
+	if (pcie->port == 1)
+		st_msi_in_bit = REG_STATUS_LINK1_BIT;
+	else
+		st_msi_in_bit = REG_STATUS_LINK0_BIT;
+
+	regmap_read(pcie->syscon, REG_STATUS, &status);
+	if ((status >> st_msi_in_bit) & 0x1) {
+		sg2042_pcie_msi_clear_status(pcie);
+
+		sg2042_pcie_msi_handle_irq(pcie);
+	}
+
+	chained_irq_exit(chip, desc);
+}
+
+#define SG2042_PCIE_MSI_FLAGS_REQUIRED (MSI_FLAG_USE_DEF_DOM_OPS |	\
+					MSI_FLAG_USE_DEF_CHIP_OPS)
+
+#define SG2042_PCIE_MSI_FLAGS_SUPPORTED MSI_GENERIC_FLAGS_MASK
+
+static struct msi_parent_ops sg2042_pcie_msi_parent_ops = {
+	.required_flags		= SG2042_PCIE_MSI_FLAGS_REQUIRED,
+	.supported_flags	= SG2042_PCIE_MSI_FLAGS_SUPPORTED,
+	.bus_select_mask	= MATCH_PCI_MSI,
+	.bus_select_token	= DOMAIN_BUS_NEXUS,
+	.prefix			= "SG2042-",
+	.init_dev_msi_info	= msi_lib_init_dev_msi_info,
+};
+
+static int sg2042_pcie_setup_msi(struct sg2042_pcie *pcie, struct device_node *msi_node)
+{
+	struct device *dev = pcie->cdns_pcie->dev;
+	struct fwnode_handle *fwnode = of_node_to_fwnode(dev->of_node);
+	struct irq_domain *parent_domain;
+	int ret = 0;
+
+	if (!of_property_read_bool(msi_node, "msi-controller"))
+		return -ENODEV;
+
+	ret = of_irq_get_byname(msi_node, "msi");
+	if (ret <= 0) {
+		dev_err(dev, "%pOF: failed to get MSI irq\n", msi_node);
+		return ret;
+	}
+	pcie->msi_irq = ret;
+
+	irq_set_chained_handler_and_data(pcie->msi_irq,
+					 sg2042_pcie_msi_chained_isr, pcie);
+
+	parent_domain = irq_domain_create_linear(fwnode, MSI_DEF_NUM_VECTORS,
+						 &sg2042_pcie_msi_domain_ops, pcie);
+	if (!parent_domain) {
+		dev_err(dev, "%pfw: Failed to create IRQ domain\n", fwnode);
+		return -ENOMEM;
+	}
+	irq_domain_update_bus_token(parent_domain, DOMAIN_BUS_NEXUS);
+
+	parent_domain->flags |= IRQ_DOMAIN_FLAG_MSI_PARENT;
+	parent_domain->msi_parent_ops = &sg2042_pcie_msi_parent_ops;
+
+	pcie->msi_domain = parent_domain;
+
+	ret = sg2042_pcie_init_msi_data(pcie);
+	if (ret) {
+		dev_err(dev, "Failed to initialize MSI data!\n");
+		return ret;
+	}
+
+	return 0;
+}
+
+static void sg2042_pcie_free_msi(struct sg2042_pcie *pcie)
+{
+	struct device *dev = pcie->cdns_pcie->dev;
+
+	if (pcie->msi_irq)
+		irq_set_chained_handler_and_data(pcie->msi_irq, NULL, NULL);
+
+	if (pcie->msi_virt)
+		dma_free_coherent(dev, BYTE_NUM_PER_MSI_VEC * MAX_MSI_IRQS,
+				  pcie->msi_virt, pcie->msi_phys);
+}
+
+/*
+ * SG2042 only support 4-byte aligned access, so for the rootbus (i.e. to read
+ * the PCIe controller itself, read32 is required. For non-rootbus (i.e. to read
+ * the PCIe peripheral registers, supports 1/2/4 byte aligned access, so
+ * directly use read should be fine.
+ *
+ * The same is true for write.
+ */
+static int sg2042_pcie_config_read(struct pci_bus *bus, unsigned int devfn,
+				   int where, int size, u32 *value)
+{
+	if (pci_is_root_bus(bus))
+		return pci_generic_config_read32(bus, devfn, where, size,
+						 value);
+
+	return pci_generic_config_read(bus, devfn, where, size, value);
+}
+
+static int sg2042_pcie_config_write(struct pci_bus *bus, unsigned int devfn,
+				    int where, int size, u32 value)
+{
+	if (pci_is_root_bus(bus))
+		return pci_generic_config_write32(bus, devfn, where, size,
+						  value);
+
+	return pci_generic_config_write(bus, devfn, where, size, value);
+}
+
+static struct pci_ops sg2042_pcie_host_ops = {
+	.map_bus	= cdns_pci_map_bus,
+	.read		= sg2042_pcie_config_read,
+	.write		= sg2042_pcie_config_write,
+};
+
+/* Dummy ops which will be assigned to cdns_pcie.ops, which must be !NULL. */
+static const struct cdns_pcie_ops sg2042_cdns_pcie_ops = {};
+
+static int sg2042_pcie_probe(struct platform_device *pdev)
+{
+	struct device *dev = &pdev->dev;
+	struct device_node *np = dev->of_node;
+	struct pci_host_bridge *bridge;
+	struct device_node *np_syscon;
+	struct device_node *msi_node;
+	struct cdns_pcie *cdns_pcie;
+	struct sg2042_pcie *pcie;
+	struct cdns_pcie_rc *rc;
+	struct regmap *syscon;
+	int ret;
+
+	if (!IS_ENABLED(CONFIG_PCIE_CADENCE_HOST))
+		return -ENODEV;
+
+	pcie = devm_kzalloc(dev, sizeof(*pcie), GFP_KERNEL);
+	if (!pcie)
+		return -ENOMEM;
+
+	bridge = devm_pci_alloc_host_bridge(dev, sizeof(*rc));
+	if (!bridge) {
+		dev_err(dev, "Failed to alloc host bridge!\n");
+		return -ENOMEM;
+	}
+
+	bridge->ops = &sg2042_pcie_host_ops;
+
+	rc = pci_host_bridge_priv(bridge);
+	cdns_pcie = &rc->pcie;
+	cdns_pcie->dev = dev;
+	cdns_pcie->ops = &sg2042_cdns_pcie_ops;
+	pcie->cdns_pcie = cdns_pcie;
+
+	np_syscon = of_parse_phandle(np, "sophgo,syscon-pcie-ctrl", 0);
+	if (!np_syscon) {
+		dev_err(dev, "Failed to get syscon node\n");
+		return -ENOMEM;
+	}
+	syscon = syscon_node_to_regmap(np_syscon);
+	if (IS_ERR(syscon)) {
+		dev_err(dev, "Failed to get regmap for syscon\n");
+		return -ENOMEM;
+	}
+	pcie->syscon = syscon;
+
+	if (of_property_read_u32(np, "sophgo,pcie-port", &pcie->port)) {
+		dev_err(dev, "Unable to parse sophgo,pcie-port\n");
+		return -EINVAL;
+	}
+
+	platform_set_drvdata(pdev, pcie);
+
+	pm_runtime_enable(dev);
+
+	ret = pm_runtime_get_sync(dev);
+	if (ret < 0) {
+		dev_err(dev, "pm_runtime_get_sync failed\n");
+		goto err_get_sync;
+	}
+
+	msi_node = of_parse_phandle(dev->of_node, "msi-parent", 0);
+	if (!msi_node) {
+		dev_err(dev, "Failed to get msi-parent!\n");
+		return -1;
+	}
+
+	if (of_device_is_compatible(msi_node, "sophgo,sg2042-pcie-msi")) {
+		ret = sg2042_pcie_setup_msi(pcie, msi_node);
+		if (ret < 0)
+			goto err_setup_msi;
+	}
+
+	ret = cdns_pcie_init_phy(dev, cdns_pcie);
+	if (ret) {
+		dev_err(dev, "Failed to init phy!\n");
+		goto err_setup_msi;
+	}
+
+	ret = cdns_pcie_host_setup(rc);
+	if (ret < 0) {
+		dev_err(dev, "Failed to setup host!\n");
+		goto err_host_setup;
+	}
+
+	return 0;
+
+err_host_setup:
+	cdns_pcie_disable_phy(cdns_pcie);
+
+err_setup_msi:
+	sg2042_pcie_free_msi(pcie);
+
+err_get_sync:
+	pm_runtime_put(dev);
+	pm_runtime_disable(dev);
+
+	return ret;
+}
+
+static void sg2042_pcie_shutdown(struct platform_device *pdev)
+{
+	struct sg2042_pcie *pcie = platform_get_drvdata(pdev);
+	struct cdns_pcie *cdns_pcie = pcie->cdns_pcie;
+	struct device *dev = &pdev->dev;
+
+	sg2042_pcie_free_msi(pcie);
+
+	cdns_pcie_disable_phy(cdns_pcie);
+
+	pm_runtime_put(dev);
+	pm_runtime_disable(dev);
+}
+
+static const struct of_device_id sg2042_pcie_of_match[] = {
+	{ .compatible = "sophgo,sg2042-pcie-host" },
+	{},
+};
+
+static struct platform_driver sg2042_pcie_driver = {
+	.driver = {
+		.name		= "sg2042-pcie",
+		.of_match_table	= sg2042_pcie_of_match,
+		.pm		= &cdns_pcie_pm_ops,
+	},
+	.probe		= sg2042_pcie_probe,
+	.shutdown	= sg2042_pcie_shutdown,
+};
+builtin_platform_driver(sg2042_pcie_driver);
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 21+ messages in thread

* [PATCH v2 3/5] dt-bindings: mfd: syscon: Add sg2042 pcie ctrl compatible
  2024-12-09  7:19 [PATCH v2 0/5] Add PCIe support to Sophgo SG2042 SoC Chen Wang
  2024-12-09  7:19 ` [PATCH v2 1/5] dt-bindings: pci: Add Sophgo SG2042 PCIe host Chen Wang
  2024-12-09  7:19 ` [PATCH v2 2/5] PCI: sg2042: Add Sophgo SG2042 PCIe driver Chen Wang
@ 2024-12-09  7:20 ` Chen Wang
  2024-12-10 17:32   ` Bjorn Helgaas
  2024-12-09  7:20 ` [PATCH v2 4/5] riscv: sophgo: dts: add pcie controllers for SG2042 Chen Wang
  2024-12-09  7:20 ` [PATCH v2 5/5] riscv: sophgo: dts: enable pcie for PioneerBox Chen Wang
  4 siblings, 1 reply; 21+ messages in thread
From: Chen Wang @ 2024-12-09  7:20 UTC (permalink / raw)
  To: kw, u.kleine-koenig, aou, arnd, bhelgaas, unicorn_wang, conor+dt,
	guoren, inochiama, krzk+dt, lee, lpieralisi,
	manivannan.sadhasivam, palmer, paul.walmsley, pbrobinson, robh,
	devicetree, linux-kernel, linux-pci, linux-riscv, chao.wei,
	xiaoguang.xing, fengchun.li, helgaas

From: Chen Wang <unicorn_wang@outlook.com>

Document SOPHGO SG2042 compatible for PCIe control registers.
These registers are shared by pcie controller nodes.

Signed-off-by: Chen Wang <unicorn_wang@outlook.com>
Acked-by: Rob Herring (Arm) <robh@kernel.org>
---
 Documentation/devicetree/bindings/mfd/syscon.yaml | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/Documentation/devicetree/bindings/mfd/syscon.yaml b/Documentation/devicetree/bindings/mfd/syscon.yaml
index b414de4fa779..afd89aa0ae8b 100644
--- a/Documentation/devicetree/bindings/mfd/syscon.yaml
+++ b/Documentation/devicetree/bindings/mfd/syscon.yaml
@@ -107,6 +107,7 @@ select:
           - rockchip,rk3576-qos
           - rockchip,rk3588-qos
           - rockchip,rv1126-qos
+          - sophgo,sg2042-pcie-ctrl
           - st,spear1340-misc
           - stericsson,nomadik-pmu
           - starfive,jh7100-sysmain
@@ -205,6 +206,7 @@ properties:
           - rockchip,rk3576-qos
           - rockchip,rk3588-qos
           - rockchip,rv1126-qos
+          - sophgo,sg2042-pcie-ctrl
           - st,spear1340-misc
           - stericsson,nomadik-pmu
           - starfive,jh7100-sysmain
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 21+ messages in thread

* [PATCH v2 4/5] riscv: sophgo: dts: add pcie controllers for SG2042
  2024-12-09  7:19 [PATCH v2 0/5] Add PCIe support to Sophgo SG2042 SoC Chen Wang
                   ` (2 preceding siblings ...)
  2024-12-09  7:20 ` [PATCH v2 3/5] dt-bindings: mfd: syscon: Add sg2042 pcie ctrl compatible Chen Wang
@ 2024-12-09  7:20 ` Chen Wang
  2024-12-09  7:20 ` [PATCH v2 5/5] riscv: sophgo: dts: enable pcie for PioneerBox Chen Wang
  4 siblings, 0 replies; 21+ messages in thread
From: Chen Wang @ 2024-12-09  7:20 UTC (permalink / raw)
  To: kw, u.kleine-koenig, aou, arnd, bhelgaas, unicorn_wang, conor+dt,
	guoren, inochiama, krzk+dt, lee, lpieralisi,
	manivannan.sadhasivam, palmer, paul.walmsley, pbrobinson, robh,
	devicetree, linux-kernel, linux-pci, linux-riscv, chao.wei,
	xiaoguang.xing, fengchun.li, helgaas

From: Chen Wang <unicorn_wang@outlook.com>

Add PCIe controller nodes to the DTS for Sophgo SG2042.
They are disabled by default.

Signed-off-by: Chen Wang <unicorn_wang@outlook.com>
---
 arch/riscv/boot/dts/sophgo/sg2042.dtsi | 89 ++++++++++++++++++++++++++
 1 file changed, 89 insertions(+)

diff --git a/arch/riscv/boot/dts/sophgo/sg2042.dtsi b/arch/riscv/boot/dts/sophgo/sg2042.dtsi
index e62ac51ac55a..bbb7cabab9de 100644
--- a/arch/riscv/boot/dts/sophgo/sg2042.dtsi
+++ b/arch/riscv/boot/dts/sophgo/sg2042.dtsi
@@ -195,6 +195,95 @@ clkgen: clock-controller@7030012000 {
 			#clock-cells = <1>;
 		};
 
+		pcie_rc0: pcie@7060000000 {
+			compatible = "sophgo,sg2042-pcie-host";
+			device_type = "pci";
+			reg = <0x70 0x60000000  0x0 0x02000000>,
+			      <0x40 0x00000000  0x0 0x00001000>;
+			reg-names = "reg", "cfg";
+			linux,pci-domain = <0>;
+			#address-cells = <3>;
+			#size-cells = <2>;
+			ranges = <0x01000000 0x0  0xc0000000  0x40 0xc0000000  0x0 0x00400000>,
+				 <0x42000000 0x0  0xd0000000  0x40 0xd0000000  0x0 0x10000000>,
+				 <0x02000000 0x0  0xe0000000  0x40 0xe0000000  0x0 0x20000000>,
+				 <0x43000000 0x42 0x00000000  0x42 0x00000000  0x2 0x00000000>,
+				 <0x03000000 0x41 0x00000000  0x41 0x00000000  0x1 0x00000000>;
+			bus-range = <0x0 0x3f>;
+			vendor-id = <0x1f1c>;
+			device-id = <0x2042>;
+			cdns,no-bar-match-nbits = <48>;
+			sophgo,pcie-port = <0>;
+			sophgo,syscon-pcie-ctrl = <&cdns_pcie0_ctrl>;
+			msi-parent = <&msi>;
+			status = "disabled";
+		};
+
+		cdns_pcie0_ctrl: syscon@7061800000 {
+			compatible = "sophgo,sg2042-pcie-ctrl", "syscon";
+			reg = <0x70 0x61800000 0x0 0x800000>;
+		};
+
+		pcie_rc1: pcie@7062000000 {
+			compatible = "sophgo,sg2042-pcie-host";
+			device_type = "pci";
+			reg = <0x70 0x62000000  0x0 0x00800000>,
+			      <0x48 0x00000000  0x0 0x00001000>;
+			reg-names = "reg", "cfg";
+			linux,pci-domain = <1>;
+			#address-cells = <3>;
+			#size-cells = <2>;
+			ranges = <0x01000000 0x0  0xc0800000  0x48 0xc0800000  0x0 0x00400000>,
+				 <0x42000000 0x0  0xd0000000  0x48 0xd0000000  0x0 0x10000000>,
+				 <0x02000000 0x0  0xe0000000  0x48 0xe0000000  0x0 0x20000000>,
+				 <0x03000000 0x49 0x00000000  0x49 0x00000000  0x1 0x00000000>,
+				 <0x43000000 0x4a 0x00000000  0x4a 0x00000000  0x2 0x00000000>;
+			bus-range = <0x80 0xbf>;
+			vendor-id = <0x1f1c>;
+			device-id = <0x2042>;
+			cdns,no-bar-match-nbits = <48>;
+			sophgo,pcie-port = <0>;
+			sophgo,syscon-pcie-ctrl = <&cdns_pcie1_ctrl>;
+			msi-parent = <&msi_pcie>;
+			status = "disabled";
+			msi_pcie: msi {
+				compatible = "sophgo,sg2042-pcie-msi";
+				msi-controller;
+				interrupt-parent = <&intc>;
+				interrupts = <123 IRQ_TYPE_LEVEL_HIGH>;
+				interrupt-names = "msi";
+			};
+		};
+
+		pcie_rc2: pcie@7062800000 {
+			compatible = "sophgo,sg2042-pcie-host";
+			device_type = "pci";
+			reg = <0x70 0x62800000  0x0 0x00800000>,
+			      <0x4c 0x00000000  0x0 0x00001000>;
+			reg-names = "reg", "cfg";
+			linux,pci-domain = <2>;
+			#address-cells = <3>;
+			#size-cells = <2>;
+			ranges = <0x01000000 0x0  0xc0c00000  0x4c 0xc0c00000  0x0 0x00400000>,
+				 <0x42000000 0x0  0xf8000000  0x4c 0xf8000000  0x0 0x04000000>,
+				 <0x02000000 0x0  0xfc000000  0x4c 0xfc000000  0x0 0x04000000>,
+				 <0x43000000 0x4e 0x00000000  0x4e 0x00000000  0x2 0x00000000>,
+				 <0x03000000 0x4d 0x00000000  0x4d 0x00000000  0x1 0x00000000>;
+			bus-range = <0xc0 0xff>;
+			vendor-id = <0x1f1c>;
+			device-id = <0x2042>;
+			cdns,no-bar-match-nbits = <48>;
+			sophgo,pcie-port = <1>;
+			sophgo,syscon-pcie-ctrl = <&cdns_pcie1_ctrl>;
+			msi-parent = <&msi>;
+			status = "disabled";
+		};
+
+		cdns_pcie1_ctrl: syscon@7063800000 {
+			compatible = "sophgo,sg2042-pcie-ctrl", "syscon";
+			reg = <0x70 0x63800000 0x0 0x800000>;
+		};
+
 		clint_mswi: interrupt-controller@7094000000 {
 			compatible = "sophgo,sg2042-aclint-mswi", "thead,c900-aclint-mswi";
 			reg = <0x00000070 0x94000000 0x00000000 0x00004000>;
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 21+ messages in thread

* [PATCH v2 5/5] riscv: sophgo: dts: enable pcie for PioneerBox
  2024-12-09  7:19 [PATCH v2 0/5] Add PCIe support to Sophgo SG2042 SoC Chen Wang
                   ` (3 preceding siblings ...)
  2024-12-09  7:20 ` [PATCH v2 4/5] riscv: sophgo: dts: add pcie controllers for SG2042 Chen Wang
@ 2024-12-09  7:20 ` Chen Wang
  4 siblings, 0 replies; 21+ messages in thread
From: Chen Wang @ 2024-12-09  7:20 UTC (permalink / raw)
  To: kw, u.kleine-koenig, aou, arnd, bhelgaas, unicorn_wang, conor+dt,
	guoren, inochiama, krzk+dt, lee, lpieralisi,
	manivannan.sadhasivam, palmer, paul.walmsley, pbrobinson, robh,
	devicetree, linux-kernel, linux-pci, linux-riscv, chao.wei,
	xiaoguang.xing, fengchun.li, helgaas

From: Chen Wang <unicorn_wang@outlook.com>

Enable the PCIe controllers for the PioneerBox, which uses the SG2042 SoC.

Signed-off-by: Chen Wang <unicorn_wang@outlook.com>
---
 arch/riscv/boot/dts/sophgo/sg2042-milkv-pioneer.dts | 12 ++++++++++++
 1 file changed, 12 insertions(+)

diff --git a/arch/riscv/boot/dts/sophgo/sg2042-milkv-pioneer.dts b/arch/riscv/boot/dts/sophgo/sg2042-milkv-pioneer.dts
index be596d01ff8d..e63445cc7d18 100644
--- a/arch/riscv/boot/dts/sophgo/sg2042-milkv-pioneer.dts
+++ b/arch/riscv/boot/dts/sophgo/sg2042-milkv-pioneer.dts
@@ -60,6 +60,18 @@ mcu: syscon@17 {
 	};
 };
 
+&pcie_rc0 {
+	status = "okay";
+};
+
+&pcie_rc1 {
+	status = "okay";
+};
+
+&pcie_rc2 {
+	status = "okay";
+};
+
 &sd {
 	bus-width = <4>;
 	no-sdio;
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 21+ messages in thread

* Re: [PATCH v2 2/5] PCI: sg2042: Add Sophgo SG2042 PCIe driver
  2024-12-09  7:19 ` [PATCH v2 2/5] PCI: sg2042: Add Sophgo SG2042 PCIe driver Chen Wang
@ 2024-12-10 17:31   ` Bjorn Helgaas
  2024-12-19  3:23     ` Chen Wang
  2024-12-15  9:17   ` kernel test robot
  2024-12-15 12:04   ` kernel test robot
  2 siblings, 1 reply; 21+ messages in thread
From: Bjorn Helgaas @ 2024-12-10 17:31 UTC (permalink / raw)
  To: Chen Wang
  Cc: kw, u.kleine-koenig, aou, arnd, bhelgaas, unicorn_wang, conor+dt,
	guoren, inochiama, krzk+dt, lee, lpieralisi,
	manivannan.sadhasivam, palmer, paul.walmsley, pbrobinson, robh,
	devicetree, linux-kernel, linux-pci, linux-riscv, chao.wei,
	xiaoguang.xing, fengchun.li

On Mon, Dec 09, 2024 at 03:19:57PM +0800, Chen Wang wrote:
> From: Chen Wang <unicorn_wang@outlook.com>
> 
> Add support for PCIe controller in SG2042 SoC. The controller
> uses the Cadence PCIe core programmed by pcie-cadence*.c. The
> PCIe controller will work in host mode only.

> +++ b/drivers/pci/controller/cadence/Makefile
> @@ -4,3 +4,4 @@ obj-$(CONFIG_PCIE_CADENCE_HOST) += pcie-cadence-host.o
>  obj-$(CONFIG_PCIE_CADENCE_EP) += pcie-cadence-ep.o
>  obj-$(CONFIG_PCIE_CADENCE_PLAT) += pcie-cadence-plat.o
>  obj-$(CONFIG_PCI_J721E) += pci-j721e.o
> +obj-$(CONFIG_PCIE_SG2042) += pcie-sg2042.o
> \ No newline at end of file

Add the newline.

> +++ b/drivers/pci/controller/cadence/pcie-sg2042.c

> +#include "../../../irqchip/irq-msi-lib.h"

This is the only file outside drivers/irqchip/ that includes this.
What's special about this driver?  Maybe this is a hint that something
here belongs in drivers/irqchip/?

> +#ifdef CONFIG_SMP

No other drivers test CONFIG_SMP, why should this be different?

> +static int sg2042_pcie_msi_irq_set_affinity(struct irq_data *d,
> +					    const struct cpumask *mask,
> +					    bool force)
> +{
> +	if (d->parent_data)
> +		return irq_chip_set_affinity_parent(d, mask, force);
> +
> +	return -EINVAL;
> +}
> +#endif /* CONFIG_SMP */

> +static int sg2042_pcie_init_msi_data(struct sg2042_pcie *pcie)
> +{
> +	struct device *dev = pcie->cdns_pcie->dev;
> +	u32 value;
> +	int ret;
> +
> +	raw_spin_lock_init(&pcie->msi_lock);
> +
> +	/*
> +	 * Though the PCIe controller can address >32-bit address space, to
> +	 * facilitate endpoints that support only 32-bit MSI target address,
> +	 * the mask is set to 32-bit to make sure that MSI target address is
> +	 * always a 32-bit address
> +	 */
> +	ret = dma_set_coherent_mask(dev, DMA_BIT_MASK(32));

Not sure this is needed.  Does DT dma-ranges not cover this?

> +static int sg2042_pcie_setup_msi(struct sg2042_pcie *pcie, struct device_node *msi_node)

Wrap to fit in 80 columns like the rest.

> +/*
> + * SG2042 only support 4-byte aligned access, so for the rootbus (i.e. to read
> + * the PCIe controller itself, read32 is required. For non-rootbus (i.e. to read

s/PCIe controller/Root Port/

> + * the PCIe peripheral registers, supports 1/2/4 byte aligned access, so
> + * directly use read should be fine.

s/use read/using read/

> +static int sg2042_pcie_probe(struct platform_device *pdev)
> +{
> +	struct device *dev = &pdev->dev;
> +	struct device_node *np = dev->of_node;
> +	struct pci_host_bridge *bridge;
> +	struct device_node *np_syscon;
> +	struct device_node *msi_node;
> +	struct cdns_pcie *cdns_pcie;
> +	struct sg2042_pcie *pcie;
> +	struct cdns_pcie_rc *rc;
> +	struct regmap *syscon;
> +	int ret;
> +
> +	if (!IS_ENABLED(CONFIG_PCIE_CADENCE_HOST))
> +		return -ENODEV;

I don't think this is needed since CONFIG_PCIE_SG2042 selects
PCIE_CADENCE_HOST.

Bjorn

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH v2 3/5] dt-bindings: mfd: syscon: Add sg2042 pcie ctrl compatible
  2024-12-09  7:20 ` [PATCH v2 3/5] dt-bindings: mfd: syscon: Add sg2042 pcie ctrl compatible Chen Wang
@ 2024-12-10 17:32   ` Bjorn Helgaas
  0 siblings, 0 replies; 21+ messages in thread
From: Bjorn Helgaas @ 2024-12-10 17:32 UTC (permalink / raw)
  To: Chen Wang
  Cc: kw, u.kleine-koenig, aou, arnd, bhelgaas, unicorn_wang, conor+dt,
	guoren, inochiama, krzk+dt, lee, lpieralisi,
	manivannan.sadhasivam, palmer, paul.walmsley, pbrobinson, robh,
	devicetree, linux-kernel, linux-pci, linux-riscv, chao.wei,
	xiaoguang.xing, fengchun.li

On Mon, Dec 09, 2024 at 03:20:14PM +0800, Chen Wang wrote:
> From: Chen Wang <unicorn_wang@outlook.com>
> 
> Document SOPHGO SG2042 compatible for PCIe control registers.
> These registers are shared by pcie controller nodes.

s/pcie/PCIe/ to be consistent (also in subject and other patches).

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH v2 1/5] dt-bindings: pci: Add Sophgo SG2042 PCIe host
  2024-12-09  7:19 ` [PATCH v2 1/5] dt-bindings: pci: Add Sophgo SG2042 PCIe host Chen Wang
@ 2024-12-10 17:33   ` Bjorn Helgaas
  2024-12-11  9:00     ` Chen Wang
  2024-12-19  2:34     ` Chen Wang
  0 siblings, 2 replies; 21+ messages in thread
From: Bjorn Helgaas @ 2024-12-10 17:33 UTC (permalink / raw)
  To: Chen Wang
  Cc: kw, u.kleine-koenig, aou, arnd, bhelgaas, unicorn_wang, conor+dt,
	guoren, inochiama, krzk+dt, lee, lpieralisi,
	manivannan.sadhasivam, palmer, paul.walmsley, pbrobinson, robh,
	devicetree, linux-kernel, linux-pci, linux-riscv, chao.wei,
	xiaoguang.xing, fengchun.li

On Mon, Dec 09, 2024 at 03:19:38PM +0800, Chen Wang wrote:
> Add binding for Sophgo SG2042 PCIe host controller.

> +  sophgo,pcie-port:

This is just an index, isn't it?  I don't see why it should include
"sophgo" unless it encodes something sophgo-specific.

> +    $ref: /schemas/types.yaml#/definitions/uint32
> +    description: |
> +      SG2042 uses Cadence IP, every IP is composed of 2 cores(called link0

Add space before "(".  More instances below.

> +      & link1 as Cadence's term). "sophgo,pcie-port" is used to identify which
> +      core/link the pcie host controller node corresponds to.

s/pcie/PCIe/ for consistency in the text.  More instances below.

> +      The Cadence IP has two modes of operation, selected by a strap pin.
> +
> +      In the single-link mode, the Cadence PCIe core instance associated
> +      with Link0 is connected to all the lanes and the Cadence PCIe core
> +      instance associated with Link1 is inactive.
> +
> +      In the dual-link mode, the Cadence PCIe core instance associated
> +      with Link0 is connected to the lower half of the lanes and the
> +      Cadence PCIe core instance associated with Link1 is connected to
> +      the upper half of the lanes.

I assume this means there are two separate Root Ports, one for Link0
and a second for Link1?

> +      SG2042 contains 2 Cadence IPs and configures the Cores as below:
> +
> +                     +-- Core(Link0) <---> pcie_rc0   +-----------------+
> +                     |                                |                 |
> +      Cadence IP 1 --+                                | cdns_pcie0_ctrl |
> +                     |                                |                 |
> +                     +-- Core(Link1) <---> disabled   +-----------------+
> +
> +                     +-- Core(Link0) <---> pcie_rc1   +-----------------+
> +                     |                                |                 |
> +      Cadence IP 2 --+                                | cdns_pcie1_ctrl |
> +                     |                                |                 |
> +                     +-- Core(Link1) <---> pcie_rc2   +-----------------+
> +
> +      pcie_rcX is pcie node ("sophgo,sg2042-pcie-host") defined in DTS.
> +      cdns_pcie0_ctrl is syscon node ("sophgo,sg2042-pcie-ctrl") defined in DTS
> +
> +      cdns_pcieX_ctrl contains some registers shared by pcie_rcX, even two
> +      RC(Link)s may share different bits of the same register. For example,
> +      cdns_pcie1_ctrl contains registers shared by link0 & link1 for Cadence IP 2.

An RC doesn't have a Link.  A Root Port does.

> +      "sophgo,pcie-port" is defined to flag which core(link) the rc maps to, with
> +      this we can know what registers(bits) we should use.
> +
> +  sophgo,syscon-pcie-ctrl:
> +    $ref: /schemas/types.yaml#/definitions/phandle
> +    description:
> +      Phandle to the PCIe System Controller DT node. It's required to
> +      access some MSI operation registers shared by PCIe RCs.

I think this probably means "shared by PCIe Root Ports", not RCs.
It's unlikely that this hardware has multiple Root Complexes.

> +required:
> +  - compatible
> +  - reg
> +  - reg-names
> +  - vendor-id
> +  - device-id
> +  - sophgo,syscon-pcie-ctrl
> +  - sophgo,pcie-port

It looks like vendor-id and device-id apply to PCI devices, i.e.,
things that will show up in lspci, I assume Root Ports in this case.
Can we make this explicit in the DT, e.g., something like this?

  pcie@62000000 {
    compatible = "sophgo,sg2042-pcie-host";
    port0: pci@0,0 {
      vendor-id = <0x1f1c>;
      device-id = <0x2042>;
    };

> +additionalProperties: true
> +
> +examples:
> +  - |
> +    #include <dt-bindings/interrupt-controller/irq.h>
> +
> +    pcie@62000000 {
> +      compatible = "sophgo,sg2042-pcie-host";
> +      device_type = "pci";
> +      reg = <0x62000000  0x00800000>,
> +            <0x48000000  0x00001000>;
> +      reg-names = "reg", "cfg";
> +      #address-cells = <3>;
> +      #size-cells = <2>;
> +      ranges = <0x81000000 0 0x00000000 0xde000000 0 0x00010000>,
> +               <0x82000000 0 0xd0400000 0xd0400000 0 0x0d000000>;
> +      bus-range = <0x80 0xbf>;
> +      vendor-id = <0x1f1c>;
> +      device-id = <0x2042>;
> +      cdns,no-bar-match-nbits = <48>;
> +      sophgo,pcie-port = <0>;
> +      sophgo,syscon-pcie-ctrl = <&cdns_pcie1_ctrl>;
> +      msi-parent = <&msi_pcie>;
> +      msi_pcie: msi {
> +        compatible = "sophgo,sg2042-pcie-msi";
> +        msi-controller;
> +        interrupt-parent = <&intc>;
> +        interrupts = <123 IRQ_TYPE_LEVEL_HIGH>;
> +        interrupt-names = "msi";
> +      };
> +    };
> -- 
> 2.34.1
> 

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH v2 1/5] dt-bindings: pci: Add Sophgo SG2042 PCIe host
  2024-12-10 17:33   ` Bjorn Helgaas
@ 2024-12-11  9:00     ` Chen Wang
  2024-12-11 19:20       ` Bjorn Helgaas
  2024-12-19  2:34     ` Chen Wang
  1 sibling, 1 reply; 21+ messages in thread
From: Chen Wang @ 2024-12-11  9:00 UTC (permalink / raw)
  To: Bjorn Helgaas, Chen Wang
  Cc: kw, u.kleine-koenig, aou, arnd, bhelgaas, conor+dt, guoren,
	inochiama, krzk+dt, lee, lpieralisi, manivannan.sadhasivam,
	palmer, paul.walmsley, pbrobinson, robh, devicetree, linux-kernel,
	linux-pci, linux-riscv, chao.wei, xiaoguang.xing, fengchun.li


On 2024/12/11 1:33, Bjorn Helgaas wrote:
> On Mon, Dec 09, 2024 at 03:19:38PM +0800, Chen Wang wrote:
>> Add binding for Sophgo SG2042 PCIe host controller.
>> +  sophgo,pcie-port:
> This is just an index, isn't it?  I don't see why it should include
> "sophgo" unless it encodes something sophgo-specific.
My previous understanding was that any property not defined by the
standard DT schema, such as pcie-port which I defined myself, must be
prefixed with the vendor name. Is that right?
>
>> +    $ref: /schemas/types.yaml#/definitions/uint32
>> +    description: |
>> +      SG2042 uses Cadence IP, every IP is composed of 2 cores(called link0
> Add space before "(".  More instances below.
ok
>
>> +      & link1 as Cadence's term). "sophgo,pcie-port" is used to identify which
>> +      core/link the pcie host controller node corresponds to.
> s/pcie/PCIe/ for consistency in the text.  More instances below.
ok
>
>> +      The Cadence IP has two modes of operation, selected by a strap pin.
>> +
>> +      In the single-link mode, the Cadence PCIe core instance associated
>> +      with Link0 is connected to all the lanes and the Cadence PCIe core
>> +      instance associated with Link1 is inactive.
>> +
>> +      In the dual-link mode, the Cadence PCIe core instance associated
>> +      with Link0 is connected to the lower half of the lanes and the
>> +      Cadence PCIe core instance associated with Link1 is connected to
>> +      the upper half of the lanes.
> I assume this means there are two separate Root Ports, one for Link0
> and a second for Link1?
Yes. So the naming of pcie_rcX is wrong; I will correct it, thanks.
>
>> +      SG2042 contains 2 Cadence IPs and configures the Cores as below:
>> +
>> +                     +-- Core(Link0) <---> pcie_rc0   +-----------------+
>> +                     |                                |                 |
>> +      Cadence IP 1 --+                                | cdns_pcie0_ctrl |
>> +                     |                                |                 |
>> +                     +-- Core(Link1) <---> disabled   +-----------------+
>> +
>> +                     +-- Core(Link0) <---> pcie_rc1   +-----------------+
>> +                     |                                |                 |
>> +      Cadence IP 2 --+                                | cdns_pcie1_ctrl |
>> +                     |                                |                 |
>> +                     +-- Core(Link1) <---> pcie_rc2   +-----------------+
>> +
>> +      pcie_rcX is pcie node ("sophgo,sg2042-pcie-host") defined in DTS.
>> +      cdns_pcie0_ctrl is syscon node ("sophgo,sg2042-pcie-ctrl") defined in DTS
>> +
>> +      cdns_pcieX_ctrl contains some registers shared by pcie_rcX, even two
>> +      RC(Link)s may share different bits of the same register. For example,
>> +      cdns_pcie1_ctrl contains registers shared by link0 & link1 for Cadence IP 2.
> An RC doesn't have a Link.  A Root Port does.
>
>> +      "sophgo,pcie-port" is defined to flag which core(link) the rc maps to, with
>> +      this we can know what registers(bits) we should use.
>> +
>> +  sophgo,syscon-pcie-ctrl:
>> +    $ref: /schemas/types.yaml#/definitions/phandle
>> +    description:
>> +      Phandle to the PCIe System Controller DT node. It's required to
>> +      access some MSI operation registers shared by PCIe RCs.
> I think this probably means "shared by PCIe Root Ports", not RCs.
> It's unlikely that this hardware has multiple Root Complexes.
Ok, I will fix this.
>
>> +required:
>> +  - compatible
>> +  - reg
>> +  - reg-names
>> +  - vendor-id
>> +  - device-id
>> +  - sophgo,syscon-pcie-ctrl
>> +  - sophgo,pcie-port
> It looks like vendor-id and device-id apply to PCI devices, i.e.,
> things that will show up in lspci, I assume Root Ports in this case.
> Can we make this explicit in the DT, e.g., something like this?
>
>    pcie@62000000 {
>      compatible = "sophgo,sg2042-pcie-host";
>      port0: pci@0,0 {
>        vendor-id = <0x1f1c>;
>        device-id = <0x2042>;
>      };

Sorry, I don't understand your meaning very well.  Referring to the 
topology diagram I drew above, is it okay to write DTS as follows?

pcie@7060000000 {
     compatible = "sophgo,sg2042-pcie-host";
     ...... // other properties
     pci@0,0 {
       vendor-id = <0x1f1c>;
       device-id = <0x2042>;
     };
}

pcie@7062000000 {
     compatible = "sophgo,sg2042-pcie-host";
     ...... // other properties
     pci@0,0 {
       vendor-id = <0x1f1c>;
       device-id = <0x2042>;
     };
}

pcie@7062800000 {
     compatible = "sophgo,sg2042-pcie-host";
     ...... // other properties
     pci@1,0 {
       vendor-id = <0x1f1c>;
       device-id = <0x2042>;
     };

}

And with this change, I can drop the “pcie-port” property and use the
port name to figure out the port number, right?


>> +additionalProperties: true
>> +
>> +examples:
>> +  - |
>> +    #include <dt-bindings/interrupt-controller/irq.h>
>> +
>> +    pcie@62000000 {
>> +      compatible = "sophgo,sg2042-pcie-host";
>> +      device_type = "pci";
>> +      reg = <0x62000000  0x00800000>,
>> +            <0x48000000  0x00001000>;
>> +      reg-names = "reg", "cfg";
>> +      #address-cells = <3>;
>> +      #size-cells = <2>;
>> +      ranges = <0x81000000 0 0x00000000 0xde000000 0 0x00010000>,
>> +               <0x82000000 0 0xd0400000 0xd0400000 0 0x0d000000>;
>> +      bus-range = <0x80 0xbf>;
>> +      vendor-id = <0x1f1c>;
>> +      device-id = <0x2042>;
>> +      cdns,no-bar-match-nbits = <48>;
>> +      sophgo,pcie-port = <0>;
>> +      sophgo,syscon-pcie-ctrl = <&cdns_pcie1_ctrl>;
>> +      msi-parent = <&msi_pcie>;
>> +      msi_pcie: msi {
>> +        compatible = "sophgo,sg2042-pcie-msi";
>> +        msi-controller;
>> +        interrupt-parent = <&intc>;
>> +        interrupts = <123 IRQ_TYPE_LEVEL_HIGH>;
>> +        interrupt-names = "msi";
>> +      };
>> +    };
>> -- 
>> 2.34.1
>>

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH v2 1/5] dt-bindings: pci: Add Sophgo SG2042 PCIe host
  2024-12-11  9:00     ` Chen Wang
@ 2024-12-11 19:20       ` Bjorn Helgaas
  2024-12-17 13:10         ` Rob Herring
  0 siblings, 1 reply; 21+ messages in thread
From: Bjorn Helgaas @ 2024-12-11 19:20 UTC (permalink / raw)
  To: Chen Wang, Rob Herring, Krzysztof Kozlowski, Conor Dooley
  Cc: Chen Wang, kw, u.kleine-koenig, aou, arnd, bhelgaas, guoren,
	inochiama, lee, lpieralisi, manivannan.sadhasivam, palmer,
	paul.walmsley, pbrobinson, devicetree, linux-kernel, linux-pci,
	linux-riscv, chao.wei, xiaoguang.xing, fengchun.li

[cc->to: Rob, Krzysztof, Conor because I'm not a DT expert and I'd
like their thoughts on this idea of describing Root Ports as separate
children]

On Wed, Dec 11, 2024 at 05:00:44PM +0800, Chen Wang wrote:
> On 2024/12/11 1:33, Bjorn Helgaas wrote:
> > On Mon, Dec 09, 2024 at 03:19:38PM +0800, Chen Wang wrote:

> > > +      The Cadence IP has two modes of operation, selected by a strap pin.
> > > +
> > > +      In the single-link mode, the Cadence PCIe core instance associated
> > > +      with Link0 is connected to all the lanes and the Cadence PCIe core
> > > +      instance associated with Link1 is inactive.
> > > +
> > > +      In the dual-link mode, the Cadence PCIe core instance associated
> > > +      with Link0 is connected to the lower half of the lanes and the
> > > +      Cadence PCIe core instance associated with Link1 is connected to
> > > +      the upper half of the lanes.

> > > +      SG2042 contains 2 Cadence IPs and configures the Cores as below:
> > > +
> > > +                     +-- Core(Link0) <---> pcie_rc0   +-----------------+
> > > +                     |                                |                 |
> > > +      Cadence IP 1 --+                                | cdns_pcie0_ctrl |
> > > +                     |                                |                 |
> > > +                     +-- Core(Link1) <---> disabled   +-----------------+
> > > +
> > > +                     +-- Core(Link0) <---> pcie_rc1   +-----------------+
> > > +                     |                                |                 |
> > > +      Cadence IP 2 --+                                | cdns_pcie1_ctrl |
> > > +                     |                                |                 |
> > > +                     +-- Core(Link1) <---> pcie_rc2   +-----------------+
> > > +
> > > +      pcie_rcX is pcie node ("sophgo,sg2042-pcie-host") defined in DTS.
> > > +      cdns_pcie0_ctrl is syscon node ("sophgo,sg2042-pcie-ctrl") defined in DTS
> > > +
> > > +      cdns_pcieX_ctrl contains some registers shared by pcie_rcX, even two
> > > +      RC(Link)s may share different bits of the same register. For example,
> > > +      cdns_pcie1_ctrl contains registers shared by link0 & link1 for Cadence IP 2.

> > > +      "sophgo,pcie-port" is defined to flag which core(link) the rc maps to, with
> > > +      this we can know what registers(bits) we should use.

> > > +required:
> > > +  - compatible
> > > +  - reg
> > > +  - reg-names
> > > +  - vendor-id
> > > +  - device-id
> > > +  - sophgo,syscon-pcie-ctrl
> > > +  - sophgo,pcie-port
> >
> > It looks like vendor-id and device-id apply to PCI devices, i.e.,
> > things that will show up in lspci, I assume Root Ports in this case.
> > Can we make this explicit in the DT, e.g., something like this?
> > 
> >    pcie@62000000 {
> >      compatible = "sophgo,sg2042-pcie-host";
> >      port0: pci@0,0 {
> >        vendor-id = <0x1f1c>;
> >        device-id = <0x2042>;
> >      };
> 
> Sorry, I don't understand your meaning very well.  Referring to the topology
> diagram I drew above, is it okay to write DTS as follows?
> 
> pcie@7060000000 {
>     compatible = "sophgo,sg2042-pcie-host";
>     ...... // other properties
>     pci@0,0 {
>       vendor-id = <0x1f1c>;
>       device-id = <0x2042>;
>     };
> }
> 
> pcie@7062000000 {
>     compatible = "sophgo,sg2042-pcie-host";
>     ...... // other properties
>     pci@0,0 {
>       vendor-id = <0x1f1c>;
>       device-id = <0x2042>;
>     };
> }
> 
> pcie@7062800000 {
>     compatible = "sophgo,sg2042-pcie-host";
>     ...... // other properties
>     pci@1,0 {
>       vendor-id = <0x1f1c>;
>       device-id = <0x2042>;
>     };
> 
> }

Generally makes sense to me.  I'm suggesting that we should start
describing Root Ports as children of the host bridge node instead of 
mixing their properties into the host bridge itself.

Some properties apply to the host bridge, e.g., "bus-range" describes
the bus number aperture, and "ranges" describes the address
translation between the upstream CPU address space and the PCI address
space.

Others apply specifically to a Root Port, e.g., "num-lanes",
"max-link-speed", "phys", "vendor-id", "device-id".  I think it will
help if we can describe these in separate children, especially when
there are multiple Root Ports.

Documentation/devicetree/bindings/pci/pci.txt says a Root Port
should include a reg property that contains the bus/device/function
number of the RP, e.g.,
Documentation/devicetree/bindings/pci/nvidia,tegra20-pcie.txt has
this:

  pcie-controller@3000 {
     compatible = "nvidia,tegra30-pcie";
     pci@1,0 {
       reg = <0x000800 0 0 0 0>;
     };

where the "0x000800 0 0 0 0" means the "pci@1,0" Root Port is at
00:01.0 (bus 00, device 01, function 0).  I don't know what the "@1,0"
part means.

> And with this change, I can drop the “pcie-port”property and use the port
> name to figure out the port number, right?

Seems likely to me.
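
To make the reg encoding concrete, here is a minimal sketch (illustration
only, reusing the IDs from the example quoted below) of a host bridge
with an explicit Root Port child:

  pcie@62000000 {
    compatible = "sophgo,sg2042-pcie-host";
    device_type = "pci";
    #address-cells = <3>;
    #size-cells = <2>;

    /* Unit address is "pci@<device>,<function>" in hex; reg encodes
     * the RP's bus/device/function in phys.hi
     * (bus << 16 | device << 11 | function << 8), so 00:00.0 here.
     */
    pci@0,0 {
      reg = <0x0000 0 0 0 0>;
      device_type = "pci";
      vendor-id = <0x1f1c>;
      device-id = <0x2042>;
    };
  };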

> > > +additionalProperties: true
> > > +
> > > +examples:
> > > +  - |
> > > +    #include <dt-bindings/interrupt-controller/irq.h>
> > > +
> > > +    pcie@62000000 {
> > > +      compatible = "sophgo,sg2042-pcie-host";
> > > +      device_type = "pci";
> > > +      reg = <0x62000000  0x00800000>,
> > > +            <0x48000000  0x00001000>;
> > > +      reg-names = "reg", "cfg";
> > > +      #address-cells = <3>;
> > > +      #size-cells = <2>;
> > > +      ranges = <0x81000000 0 0x00000000 0xde000000 0 0x00010000>,
> > > +               <0x82000000 0 0xd0400000 0xd0400000 0 0x0d000000>;
> > > +      bus-range = <0x80 0xbf>;
> > > +      vendor-id = <0x1f1c>;
> > > +      device-id = <0x2042>;
> > > +      cdns,no-bar-match-nbits = <48>;
> > > +      sophgo,pcie-port = <0>;
> > > +      sophgo,syscon-pcie-ctrl = <&cdns_pcie1_ctrl>;
> > > +      msi-parent = <&msi_pcie>;
> > > +      msi_pcie: msi {
> > > +        compatible = "sophgo,sg2042-pcie-msi";
> > > +        msi-controller;
> > > +        interrupt-parent = <&intc>;
> > > +        interrupts = <123 IRQ_TYPE_LEVEL_HIGH>;
> > > +        interrupt-names = "msi";
> > > +      };
> > > +    };

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH v2 2/5] PCI: sg2042: Add Sophgo SG2042 PCIe driver
  2024-12-09  7:19 ` [PATCH v2 2/5] PCI: sg2042: Add Sophgo SG2042 PCIe driver Chen Wang
  2024-12-10 17:31   ` Bjorn Helgaas
@ 2024-12-15  9:17   ` kernel test robot
  2024-12-15 12:04   ` kernel test robot
  2 siblings, 0 replies; 21+ messages in thread
From: kernel test robot @ 2024-12-15  9:17 UTC (permalink / raw)
  To: Chen Wang, kw, u.kleine-koenig, aou, arnd, bhelgaas, unicorn_wang,
	conor+dt, guoren, inochiama, krzk+dt, lee, lpieralisi,
	manivannan.sadhasivam, palmer, paul.walmsley, pbrobinson, robh,
	devicetree, linux-kernel, linux-pci, linux-riscv, chao.wei,
	xiaoguang.xing, fengchun.li, helgaas
  Cc: oe-kbuild-all

Hi Chen,

kernel test robot noticed the following build warnings:

[auto build test WARNING on fac04efc5c793dccbd07e2d59af9f90b7fc0dca4]

url:    https://github.com/intel-lab-lkp/linux/commits/Chen-Wang/dt-bindings-pci-Add-Sophgo-SG2042-PCIe-host/20241209-152613
base:   fac04efc5c793dccbd07e2d59af9f90b7fc0dca4
patch link:    https://lore.kernel.org/r/1d82eff3670f60df24228e5c83cf663c6dd61eaf.1733726572.git.unicorn_wang%40outlook.com
patch subject: [PATCH v2 2/5] PCI: sg2042: Add Sophgo SG2042 PCIe driver
config: sh-randconfig-r071-20241215 (https://download.01.org/0day-ci/archive/20241215/202412151758.8ckkzHnf-lkp@intel.com/config)
compiler: sh4-linux-gcc (GCC) 14.2.0
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20241215/202412151758.8ckkzHnf-lkp@intel.com/reproduce)

If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <lkp@intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202412151758.8ckkzHnf-lkp@intel.com/

All warnings (new ones prefixed by >>):

   In file included from drivers/pci/controller/cadence/pcie-sg2042.c:23:
>> drivers/pci/controller/cadence/../../../irqchip/irq-msi-lib.h:25:39: warning: 'struct msi_domain_info' declared inside parameter list will not be visible outside of this definition or declaration
      25 |                                struct msi_domain_info *info);
         |                                       ^~~~~~~~~~~~~~~
   drivers/pci/controller/cadence/pcie-sg2042.c:306:15: error: variable 'sg2042_pcie_msi_parent_ops' has initializer but incomplete type
     306 | static struct msi_parent_ops sg2042_pcie_msi_parent_ops = {
         |               ^~~~~~~~~~~~~~
   drivers/pci/controller/cadence/pcie-sg2042.c:307:10: error: 'struct msi_parent_ops' has no member named 'required_flags'
     307 |         .required_flags         = SG2042_PCIE_MSI_FLAGS_REQUIRED,
         |          ^~~~~~~~~~~~~~
   drivers/pci/controller/cadence/pcie-sg2042.c:301:41: error: 'MSI_FLAG_USE_DEF_DOM_OPS' undeclared here (not in a function)
     301 | #define SG2042_PCIE_MSI_FLAGS_REQUIRED (MSI_FLAG_USE_DEF_DOM_OPS |      \
         |                                         ^~~~~~~~~~~~~~~~~~~~~~~~
   drivers/pci/controller/cadence/pcie-sg2042.c:307:35: note: in expansion of macro 'SG2042_PCIE_MSI_FLAGS_REQUIRED'
     307 |         .required_flags         = SG2042_PCIE_MSI_FLAGS_REQUIRED,
         |                                   ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
   drivers/pci/controller/cadence/pcie-sg2042.c:302:41: error: 'MSI_FLAG_USE_DEF_CHIP_OPS' undeclared here (not in a function)
     302 |                                         MSI_FLAG_USE_DEF_CHIP_OPS)
         |                                         ^~~~~~~~~~~~~~~~~~~~~~~~~
   drivers/pci/controller/cadence/pcie-sg2042.c:307:35: note: in expansion of macro 'SG2042_PCIE_MSI_FLAGS_REQUIRED'
     307 |         .required_flags         = SG2042_PCIE_MSI_FLAGS_REQUIRED,
         |                                   ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
>> drivers/pci/controller/cadence/pcie-sg2042.c:301:40: warning: excess elements in struct initializer
     301 | #define SG2042_PCIE_MSI_FLAGS_REQUIRED (MSI_FLAG_USE_DEF_DOM_OPS |      \
         |                                        ^
   drivers/pci/controller/cadence/pcie-sg2042.c:307:35: note: in expansion of macro 'SG2042_PCIE_MSI_FLAGS_REQUIRED'
     307 |         .required_flags         = SG2042_PCIE_MSI_FLAGS_REQUIRED,
         |                                   ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
   drivers/pci/controller/cadence/pcie-sg2042.c:301:40: note: (near initialization for 'sg2042_pcie_msi_parent_ops')
     301 | #define SG2042_PCIE_MSI_FLAGS_REQUIRED (MSI_FLAG_USE_DEF_DOM_OPS |      \
         |                                        ^
   drivers/pci/controller/cadence/pcie-sg2042.c:307:35: note: in expansion of macro 'SG2042_PCIE_MSI_FLAGS_REQUIRED'
     307 |         .required_flags         = SG2042_PCIE_MSI_FLAGS_REQUIRED,
         |                                   ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
   drivers/pci/controller/cadence/pcie-sg2042.c:308:10: error: 'struct msi_parent_ops' has no member named 'supported_flags'
     308 |         .supported_flags        = SG2042_PCIE_MSI_FLAGS_SUPPORTED,
         |          ^~~~~~~~~~~~~~~
   drivers/pci/controller/cadence/pcie-sg2042.c:304:41: error: 'MSI_GENERIC_FLAGS_MASK' undeclared here (not in a function)
     304 | #define SG2042_PCIE_MSI_FLAGS_SUPPORTED MSI_GENERIC_FLAGS_MASK
         |                                         ^~~~~~~~~~~~~~~~~~~~~~
   drivers/pci/controller/cadence/pcie-sg2042.c:308:35: note: in expansion of macro 'SG2042_PCIE_MSI_FLAGS_SUPPORTED'
     308 |         .supported_flags        = SG2042_PCIE_MSI_FLAGS_SUPPORTED,
         |                                   ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
   drivers/pci/controller/cadence/pcie-sg2042.c:304:41: warning: excess elements in struct initializer
     304 | #define SG2042_PCIE_MSI_FLAGS_SUPPORTED MSI_GENERIC_FLAGS_MASK
         |                                         ^~~~~~~~~~~~~~~~~~~~~~
   drivers/pci/controller/cadence/pcie-sg2042.c:308:35: note: in expansion of macro 'SG2042_PCIE_MSI_FLAGS_SUPPORTED'
     308 |         .supported_flags        = SG2042_PCIE_MSI_FLAGS_SUPPORTED,
         |                                   ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
   drivers/pci/controller/cadence/pcie-sg2042.c:304:41: note: (near initialization for 'sg2042_pcie_msi_parent_ops')
     304 | #define SG2042_PCIE_MSI_FLAGS_SUPPORTED MSI_GENERIC_FLAGS_MASK
         |                                         ^~~~~~~~~~~~~~~~~~~~~~
   drivers/pci/controller/cadence/pcie-sg2042.c:308:35: note: in expansion of macro 'SG2042_PCIE_MSI_FLAGS_SUPPORTED'
     308 |         .supported_flags        = SG2042_PCIE_MSI_FLAGS_SUPPORTED,
         |                                   ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
   drivers/pci/controller/cadence/pcie-sg2042.c:309:10: error: 'struct msi_parent_ops' has no member named 'bus_select_mask'
     309 |         .bus_select_mask        = MATCH_PCI_MSI,
         |          ^~~~~~~~~~~~~~~
>> drivers/pci/controller/cadence/../../../irqchip/irq-msi-lib.h:15:33: warning: excess elements in struct initializer
      15 | #define MATCH_PCI_MSI           (0)
         |                                 ^
   drivers/pci/controller/cadence/pcie-sg2042.c:309:35: note: in expansion of macro 'MATCH_PCI_MSI'
     309 |         .bus_select_mask        = MATCH_PCI_MSI,
         |                                   ^~~~~~~~~~~~~
   drivers/pci/controller/cadence/../../../irqchip/irq-msi-lib.h:15:33: note: (near initialization for 'sg2042_pcie_msi_parent_ops')
      15 | #define MATCH_PCI_MSI           (0)
         |                                 ^
   drivers/pci/controller/cadence/pcie-sg2042.c:309:35: note: in expansion of macro 'MATCH_PCI_MSI'
     309 |         .bus_select_mask        = MATCH_PCI_MSI,
         |                                   ^~~~~~~~~~~~~
   drivers/pci/controller/cadence/pcie-sg2042.c:310:10: error: 'struct msi_parent_ops' has no member named 'bus_select_token'
     310 |         .bus_select_token       = DOMAIN_BUS_NEXUS,
         |          ^~~~~~~~~~~~~~~~
   drivers/pci/controller/cadence/pcie-sg2042.c:310:35: warning: excess elements in struct initializer
     310 |         .bus_select_token       = DOMAIN_BUS_NEXUS,
         |                                   ^~~~~~~~~~~~~~~~
   drivers/pci/controller/cadence/pcie-sg2042.c:310:35: note: (near initialization for 'sg2042_pcie_msi_parent_ops')
   drivers/pci/controller/cadence/pcie-sg2042.c:311:10: error: 'struct msi_parent_ops' has no member named 'prefix'
     311 |         .prefix                 = "SG2042-",
         |          ^~~~~~
   drivers/pci/controller/cadence/pcie-sg2042.c:311:35: warning: excess elements in struct initializer
     311 |         .prefix                 = "SG2042-",
         |                                   ^~~~~~~~~
   drivers/pci/controller/cadence/pcie-sg2042.c:311:35: note: (near initialization for 'sg2042_pcie_msi_parent_ops')
   drivers/pci/controller/cadence/pcie-sg2042.c:312:10: error: 'struct msi_parent_ops' has no member named 'init_dev_msi_info'
     312 |         .init_dev_msi_info      = msi_lib_init_dev_msi_info,
         |          ^~~~~~~~~~~~~~~~~
   drivers/pci/controller/cadence/pcie-sg2042.c:312:35: warning: excess elements in struct initializer
     312 |         .init_dev_msi_info      = msi_lib_init_dev_msi_info,
         |                                   ^~~~~~~~~~~~~~~~~~~~~~~~~
   drivers/pci/controller/cadence/pcie-sg2042.c:312:35: note: (near initialization for 'sg2042_pcie_msi_parent_ops')
   drivers/pci/controller/cadence/pcie-sg2042.c: In function 'sg2042_pcie_setup_msi':
   drivers/pci/controller/cadence/pcie-sg2042.c:344:22: error: 'struct irq_domain' has no member named 'msi_parent_ops'
     344 |         parent_domain->msi_parent_ops = &sg2042_pcie_msi_parent_ops;
         |                      ^~
   drivers/pci/controller/cadence/pcie-sg2042.c: At top level:
   drivers/pci/controller/cadence/pcie-sg2042.c:306:30: error: storage size of 'sg2042_pcie_msi_parent_ops' isn't known
     306 | static struct msi_parent_ops sg2042_pcie_msi_parent_ops = {
         |                              ^~~~~~~~~~~~~~~~~~~~~~~~~~


vim +25 drivers/pci/controller/cadence/../../../irqchip/irq-msi-lib.h

72e257c6f058032 Thomas Gleixner 2024-06-23  11  
8c41ccec839c622 Thomas Gleixner 2024-06-23  12  #ifdef CONFIG_PCI_MSI
8c41ccec839c622 Thomas Gleixner 2024-06-23  13  #define MATCH_PCI_MSI		BIT(DOMAIN_BUS_PCI_MSI)
8c41ccec839c622 Thomas Gleixner 2024-06-23  14  #else
8c41ccec839c622 Thomas Gleixner 2024-06-23 @15  #define MATCH_PCI_MSI		(0)
8c41ccec839c622 Thomas Gleixner 2024-06-23  16  #endif
8c41ccec839c622 Thomas Gleixner 2024-06-23  17  
496436f4a514a3f Thomas Gleixner 2024-06-23  18  #define MATCH_PLATFORM_MSI	BIT(DOMAIN_BUS_PLATFORM_MSI)
496436f4a514a3f Thomas Gleixner 2024-06-23  19  
72e257c6f058032 Thomas Gleixner 2024-06-23  20  int msi_lib_irq_domain_select(struct irq_domain *d, struct irq_fwspec *fwspec,
72e257c6f058032 Thomas Gleixner 2024-06-23  21  			      enum irq_domain_bus_token bus_token);
72e257c6f058032 Thomas Gleixner 2024-06-23  22  
72e257c6f058032 Thomas Gleixner 2024-06-23  23  bool msi_lib_init_dev_msi_info(struct device *dev, struct irq_domain *domain,
72e257c6f058032 Thomas Gleixner 2024-06-23  24  			       struct irq_domain *real_parent,
72e257c6f058032 Thomas Gleixner 2024-06-23 @25  			       struct msi_domain_info *info);
72e257c6f058032 Thomas Gleixner 2024-06-23  26  

-- 
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH v2 2/5] PCI: sg2042: Add Sophgo SG2042 PCIe driver
  2024-12-09  7:19 ` [PATCH v2 2/5] PCI: sg2042: Add Sophgo SG2042 PCIe driver Chen Wang
  2024-12-10 17:31   ` Bjorn Helgaas
  2024-12-15  9:17   ` kernel test robot
@ 2024-12-15 12:04   ` kernel test robot
  2 siblings, 0 replies; 21+ messages in thread
From: kernel test robot @ 2024-12-15 12:04 UTC (permalink / raw)
  To: Chen Wang, kw, u.kleine-koenig, aou, arnd, bhelgaas, unicorn_wang,
	conor+dt, guoren, inochiama, krzk+dt, lee, lpieralisi,
	manivannan.sadhasivam, palmer, paul.walmsley, pbrobinson, robh,
	devicetree, linux-kernel, linux-pci, linux-riscv, chao.wei,
	xiaoguang.xing, fengchun.li, helgaas
  Cc: oe-kbuild-all

Hi Chen,

kernel test robot noticed the following build errors:

[auto build test ERROR on fac04efc5c793dccbd07e2d59af9f90b7fc0dca4]

url:    https://github.com/intel-lab-lkp/linux/commits/Chen-Wang/dt-bindings-pci-Add-Sophgo-SG2042-PCIe-host/20241209-152613
base:   fac04efc5c793dccbd07e2d59af9f90b7fc0dca4
patch link:    https://lore.kernel.org/r/1d82eff3670f60df24228e5c83cf663c6dd61eaf.1733726572.git.unicorn_wang%40outlook.com
patch subject: [PATCH v2 2/5] PCI: sg2042: Add Sophgo SG2042 PCIe driver
config: sh-randconfig-r071-20241215 (https://download.01.org/0day-ci/archive/20241215/202412151947.yDHMy3jh-lkp@intel.com/config)
compiler: sh4-linux-gcc (GCC) 14.2.0
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20241215/202412151947.yDHMy3jh-lkp@intel.com/reproduce)

If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <lkp@intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202412151947.yDHMy3jh-lkp@intel.com/

All errors (new ones prefixed by >>):

   In file included from drivers/pci/controller/cadence/pcie-sg2042.c:23:
   drivers/pci/controller/cadence/../../../irqchip/irq-msi-lib.h:25:39: warning: 'struct msi_domain_info' declared inside parameter list will not be visible outside of this definition or declaration
      25 |                                struct msi_domain_info *info);
         |                                       ^~~~~~~~~~~~~~~
>> drivers/pci/controller/cadence/pcie-sg2042.c:306:15: error: variable 'sg2042_pcie_msi_parent_ops' has initializer but incomplete type
     306 | static struct msi_parent_ops sg2042_pcie_msi_parent_ops = {
         |               ^~~~~~~~~~~~~~
>> drivers/pci/controller/cadence/pcie-sg2042.c:307:10: error: 'struct msi_parent_ops' has no member named 'required_flags'
     307 |         .required_flags         = SG2042_PCIE_MSI_FLAGS_REQUIRED,
         |          ^~~~~~~~~~~~~~
>> drivers/pci/controller/cadence/pcie-sg2042.c:301:41: error: 'MSI_FLAG_USE_DEF_DOM_OPS' undeclared here (not in a function)
     301 | #define SG2042_PCIE_MSI_FLAGS_REQUIRED (MSI_FLAG_USE_DEF_DOM_OPS |      \
         |                                         ^~~~~~~~~~~~~~~~~~~~~~~~
   drivers/pci/controller/cadence/pcie-sg2042.c:307:35: note: in expansion of macro 'SG2042_PCIE_MSI_FLAGS_REQUIRED'
     307 |         .required_flags         = SG2042_PCIE_MSI_FLAGS_REQUIRED,
         |                                   ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
>> drivers/pci/controller/cadence/pcie-sg2042.c:302:41: error: 'MSI_FLAG_USE_DEF_CHIP_OPS' undeclared here (not in a function)
     302 |                                         MSI_FLAG_USE_DEF_CHIP_OPS)
         |                                         ^~~~~~~~~~~~~~~~~~~~~~~~~
   drivers/pci/controller/cadence/pcie-sg2042.c:307:35: note: in expansion of macro 'SG2042_PCIE_MSI_FLAGS_REQUIRED'
     307 |         .required_flags         = SG2042_PCIE_MSI_FLAGS_REQUIRED,
         |                                   ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
   drivers/pci/controller/cadence/pcie-sg2042.c:301:40: warning: excess elements in struct initializer
     301 | #define SG2042_PCIE_MSI_FLAGS_REQUIRED (MSI_FLAG_USE_DEF_DOM_OPS |      \
         |                                        ^
   drivers/pci/controller/cadence/pcie-sg2042.c:307:35: note: in expansion of macro 'SG2042_PCIE_MSI_FLAGS_REQUIRED'
     307 |         .required_flags         = SG2042_PCIE_MSI_FLAGS_REQUIRED,
         |                                   ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
   drivers/pci/controller/cadence/pcie-sg2042.c:301:40: note: (near initialization for 'sg2042_pcie_msi_parent_ops')
     301 | #define SG2042_PCIE_MSI_FLAGS_REQUIRED (MSI_FLAG_USE_DEF_DOM_OPS |      \
         |                                        ^
   drivers/pci/controller/cadence/pcie-sg2042.c:307:35: note: in expansion of macro 'SG2042_PCIE_MSI_FLAGS_REQUIRED'
     307 |         .required_flags         = SG2042_PCIE_MSI_FLAGS_REQUIRED,
         |                                   ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
>> drivers/pci/controller/cadence/pcie-sg2042.c:308:10: error: 'struct msi_parent_ops' has no member named 'supported_flags'
     308 |         .supported_flags        = SG2042_PCIE_MSI_FLAGS_SUPPORTED,
         |          ^~~~~~~~~~~~~~~
>> drivers/pci/controller/cadence/pcie-sg2042.c:304:41: error: 'MSI_GENERIC_FLAGS_MASK' undeclared here (not in a function)
     304 | #define SG2042_PCIE_MSI_FLAGS_SUPPORTED MSI_GENERIC_FLAGS_MASK
         |                                         ^~~~~~~~~~~~~~~~~~~~~~
   drivers/pci/controller/cadence/pcie-sg2042.c:308:35: note: in expansion of macro 'SG2042_PCIE_MSI_FLAGS_SUPPORTED'
     308 |         .supported_flags        = SG2042_PCIE_MSI_FLAGS_SUPPORTED,
         |                                   ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
   drivers/pci/controller/cadence/pcie-sg2042.c:304:41: warning: excess elements in struct initializer
     304 | #define SG2042_PCIE_MSI_FLAGS_SUPPORTED MSI_GENERIC_FLAGS_MASK
         |                                         ^~~~~~~~~~~~~~~~~~~~~~
   drivers/pci/controller/cadence/pcie-sg2042.c:308:35: note: in expansion of macro 'SG2042_PCIE_MSI_FLAGS_SUPPORTED'
     308 |         .supported_flags        = SG2042_PCIE_MSI_FLAGS_SUPPORTED,
         |                                   ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
   drivers/pci/controller/cadence/pcie-sg2042.c:304:41: note: (near initialization for 'sg2042_pcie_msi_parent_ops')
     304 | #define SG2042_PCIE_MSI_FLAGS_SUPPORTED MSI_GENERIC_FLAGS_MASK
         |                                         ^~~~~~~~~~~~~~~~~~~~~~
   drivers/pci/controller/cadence/pcie-sg2042.c:308:35: note: in expansion of macro 'SG2042_PCIE_MSI_FLAGS_SUPPORTED'
     308 |         .supported_flags        = SG2042_PCIE_MSI_FLAGS_SUPPORTED,
         |                                   ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
>> drivers/pci/controller/cadence/pcie-sg2042.c:309:10: error: 'struct msi_parent_ops' has no member named 'bus_select_mask'
     309 |         .bus_select_mask        = MATCH_PCI_MSI,
         |          ^~~~~~~~~~~~~~~
   drivers/pci/controller/cadence/../../../irqchip/irq-msi-lib.h:15:33: warning: excess elements in struct initializer
      15 | #define MATCH_PCI_MSI           (0)
         |                                 ^
   drivers/pci/controller/cadence/pcie-sg2042.c:309:35: note: in expansion of macro 'MATCH_PCI_MSI'
     309 |         .bus_select_mask        = MATCH_PCI_MSI,
         |                                   ^~~~~~~~~~~~~
   drivers/pci/controller/cadence/../../../irqchip/irq-msi-lib.h:15:33: note: (near initialization for 'sg2042_pcie_msi_parent_ops')
      15 | #define MATCH_PCI_MSI           (0)
         |                                 ^
   drivers/pci/controller/cadence/pcie-sg2042.c:309:35: note: in expansion of macro 'MATCH_PCI_MSI'
     309 |         .bus_select_mask        = MATCH_PCI_MSI,
         |                                   ^~~~~~~~~~~~~
>> drivers/pci/controller/cadence/pcie-sg2042.c:310:10: error: 'struct msi_parent_ops' has no member named 'bus_select_token'
     310 |         .bus_select_token       = DOMAIN_BUS_NEXUS,
         |          ^~~~~~~~~~~~~~~~
   drivers/pci/controller/cadence/pcie-sg2042.c:310:35: warning: excess elements in struct initializer
     310 |         .bus_select_token       = DOMAIN_BUS_NEXUS,
         |                                   ^~~~~~~~~~~~~~~~
   drivers/pci/controller/cadence/pcie-sg2042.c:310:35: note: (near initialization for 'sg2042_pcie_msi_parent_ops')
>> drivers/pci/controller/cadence/pcie-sg2042.c:311:10: error: 'struct msi_parent_ops' has no member named 'prefix'
     311 |         .prefix                 = "SG2042-",
         |          ^~~~~~
   drivers/pci/controller/cadence/pcie-sg2042.c:311:35: warning: excess elements in struct initializer
     311 |         .prefix                 = "SG2042-",
         |                                   ^~~~~~~~~
   drivers/pci/controller/cadence/pcie-sg2042.c:311:35: note: (near initialization for 'sg2042_pcie_msi_parent_ops')
>> drivers/pci/controller/cadence/pcie-sg2042.c:312:10: error: 'struct msi_parent_ops' has no member named 'init_dev_msi_info'
     312 |         .init_dev_msi_info      = msi_lib_init_dev_msi_info,
         |          ^~~~~~~~~~~~~~~~~
   drivers/pci/controller/cadence/pcie-sg2042.c:312:35: warning: excess elements in struct initializer
     312 |         .init_dev_msi_info      = msi_lib_init_dev_msi_info,
         |                                   ^~~~~~~~~~~~~~~~~~~~~~~~~
   drivers/pci/controller/cadence/pcie-sg2042.c:312:35: note: (near initialization for 'sg2042_pcie_msi_parent_ops')
   drivers/pci/controller/cadence/pcie-sg2042.c: In function 'sg2042_pcie_setup_msi':
>> drivers/pci/controller/cadence/pcie-sg2042.c:344:22: error: 'struct irq_domain' has no member named 'msi_parent_ops'
     344 |         parent_domain->msi_parent_ops = &sg2042_pcie_msi_parent_ops;
         |                      ^~
   drivers/pci/controller/cadence/pcie-sg2042.c: At top level:
>> drivers/pci/controller/cadence/pcie-sg2042.c:306:30: error: storage size of 'sg2042_pcie_msi_parent_ops' isn't known
     306 | static struct msi_parent_ops sg2042_pcie_msi_parent_ops = {
         |                              ^~~~~~~~~~~~~~~~~~~~~~~~~~


vim +/sg2042_pcie_msi_parent_ops +306 drivers/pci/controller/cadence/pcie-sg2042.c

   300	
 > 301	#define SG2042_PCIE_MSI_FLAGS_REQUIRED (MSI_FLAG_USE_DEF_DOM_OPS |	\
 > 302						MSI_FLAG_USE_DEF_CHIP_OPS)
   303	
 > 304	#define SG2042_PCIE_MSI_FLAGS_SUPPORTED MSI_GENERIC_FLAGS_MASK
   305	
 > 306	static struct msi_parent_ops sg2042_pcie_msi_parent_ops = {
 > 307		.required_flags		= SG2042_PCIE_MSI_FLAGS_REQUIRED,
 > 308		.supported_flags	= SG2042_PCIE_MSI_FLAGS_SUPPORTED,
 > 309		.bus_select_mask	= MATCH_PCI_MSI,
 > 310		.bus_select_token	= DOMAIN_BUS_NEXUS,
 > 311		.prefix			= "SG2042-",
 > 312		.init_dev_msi_info	= msi_lib_init_dev_msi_info,
   313	};
   314	
   315	static int sg2042_pcie_setup_msi(struct sg2042_pcie *pcie, struct device_node *msi_node)
   316	{
   317		struct device *dev = pcie->cdns_pcie->dev;
   318		struct fwnode_handle *fwnode = of_node_to_fwnode(dev->of_node);
   319		struct irq_domain *parent_domain;
   320		int ret = 0;
   321	
   322		if (!of_property_read_bool(msi_node, "msi-controller"))
   323			return -ENODEV;
   324	
   325		ret = of_irq_get_byname(msi_node, "msi");
   326		if (ret <= 0) {
   327			dev_err(dev, "%pOF: failed to get MSI irq\n", msi_node);
   328			return ret;
   329		}
   330		pcie->msi_irq = ret;
   331	
   332		irq_set_chained_handler_and_data(pcie->msi_irq,
   333						 sg2042_pcie_msi_chained_isr, pcie);
   334	
   335		parent_domain = irq_domain_create_linear(fwnode, MSI_DEF_NUM_VECTORS,
   336							 &sg2042_pcie_msi_domain_ops, pcie);
   337		if (!parent_domain) {
   338			dev_err(dev, "%pfw: Failed to create IRQ domain\n", fwnode);
   339			return -ENOMEM;
   340		}
   341		irq_domain_update_bus_token(parent_domain, DOMAIN_BUS_NEXUS);
   342	
   343		parent_domain->flags |= IRQ_DOMAIN_FLAG_MSI_PARENT;
 > 344		parent_domain->msi_parent_ops = &sg2042_pcie_msi_parent_ops;
   345	
   346		pcie->msi_domain = parent_domain;
   347	
   348		ret = sg2042_pcie_init_msi_data(pcie);
   349		if (ret) {
   350			dev_err(dev, "Failed to initialize MSI data!\n");
   351			return ret;
   352		}
   353	
   354		return 0;
   355	}
   356	

-- 
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH v2 1/5] dt-bindings: pci: Add Sophgo SG2042 PCIe host
  2024-12-11 19:20       ` Bjorn Helgaas
@ 2024-12-17 13:10         ` Rob Herring
  0 siblings, 0 replies; 21+ messages in thread
From: Rob Herring @ 2024-12-17 13:10 UTC (permalink / raw)
  To: Bjorn Helgaas
  Cc: Chen Wang, Krzysztof Kozlowski, Conor Dooley, Chen Wang, kw,
	u.kleine-koenig, aou, arnd, bhelgaas, guoren, inochiama, lee,
	lpieralisi, manivannan.sadhasivam, palmer, paul.walmsley,
	pbrobinson, devicetree, linux-kernel, linux-pci, linux-riscv,
	chao.wei, xiaoguang.xing, fengchun.li

On Wed, Dec 11, 2024 at 01:20:14PM -0600, Bjorn Helgaas wrote:
> [cc->to: Rob, Krzysztof, Conor because I'm not a DT expert and I'd
> like their thoughts on this idea of describing Root Ports as separate
> children]
> 
> On Wed, Dec 11, 2024 at 05:00:44PM +0800, Chen Wang wrote:
> > On 2024/12/11 1:33, Bjorn Helgaas wrote:
> > > On Mon, Dec 09, 2024 at 03:19:38PM +0800, Chen Wang wrote:
> 
> > > > +      The Cadence IP has two modes of operation, selected by a strap pin.
> > > > +
> > > > +      In the single-link mode, the Cadence PCIe core instance associated
> > > > +      with Link0 is connected to all the lanes and the Cadence PCIe core
> > > > +      instance associated with Link1 is inactive.
> > > > +
> > > > +      In the dual-link mode, the Cadence PCIe core instance associated
> > > > +      with Link0 is connected to the lower half of the lanes and the
> > > > +      Cadence PCIe core instance associated with Link1 is connected to
> > > > +      the upper half of the lanes.
> 
> > > > +      SG2042 contains 2 Cadence IPs and configures the Cores as below:
> > > > +
> > > > +                     +-- Core(Link0) <---> pcie_rc0   +-----------------+
> > > > +                     |                                |                 |
> > > > +      Cadence IP 1 --+                                | cdns_pcie0_ctrl |
> > > > +                     |                                |                 |
> > > > +                     +-- Core(Link1) <---> disabled   +-----------------+
> > > > +
> > > > +                     +-- Core(Link0) <---> pcie_rc1   +-----------------+
> > > > +                     |                                |                 |
> > > > +      Cadence IP 2 --+                                | cdns_pcie1_ctrl |
> > > > +                     |                                |                 |
> > > > +                     +-- Core(Link1) <---> pcie_rc2   +-----------------+
> > > > +
> > > > +      pcie_rcX is pcie node ("sophgo,sg2042-pcie-host") defined in DTS.
> > > > +      cdns_pcie0_ctrl is syscon node ("sophgo,sg2042-pcie-ctrl") defined in DTS
> > > > +
> > > > +      cdns_pcieX_ctrl contains some registers shared by pcie_rcX, even two
> > > > +      RC(Link)s may share different bits of the same register. For example,
> > > > +      cdns_pcie1_ctrl contains registers shared by link0 & link1 for Cadence IP 2.
> 
> > > > +      "sophgo,pcie-port" is defined to flag which core(link) the rc maps to, with
> > > > +      this we can know what registers(bits) we should use.
> 
> > > > +required:
> > > > +  - compatible
> > > > +  - reg
> > > > +  - reg-names
> > > > +  - vendor-id
> > > > +  - device-id
> > > > +  - sophgo,syscon-pcie-ctrl
> > > > +  - sophgo,pcie-port
> > >
> > > It looks like vendor-id and device-id apply to PCI devices, i.e.,
> > > things that will show up in lspci, I assume Root Ports in this case.
> > > Can we make this explicit in the DT, e.g., something like this?
> > > 
> > >    pcie@62000000 {
> > >      compatible = "sophgo,sg2042-pcie-host";
> > >      port0: pci@0,0 {
> > >        vendor-id = <0x1f1c>;
> > >        device-id = <0x2042>;
> > >      };
> > 
> > Sorry, I don't understand your meaning very well.  Referring to the topology
> > diagram I drew above, is it okay to write DTS as follows?
> > 
> > pcie@7060000000 {
> >     compatible = "sophgo,sg2042-pcie-host";
> >     ...... // other properties
> >     pci@0,0 {
> >       vendor-id = <0x1f1c>;
> >       device-id = <0x2042>;
> >     };
> > }
> > 
> > pcie@7062000000 {
> >     compatible = "sophgo,sg2042-pcie-host";
> >     ...... // other properties
> >     pci@0,0 {
> >       vendor-id = <0x1f1c>;
> >       device-id = <0x2042>;
> >     };
> > }
> > 
> > pcie@7062800000 {
> >     compatible = "sophgo,sg2042-pcie-host";
> >     ...... // other properties
> >     pci@1,0 {
> >       vendor-id = <0x1f1c>;
> >       device-id = <0x2042>;
> >     };
> > 
> > }
> 
> Generally makes sense to me.  I'm suggesting that we should start
> describing Root Ports as children of the host bridge node instead of 
> mixing their properties into the host bridge itself.
> 
> Some properties apply to the host bridge, e.g., "bus-range" describes
> the bus number aperture, and "ranges" describes the address
> translation between the upstream CPU address space and the PCI address
> space.
> 
> Others apply specifically to a Root Port, e.g., "num-lanes",
> "max-link-speed", "phys", "vendor-id", "device-id".  I think it will
> help if we can describe these in separate children, especially when
> there are multiple Root Ports.

Agreed.


> Documentation/devicetree/bindings/pci/pci.txt says a Root Port
> should include a reg property that contains the bus/device/function
> number of the RP, e.g.,
> Documentation/devicetree/bindings/pci/nvidia,tegra20-pcie.txt has
> this:
> 
>   pcie-controller@3000 {
>      compatible = "nvidia,tegra30-pcie";
>      pci@1,0 {
>        reg = <0x000800 0 0 0 0>;
>      };
> 
> where the "0x000800 0 0 0 0" means the "pci@1,0" Root Port is at
> 00:01.0 (bus 00, device 01, function 0).  I don't know what the "@1,0"
> part means.
> 
> > And with this change, I can drop the “pcie-port”property and use the port
> > name to figure out the port number, right?
> 
> Seems likely to me.

I don't think device 1 would be the correct address. The RP is almost 
always device 0.

I think instead the 'syscon-pcie-ctrl' should perhaps be modeled as a 
phy with the phy binding. Then the host bridge node can have 1 or 2 phy 
entries depending on if the host uses 1 or 2 links. And the 2nd host 
should have 'status = "disabled";' when it is not used.

Or perhaps just 'num-lanes' would be enough.
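
Loosely sketched (every name below is invented for illustration, not an
existing binding), a phy-style description might take this shape:

  pcie_ctrl_phy: phy {
    /* hypothetical: cdns_pcieX_ctrl exposed as a phy provider,
     * with one cell selecting link0 (0) or link1 (1)
     */
    compatible = "sophgo,sg2042-pcie-phy";
    #phy-cells = <1>;
  };

  pcie {
    compatible = "sophgo,sg2042-pcie-host";
    /* 1 or 2 entries depending on how many links this host uses */
    phys = <&pcie_ctrl_phy 0>, <&pcie_ctrl_phy 1>;
  };

  /* and a host that is not used at all would instead carry:
   * status = "disabled";
   */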

Rob


^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH v2 1/5] dt-bindings: pci: Add Sophgo SG2042 PCIe host
  2024-12-10 17:33   ` Bjorn Helgaas
  2024-12-11  9:00     ` Chen Wang
@ 2024-12-19  2:34     ` Chen Wang
  2024-12-19 12:16       ` Rob Herring
                         ` (2 more replies)
  1 sibling, 3 replies; 21+ messages in thread
From: Chen Wang @ 2024-12-19  2:34 UTC (permalink / raw)
  To: Bjorn Helgaas, Chen Wang
  Cc: kw, u.kleine-koenig, aou, arnd, bhelgaas, conor+dt, guoren,
	inochiama, krzk+dt, lee, lpieralisi, manivannan.sadhasivam,
	palmer, paul.walmsley, pbrobinson, robh, devicetree, linux-kernel,
	linux-pci, linux-riscv, chao.wei, xiaoguang.xing, fengchun.li

hello ~

On 2024/12/11 1:33, Bjorn Helgaas wrote:
> On Mon, Dec 09, 2024 at 03:19:38PM +0800, Chen Wang wrote:
>> Add binding for Sophgo SG2042 PCIe host controller.
>> +  sophgo,pcie-port:
[......]
>> +      The Cadence IP has two modes of operation, selected by a strap pin.
>> +
>> +      In the single-link mode, the Cadence PCIe core instance associated
>> +      with Link0 is connected to all the lanes and the Cadence PCIe core
>> +      instance associated with Link1 is inactive.
>> +
>> +      In the dual-link mode, the Cadence PCIe core instance associated
>> +      with Link0 is connected to the lower half of the lanes and the
>> +      Cadence PCIe core instance associated with Link1 is connected to
>> +      the upper half of the lanes.
> I assume this means there are two separate Root Ports, one for Link0
> and a second for Link1?
>
>> +      SG2042 contains 2 Cadence IPs and configures the Cores as below:
>> +
>> +                     +-- Core(Link0) <---> pcie_rc0   +-----------------+
>> +                     |                                |                 |
>> +      Cadence IP 1 --+                                | cdns_pcie0_ctrl |
>> +                     |                                |                 |
>> +                     +-- Core(Link1) <---> disabled   +-----------------+
>> +
>> +                     +-- Core(Link0) <---> pcie_rc1   +-----------------+
>> +                     |                                |                 |
>> +      Cadence IP 2 --+                                | cdns_pcie1_ctrl |
>> +                     |                                |                 |
>> +                     +-- Core(Link1) <---> pcie_rc2   +-----------------+
>> +
>> +      pcie_rcX is pcie node ("sophgo,sg2042-pcie-host") defined in DTS.
>> +      cdns_pcie0_ctrl is syscon node ("sophgo,sg2042-pcie-ctrl") defined in DTS
>> +
>> +      cdns_pcieX_ctrl contains some registers shared by pcie_rcX, even two
>> +      RC(Link)s may share different bits of the same register. For example,
>> +      cdns_pcie1_ctrl contains registers shared by link0 & link1 for Cadence IP 2.
> An RC doesn't have a Link.  A Root Port does.
>
>> +      "sophgo,pcie-port" is defined to flag which core(link) the rc maps to, with
>> +      this we can know what registers(bits) we should use.
>> +
>> +  sophgo,syscon-pcie-ctrl:
>> +    $ref: /schemas/types.yaml#/definitions/phandle
>> +    description:
>> +      Phandle to the PCIe System Controller DT node. It's required to
>> +      access some MSI operation registers shared by PCIe RCs.
> I think this probably means "shared by PCIe Root Ports", not RCs.
> It's unlikely that this hardware has multiple Root Complexes.

hi, Bjorn,

I just double-checked with Sophgo engineers; they told me that in the 
actual PCIe design there is only one root port under a host bridge. I am 
sorry that my original description and diagram may not have made this 
clear, so please allow me to introduce the historical background in 
detail again. Please read it patiently :):

The IP provided by Cadence contains two independent cores (called 
"links" in the terminology of their manual; the first is called link0 
and the second link1). Each core corresponds to a host bridge, each host 
bridge has only one root port, and their configuration registers are 
completely independent. That is to say, one Cadence IP encapsulates two 
independent host bridges. SG2042 integrates two Cadence IPs, so there 
can actually be up to four host bridges.


Taking one Cadence IP as an example, the two host bridges can be 
connected to different lanes through configuration, as described in the 
original message. At present, SG2042 is configured to let core0 (link0) 
in the first IP occupy all lanes of that IP, and to let core0 (link0) 
and core1 (link1) in the second IP each use half of the lanes of that 
IP. So in the end we only use 3 cores, which is why 3 host bridge 
nodes are configured in the DTS.


Because the configurations of these links are independent, the story 
would end here, but unfortunately Sophgo engineers defined some new 
register blocks to add support for their MSI controller inside the PCIe 
block. The problem is that they did not separate these registers by 
link0 and link1. These new register blocks are "cdns_pcie0_ctrl" / 
"cdns_pcie1_ctrl" in the original picture and DTS, where the registers 
of "cdns_pcie0_ctrl" are shared by link0 and link1 of the first IP, and 
those of "cdns_pcie1_ctrl" are shared by link0 and link1 of the second 
IP. In other words, "cdns_pcieX_ctrl" is not shared by root ports; it is 
shared by host bridges/RCs.


Because the register layout of "cdns_pcieX_ctrl" is not strictly 
segmented by link0 and link1, the PCIe host bridge driver must know 
whether a host bridge corresponds to link0 or link1 of its IP, so the 
"sophgo,link-id" property is introduced.
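
To make the mapping concrete, the DTS shape I have in mind is roughly 
the following (other properties omitted, addresses as in my earlier 
mail):

  /* Cadence IP 1, core0 (link0) -> pcie_rc0 */
  pcie@7060000000 {
    compatible = "sophgo,sg2042-pcie-host";
    sophgo,syscon-pcie-ctrl = <&cdns_pcie0_ctrl>;
    sophgo,link-id = <0>;
  };

  /* Cadence IP 2, core0 (link0) -> pcie_rc1 */
  pcie@7062000000 {
    compatible = "sophgo,sg2042-pcie-host";
    sophgo,syscon-pcie-ctrl = <&cdns_pcie1_ctrl>;
    sophgo,link-id = <0>;
  };

  /* Cadence IP 2, core1 (link1) -> pcie_rc2 */
  pcie@7062800000 {
    compatible = "sophgo,sg2042-pcie-host";
    sophgo,syscon-pcie-ctrl = <&cdns_pcie1_ctrl>;
    sophgo,link-id = <1>;
  };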


Now I think it is not appropriate to change it to "sophgo,pcie-port". 
The reason is that, as mentioned above, there is only one root port 
under each host bridge in the Cadence IP. Link0 and link1 are actually 
used to distinguish the two host bridges within one IP.

So I suggest keeping the original "sophgo,link-id", with the vendor 
prefix, because the introduction of this property is indeed caused by 
Sophgo's private design.

If you have any other good idea, please feel free to let me know.

Thanks,

Chen

>> +required:
>> +  - compatible
>> +  - reg
>> +  - reg-names
>> +  - vendor-id
>> +  - device-id
>> +  - sophgo,syscon-pcie-ctrl
>> +  - sophgo,pcie-port
> It looks like vendor-id and device-id apply to PCI devices, i.e.,
> things that will show up in lspci, I assume Root Ports in this case.
> Can we make this explicit in the DT, e.g., something like this?
>
>    pcie@62000000 {
>      compatible = "sophgo,sg2042-pcie-host";
>      port0: pci@0,0 {
>        vendor-id = <0x1f1c>;
>        device-id = <0x2042>;
>      };
As I mentioned above, there is actually only one root port under a host 
bridge, so I think it is unnecessary to introduce the port subnode.
In addition, I found that it is also allowed to add the vendor-id and 
device-id properties directly under the host bridge, see 
https://github.com/devicetree-org/dt-schema/blob/main/dtschema/schemas/pci/pci-host-bridge.yaml
And refer to the DTS of products using the Cadence IP: 
arch/arm64/boot/dts/ti/k3-j721e-main.dtsi

In this way, when running lspci, the vendor id and device id will 
appear on the line corresponding to the PCI bridge device.

[......]



^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH v2 2/5] PCI: sg2042: Add Sophgo SG2042 PCIe driver
  2024-12-10 17:31   ` Bjorn Helgaas
@ 2024-12-19  3:23     ` Chen Wang
  0 siblings, 0 replies; 21+ messages in thread
From: Chen Wang @ 2024-12-19  3:23 UTC (permalink / raw)
  To: Bjorn Helgaas, Chen Wang
  Cc: kw, u.kleine-koenig, aou, arnd, bhelgaas, conor+dt, guoren,
	inochiama, krzk+dt, lee, lpieralisi, manivannan.sadhasivam,
	palmer, paul.walmsley, pbrobinson, robh, devicetree, linux-kernel,
	linux-pci, linux-riscv, chao.wei, xiaoguang.xing, fengchun.li


On 2024/12/11 1:31, Bjorn Helgaas wrote:
> On Mon, Dec 09, 2024 at 03:19:57PM +0800, Chen Wang wrote:
>> From: Chen Wang <unicorn_wang@outlook.com>
>>
>> Add support for PCIe controller in SG2042 SoC. The controller
>> uses the Cadence PCIe core programmed by pcie-cadence*.c. The
>> PCIe controller will work in host mode only.
>> +++ b/drivers/pci/controller/cadence/Makefile
>> @@ -4,3 +4,4 @@ obj-$(CONFIG_PCIE_CADENCE_HOST) += pcie-cadence-host.o
>>   obj-$(CONFIG_PCIE_CADENCE_EP) += pcie-cadence-ep.o
>>   obj-$(CONFIG_PCIE_CADENCE_PLAT) += pcie-cadence-plat.o
>>   obj-$(CONFIG_PCI_J721E) += pci-j721e.o
>> +obj-$(CONFIG_PCIE_SG2042) += pcie-sg2042.o
>> \ No newline at end of file
> Add the newline.
ok
>
>> +++ b/drivers/pci/controller/cadence/pcie-sg2042.c
>> +#include "../../../irqchip/irq-msi-lib.h"
> This is the only file outside drivers/irqchip/ that includes this.
> What's special about this driver?  Maybe this is a hint that something
> here belongs in drivers/irqchip/?

This file is included because the MSI parent model is used in this 
driver and I use helper functions such as msi_lib_init_dev_msi_info.

Actually I see Marc is working on an MSI cleanup that will make 
irq-msi-lib.h globally available, see 
https://lore.kernel.org/linux-riscv/20241204124549.607054-2-maz@kernel.org/

But until his series is merged, we may have to include the file like 
this for now.

>
>> +#ifdef CONFIG_SMP
> No other drivers test CONFIG_SMP, why should this be different?
ok, seems it's ok to remove this test.
>
>> +static int sg2042_pcie_msi_irq_set_affinity(struct irq_data *d,
>> +					    const struct cpumask *mask,
>> +					    bool force)
>> +{
>> +	if (d->parent_data)
>> +		return irq_chip_set_affinity_parent(d, mask, force);
>> +
>> +	return -EINVAL;
>> +}
>> +#endif /* CONFIG_SMP */
>> +static int sg2042_pcie_init_msi_data(struct sg2042_pcie *pcie)
>> +{
>> +	struct device *dev = pcie->cdns_pcie->dev;
>> +	u32 value;
>> +	int ret;
>> +
>> +	raw_spin_lock_init(&pcie->msi_lock);
>> +
>> +	/*
>> +	 * Though the PCIe controller can address >32-bit address space, to
>> +	 * facilitate endpoints that support only 32-bit MSI target address,
>> +	 * the mask is set to 32-bit to make sure that MSI target address is
>> +	 * always a 32-bit address
>> +	 */
>> +	ret = dma_set_coherent_mask(dev, DMA_BIT_MASK(32));
> Not sure this is needed.  Does DT dma-ranges not cover this?
I prefer not to introduce dma-ranges; we just need to allocate a small 
contiguous space here, and using dma_set_coherent_mask is also a 
commonly used method.
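
For reference, the dma-ranges form mentioned above would look roughly 
like this in the host bridge node (cell layout as in the binding 
example; whether it would actually replace the mask setup for the MSI 
target address is exactly the open question):

  /* limit inbound PCI addresses to the low 4 GiB:
   * <flags pci-addr-hi pci-addr-lo  cpu-addr  size-hi size-lo>
   */
  dma-ranges = <0x02000000 0x0 0x00000000  0x00000000  0x1 0x00000000>;
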
>
>> +static int sg2042_pcie_setup_msi(struct sg2042_pcie *pcie, struct device_node *msi_node)
> Wrap to fit in 80 columns like the rest.
ok
>
>> +/*
>> + * SG2042 only support 4-byte aligned access, so for the rootbus (i.e. to read
>> + * the PCIe controller itself, read32 is required. For non-rootbus (i.e. to read
> s/PCIe controller/Root Port/
ok
>
>> + * the PCIe peripheral registers, supports 1/2/4 byte aligned access, so
>> + * directly use read should be fine.
> s/use read/using read/
ok, thanks
>
>> +static int sg2042_pcie_probe(struct platform_device *pdev)
>> +{
>> +	struct device *dev = &pdev->dev;
>> +	struct device_node *np = dev->of_node;
>> +	struct pci_host_bridge *bridge;
>> +	struct device_node *np_syscon;
>> +	struct device_node *msi_node;
>> +	struct cdns_pcie *cdns_pcie;
>> +	struct sg2042_pcie *pcie;
>> +	struct cdns_pcie_rc *rc;
>> +	struct regmap *syscon;
>> +	int ret;
>> +
>> +	if (!IS_ENABLED(CONFIG_PCIE_CADENCE_HOST))
>> +		return -ENODEV;
> I don't think this is needed since CONFIG_PCIE_SG2042 selects
> PCIE_CADENCE_HOST.

Yes, you are right.

Thanks,

Chen

>
> Bjorn

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH v2 1/5] dt-bindings: pci: Add Sophgo SG2042 PCIe host
  2024-12-19  2:34     ` Chen Wang
@ 2024-12-19 12:16       ` Rob Herring
  2024-12-20  0:14         ` Chen Wang
  2025-01-06 23:55       ` Chen Wang
  2025-01-07  0:18       ` Bjorn Helgaas
  2 siblings, 1 reply; 21+ messages in thread
From: Rob Herring @ 2024-12-19 12:16 UTC (permalink / raw)
  To: Chen Wang
  Cc: Bjorn Helgaas, Chen Wang, kw, u.kleine-koenig, aou, arnd,
	bhelgaas, conor+dt, guoren, inochiama, krzk+dt, lee, lpieralisi,
	manivannan.sadhasivam, palmer, paul.walmsley, pbrobinson,
	devicetree, linux-kernel, linux-pci, linux-riscv, chao.wei,
	xiaoguang.xing, fengchun.li

On Thu, Dec 19, 2024 at 10:34:50AM +0800, Chen Wang wrote:
> hello ~
> 
> On 2024/12/11 1:33, Bjorn Helgaas wrote:
> > On Mon, Dec 09, 2024 at 03:19:38PM +0800, Chen Wang wrote:
> > > Add binding for Sophgo SG2042 PCIe host controller.
> > > +  sophgo,pcie-port:
> [......]
> > > +      The Cadence IP has two modes of operation, selected by a strap pin.
> > > +
> > > +      In the single-link mode, the Cadence PCIe core instance associated
> > > +      with Link0 is connected to all the lanes and the Cadence PCIe core
> > > +      instance associated with Link1 is inactive.
> > > +
> > > +      In the dual-link mode, the Cadence PCIe core instance associated
> > > +      with Link0 is connected to the lower half of the lanes and the
> > > +      Cadence PCIe core instance associated with Link1 is connected to
> > > +      the upper half of the lanes.
> > I assume this means there are two separate Root Ports, one for Link0
> > and a second for Link1?
> > 
> > > +      SG2042 contains 2 Cadence IPs and configures the Cores as below:
> > > +
> > > +                     +-- Core(Link0) <---> pcie_rc0   +-----------------+
> > > +                     |                                |                 |
> > > +      Cadence IP 1 --+                                | cdns_pcie0_ctrl |
> > > +                     |                                |                 |
> > > +                     +-- Core(Link1) <---> disabled   +-----------------+
> > > +
> > > +                     +-- Core(Link0) <---> pcie_rc1   +-----------------+
> > > +                     |                                |                 |
> > > +      Cadence IP 2 --+                                | cdns_pcie1_ctrl |
> > > +                     |                                |                 |
> > > +                     +-- Core(Link1) <---> pcie_rc2   +-----------------+
> > > +
> > > +      pcie_rcX is pcie node ("sophgo,sg2042-pcie-host") defined in DTS.
> > > +      cdns_pcie0_ctrl is syscon node ("sophgo,sg2042-pcie-ctrl") defined in DTS
> > > +
> > > +      cdns_pcieX_ctrl contains some registers shared by pcie_rcX, even two
> > > +      RC(Link)s may share different bits of the same register. For example,
> > > +      cdns_pcie1_ctrl contains registers shared by link0 & link1 for Cadence IP 2.
> > An RC doesn't have a Link.  A Root Port does.
> > 
> > > +      "sophgo,pcie-port" is defined to flag which core(link) the rc maps to, with
> > > +      this we can know what registers(bits) we should use.
> > > +
> > > +  sophgo,syscon-pcie-ctrl:
> > > +    $ref: /schemas/types.yaml#/definitions/phandle
> > > +    description:
> > > +      Phandle to the PCIe System Controller DT node. It's required to
> > > +      access some MSI operation registers shared by PCIe RCs.
> > I think this probably means "shared by PCIe Root Ports", not RCs.
> > It's unlikely that this hardware has multiple Root Complexes.
> 
> hi, Bjorn,
> 
> I just double confirmed with sophgo engineers, they told me that the actual
> PCIe design is that there is only one root port under a host bridge. I am
> sorry that my original description and diagram may not make this clear, so
> please allow me to introduce this historical background in detail again.
> Please read it patiently :):
> 
> The IP provided by Cadence contains two independent cores (called "links"
> according to the terminology of their manual, the first one is called link0
> and the second one is called link1). Each core corresponds to a host bridge,
> and each host bridge has only one root port, and their configuration
> registers are completely independent. That is to say,one cadence IP
> encapsulates two independent host bridges. SG2042 integrates two Cadence
> IPs, so there can actually be up to four host bridges.
> 
> 
> Taking a Cadence IP as an example, the two host bridges can be connected to
> different lanes through configuration, which has been described in the
> original message. At present, the configuration of SG2042 is to let core0
> (link0) in the first ip occupy all lanes in the ip, and let core0 (link0)
> and core1 (link1) in the second ip each use half of the lanes in the ip. So
> in the end we only use 3 cores, that's why 3 host bridge nodes are
> configured in dts.
> 
> 
> Because the configurations of these links are independent, the story ends
> here, but unfortunately, sophgo engineers defined some new register files to
> add support for their msi controller inside pcie. The problem is they did
> not separate these register files according to link0 and link1. These new
> register files are "cdns_pcie0_ctrl" / "cdns_pcie1_ctrl" in the original
> picture and dts, where the register of "cdns_pcie0_ctrl" is shared by link0
> and link1 of the first ip, and "cdns_pcie1_ctrl" is shared by link0 and
> link1 of the second ip. According to my new description, "cdns_pcieX_ctrl"
> is not shared by root ports, they are shared by host bridge/rc.
> 
> 
> Because the register design of "cdns_pcieX_ctrl" is not strictly segmented
> according to link0 and link1, in pcie host bridge driver coding we must know
> whether the host bridge corresponds to link0 or link1 in the ip, so the
> "sophgo,link-id" attribute is introduced.
> 
> 
> Now I think it is not appropriate to change it to "sophgo,pcie-port". The
> reason is that as mentioned above, there is only one root port under each
> host bridge in the cadence ip. Link0 and link1 are actually used to
> distinguish the two host bridges in one ip.
> 
> So I suggest to keep the original "sophgo,link-id" and with the prefix
> because the introduction of this attribute is indeed caused by the private
> design of sophgo.
> 
> Any other good idea please feel free let me know.
> 
> Thansk,
> 
> Chen
> 
> > > +required:
> > > +  - compatible
> > > +  - reg
> > > +  - reg-names
> > > +  - vendor-id
> > > +  - device-id
> > > +  - sophgo,syscon-pcie-ctrl
> > > +  - sophgo,pcie-port
> > It looks like vendor-id and device-id apply to PCI devices, i.e.,
> > things that will show up in lspci, I assume Root Ports in this case.
> > Can we make this explicit in the DT, e.g., something like this?
> > 
> >    pcie@62000000 {
> >      compatible = "sophgo,sg2042-pcie-host";
> >      port0: pci@0,0 {
> >        vendor-id = <0x1f1c>;
> >        device-id = <0x2042>;
> >      };
> As I mentioned above, there is actually only one root port under a host
> bridge, so I think it is unnecessary to introduce the port subnode.

It doesn't matter how many RPs there are. What matters is what the 
properties are associated with.

> In addition, I found that it is also allowed to directly add the vendor-id
> and device-id properties directly under the host bridge, see https://github.com/devicetree-org/dt-schema/blob/main/dtschema/schemas/pci/pci-host-bridge.yaml
> And refer to the dts for those products using cadence ip:
> arch/arm64/boot/dts/ti/k3-j721e-main.dtsi

It's allowed because we are stuck with things done the wrong way. That 
doesn't mean we should continue to do so. 

> In this way, when executing lspci, the vendor id and device id will appear
> in the line corresponding to the pci brdge device.

That's the RP though, isn't it?

There is one difference in location though. If the properties are in the 
RP, then they should be handled by the PCI core and override what's read 
from the RP registers. If the properties are in the host bridge node, 
then the host bridge driver sets the values in some host bridge specific 
registers (or has a way to make read-only regs writeable) which get 
reflected in the RP registers. So perhaps in the host bridge is the 
correct place.

Rob

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH v2 1/5] dt-bindings: pci: Add Sophgo SG2042 PCIe host
  2024-12-19 12:16       ` Rob Herring
@ 2024-12-20  0:14         ` Chen Wang
  0 siblings, 0 replies; 21+ messages in thread
From: Chen Wang @ 2024-12-20  0:14 UTC (permalink / raw)
  To: Rob Herring
  Cc: Bjorn Helgaas, Chen Wang, kw, u.kleine-koenig, aou, arnd,
	bhelgaas, conor+dt, guoren, inochiama, krzk+dt, lee, lpieralisi,
	manivannan.sadhasivam, palmer, paul.walmsley, pbrobinson,
	devicetree, linux-kernel, linux-pci, linux-riscv, chao.wei,
	xiaoguang.xing, fengchun.li


On 2024/12/19 20:16, Rob Herring wrote:
> On Thu, Dec 19, 2024 at 10:34:50AM +0800, Chen Wang wrote:
>> hello ~
>>
>> On 2024/12/11 1:33, Bjorn Helgaas wrote:
>>> On Mon, Dec 09, 2024 at 03:19:38PM +0800, Chen Wang wrote:
>>>> Add binding for Sophgo SG2042 PCIe host controller.
>>>> +  sophgo,pcie-port:
>> [......]
>>>> +      The Cadence IP has two modes of operation, selected by a strap pin.
>>>> +
>>>> +      In the single-link mode, the Cadence PCIe core instance associated
>>>> +      with Link0 is connected to all the lanes and the Cadence PCIe core
>>>> +      instance associated with Link1 is inactive.
>>>> +
>>>> +      In the dual-link mode, the Cadence PCIe core instance associated
>>>> +      with Link0 is connected to the lower half of the lanes and the
>>>> +      Cadence PCIe core instance associated with Link1 is connected to
>>>> +      the upper half of the lanes.
>>> I assume this means there are two separate Root Ports, one for Link0
>>> and a second for Link1?
>>>
>>>> +      SG2042 contains 2 Cadence IPs and configures the Cores as below:
>>>> +
>>>> +                     +-- Core(Link0) <---> pcie_rc0   +-----------------+
>>>> +                     |                                |                 |
>>>> +      Cadence IP 1 --+                                | cdns_pcie0_ctrl |
>>>> +                     |                                |                 |
>>>> +                     +-- Core(Link1) <---> disabled   +-----------------+
>>>> +
>>>> +                     +-- Core(Link0) <---> pcie_rc1   +-----------------+
>>>> +                     |                                |                 |
>>>> +      Cadence IP 2 --+                                | cdns_pcie1_ctrl |
>>>> +                     |                                |                 |
>>>> +                     +-- Core(Link1) <---> pcie_rc2   +-----------------+
>>>> +
>>>> +      pcie_rcX is pcie node ("sophgo,sg2042-pcie-host") defined in DTS.
>>>> +      cdns_pcie0_ctrl is syscon node ("sophgo,sg2042-pcie-ctrl") defined in DTS
>>>> +
>>>> +      cdns_pcieX_ctrl contains some registers shared by pcie_rcX, even two
>>>> +      RC(Link)s may share different bits of the same register. For example,
>>>> +      cdns_pcie1_ctrl contains registers shared by link0 & link1 for Cadence IP 2.
>>> An RC doesn't have a Link.  A Root Port does.
>>>
>>>> +      "sophgo,pcie-port" is defined to flag which core(link) the rc maps to, with
>>>> +      this we can know what registers(bits) we should use.
>>>> +
>>>> +  sophgo,syscon-pcie-ctrl:
>>>> +    $ref: /schemas/types.yaml#/definitions/phandle
>>>> +    description:
>>>> +      Phandle to the PCIe System Controller DT node. It's required to
>>>> +      access some MSI operation registers shared by PCIe RCs.
>>> I think this probably means "shared by PCIe Root Ports", not RCs.
>>> It's unlikely that this hardware has multiple Root Complexes.
>> hi, Bjorn,
>>
>> I just double confirmed with sophgo engineers, they told me that the actual
>> PCIe design is that there is only one root port under a host bridge. I am
>> sorry that my original description and diagram may not make this clear, so
>> please allow me to introduce this historical background in detail again.
>> Please read it patiently :):
>>
>> The IP provided by Cadence contains two independent cores (called "links"
>> according to the terminology of their manual, the first one is called link0
>> and the second one is called link1). Each core corresponds to a host bridge,
>> and each host bridge has only one root port, and their configuration
>> registers are completely independent. That is to say, one cadence IP
>> encapsulates two independent host bridges. SG2042 integrates two Cadence
>> IPs, so there can actually be up to four host bridges.
>>
>>
>> Taking a Cadence IP as an example, the two host bridges can be connected to
>> different lanes through configuration, which has been described in the
>> original message. At present, the configuration of SG2042 is to let core0
>> (link0) in the first ip occupy all lanes in the ip, and let core0 (link0)
>> and core1 (link1) in the second ip each use half of the lanes in the ip. So
>> in the end we only use 3 cores, that's why 3 host bridge nodes are
>> configured in dts.
>>
>>
>> Because the configurations of these links are independent, the story ends
>> here, but unfortunately, sophgo engineers defined some new register files to
>> add support for their msi controller inside pcie. The problem is they did
>> not separate these register files according to link0 and link1. These new
>> register files are "cdns_pcie0_ctrl" / "cdns_pcie1_ctrl" in the original
>> picture and dts, where the register of "cdns_pcie0_ctrl" is shared by link0
>> and link1 of the first ip, and "cdns_pcie1_ctrl" is shared by link0 and
>> link1 of the second ip. According to my new description, "cdns_pcieX_ctrl"
>> is not shared by root ports, they are shared by host bridge/rc.
>>
>>
>> Because the register design of "cdns_pcieX_ctrl" is not strictly segmented
>> according to link0 and link1, in pcie host bridge driver coding we must know
>> whether the host bridge corresponds to link0 or link1 in the ip, so the
>> "sophgo,link-id" attribute is introduced.
>>
>>
>> Now I think it is not appropriate to change it to "sophgo,pcie-port". The
>> reason is that as mentioned above, there is only one root port under each
>> host bridge in the cadence ip. Link0 and link1 are actually used to
>> distinguish the two host bridges in one ip.
>>
>> So I suggest keeping the original "sophgo,link-id", with the vendor prefix,
>> because the introduction of this attribute is indeed caused by the private
>> design of sophgo.
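
To make the register-sharing constraint concrete, below is a minimal DTS
sketch. It is only an illustration: the node labels, unit addresses and reg
values are placeholders rather than the real sg2042.dtsi contents, and it uses
the "sophgo,link-id" name argued for above. Both host bridges of the second
Cadence IP point at the same "sophgo,sg2042-pcie-ctrl" syscon and differ only
in their link identifier:

    cdns_pcie1_ctrl: syscon@7062800000 {          /* placeholder address */
        compatible = "sophgo,sg2042-pcie-ctrl", "syscon";
        reg = <0x70 0x62800000 0x0 0x1000>;
    };

    pcie_rc1: pcie@7062000000 {                   /* Cadence IP 2, Core(Link0) */
        compatible = "sophgo,sg2042-pcie-host";
        device_type = "pci";
        sophgo,syscon-pcie-ctrl = <&cdns_pcie1_ctrl>;
        sophgo,link-id = <0>;
        /* reg, ranges, msi-parent, ... omitted for brevity */
    };

    pcie_rc2: pcie@7062400000 {                   /* Cadence IP 2, Core(Link1) */
        compatible = "sophgo,sg2042-pcie-host";
        device_type = "pci";
        sophgo,syscon-pcie-ctrl = <&cdns_pcie1_ctrl>;
        sophgo,link-id = <1>;
        /* reg, ranges, msi-parent, ... omitted for brevity */
    };

With such a layout the driver can select the right bits inside the shared
control registers purely from the link identifier.
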
>>
>> Any other good idea please feel free let me know.
>>
>> Thanks,
>>
>> Chen
>>
>>>> +required:
>>>> +  - compatible
>>>> +  - reg
>>>> +  - reg-names
>>>> +  - vendor-id
>>>> +  - device-id
>>>> +  - sophgo,syscon-pcie-ctrl
>>>> +  - sophgo,pcie-port
>>> It looks like vendor-id and device-id apply to PCI devices, i.e.,
>>> things that will show up in lspci, I assume Root Ports in this case.
>>> Can we make this explicit in the DT, e.g., something like this?
>>>
>>>     pcie@62000000 {
>>>       compatible = "sophgo,sg2042-pcie-host";
>>>       port0: pci@0,0 {
>>>         vendor-id = <0x1f1c>;
>>>         device-id = <0x2042>;
>>>       };
>> As I mentioned above, there is actually only one root port under a host
>> bridge, so I think it is unnecessary to introduce the port subnode.
> It doesn't matter how many RPs there are. What matters is what the
> properties are associated with.
>
>> In addition, I found that it is also allowed to add the vendor-id and
>> device-id properties directly under the host bridge, see https://github.com/devicetree-org/dt-schema/blob/main/dtschema/schemas/pci/pci-host-bridge.yaml
>> And refer to the dts for those products using cadence ip:
>> arch/arm64/boot/dts/ti/k3-j721e-main.dtsi
> It's allowed because we are stuck with things done the wrong way. That
> doesn't mean we should continue to do so.
>
>> In this way, when executing lspci, the vendor id and device id will appear
>> in the line corresponding to the pci bridge device.
> That's the RP though, isn't it?
>
> There is one difference in location though. If the properties are in the
> RP, then they should be handled by the PCI core and override what's read
> from the RP registers. If the properties are in the host bridge node,
> then the host bridge driver sets the values in some host bridge specific
> registers (or has a way to make read-only regs writeable) which get
> reflected in the RP registers. So perhaps in the host bridge is the
> correct place.

Yes, the Cadence host bridge driver handles the register setting for these
IDs; see cdns_pcie_host_setup() in
drivers/pci/controller/cadence/pcie-cadence-host.c.

So for this case, the host bridge node is the correct place.
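
As a hedged illustration of that conclusion, a host-bridge node could carry
the IDs directly (compatible, unit address and ID values reused from Bjorn's
example earlier in the thread; everything else is omitted or illustrative):

    pcie@62000000 {
        compatible = "sophgo,sg2042-pcie-host";
        device_type = "pci";
        /* IDs sit on the host bridge node; the Cadence host driver
         * (cdns_pcie_host_setup()) reads them and programs the Root
         * Port registers, as described above. */
        vendor-id = <0x1f1c>;
        device-id = <0x2042>;
        /* reg, ranges, bus-range, msi-parent, ... omitted */
    };
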

Thanks,

Chen

>
> Rob

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH v2 1/5] dt-bindings: pci: Add Sophgo SG2042 PCIe host
  2024-12-19  2:34     ` Chen Wang
  2024-12-19 12:16       ` Rob Herring
@ 2025-01-06 23:55       ` Chen Wang
  2025-01-07  0:18       ` Bjorn Helgaas
  2 siblings, 0 replies; 21+ messages in thread
From: Chen Wang @ 2025-01-06 23:55 UTC (permalink / raw)
  To: Bjorn Helgaas, Chen Wang
  Cc: kw, u.kleine-koenig, aou, arnd, bhelgaas, conor+dt, guoren,
	inochiama, krzk+dt, lee, lpieralisi, manivannan.sadhasivam,
	palmer, paul.walmsley, pbrobinson, robh, devicetree, linux-kernel,
	linux-pci, linux-riscv, chao.wei, xiaoguang.xing, fengchun.li

Hello & Happy New Year, Bjorn

On 2024/12/19 10:34, Chen Wang wrote:
> hello ~
>
> On 2024/12/11 1:33, Bjorn Helgaas wrote:
>> On Mon, Dec 09, 2024 at 03:19:38PM +0800, Chen Wang wrote:
>>> Add binding for Sophgo SG2042 PCIe host controller.
>>> +  sophgo,pcie-port:
> [......]
>>> +      The Cadence IP has two modes of operation, selected by a strap pin.
>>> +
>>> +      In the single-link mode, the Cadence PCIe core instance associated
>>> +      with Link0 is connected to all the lanes and the Cadence PCIe core
>>> +      instance associated with Link1 is inactive.
>>> +
>>> +      In the dual-link mode, the Cadence PCIe core instance associated
>>> +      with Link0 is connected to the lower half of the lanes and the
>>> +      Cadence PCIe core instance associated with Link1 is connected to
>>> +      the upper half of the lanes.
>> I assume this means there are two separate Root Ports, one for Link0
>> and a second for Link1?
>>
>>> +      SG2042 contains 2 Cadence IPs and configures the Cores as below:
>>> +
>>> +                     +-- Core(Link0) <---> pcie_rc0   +-----------------+
>>> +                     |                                |                 |
>>> +      Cadence IP 1 --+                                | cdns_pcie0_ctrl |
>>> +                     |                                |                 |
>>> +                     +-- Core(Link1) <---> disabled   +-----------------+
>>> +
>>> +                     +-- Core(Link0) <---> pcie_rc1   +-----------------+
>>> +                     |                                |                 |
>>> +      Cadence IP 2 --+                                | cdns_pcie1_ctrl |
>>> +                     |                                |                 |
>>> +                     +-- Core(Link1) <---> pcie_rc2   +-----------------+
>>> +
>>> +      pcie_rcX is pcie node ("sophgo,sg2042-pcie-host") defined in DTS.
>>> +      cdns_pcie0_ctrl is syscon node ("sophgo,sg2042-pcie-ctrl") defined in DTS
>>> +
>>> +      cdns_pcieX_ctrl contains some registers shared by pcie_rcX, even two
>>> +      RC(Link)s may share different bits of the same register. For example,
>>> +      cdns_pcie1_ctrl contains registers shared by link0 & link1 for Cadence IP 2.
>> An RC doesn't have a Link.  A Root Port does.
>>
>>> +      "sophgo,pcie-port" is defined to flag which core(link) the rc maps to, with
>>> +      this we can know what registers(bits) we should use.
>>> +
>>> +  sophgo,syscon-pcie-ctrl:
>>> +    $ref: /schemas/types.yaml#/definitions/phandle
>>> +    description:
>>> +      Phandle to the PCIe System Controller DT node. It's required to
>>> +      access some MSI operation registers shared by PCIe RCs.
>> I think this probably means "shared by PCIe Root Ports", not RCs.
>> It's unlikely that this hardware has multiple Root Complexes.
>
> hi, Bjorn,
>
> I just double confirmed with sophgo engineers, they told me that the 
> actual PCIe design is that there is only one root port under a host 
> bridge. I am sorry that my original description and diagram may not 
> make this clear, so please allow me to introduce this historical 
> background in detail again. Please read it patiently :):
>
> The IP provided by Cadence contains two independent cores (called 
> "links" according to the terminology of their manual, the first one is 
> called link0 and the second one is called link1). Each core 
> corresponds to a host bridge, and each host bridge has only one root 
> port, and their configuration registers are completely independent. 
> That is to say, one cadence IP encapsulates two independent host 
> bridges. SG2042 integrates two Cadence IPs, so there can actually be 
> up to four host bridges.
>
>
> Taking a Cadence IP as an example, the two host bridges can be 
> connected to different lanes through configuration, which has been 
> described in the original message. At present, the configuration of 
> SG2042 is to let core0 (link0) in the first ip occupy all lanes in the 
> ip, and let core0 (link0) and core1 (link1) in the second ip each use 
> half of the lanes in the ip. So in the end we only use 3 cores, that's 
> why 3 host bridge nodes are configured in dts.
>
>
> Because the configurations of these links are independent, the story 
> ends here, but unfortunately, sophgo engineers defined some new 
> register files to add support for their msi controller inside pcie. 
> The problem is they did not separate these register files according to 
> link0 and link1. These new register files are "cdns_pcie0_ctrl" / 
> "cdns_pcie1_ctrl" in the original picture and dts, where the register 
> of "cdns_pcie0_ctrl" is shared by link0 and link1 of the first ip, and 
> "cdns_pcie1_ctrl" is shared by link0 and link1 of the second ip. 
> According to my new description, "cdns_pcieX_ctrl" is not shared by 
> root ports, they are shared by host bridge/rc.
>
>
> Because the register design of "cdns_pcieX_ctrl" is not strictly 
> segmented according to link0 and link1, in pcie host bridge driver 
> coding we must know whether the host bridge corresponds to link0 or 
> link1 in the ip, so the "sophgo,link-id" attribute is introduced.
>
>
> Now I think it is not appropriate to change it to "sophgo,pcie-port". 
> The reason is that as mentioned above, there is only one root port 
> under each host bridge in the cadence ip. Link0 and link1 are actually 
> used to distinguish the two host bridges in one ip.
>
> So I suggest keeping the original "sophgo,link-id", with the vendor prefix, 
> because the introduction of this attribute is indeed caused by the 
> private design of sophgo.
>
> Any other good idea please feel free let me know.
>
> Thanks,
>
> Chen
>
>>> +required:
>>> +  - compatible
>>> +  - reg
>>> +  - reg-names
>>> +  - vendor-id
>>> +  - device-id
>>> +  - sophgo,syscon-pcie-ctrl
>>> +  - sophgo,pcie-port
>> It looks like vendor-id and device-id apply to PCI devices, i.e.,
>> things that will show up in lspci, I assume Root Ports in this case.
>> Can we make this explicit in the DT, e.g., something like this?
>>
>>    pcie@62000000 {
>>      compatible = "sophgo,sg2042-pcie-host";
>>      port0: pci@0,0 {
>>        vendor-id = <0x1f1c>;
>>        device-id = <0x2042>;
>>      };
> As I mentioned above, there is actually only one root port under a 
> host bridge, so I think it is unnecessary to introduce the port subnode.
> In addition, I found that it is also allowed to add the vendor-id and 
> device-id properties directly under the host bridge, see 
> https://github.com/devicetree-org/dt-schema/blob/main/dtschema/schemas/pci/pci-host-bridge.yaml
> And refer to the dts for those products using cadence ip: 
> arch/arm64/boot/dts/ti/k3-j721e-main.dtsi
>
> In this way, when executing lspci, the vendor id and device id will 
> appear in the line corresponding to the pci bridge device.
>
> [......]
>
I see there has been no further reply since my last message, so I will 
continue to modify the code as described above. If you have any comments, 
please let me know. Thank you.

Regards,

Chen


^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH v2 1/5] dt-bindings: pci: Add Sophgo SG2042 PCIe host
  2024-12-19  2:34     ` Chen Wang
  2024-12-19 12:16       ` Rob Herring
  2025-01-06 23:55       ` Chen Wang
@ 2025-01-07  0:18       ` Bjorn Helgaas
  2025-01-07  0:43         ` Chen Wang
  2 siblings, 1 reply; 21+ messages in thread
From: Bjorn Helgaas @ 2025-01-07  0:18 UTC (permalink / raw)
  To: Chen Wang
  Cc: Chen Wang, kw, u.kleine-koenig, aou, arnd, bhelgaas, conor+dt,
	guoren, inochiama, krzk+dt, lee, lpieralisi,
	manivannan.sadhasivam, palmer, paul.walmsley, pbrobinson, robh,
	devicetree, linux-kernel, linux-pci, linux-riscv, chao.wei,
	xiaoguang.xing, fengchun.li

On Thu, Dec 19, 2024 at 10:34:50AM +0800, Chen Wang wrote:
> hello ~
> 
> On 2024/12/11 1:33, Bjorn Helgaas wrote:
> > On Mon, Dec 09, 2024 at 03:19:38PM +0800, Chen Wang wrote:
> > > Add binding for Sophgo SG2042 PCIe host controller.
> > > +  sophgo,pcie-port:
> [......]
> > > +      The Cadence IP has two modes of operation, selected by a strap pin.
> > > +
> > > +      In the single-link mode, the Cadence PCIe core instance associated
> > > +      with Link0 is connected to all the lanes and the Cadence PCIe core
> > > +      instance associated with Link1 is inactive.
> > > +
> > > +      In the dual-link mode, the Cadence PCIe core instance associated
> > > +      with Link0 is connected to the lower half of the lanes and the
> > > +      Cadence PCIe core instance associated with Link1 is connected to
> > > +      the upper half of the lanes.
> > I assume this means there are two separate Root Ports, one for Link0
> > and a second for Link1?
> > 
> > > +      SG2042 contains 2 Cadence IPs and configures the Cores as below:
> > > +
> > > +                     +-- Core(Link0) <---> pcie_rc0   +-----------------+
> > > +                     |                                |                 |
> > > +      Cadence IP 1 --+                                | cdns_pcie0_ctrl |
> > > +                     |                                |                 |
> > > +                     +-- Core(Link1) <---> disabled   +-----------------+
> > > +
> > > +                     +-- Core(Link0) <---> pcie_rc1   +-----------------+
> > > +                     |                                |                 |
> > > +      Cadence IP 2 --+                                | cdns_pcie1_ctrl |
> > > +                     |                                |                 |
> > > +                     +-- Core(Link1) <---> pcie_rc2   +-----------------+
> > > +
> > > +      pcie_rcX is pcie node ("sophgo,sg2042-pcie-host") defined in DTS.
> > > +      cdns_pcie0_ctrl is syscon node ("sophgo,sg2042-pcie-ctrl") defined in DTS
> > > +
> > > +      cdns_pcieX_ctrl contains some registers shared by pcie_rcX, even two
> > > +      RC(Link)s may share different bits of the same register. For example,
> > > +      cdns_pcie1_ctrl contains registers shared by link0 & link1 for Cadence IP 2.
> > An RC doesn't have a Link.  A Root Port does.
> > 
> > > +      "sophgo,pcie-port" is defined to flag which core(link) the rc maps to, with
> > > +      this we can know what registers(bits) we should use.
> > > +
> > > +  sophgo,syscon-pcie-ctrl:
> > > +    $ref: /schemas/types.yaml#/definitions/phandle
> > > +    description:
> > > +      Phandle to the PCIe System Controller DT node. It's required to
> > > +      access some MSI operation registers shared by PCIe RCs.
> > I think this probably means "shared by PCIe Root Ports", not RCs.
> > It's unlikely that this hardware has multiple Root Complexes.
> 
> I just double confirmed with sophgo engineers, they told me that the actual
> PCIe design is that there is only one root port under a host bridge. I am
> sorry that my original description and diagram may not make this clear, so
> please allow me to introduce this historical background in detail again.
> Please read it patiently :):
> 
> The IP provided by Cadence contains two independent cores (called "links"
> according to the terminology of their manual, the first one is called link0
> and the second one is called link1). Each core corresponds to a host bridge,
> and each host bridge has only one root port, and their configuration
> registers are completely independent. That is to say, one cadence IP
> encapsulates two independent host bridges. SG2042 integrates two Cadence
> IPs, so there can actually be up to four host bridges.
> 
> 
> Taking a Cadence IP as an example, the two host bridges can be connected to
> different lanes through configuration, which has been described in the
> original message. At present, the configuration of SG2042 is to let core0
> (link0) in the first ip occupy all lanes in the ip, and let core0 (link0)
> and core1 (link1) in the second ip each use half of the lanes in the ip. So
> in the end we only use 3 cores, that's why 3 host bridge nodes are
> configured in dts.

Host bridges are logically separate PCI hierarchies, so these three
host bridges could be in three separate PCI domains, and each one
could use buses 00-ff.  Each one contains a single Root Port, so
enumerating could look like this:

  0000:00:00.0 Root Port to [bus 01-ff]
  0001:00:00.0 Root Port to [bus 01-ff]
  0002:00:00.0 Root Port to [bus 01-ff]

Does that match with your understanding?
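
As a hedged sketch of that layout, the three host bridges could be pinned to
fixed domains with the generic "linux,pci-domain" property (unit addresses
here are placeholders, all other required properties are omitted, and the
real sg2042.dtsi may instead rely on dynamic domain numbering):

    pcie_rc0: pcie@62000000 {
        compatible = "sophgo,sg2042-pcie-host";
        linux,pci-domain = <0>;   /* Root Port enumerates as 0000:00:00.0 */
    };

    pcie_rc1: pcie@64000000 {
        compatible = "sophgo,sg2042-pcie-host";
        linux,pci-domain = <1>;   /* Root Port enumerates as 0001:00:00.0 */
    };

    pcie_rc2: pcie@66000000 {
        compatible = "sophgo,sg2042-pcie-host";
        linux,pci-domain = <2>;   /* Root Port enumerates as 0002:00:00.0 */
    };
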

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH v2 1/5] dt-bindings: pci: Add Sophgo SG2042 PCIe host
  2025-01-07  0:18       ` Bjorn Helgaas
@ 2025-01-07  0:43         ` Chen Wang
  0 siblings, 0 replies; 21+ messages in thread
From: Chen Wang @ 2025-01-07  0:43 UTC (permalink / raw)
  To: Bjorn Helgaas
  Cc: Chen Wang, kw, u.kleine-koenig, aou, arnd, bhelgaas, conor+dt,
	guoren, inochiama, krzk+dt, lee, lpieralisi,
	manivannan.sadhasivam, palmer, paul.walmsley, pbrobinson, robh,
	devicetree, linux-kernel, linux-pci, linux-riscv, chao.wei,
	xiaoguang.xing, fengchun.li


On 2025/1/7 8:18, Bjorn Helgaas wrote:
> On Thu, Dec 19, 2024 at 10:34:50AM +0800, Chen Wang wrote:
>> hello ~
>>
>> On 2024/12/11 1:33, Bjorn Helgaas wrote:
>>> On Mon, Dec 09, 2024 at 03:19:38PM +0800, Chen Wang wrote:
>>>> Add binding for Sophgo SG2042 PCIe host controller.
>>>> +  sophgo,pcie-port:
>> [......]
>>>> +      The Cadence IP has two modes of operation, selected by a strap pin.
>>>> +
>>>> +      In the single-link mode, the Cadence PCIe core instance associated
>>>> +      with Link0 is connected to all the lanes and the Cadence PCIe core
>>>> +      instance associated with Link1 is inactive.
>>>> +
>>>> +      In the dual-link mode, the Cadence PCIe core instance associated
>>>> +      with Link0 is connected to the lower half of the lanes and the
>>>> +      Cadence PCIe core instance associated with Link1 is connected to
>>>> +      the upper half of the lanes.
>>> I assume this means there are two separate Root Ports, one for Link0
>>> and a second for Link1?
>>>
>>>> +      SG2042 contains 2 Cadence IPs and configures the Cores as below:
>>>> +
>>>> +                     +-- Core(Link0) <---> pcie_rc0   +-----------------+
>>>> +                     |                                |                 |
>>>> +      Cadence IP 1 --+                                | cdns_pcie0_ctrl |
>>>> +                     |                                |                 |
>>>> +                     +-- Core(Link1) <---> disabled   +-----------------+
>>>> +
>>>> +                     +-- Core(Link0) <---> pcie_rc1   +-----------------+
>>>> +                     |                                |                 |
>>>> +      Cadence IP 2 --+                                | cdns_pcie1_ctrl |
>>>> +                     |                                |                 |
>>>> +                     +-- Core(Link1) <---> pcie_rc2   +-----------------+
>>>> +
>>>> +      pcie_rcX is pcie node ("sophgo,sg2042-pcie-host") defined in DTS.
>>>> +      cdns_pcie0_ctrl is syscon node ("sophgo,sg2042-pcie-ctrl") defined in DTS
>>>> +
>>>> +      cdns_pcieX_ctrl contains some registers shared by pcie_rcX, even two
>>>> +      RC(Link)s may share different bits of the same register. For example,
>>>> +      cdns_pcie1_ctrl contains registers shared by link0 & link1 for Cadence IP 2.
>>> An RC doesn't have a Link.  A Root Port does.
>>>
>>>> +      "sophgo,pcie-port" is defined to flag which core(link) the rc maps to, with
>>>> +      this we can know what registers(bits) we should use.
>>>> +
>>>> +  sophgo,syscon-pcie-ctrl:
>>>> +    $ref: /schemas/types.yaml#/definitions/phandle
>>>> +    description:
>>>> +      Phandle to the PCIe System Controller DT node. It's required to
>>>> +      access some MSI operation registers shared by PCIe RCs.
>>> I think this probably means "shared by PCIe Root Ports", not RCs.
>>> It's unlikely that this hardware has multiple Root Complexes.
>> I just double confirmed with sophgo engineers, they told me that the actual
>> PCIe design is that there is only one root port under a host bridge. I am
>> sorry that my original description and diagram may not make this clear, so
>> please allow me to introduce this historical background in detail again.
>> Please read it patiently :):
>>
>> The IP provided by Cadence contains two independent cores (called "links"
>> according to the terminology of their manual, the first one is called link0
>> and the second one is called link1). Each core corresponds to a host bridge,
>> and each host bridge has only one root port, and their configuration
>> registers are completely independent. That is to say, one cadence IP
>> encapsulates two independent host bridges. SG2042 integrates two Cadence
>> IPs, so there can actually be up to four host bridges.
>>
>>
>> Taking a Cadence IP as an example, the two host bridges can be connected to
>> different lanes through configuration, which has been described in the
>> original message. At present, the configuration of SG2042 is to let core0
>> (link0) in the first ip occupy all lanes in the ip, and let core0 (link0)
>> and core1 (link1) in the second ip each use half of the lanes in the ip. So
>> in the end we only use 3 cores, that's why 3 host bridge nodes are
>> configured in dts.
> Host bridges are logically separate PCI hierarchies, so these three
> host bridges could be in three separate PCI domains, and each one
> could use buses 00-ff.  Each one contains a single Root Port, so
> enumerating could look like this:
>
>    0000:00:00.0 Root Port to [bus 01-ff]
>    0001:00:00.0 Root Port to [bus 01-ff]
>    0002:00:00.0 Root Port to [bus 01-ff]
>
> Does that match with your understanding?

Yes, that matches my understanding.


^ permalink raw reply	[flat|nested] 21+ messages in thread

end of thread, other threads:[~2025-01-07  0:44 UTC | newest]

Thread overview: 21+ messages
2024-12-09  7:19 [PATCH v2 0/5] Add PCIe support to Sophgo SG2042 SoC Chen Wang
2024-12-09  7:19 ` [PATCH v2 1/5] dt-bindings: pci: Add Sophgo SG2042 PCIe host Chen Wang
2024-12-10 17:33   ` Bjorn Helgaas
2024-12-11  9:00     ` Chen Wang
2024-12-11 19:20       ` Bjorn Helgaas
2024-12-17 13:10         ` Rob Herring
2024-12-19  2:34     ` Chen Wang
2024-12-19 12:16       ` Rob Herring
2024-12-20  0:14         ` Chen Wang
2025-01-06 23:55       ` Chen Wang
2025-01-07  0:18       ` Bjorn Helgaas
2025-01-07  0:43         ` Chen Wang
2024-12-09  7:19 ` [PATCH v2 2/5] PCI: sg2042: Add Sophgo SG2042 PCIe driver Chen Wang
2024-12-10 17:31   ` Bjorn Helgaas
2024-12-19  3:23     ` Chen Wang
2024-12-15  9:17   ` kernel test robot
2024-12-15 12:04   ` kernel test robot
2024-12-09  7:20 ` [PATCH v2 3/5] dt-bindings: mfd: syscon: Add sg2042 pcie ctrl compatible Chen Wang
2024-12-10 17:32   ` Bjorn Helgaas
2024-12-09  7:20 ` [PATCH v2 4/5] riscv: sophgo: dts: add pcie controllers for SG2042 Chen Wang
2024-12-09  7:20 ` [PATCH v2 5/5] riscv: sophgo: dts: enable pcie for PioneerBox Chen Wang
