linux-kernel.vger.kernel.org archive mirror
 help / color / mirror / Atom feed
* [PATCH v3 0/5] Add PCIe support to Sophgo SG2042 SoC
@ 2025-01-15  7:05 Chen Wang
  2025-01-15  7:06 ` [PATCH v3 1/5] dt-bindings: pci: Add Sophgo SG2042 PCIe host Chen Wang
                   ` (4 more replies)
  0 siblings, 5 replies; 34+ messages in thread
From: Chen Wang @ 2025-01-15  7:05 UTC (permalink / raw)
  To: kw, u.kleine-koenig, aou, arnd, bhelgaas, unicorn_wang, conor+dt,
	guoren, inochiama, krzk+dt, lee, lpieralisi,
	manivannan.sadhasivam, palmer, paul.walmsley, pbrobinson, robh,
	devicetree, linux-kernel, linux-pci, linux-riscv, chao.wei,
	xiaoguang.xing, fengchun.li, helgaas

From: Chen Wang <unicorn_wang@outlook.com>

Sophgo's SG2042 SoC uses Cadence PCIe core to implement RC mode.

SG2042 PCIe controller supports two ways to report MSI:

Method A, the PICe controller implements an MSI interrupt controller
inside, and connect to PLIC upward through one interrupt line. Provides
memory-mapped msi address, and by programming the upper 32 bits of the
address to zero, it can be compatible with old pcie devices that only
support 32-bit msi address.

Method B, the PICe controller connects to PLIC upward through an
independent MSI controller "sophgo,sg2042-msi" on the SOC. The MSI
controller provides multiple(up to 32) interrupt sources to PLIC.
Compared with the first method, the advantage is that the interrupt
source is expanded, but because for SG2042, the msi address provided
by the MSI controller is fixed and only supports 64-bit address(> 2^32),
it is not compatible with old pcie devices that only support 32-bit
msi address.

This patchset depends on another patchset for the SG2042 MSI controller[msi].
If you want to have a test, you need to apply the corresponding patchset.

Link: https://lore.kernel.org/linux-riscv/cover.1736921549.git.unicorn_wang@outlook.com/ [msi]

Thanks,
Chen

---

Changes in v3:

  The patch series is based on v6.13-rc7.

  Fixed following issues as per comments from Rob Herring, Bjorn Helgaas, thanks.

  - Driver binding:
    - Fallback to still using "sophgo,link-id" instead of "sophgo,pcie-port",
      and improve the description of this property.
    - Some text improvements.
  - Improve driver code:
    - Removed CONFIG_SMP.
    - Removed checking of CONFIG_PCIE_CADENCE_HOST
    - Improved Kconfig to select some dependencies explicitly.
    - Some text improvements.

Changes in v2:

  The patch series is based on v6.13-rc2. You can simply review or test the
  patches at the link [2].

  Fixed following issues as per comments from Rob Herring, Bjorn Helgaas, thanks.

  - Improve driver binding description
    - Define a new embeded object property msi to replace the "sophgo,internal-msi".
    - Rename "sophgo,link-id" to "sophgo,pcie-port" as per suggestion from Bjorn,
      and add more explanaion for this property.
    - Use msi-parent.
  - Improve driver code:
    - Improve coding style.
    - Fix a bug and make sure num_applied_vecs updated with the max value.
    - Use the MSI parent model.
    - Remove .cpu_addr_fixup.
    - Reorder Kconfig menu item to keep them in alphabetical order by vendor.

Changes in v1:

  The patch series is based on v6.12-rc7. You can simply review or test the
  patches at the link [1].

Link: https://lore.kernel.org/linux-riscv/cover.1731303328.git.unicorn_wang@outlook.com/ [1]
Link: https://lore.kernel.org/linux-riscv/cover.1733726572.git.unicorn_wang@outlook.com/ [2]
---

Chen Wang (5):
  dt-bindings: pci: Add Sophgo SG2042 PCIe host
  PCI: sg2042: Add Sophgo SG2042 PCIe driver
  dt-bindings: mfd: syscon: Add sg2042 pcie ctrl compatible
  riscv: sophgo: dts: add pcie controllers for SG2042
  riscv: sophgo: dts: enable pcie for PioneerBox

 .../devicetree/bindings/mfd/syscon.yaml       |   2 +
 .../bindings/pci/sophgo,sg2042-pcie-host.yaml | 147 +++++
 .../boot/dts/sophgo/sg2042-milkv-pioneer.dts  |  12 +
 arch/riscv/boot/dts/sophgo/sg2042.dtsi        |  89 +++
 drivers/pci/controller/cadence/Kconfig        |  13 +
 drivers/pci/controller/cadence/Makefile       |   1 +
 drivers/pci/controller/cadence/pcie-sg2042.c  | 528 ++++++++++++++++++
 7 files changed, 792 insertions(+)
 create mode 100644 Documentation/devicetree/bindings/pci/sophgo,sg2042-pcie-host.yaml
 create mode 100644 drivers/pci/controller/cadence/pcie-sg2042.c


base-commit: 5bc55a333a2f7316b58edc7573e8e893f7acb532
prerequisite-patch-id: d3c733a9ccc3fb4c62f145edcc692a0ca9484e53
prerequisite-patch-id: bd8d797ce40a6c2b460872eda31a7685ddba0d67
prerequisite-patch-id: f9b3c6061c52320f85ca4144e3d580fa6d0e54ef
-- 
2.34.1


^ permalink raw reply	[flat|nested] 34+ messages in thread

* [PATCH v3 1/5] dt-bindings: pci: Add Sophgo SG2042 PCIe host
  2025-01-15  7:05 [PATCH v3 0/5] Add PCIe support to Sophgo SG2042 SoC Chen Wang
@ 2025-01-15  7:06 ` Chen Wang
  2025-01-19 11:44   ` Manivannan Sadhasivam
  2025-01-22 22:21   ` Bjorn Helgaas
  2025-01-15  7:06 ` [PATCH v3 2/5] PCI: sg2042: Add Sophgo SG2042 PCIe driver Chen Wang
                   ` (3 subsequent siblings)
  4 siblings, 2 replies; 34+ messages in thread
From: Chen Wang @ 2025-01-15  7:06 UTC (permalink / raw)
  To: kw, u.kleine-koenig, aou, arnd, bhelgaas, unicorn_wang, conor+dt,
	guoren, inochiama, krzk+dt, lee, lpieralisi,
	manivannan.sadhasivam, palmer, paul.walmsley, pbrobinson, robh,
	devicetree, linux-kernel, linux-pci, linux-riscv, chao.wei,
	xiaoguang.xing, fengchun.li, helgaas

From: Chen Wang <unicorn_wang@outlook.com>

Add binding for Sophgo SG2042 PCIe host controller.

Signed-off-by: Chen Wang <unicorn_wang@outlook.com>
---
 .../bindings/pci/sophgo,sg2042-pcie-host.yaml | 147 ++++++++++++++++++
 1 file changed, 147 insertions(+)
 create mode 100644 Documentation/devicetree/bindings/pci/sophgo,sg2042-pcie-host.yaml

diff --git a/Documentation/devicetree/bindings/pci/sophgo,sg2042-pcie-host.yaml b/Documentation/devicetree/bindings/pci/sophgo,sg2042-pcie-host.yaml
new file mode 100644
index 000000000000..f98e71822144
--- /dev/null
+++ b/Documentation/devicetree/bindings/pci/sophgo,sg2042-pcie-host.yaml
@@ -0,0 +1,147 @@
+# SPDX-License-Identifier: (GPL-2.0 OR BSD-2-Clause)
+%YAML 1.2
+---
+$id: http://devicetree.org/schemas/pci/sophgo,sg2042-pcie-host.yaml#
+$schema: http://devicetree.org/meta-schemas/core.yaml#
+
+title: Sophgo SG2042 PCIe Host (Cadence PCIe Wrapper)
+
+description:
+  Sophgo SG2042 PCIe host controller is based on the Cadence PCIe core.
+
+maintainers:
+  - Chen Wang <unicorn_wang@outlook.com>
+
+properties:
+  compatible:
+    const: sophgo,sg2042-pcie-host
+
+  reg:
+    maxItems: 2
+
+  reg-names:
+    items:
+      - const: reg
+      - const: cfg
+
+  vendor-id:
+    const: 0x1f1c
+
+  device-id:
+    const: 0x2042
+
+  msi:
+    type: object
+    $ref: /schemas/interrupt-controller/msi-controller.yaml#
+    unevaluatedProperties: false
+
+    properties:
+      compatible:
+        items:
+          - const: sophgo,sg2042-pcie-msi
+
+      interrupts:
+        maxItems: 1
+
+      interrupt-names:
+        const: msi
+
+  msi-parent: true
+
+  sophgo,link-id:
+    $ref: /schemas/types.yaml#/definitions/uint32
+    description: |
+      SG2042 uses Cadence IP, every IP is composed of 2 cores (called link0
+      & link1 as Cadence's term). Each core corresponds to a host bridge,
+      and each host bridge has only one root port. Their configuration
+      registers are completely independent. SG2042 integrates two Cadence IPs,
+      so there can actually be up to four host bridges. "sophgo,link-id" is
+      used to identify which core/link the PCIe host bridge node corresponds to.
+
+      The Cadence IP has two modes of operation, selected by a strap pin.
+
+      In the single-link mode, the Cadence PCIe core instance associated
+      with Link0 is connected to all the lanes and the Cadence PCIe core
+      instance associated with Link1 is inactive.
+
+      In the dual-link mode, the Cadence PCIe core instance associated
+      with Link0 is connected to the lower half of the lanes and the
+      Cadence PCIe core instance associated with Link1 is connected to
+      the upper half of the lanes.
+
+      SG2042 contains 2 Cadence IPs and configures the Cores as below:
+
+                     +-- Core (Link0) <---> pcie_rc0  +-----------------+
+                     |                                |                 |
+      Cadence IP 1 --+                                | cdns_pcie0_ctrl |
+                     |                                |                 |
+                     +-- Core (Link1) <---> disabled  +-----------------+
+
+                     +-- Core (Link0) <---> pcie_rc1  +-----------------+
+                     |                                |                 |
+      Cadence IP 2 --+                                | cdns_pcie1_ctrl |
+                     |                                |                 |
+                     +-- Core (Link1) <---> pcie_rc2  +-----------------+
+
+      pcie_rcX is PCIe node ("sophgo,sg2042-pcie-host") defined in DTS.
+
+      Sophgo defines some new register files to add support for their MSI
+      controller inside PCIe. These new register files are defined in DTS as
+      syscon node ("sophgo,sg2042-pcie-ctrl"), i.e. "cdns_pcie0_ctrl" /
+      "cdns_pcie1_ctrl". cdns_pcieX_ctrl contains some registers shared by
+      pcie_rcX, even two RC (Link)s may share different bits of the same
+      register. For example, cdns_pcie1_ctrl contains registers shared by
+      link0 & link1 for Cadence IP 2.
+
+      "sophgo,link-id" is defined to distinguish the two RC's in one Cadence IP,
+      so we can know what registers (bits) we should use.
+
+  sophgo,syscon-pcie-ctrl:
+    $ref: /schemas/types.yaml#/definitions/phandle
+    description:
+      Phandle to the PCIe System Controller DT node. It's required to
+      access some MSI operation registers shared by PCIe RCs.
+
+allOf:
+  - $ref: cdns-pcie-host.yaml#
+
+required:
+  - compatible
+  - reg
+  - reg-names
+  - vendor-id
+  - device-id
+  - sophgo,link-id
+  - sophgo,syscon-pcie-ctrl
+
+unevaluatedProperties: false
+
+examples:
+  - |
+    #include <dt-bindings/interrupt-controller/irq.h>
+
+    pcie@62000000 {
+      compatible = "sophgo,sg2042-pcie-host";
+      device_type = "pci";
+      reg = <0x62000000  0x00800000>,
+            <0x48000000  0x00001000>;
+      reg-names = "reg", "cfg";
+      #address-cells = <3>;
+      #size-cells = <2>;
+      ranges = <0x81000000 0 0x00000000 0xde000000 0 0x00010000>,
+               <0x82000000 0 0xd0400000 0xd0400000 0 0x0d000000>;
+      bus-range = <0x00 0xff>;
+      vendor-id = <0x1f1c>;
+      device-id = <0x2042>;
+      cdns,no-bar-match-nbits = <48>;
+      sophgo,link-id = <0>;
+      sophgo,syscon-pcie-ctrl = <&cdns_pcie1_ctrl>;
+      msi-parent = <&msi_pcie>;
+      msi_pcie: msi {
+        compatible = "sophgo,sg2042-pcie-msi";
+        msi-controller;
+        interrupt-parent = <&intc>;
+        interrupts = <123 IRQ_TYPE_LEVEL_HIGH>;
+        interrupt-names = "msi";
+      };
+    };
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 34+ messages in thread

* [PATCH v3 2/5] PCI: sg2042: Add Sophgo SG2042 PCIe driver
  2025-01-15  7:05 [PATCH v3 0/5] Add PCIe support to Sophgo SG2042 SoC Chen Wang
  2025-01-15  7:06 ` [PATCH v3 1/5] dt-bindings: pci: Add Sophgo SG2042 PCIe host Chen Wang
@ 2025-01-15  7:06 ` Chen Wang
  2025-01-19 12:23   ` Manivannan Sadhasivam
  2025-01-22 21:33   ` Bjorn Helgaas
  2025-01-15  7:07 ` [PATCH v3 3/5] dt-bindings: mfd: syscon: Add sg2042 pcie ctrl compatible Chen Wang
                   ` (2 subsequent siblings)
  4 siblings, 2 replies; 34+ messages in thread
From: Chen Wang @ 2025-01-15  7:06 UTC (permalink / raw)
  To: kw, u.kleine-koenig, aou, arnd, bhelgaas, unicorn_wang, conor+dt,
	guoren, inochiama, krzk+dt, lee, lpieralisi,
	manivannan.sadhasivam, palmer, paul.walmsley, pbrobinson, robh,
	devicetree, linux-kernel, linux-pci, linux-riscv, chao.wei,
	xiaoguang.xing, fengchun.li, helgaas

From: Chen Wang <unicorn_wang@outlook.com>

Add support for PCIe controller in SG2042 SoC. The controller
uses the Cadence PCIe core programmed by pcie-cadence*.c. The
PCIe controller will work in host mode only.

Signed-off-by: Chen Wang <unicorn_wang@outlook.com>
---
 drivers/pci/controller/cadence/Kconfig       |  13 +
 drivers/pci/controller/cadence/Makefile      |   1 +
 drivers/pci/controller/cadence/pcie-sg2042.c | 528 +++++++++++++++++++
 3 files changed, 542 insertions(+)
 create mode 100644 drivers/pci/controller/cadence/pcie-sg2042.c

diff --git a/drivers/pci/controller/cadence/Kconfig b/drivers/pci/controller/cadence/Kconfig
index 8a0044bb3989..292eb2b20e9c 100644
--- a/drivers/pci/controller/cadence/Kconfig
+++ b/drivers/pci/controller/cadence/Kconfig
@@ -42,6 +42,18 @@ config PCIE_CADENCE_PLAT_EP
 	  endpoint mode. This PCIe controller may be embedded into many
 	  different vendors SoCs.
 
+config PCIE_SG2042
+	bool "Sophgo SG2042 PCIe controller (host mode)"
+	depends on ARCH_SOPHGO || COMPILE_TEST
+	depends on OF
+	select IRQ_MSI_LIB
+	select PCI_MSI
+	select PCIE_CADENCE_HOST
+	help
+	  Say Y here if you want to support the Sophgo SG2042 PCIe platform
+	  controller in host mode. Sophgo SG2042 PCIe controller uses Cadence
+	  PCIe core.
+
 config PCI_J721E
 	bool
 
@@ -67,4 +79,5 @@ config PCI_J721E_EP
 	  Say Y here if you want to support the TI J721E PCIe platform
 	  controller in endpoint mode. TI J721E PCIe controller uses Cadence PCIe
 	  core.
+
 endmenu
diff --git a/drivers/pci/controller/cadence/Makefile b/drivers/pci/controller/cadence/Makefile
index 9bac5fb2f13d..4df4456d9539 100644
--- a/drivers/pci/controller/cadence/Makefile
+++ b/drivers/pci/controller/cadence/Makefile
@@ -4,3 +4,4 @@ obj-$(CONFIG_PCIE_CADENCE_HOST) += pcie-cadence-host.o
 obj-$(CONFIG_PCIE_CADENCE_EP) += pcie-cadence-ep.o
 obj-$(CONFIG_PCIE_CADENCE_PLAT) += pcie-cadence-plat.o
 obj-$(CONFIG_PCI_J721E) += pci-j721e.o
+obj-$(CONFIG_PCIE_SG2042) += pcie-sg2042.o
diff --git a/drivers/pci/controller/cadence/pcie-sg2042.c b/drivers/pci/controller/cadence/pcie-sg2042.c
new file mode 100644
index 000000000000..56797c2af755
--- /dev/null
+++ b/drivers/pci/controller/cadence/pcie-sg2042.c
@@ -0,0 +1,528 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * pcie-sg2042 - PCIe controller driver for Sophgo SG2042 SoC
+ *
+ * Copyright (C) 2024 Sophgo Technology Inc.
+ * Copyright (C) 2024 Chen Wang <unicorn_wang@outlook.com>
+ */
+
+#include <linux/bits.h>
+#include <linux/irq.h>
+#include <linux/irqchip/chained_irq.h>
+#include <linux/irqdomain.h>
+#include <linux/kernel.h>
+#include <linux/mfd/syscon.h>
+#include <linux/msi.h>
+#include <linux/of_address.h>
+#include <linux/of_irq.h>
+#include <linux/of_pci.h>
+#include <linux/platform_device.h>
+#include <linux/pm_runtime.h>
+#include <linux/regmap.h>
+
+#include "../../../irqchip/irq-msi-lib.h"
+
+#include "pcie-cadence.h"
+
+/*
+ * SG2042 PCIe controller supports two ways to report MSI:
+ *
+ * - Method A, the PCIe controller implements an MSI interrupt controller
+ *   inside, and connect to PLIC upward through one interrupt line.
+ *   Provides memory-mapped MSI address, and by programming the upper 32
+ *   bits of the address to zero, it can be compatible with old PCIe devices
+ *   that only support 32-bit MSI address.
+ *
+ * - Method B, the PCIe controller connects to PLIC upward through an
+ *   independent MSI controller "sophgo,sg2042-msi" on the SOC. The MSI
+ *   controller provides multiple(up to 32) interrupt sources to PLIC.
+ *   Compared with the first method, the advantage is that the interrupt
+ *   source is expanded, but because for SG2042, the MSI address provided by
+ *   the MSI controller is fixed and only supports 64-bit address(> 2^32),
+ *   it is not compatible with old PCIe devices that only support 32-bit MSI
+ *   address.
+ *
+ * Method A & B can be configured in DTS, default is Method B.
+ */
+
+#define MAX_MSI_IRQS		8
+#define MAX_MSI_IRQS_PER_CTRL	1
+#define MAX_MSI_CTRLS		(MAX_MSI_IRQS / MAX_MSI_IRQS_PER_CTRL)
+#define MSI_DEF_NUM_VECTORS	MAX_MSI_IRQS
+#define BYTE_NUM_PER_MSI_VEC	4
+
+#define REG_CLEAR		0x0804
+#define REG_STATUS		0x0810
+#define REG_LINK0_MSI_ADDR_SIZE	0x085C
+#define REG_LINK1_MSI_ADDR_SIZE	0x080C
+#define REG_LINK0_MSI_ADDR_LOW	0x0860
+#define REG_LINK0_MSI_ADDR_HIGH	0x0864
+#define REG_LINK1_MSI_ADDR_LOW	0x0868
+#define REG_LINK1_MSI_ADDR_HIGH	0x086C
+
+#define REG_CLEAR_LINK0_BIT	2
+#define REG_CLEAR_LINK1_BIT	3
+#define REG_STATUS_LINK0_BIT	2
+#define REG_STATUS_LINK1_BIT	3
+
+#define REG_LINK0_MSI_ADDR_SIZE_MASK	GENMASK(15, 0)
+#define REG_LINK1_MSI_ADDR_SIZE_MASK	GENMASK(31, 16)
+
+struct sg2042_pcie {
+	struct cdns_pcie	*cdns_pcie;
+
+	struct regmap		*syscon;
+
+	u32			link_id;
+
+	struct irq_domain	*msi_domain;
+
+	int			msi_irq;
+
+	dma_addr_t		msi_phys;
+	void			*msi_virt;
+
+	u32			num_applied_vecs; /* used to speed up ISR */
+
+	raw_spinlock_t		msi_lock;
+	DECLARE_BITMAP(msi_irq_in_use, MAX_MSI_IRQS);
+};
+
+static void sg2042_pcie_msi_clear_status(struct sg2042_pcie *pcie)
+{
+	u32 status, clr_msi_in_bit;
+
+	if (pcie->link_id == 1)
+		clr_msi_in_bit = BIT(REG_CLEAR_LINK1_BIT);
+	else
+		clr_msi_in_bit = BIT(REG_CLEAR_LINK0_BIT);
+
+	regmap_read(pcie->syscon, REG_CLEAR, &status);
+	status |= clr_msi_in_bit;
+	regmap_write(pcie->syscon, REG_CLEAR, status);
+
+	/* need write 0 to reset, hardware can not reset automatically */
+	status &= ~clr_msi_in_bit;
+	regmap_write(pcie->syscon, REG_CLEAR, status);
+}
+
+static int sg2042_pcie_msi_irq_set_affinity(struct irq_data *d,
+					    const struct cpumask *mask,
+					    bool force)
+{
+	if (d->parent_data)
+		return irq_chip_set_affinity_parent(d, mask, force);
+
+	return -EINVAL;
+}
+
+static void sg2042_pcie_msi_irq_compose_msi_msg(struct irq_data *d,
+						struct msi_msg *msg)
+{
+	struct sg2042_pcie *pcie = irq_data_get_irq_chip_data(d);
+	struct device *dev = pcie->cdns_pcie->dev;
+
+	msg->address_lo = lower_32_bits(pcie->msi_phys) + BYTE_NUM_PER_MSI_VEC * d->hwirq;
+	msg->address_hi = upper_32_bits(pcie->msi_phys);
+	msg->data = 1;
+
+	if (d->hwirq > pcie->num_applied_vecs)
+		pcie->num_applied_vecs = d->hwirq;
+
+	dev_dbg(dev, "compose MSI msg hwirq[%ld] address_hi[%#x] address_lo[%#x]\n",
+		d->hwirq, msg->address_hi, msg->address_lo);
+}
+
+static void sg2042_pcie_msi_irq_ack(struct irq_data *d)
+{
+	struct sg2042_pcie *pcie = irq_data_get_irq_chip_data(d);
+
+	sg2042_pcie_msi_clear_status(pcie);
+}
+
+static struct irq_chip sg2042_pcie_msi_bottom_chip = {
+	.name			= "SG2042 PCIe PLIC-MSI translator",
+	.irq_ack		= sg2042_pcie_msi_irq_ack,
+	.irq_compose_msi_msg	= sg2042_pcie_msi_irq_compose_msi_msg,
+	.irq_set_affinity	= sg2042_pcie_msi_irq_set_affinity,
+};
+
+static int sg2042_pcie_irq_domain_alloc(struct irq_domain *domain,
+					unsigned int virq, unsigned int nr_irqs,
+					void *args)
+{
+	struct sg2042_pcie *pcie = domain->host_data;
+	unsigned long flags;
+	u32 i;
+	int bit;
+
+	raw_spin_lock_irqsave(&pcie->msi_lock, flags);
+
+	bit = bitmap_find_free_region(pcie->msi_irq_in_use, MSI_DEF_NUM_VECTORS,
+				      order_base_2(nr_irqs));
+
+	raw_spin_unlock_irqrestore(&pcie->msi_lock, flags);
+
+	if (bit < 0)
+		return -ENOSPC;
+
+	for (i = 0; i < nr_irqs; i++)
+		irq_domain_set_info(domain, virq + i, bit + i,
+				    &sg2042_pcie_msi_bottom_chip,
+				    pcie, handle_edge_irq,
+				    NULL, NULL);
+
+	return 0;
+}
+
+static void sg2042_pcie_irq_domain_free(struct irq_domain *domain,
+					unsigned int virq, unsigned int nr_irqs)
+{
+	struct irq_data *d = irq_domain_get_irq_data(domain, virq);
+	struct sg2042_pcie *pcie = irq_data_get_irq_chip_data(d);
+	unsigned long flags;
+
+	raw_spin_lock_irqsave(&pcie->msi_lock, flags);
+
+	bitmap_release_region(pcie->msi_irq_in_use, d->hwirq,
+			      order_base_2(nr_irqs));
+
+	raw_spin_unlock_irqrestore(&pcie->msi_lock, flags);
+}
+
+static const struct irq_domain_ops sg2042_pcie_msi_domain_ops = {
+	.alloc	= sg2042_pcie_irq_domain_alloc,
+	.free	= sg2042_pcie_irq_domain_free,
+};
+
+static int sg2042_pcie_init_msi_data(struct sg2042_pcie *pcie)
+{
+	struct device *dev = pcie->cdns_pcie->dev;
+	u32 value;
+	int ret;
+
+	raw_spin_lock_init(&pcie->msi_lock);
+
+	/*
+	 * Though the PCIe controller can address >32-bit address space, to
+	 * facilitate endpoints that support only 32-bit MSI target address,
+	 * the mask is set to 32-bit to make sure that MSI target address is
+	 * always a 32-bit address
+	 */
+	ret = dma_set_coherent_mask(dev, DMA_BIT_MASK(32));
+	if (ret < 0)
+		return ret;
+
+	pcie->msi_virt = dma_alloc_coherent(dev, BYTE_NUM_PER_MSI_VEC * MAX_MSI_IRQS,
+					    &pcie->msi_phys, GFP_KERNEL);
+	if (!pcie->msi_virt)
+		return -ENOMEM;
+
+	/* Program the MSI address and size */
+	if (pcie->link_id == 1) {
+		regmap_write(pcie->syscon, REG_LINK1_MSI_ADDR_LOW,
+			     lower_32_bits(pcie->msi_phys));
+		regmap_write(pcie->syscon, REG_LINK1_MSI_ADDR_HIGH,
+			     upper_32_bits(pcie->msi_phys));
+
+		regmap_read(pcie->syscon, REG_LINK1_MSI_ADDR_SIZE, &value);
+		value = (value & REG_LINK1_MSI_ADDR_SIZE_MASK) | MAX_MSI_IRQS;
+		regmap_write(pcie->syscon, REG_LINK1_MSI_ADDR_SIZE, value);
+	} else {
+		regmap_write(pcie->syscon, REG_LINK0_MSI_ADDR_LOW,
+			     lower_32_bits(pcie->msi_phys));
+		regmap_write(pcie->syscon, REG_LINK0_MSI_ADDR_HIGH,
+			     upper_32_bits(pcie->msi_phys));
+
+		regmap_read(pcie->syscon, REG_LINK0_MSI_ADDR_SIZE, &value);
+		value = (value & REG_LINK0_MSI_ADDR_SIZE_MASK) | (MAX_MSI_IRQS << 16);
+		regmap_write(pcie->syscon, REG_LINK0_MSI_ADDR_SIZE, value);
+	}
+
+	return 0;
+}
+
+static irqreturn_t sg2042_pcie_msi_handle_irq(struct sg2042_pcie *pcie)
+{
+	u32 i, pos;
+	unsigned long val;
+	u32 status, num_vectors;
+	irqreturn_t ret = IRQ_NONE;
+
+	num_vectors = pcie->num_applied_vecs;
+	for (i = 0; i <= num_vectors; i++) {
+		status = readl((void *)(pcie->msi_virt + i * BYTE_NUM_PER_MSI_VEC));
+		if (!status)
+			continue;
+
+		ret = IRQ_HANDLED;
+		val = status;
+		pos = 0;
+		while ((pos = find_next_bit(&val, MAX_MSI_IRQS_PER_CTRL,
+					    pos)) != MAX_MSI_IRQS_PER_CTRL) {
+			generic_handle_domain_irq(pcie->msi_domain,
+						  (i * MAX_MSI_IRQS_PER_CTRL) +
+						  pos);
+			pos++;
+		}
+		writel(0, ((void *)(pcie->msi_virt) + i * BYTE_NUM_PER_MSI_VEC));
+	}
+	return ret;
+}
+
+static void sg2042_pcie_msi_chained_isr(struct irq_desc *desc)
+{
+	struct irq_chip *chip = irq_desc_get_chip(desc);
+	u32 status, st_msi_in_bit;
+	struct sg2042_pcie *pcie;
+
+	chained_irq_enter(chip, desc);
+
+	pcie = irq_desc_get_handler_data(desc);
+	if (pcie->link_id == 1)
+		st_msi_in_bit = REG_STATUS_LINK1_BIT;
+	else
+		st_msi_in_bit = REG_STATUS_LINK0_BIT;
+
+	regmap_read(pcie->syscon, REG_STATUS, &status);
+	if ((status >> st_msi_in_bit) & 0x1) {
+		sg2042_pcie_msi_clear_status(pcie);
+
+		sg2042_pcie_msi_handle_irq(pcie);
+	}
+
+	chained_irq_exit(chip, desc);
+}
+
+#define SG2042_PCIE_MSI_FLAGS_REQUIRED (MSI_FLAG_USE_DEF_DOM_OPS |	\
+					MSI_FLAG_USE_DEF_CHIP_OPS)
+
+#define SG2042_PCIE_MSI_FLAGS_SUPPORTED MSI_GENERIC_FLAGS_MASK
+
+static struct msi_parent_ops sg2042_pcie_msi_parent_ops = {
+	.required_flags		= SG2042_PCIE_MSI_FLAGS_REQUIRED,
+	.supported_flags	= SG2042_PCIE_MSI_FLAGS_SUPPORTED,
+	.bus_select_mask	= MATCH_PCI_MSI,
+	.bus_select_token	= DOMAIN_BUS_NEXUS,
+	.prefix			= "SG2042-",
+	.init_dev_msi_info	= msi_lib_init_dev_msi_info,
+};
+
+static int sg2042_pcie_setup_msi(struct sg2042_pcie *pcie,
+				 struct device_node *msi_node)
+{
+	struct device *dev = pcie->cdns_pcie->dev;
+	struct fwnode_handle *fwnode = of_node_to_fwnode(dev->of_node);
+	struct irq_domain *parent_domain;
+	int ret = 0;
+
+	if (!of_property_read_bool(msi_node, "msi-controller"))
+		return -ENODEV;
+
+	ret = of_irq_get_byname(msi_node, "msi");
+	if (ret <= 0) {
+		dev_err(dev, "%pOF: failed to get MSI irq\n", msi_node);
+		return ret;
+	}
+	pcie->msi_irq = ret;
+
+	irq_set_chained_handler_and_data(pcie->msi_irq,
+					 sg2042_pcie_msi_chained_isr, pcie);
+
+	parent_domain = irq_domain_create_linear(fwnode, MSI_DEF_NUM_VECTORS,
+						 &sg2042_pcie_msi_domain_ops, pcie);
+	if (!parent_domain) {
+		dev_err(dev, "%pfw: Failed to create IRQ domain\n", fwnode);
+		return -ENOMEM;
+	}
+	irq_domain_update_bus_token(parent_domain, DOMAIN_BUS_NEXUS);
+
+	parent_domain->flags |= IRQ_DOMAIN_FLAG_MSI_PARENT;
+	parent_domain->msi_parent_ops = &sg2042_pcie_msi_parent_ops;
+
+	pcie->msi_domain = parent_domain;
+
+	ret = sg2042_pcie_init_msi_data(pcie);
+	if (ret) {
+		dev_err(dev, "Failed to initialize MSI data!\n");
+		return ret;
+	}
+
+	return 0;
+}
+
+static void sg2042_pcie_free_msi(struct sg2042_pcie *pcie)
+{
+	struct device *dev = pcie->cdns_pcie->dev;
+
+	if (pcie->msi_irq)
+		irq_set_chained_handler_and_data(pcie->msi_irq, NULL, NULL);
+
+	if (pcie->msi_virt)
+		dma_free_coherent(dev, BYTE_NUM_PER_MSI_VEC * MAX_MSI_IRQS,
+				  pcie->msi_virt, pcie->msi_phys);
+}
+
+/*
+ * SG2042 only support 4-byte aligned access, so for the rootbus (i.e. to read
+ * the Root Port itself, read32 is required. For non-rootbus (i.e. to read
+ * the PCIe peripheral registers, supports 1/2/4 byte aligned access, so
+ * directly using read should be fine.
+ *
+ * The same is true for write.
+ */
+static int sg2042_pcie_config_read(struct pci_bus *bus, unsigned int devfn,
+				   int where, int size, u32 *value)
+{
+	if (pci_is_root_bus(bus))
+		return pci_generic_config_read32(bus, devfn, where, size,
+						 value);
+
+	return pci_generic_config_read(bus, devfn, where, size, value);
+}
+
+static int sg2042_pcie_config_write(struct pci_bus *bus, unsigned int devfn,
+				    int where, int size, u32 value)
+{
+	if (pci_is_root_bus(bus))
+		return pci_generic_config_write32(bus, devfn, where, size,
+						  value);
+
+	return pci_generic_config_write(bus, devfn, where, size, value);
+}
+
+static struct pci_ops sg2042_pcie_host_ops = {
+	.map_bus	= cdns_pci_map_bus,
+	.read		= sg2042_pcie_config_read,
+	.write		= sg2042_pcie_config_write,
+};
+
+/* Dummy ops which will be assigned to cdns_pcie.ops, which must be !NULL. */
+static const struct cdns_pcie_ops sg2042_cdns_pcie_ops = {};
+
+static int sg2042_pcie_probe(struct platform_device *pdev)
+{
+	struct device *dev = &pdev->dev;
+	struct device_node *np = dev->of_node;
+	struct pci_host_bridge *bridge;
+	struct device_node *np_syscon;
+	struct device_node *msi_node;
+	struct cdns_pcie *cdns_pcie;
+	struct sg2042_pcie *pcie;
+	struct cdns_pcie_rc *rc;
+	struct regmap *syscon;
+	int ret;
+
+	pcie = devm_kzalloc(dev, sizeof(*pcie), GFP_KERNEL);
+	if (!pcie)
+		return -ENOMEM;
+
+	bridge = devm_pci_alloc_host_bridge(dev, sizeof(*rc));
+	if (!bridge) {
+		dev_err(dev, "Failed to alloc host bridge!\n");
+		return -ENOMEM;
+	}
+
+	bridge->ops = &sg2042_pcie_host_ops;
+
+	rc = pci_host_bridge_priv(bridge);
+	cdns_pcie = &rc->pcie;
+	cdns_pcie->dev = dev;
+	cdns_pcie->ops = &sg2042_cdns_pcie_ops;
+	pcie->cdns_pcie = cdns_pcie;
+
+	np_syscon = of_parse_phandle(np, "sophgo,syscon-pcie-ctrl", 0);
+	if (!np_syscon) {
+		dev_err(dev, "Failed to get syscon node\n");
+		return -ENOMEM;
+	}
+	syscon = syscon_node_to_regmap(np_syscon);
+	if (IS_ERR(syscon)) {
+		dev_err(dev, "Failed to get regmap for syscon\n");
+		return -ENOMEM;
+	}
+	pcie->syscon = syscon;
+
+	if (of_property_read_u32(np, "sophgo,link-id", &pcie->link_id)) {
+		dev_err(dev, "Unable to parse sophgo,link-id\n");
+		return -EINVAL;
+	}
+
+	platform_set_drvdata(pdev, pcie);
+
+	pm_runtime_enable(dev);
+
+	ret = pm_runtime_get_sync(dev);
+	if (ret < 0) {
+		dev_err(dev, "pm_runtime_get_sync failed\n");
+		goto err_get_sync;
+	}
+
+	msi_node = of_parse_phandle(dev->of_node, "msi-parent", 0);
+	if (!msi_node) {
+		dev_err(dev, "Failed to get msi-parent!\n");
+		return -1;
+	}
+
+	if (of_device_is_compatible(msi_node, "sophgo,sg2042-pcie-msi")) {
+		ret = sg2042_pcie_setup_msi(pcie, msi_node);
+		if (ret < 0)
+			goto err_setup_msi;
+	}
+
+	ret = cdns_pcie_init_phy(dev, cdns_pcie);
+	if (ret) {
+		dev_err(dev, "Failed to init phy!\n");
+		goto err_setup_msi;
+	}
+
+	ret = cdns_pcie_host_setup(rc);
+	if (ret < 0) {
+		dev_err(dev, "Failed to setup host!\n");
+		goto err_host_setup;
+	}
+
+	return 0;
+
+err_host_setup:
+	cdns_pcie_disable_phy(cdns_pcie);
+
+err_setup_msi:
+	sg2042_pcie_free_msi(pcie);
+
+err_get_sync:
+	pm_runtime_put(dev);
+	pm_runtime_disable(dev);
+
+	return ret;
+}
+
+static void sg2042_pcie_shutdown(struct platform_device *pdev)
+{
+	struct sg2042_pcie *pcie = platform_get_drvdata(pdev);
+	struct cdns_pcie *cdns_pcie = pcie->cdns_pcie;
+	struct device *dev = &pdev->dev;
+
+	sg2042_pcie_free_msi(pcie);
+
+	cdns_pcie_disable_phy(cdns_pcie);
+
+	pm_runtime_put(dev);
+	pm_runtime_disable(dev);
+}
+
+static const struct of_device_id sg2042_pcie_of_match[] = {
+	{ .compatible = "sophgo,sg2042-pcie-host" },
+	{},
+};
+
+static struct platform_driver sg2042_pcie_driver = {
+	.driver = {
+		.name		= "sg2042-pcie",
+		.of_match_table	= sg2042_pcie_of_match,
+		.pm		= &cdns_pcie_pm_ops,
+	},
+	.probe		= sg2042_pcie_probe,
+	.shutdown	= sg2042_pcie_shutdown,
+};
+builtin_platform_driver(sg2042_pcie_driver);
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 34+ messages in thread

* [PATCH v3 3/5] dt-bindings: mfd: syscon: Add sg2042 pcie ctrl compatible
  2025-01-15  7:05 [PATCH v3 0/5] Add PCIe support to Sophgo SG2042 SoC Chen Wang
  2025-01-15  7:06 ` [PATCH v3 1/5] dt-bindings: pci: Add Sophgo SG2042 PCIe host Chen Wang
  2025-01-15  7:06 ` [PATCH v3 2/5] PCI: sg2042: Add Sophgo SG2042 PCIe driver Chen Wang
@ 2025-01-15  7:07 ` Chen Wang
  2025-02-11 14:33   ` (subset) " Lee Jones
  2025-01-15  7:07 ` [PATCH v3 4/5] riscv: sophgo: dts: add pcie controllers for SG2042 Chen Wang
  2025-01-15  7:07 ` [PATCH v3 5/5] riscv: sophgo: dts: enable pcie for PioneerBox Chen Wang
  4 siblings, 1 reply; 34+ messages in thread
From: Chen Wang @ 2025-01-15  7:07 UTC (permalink / raw)
  To: kw, u.kleine-koenig, aou, arnd, bhelgaas, unicorn_wang, conor+dt,
	guoren, inochiama, krzk+dt, lee, lpieralisi,
	manivannan.sadhasivam, palmer, paul.walmsley, pbrobinson, robh,
	devicetree, linux-kernel, linux-pci, linux-riscv, chao.wei,
	xiaoguang.xing, fengchun.li, helgaas

From: Chen Wang <unicorn_wang@outlook.com>

Document SOPHGO SG2042 compatible for PCIe control registers.
These registers are shared by PCIe controller nodes.

Signed-off-by: Chen Wang <unicorn_wang@outlook.com>
Acked-by: Rob Herring (Arm) <robh@kernel.org>
---
 Documentation/devicetree/bindings/mfd/syscon.yaml | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/Documentation/devicetree/bindings/mfd/syscon.yaml b/Documentation/devicetree/bindings/mfd/syscon.yaml
index b414de4fa779..afd89aa0ae8b 100644
--- a/Documentation/devicetree/bindings/mfd/syscon.yaml
+++ b/Documentation/devicetree/bindings/mfd/syscon.yaml
@@ -107,6 +107,7 @@ select:
           - rockchip,rk3576-qos
           - rockchip,rk3588-qos
           - rockchip,rv1126-qos
+          - sophgo,sg2042-pcie-ctrl
           - st,spear1340-misc
           - stericsson,nomadik-pmu
           - starfive,jh7100-sysmain
@@ -205,6 +206,7 @@ properties:
           - rockchip,rk3576-qos
           - rockchip,rk3588-qos
           - rockchip,rv1126-qos
+          - sophgo,sg2042-pcie-ctrl
           - st,spear1340-misc
           - stericsson,nomadik-pmu
           - starfive,jh7100-sysmain
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 34+ messages in thread

* [PATCH v3 4/5] riscv: sophgo: dts: add pcie controllers for SG2042
  2025-01-15  7:05 [PATCH v3 0/5] Add PCIe support to Sophgo SG2042 SoC Chen Wang
                   ` (2 preceding siblings ...)
  2025-01-15  7:07 ` [PATCH v3 3/5] dt-bindings: mfd: syscon: Add sg2042 pcie ctrl compatible Chen Wang
@ 2025-01-15  7:07 ` Chen Wang
  2025-01-15  7:07 ` [PATCH v3 5/5] riscv: sophgo: dts: enable pcie for PioneerBox Chen Wang
  4 siblings, 0 replies; 34+ messages in thread
From: Chen Wang @ 2025-01-15  7:07 UTC (permalink / raw)
  To: kw, u.kleine-koenig, aou, arnd, bhelgaas, unicorn_wang, conor+dt,
	guoren, inochiama, krzk+dt, lee, lpieralisi,
	manivannan.sadhasivam, palmer, paul.walmsley, pbrobinson, robh,
	devicetree, linux-kernel, linux-pci, linux-riscv, chao.wei,
	xiaoguang.xing, fengchun.li, helgaas

From: Chen Wang <unicorn_wang@outlook.com>

Add PCIe controller nodes in DTS for Sophgo SG2042.
Default they are disabled.

Signed-off-by: Chen Wang <unicorn_wang@outlook.com>
---
 arch/riscv/boot/dts/sophgo/sg2042.dtsi | 89 ++++++++++++++++++++++++++
 1 file changed, 89 insertions(+)

diff --git a/arch/riscv/boot/dts/sophgo/sg2042.dtsi b/arch/riscv/boot/dts/sophgo/sg2042.dtsi
index 02fbb978973c..3db46bfa1a06 100644
--- a/arch/riscv/boot/dts/sophgo/sg2042.dtsi
+++ b/arch/riscv/boot/dts/sophgo/sg2042.dtsi
@@ -205,6 +205,95 @@ clkgen: clock-controller@7030012000 {
 			#clock-cells = <1>;
 		};
 
+		pcie_rc0: pcie@7060000000 {
+			compatible = "sophgo,sg2042-pcie-host";
+			device_type = "pci";
+			reg = <0x70 0x60000000  0x0 0x02000000>,
+			      <0x40 0x00000000  0x0 0x00001000>;
+			reg-names = "reg", "cfg";
+			linux,pci-domain = <0>;
+			#address-cells = <3>;
+			#size-cells = <2>;
+			ranges = <0x01000000 0x0  0xc0000000  0x40 0xc0000000  0x0 0x00400000>,
+				 <0x42000000 0x0  0xd0000000  0x40 0xd0000000  0x0 0x10000000>,
+				 <0x02000000 0x0  0xe0000000  0x40 0xe0000000  0x0 0x20000000>,
+				 <0x43000000 0x42 0x00000000  0x42 0x00000000  0x2 0x00000000>,
+				 <0x03000000 0x41 0x00000000  0x41 0x00000000  0x1 0x00000000>;
+			bus-range = <0x0 0xff>;
+			vendor-id = <0x1f1c>;
+			device-id = <0x2042>;
+			cdns,no-bar-match-nbits = <48>;
+			sophgo,link-id = <0>;
+			sophgo,syscon-pcie-ctrl = <&cdns_pcie0_ctrl>;
+			msi-parent = <&msi>;
+			status = "disabled";
+		};
+
+		cdns_pcie0_ctrl: syscon@7061800000 {
+			compatible = "sophgo,sg2042-pcie-ctrl", "syscon";
+			reg = <0x70 0x61800000 0x0 0x800000>;
+		};
+
+		pcie_rc1: pcie@7062000000 {
+			compatible = "sophgo,sg2042-pcie-host";
+			device_type = "pci";
+			reg = <0x70 0x62000000  0x0 0x00800000>,
+			      <0x48 0x00000000  0x0 0x00001000>;
+			reg-names = "reg", "cfg";
+			linux,pci-domain = <1>;
+			#address-cells = <3>;
+			#size-cells = <2>;
+			ranges = <0x01000000 0x0  0xc0800000  0x48 0xc0800000  0x0 0x00400000>,
+				 <0x42000000 0x0  0xd0000000  0x48 0xd0000000  0x0 0x10000000>,
+				 <0x02000000 0x0  0xe0000000  0x48 0xe0000000  0x0 0x20000000>,
+				 <0x03000000 0x49 0x00000000  0x49 0x00000000  0x1 0x00000000>,
+				 <0x43000000 0x4a 0x00000000  0x4a 0x00000000  0x2 0x00000000>;
+			bus-range = <0x0 0xff>;
+			vendor-id = <0x1f1c>;
+			device-id = <0x2042>;
+			cdns,no-bar-match-nbits = <48>;
+			sophgo,link-id = <0>;
+			sophgo,syscon-pcie-ctrl = <&cdns_pcie1_ctrl>;
+			msi-parent = <&msi_pcie>;
+			status = "disabled";
+			msi_pcie: msi {
+				compatible = "sophgo,sg2042-pcie-msi";
+				msi-controller;
+				interrupt-parent = <&intc>;
+				interrupts = <123 IRQ_TYPE_LEVEL_HIGH>;
+				interrupt-names = "msi";
+			};
+		};
+
+		pcie_rc2: pcie@7062800000 {
+			compatible = "sophgo,sg2042-pcie-host";
+			device_type = "pci";
+			reg = <0x70 0x62800000  0x0 0x00800000>,
+			      <0x4c 0x00000000  0x0 0x00001000>;
+			reg-names = "reg", "cfg";
+			linux,pci-domain = <2>;
+			#address-cells = <3>;
+			#size-cells = <2>;
+			ranges = <0x01000000 0x0  0xc0c00000  0x4c 0xc0c00000  0x0 0x00400000>,
+				 <0x42000000 0x0  0xf8000000  0x4c 0xf8000000  0x0 0x04000000>,
+				 <0x02000000 0x0  0xfc000000  0x4c 0xfc000000  0x0 0x04000000>,
+				 <0x43000000 0x4e 0x00000000  0x4e 0x00000000  0x2 0x00000000>,
+				 <0x03000000 0x4d 0x00000000  0x4d 0x00000000  0x1 0x00000000>;
+			bus-range = <0x0 0xff>;
+			vendor-id = <0x1f1c>;
+			device-id = <0x2042>;
+			cdns,no-bar-match-nbits = <48>;
+			sophgo,link-id = <1>;
+			sophgo,syscon-pcie-ctrl = <&cdns_pcie1_ctrl>;
+			msi-parent = <&msi>;
+			status = "disabled";
+		};
+
+		cdns_pcie1_ctrl: syscon@7063800000 {
+			compatible = "sophgo,sg2042-pcie-ctrl", "syscon";
+			reg = <0x70 0x63800000 0x0 0x800000>;
+		};
+
 		clint_mswi: interrupt-controller@7094000000 {
 			compatible = "sophgo,sg2042-aclint-mswi", "thead,c900-aclint-mswi";
 			reg = <0x00000070 0x94000000 0x00000000 0x00004000>;
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 34+ messages in thread

* [PATCH v3 5/5] riscv: sophgo: dts: enable pcie for PioneerBox
  2025-01-15  7:05 [PATCH v3 0/5] Add PCIe support to Sophgo SG2042 SoC Chen Wang
                   ` (3 preceding siblings ...)
  2025-01-15  7:07 ` [PATCH v3 4/5] riscv: sophgo: dts: add pcie controllers for SG2042 Chen Wang
@ 2025-01-15  7:07 ` Chen Wang
  4 siblings, 0 replies; 34+ messages in thread
From: Chen Wang @ 2025-01-15  7:07 UTC (permalink / raw)
  To: kw, u.kleine-koenig, aou, arnd, bhelgaas, unicorn_wang, conor+dt,
	guoren, inochiama, krzk+dt, lee, lpieralisi,
	manivannan.sadhasivam, palmer, paul.walmsley, pbrobinson, robh,
	devicetree, linux-kernel, linux-pci, linux-riscv, chao.wei,
	xiaoguang.xing, fengchun.li, helgaas

From: Chen Wang <unicorn_wang@outlook.com>

Enable pcie controllers for PioneerBox, which uses SG2042 SoC.

Signed-off-by: Chen Wang <unicorn_wang@outlook.com>
---
 arch/riscv/boot/dts/sophgo/sg2042-milkv-pioneer.dts | 12 ++++++++++++
 1 file changed, 12 insertions(+)

diff --git a/arch/riscv/boot/dts/sophgo/sg2042-milkv-pioneer.dts b/arch/riscv/boot/dts/sophgo/sg2042-milkv-pioneer.dts
index be596d01ff8d..e63445cc7d18 100644
--- a/arch/riscv/boot/dts/sophgo/sg2042-milkv-pioneer.dts
+++ b/arch/riscv/boot/dts/sophgo/sg2042-milkv-pioneer.dts
@@ -60,6 +60,18 @@ mcu: syscon@17 {
 	};
 };
 
+&pcie_rc0 {
+	status = "okay";
+};
+
+&pcie_rc1 {
+	status = "okay";
+};
+
+&pcie_rc2 {
+	status = "okay";
+};
+
 &sd {
 	bus-width = <4>;
 	no-sdio;
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 34+ messages in thread

* Re: [PATCH v3 1/5] dt-bindings: pci: Add Sophgo SG2042 PCIe host
  2025-01-15  7:06 ` [PATCH v3 1/5] dt-bindings: pci: Add Sophgo SG2042 PCIe host Chen Wang
@ 2025-01-19 11:44   ` Manivannan Sadhasivam
  2025-01-22 12:52     ` Chen Wang
  2025-01-22 22:21   ` Bjorn Helgaas
  1 sibling, 1 reply; 34+ messages in thread
From: Manivannan Sadhasivam @ 2025-01-19 11:44 UTC (permalink / raw)
  To: Chen Wang
  Cc: kw, u.kleine-koenig, aou, arnd, bhelgaas, unicorn_wang, conor+dt,
	guoren, inochiama, krzk+dt, lee, lpieralisi, palmer,
	paul.walmsley, pbrobinson, robh, devicetree, linux-kernel,
	linux-pci, linux-riscv, chao.wei, xiaoguang.xing, fengchun.li,
	helgaas

On Wed, Jan 15, 2025 at 03:06:37PM +0800, Chen Wang wrote:
> From: Chen Wang <unicorn_wang@outlook.com>
> 
> Add binding for Sophgo SG2042 PCIe host controller.
> 
> Signed-off-by: Chen Wang <unicorn_wang@outlook.com>
> ---
>  .../bindings/pci/sophgo,sg2042-pcie-host.yaml | 147 ++++++++++++++++++
>  1 file changed, 147 insertions(+)
>  create mode 100644 Documentation/devicetree/bindings/pci/sophgo,sg2042-pcie-host.yaml
> 
> diff --git a/Documentation/devicetree/bindings/pci/sophgo,sg2042-pcie-host.yaml b/Documentation/devicetree/bindings/pci/sophgo,sg2042-pcie-host.yaml
> new file mode 100644
> index 000000000000..f98e71822144
> --- /dev/null
> +++ b/Documentation/devicetree/bindings/pci/sophgo,sg2042-pcie-host.yaml
> @@ -0,0 +1,147 @@
> +# SPDX-License-Identifier: (GPL-2.0 OR BSD-2-Clause)
> +%YAML 1.2
> +---
> +$id: http://devicetree.org/schemas/pci/sophgo,sg2042-pcie-host.yaml#
> +$schema: http://devicetree.org/meta-schemas/core.yaml#
> +
> +title: Sophgo SG2042 PCIe Host (Cadence PCIe Wrapper)
> +
> +description:
> +  Sophgo SG2042 PCIe host controller is based on the Cadence PCIe core.
> +
> +maintainers:
> +  - Chen Wang <unicorn_wang@outlook.com>
> +
> +properties:
> +  compatible:
> +    const: sophgo,sg2042-pcie-host
> +
> +  reg:
> +    maxItems: 2
> +
> +  reg-names:
> +    items:
> +      - const: reg
> +      - const: cfg
> +
> +  vendor-id:
> +    const: 0x1f1c
> +
> +  device-id:
> +    const: 0x2042
> +
> +  msi:
> +    type: object
> +    $ref: /schemas/interrupt-controller/msi-controller.yaml#
> +    unevaluatedProperties: false
> +
> +    properties:
> +      compatible:
> +        items:
> +          - const: sophgo,sg2042-pcie-msi
> +
> +      interrupts:
> +        maxItems: 1
> +
> +      interrupt-names:
> +        const: msi
> +
> +  msi-parent: true
> +
> +  sophgo,link-id:
> +    $ref: /schemas/types.yaml#/definitions/uint32
> +    description: |
> +      SG2042 uses Cadence IP, every IP is composed of 2 cores (called link0
> +      & link1 as Cadence's term). Each core corresponds to a host bridge,
> +      and each host bridge has only one root port. Their configuration
> +      registers are completely independent. SG2042 integrates two Cadence IPs,
> +      so there can actually be up to four host bridges. "sophgo,link-id" is
> +      used to identify which core/link the PCIe host bridge node corresponds to.
> +
> +      The Cadence IP has two modes of operation, selected by a strap pin.
> +
> +      In the single-link mode, the Cadence PCIe core instance associated
> +      with Link0 is connected to all the lanes and the Cadence PCIe core
> +      instance associated with Link1 is inactive.
> +
> +      In the dual-link mode, the Cadence PCIe core instance associated
> +      with Link0 is connected to the lower half of the lanes and the
> +      Cadence PCIe core instance associated with Link1 is connected to
> +      the upper half of the lanes.
> +
> +      SG2042 contains 2 Cadence IPs and configures the Cores as below:
> +
> +                     +-- Core (Link0) <---> pcie_rc0  +-----------------+
> +                     |                                |                 |
> +      Cadence IP 1 --+                                | cdns_pcie0_ctrl |
> +                     |                                |                 |
> +                     +-- Core (Link1) <---> disabled  +-----------------+
> +
> +                     +-- Core (Link0) <---> pcie_rc1  +-----------------+
> +                     |                                |                 |
> +      Cadence IP 2 --+                                | cdns_pcie1_ctrl |
> +                     |                                |                 |
> +                     +-- Core (Link1) <---> pcie_rc2  +-----------------+
> +
> +      pcie_rcX is PCIe node ("sophgo,sg2042-pcie-host") defined in DTS.
> +
> +      Sophgo defines some new register files to add support for their MSI
> +      controller inside PCIe. These new register files are defined in DTS as
> +      syscon node ("sophgo,sg2042-pcie-ctrl"), i.e. "cdns_pcie0_ctrl" /
> +      "cdns_pcie1_ctrl". cdns_pcieX_ctrl contains some registers shared by
> +      pcie_rcX, even two RC (Link)s may share different bits of the same
> +      register. For example, cdns_pcie1_ctrl contains registers shared by
> +      link0 & link1 for Cadence IP 2.
> +
> +      "sophgo,link-id" is defined to distinguish the two RC's in one Cadence IP,
> +      so we can know what registers (bits) we should use.
> +
> +  sophgo,syscon-pcie-ctrl:
> +    $ref: /schemas/types.yaml#/definitions/phandle
> +    description:
> +      Phandle to the PCIe System Controller DT node. It's required to
> +      access some MSI operation registers shared by PCIe RCs.
> +
> +allOf:
> +  - $ref: cdns-pcie-host.yaml#
> +
> +required:
> +  - compatible
> +  - reg
> +  - reg-names
> +  - vendor-id
> +  - device-id
> +  - sophgo,link-id
> +  - sophgo,syscon-pcie-ctrl
> +
> +unevaluatedProperties: false
> +
> +examples:
> +  - |
> +    #include <dt-bindings/interrupt-controller/irq.h>
> +
> +    pcie@62000000 {
> +      compatible = "sophgo,sg2042-pcie-host";
> +      device_type = "pci";
> +      reg = <0x62000000  0x00800000>,
> +            <0x48000000  0x00001000>;

Use single space between address and size.

> +      reg-names = "reg", "cfg";
> +      #address-cells = <3>;
> +      #size-cells = <2>;
> +      ranges = <0x81000000 0 0x00000000 0xde000000 0 0x00010000>,
> +               <0x82000000 0 0xd0400000 0xd0400000 0 0x0d000000>;

For sure you don't need to set 'relocatable' flag for both regions.

> +      bus-range = <0x00 0xff>;
> +      vendor-id = <0x1f1c>;
> +      device-id = <0x2042>;

As Bjorn explained in v2, these properties need to be moved to PCI root port
node. Your argument of a single root port node for a host bridge doesn't add as
we have found that describing the root port properties in host bridge only
creates issues.

Btw, we are migrating the existing single RP platforms too to root port node.

> +      cdns,no-bar-match-nbits = <48>;
> +      sophgo,link-id = <0>;
> +      sophgo,syscon-pcie-ctrl = <&cdns_pcie1_ctrl>;

Where is the num-lanes property?

> +      msi-parent = <&msi_pcie>;
> +      msi_pcie: msi {

'msi' is not a standard node name. 'interrupt-controller' is what usually used
to describe the MSI node.

Btw, is the MSI controller a separate IP inside the host bridge? If not, there
would no need to add a separate node. Most of the host bridge IPs implementing
MSI controller, do not use a separate node.

- Mani

-- 
மணிவண்ணன் சதாசிவம்

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH v3 2/5] PCI: sg2042: Add Sophgo SG2042 PCIe driver
  2025-01-15  7:06 ` [PATCH v3 2/5] PCI: sg2042: Add Sophgo SG2042 PCIe driver Chen Wang
@ 2025-01-19 12:23   ` Manivannan Sadhasivam
  2025-01-22 13:28     ` Chen Wang
  2025-01-22 21:33   ` Bjorn Helgaas
  1 sibling, 1 reply; 34+ messages in thread
From: Manivannan Sadhasivam @ 2025-01-19 12:23 UTC (permalink / raw)
  To: Chen Wang
  Cc: kw, u.kleine-koenig, aou, arnd, bhelgaas, unicorn_wang, conor+dt,
	guoren, inochiama, krzk+dt, lee, lpieralisi, palmer,
	paul.walmsley, pbrobinson, robh, devicetree, linux-kernel,
	linux-pci, linux-riscv, chao.wei, xiaoguang.xing, fengchun.li,
	helgaas

On Wed, Jan 15, 2025 at 03:06:57PM +0800, Chen Wang wrote:
> From: Chen Wang <unicorn_wang@outlook.com>
> 
> Add support for PCIe controller in SG2042 SoC. The controller
> uses the Cadence PCIe core programmed by pcie-cadence*.c. The
> PCIe controller will work in host mode only.
> 

Please add more info about the IP. Like IP revision, PCIe Gen, lane count,
etc...

> Signed-off-by: Chen Wang <unicorn_wang@outlook.com>
> ---
>  drivers/pci/controller/cadence/Kconfig       |  13 +
>  drivers/pci/controller/cadence/Makefile      |   1 +
>  drivers/pci/controller/cadence/pcie-sg2042.c | 528 +++++++++++++++++++
>  3 files changed, 542 insertions(+)
>  create mode 100644 drivers/pci/controller/cadence/pcie-sg2042.c
> 
> diff --git a/drivers/pci/controller/cadence/Kconfig b/drivers/pci/controller/cadence/Kconfig
> index 8a0044bb3989..292eb2b20e9c 100644
> --- a/drivers/pci/controller/cadence/Kconfig
> +++ b/drivers/pci/controller/cadence/Kconfig
> @@ -42,6 +42,18 @@ config PCIE_CADENCE_PLAT_EP
>  	  endpoint mode. This PCIe controller may be embedded into many
>  	  different vendors SoCs.
>  
> +config PCIE_SG2042
> +	bool "Sophgo SG2042 PCIe controller (host mode)"
> +	depends on ARCH_SOPHGO || COMPILE_TEST
> +	depends on OF
> +	select IRQ_MSI_LIB
> +	select PCI_MSI
> +	select PCIE_CADENCE_HOST
> +	help
> +	  Say Y here if you want to support the Sophgo SG2042 PCIe platform
> +	  controller in host mode. Sophgo SG2042 PCIe controller uses Cadence
> +	  PCIe core.
> +
>  config PCI_J721E
>  	bool
>  
> @@ -67,4 +79,5 @@ config PCI_J721E_EP
>  	  Say Y here if you want to support the TI J721E PCIe platform
>  	  controller in endpoint mode. TI J721E PCIe controller uses Cadence PCIe
>  	  core.
> +

Spurious newline.

>  endmenu
> diff --git a/drivers/pci/controller/cadence/Makefile b/drivers/pci/controller/cadence/Makefile
> index 9bac5fb2f13d..4df4456d9539 100644
> --- a/drivers/pci/controller/cadence/Makefile
> +++ b/drivers/pci/controller/cadence/Makefile
> @@ -4,3 +4,4 @@ obj-$(CONFIG_PCIE_CADENCE_HOST) += pcie-cadence-host.o
>  obj-$(CONFIG_PCIE_CADENCE_EP) += pcie-cadence-ep.o
>  obj-$(CONFIG_PCIE_CADENCE_PLAT) += pcie-cadence-plat.o
>  obj-$(CONFIG_PCI_J721E) += pci-j721e.o
> +obj-$(CONFIG_PCIE_SG2042) += pcie-sg2042.o
> diff --git a/drivers/pci/controller/cadence/pcie-sg2042.c b/drivers/pci/controller/cadence/pcie-sg2042.c
> new file mode 100644
> index 000000000000..56797c2af755
> --- /dev/null
> +++ b/drivers/pci/controller/cadence/pcie-sg2042.c
> @@ -0,0 +1,528 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + * pcie-sg2042 - PCIe controller driver for Sophgo SG2042 SoC
> + *
> + * Copyright (C) 2024 Sophgo Technology Inc.
> + * Copyright (C) 2024 Chen Wang <unicorn_wang@outlook.com>
> + */
> +
> +#include <linux/bits.h>
> +#include <linux/irq.h>
> +#include <linux/irqchip/chained_irq.h>
> +#include <linux/irqdomain.h>
> +#include <linux/kernel.h>
> +#include <linux/mfd/syscon.h>
> +#include <linux/msi.h>
> +#include <linux/of_address.h>
> +#include <linux/of_irq.h>
> +#include <linux/of_pci.h>
> +#include <linux/platform_device.h>
> +#include <linux/pm_runtime.h>
> +#include <linux/regmap.h>
> +
> +#include "../../../irqchip/irq-msi-lib.h"
> +
> +#include "pcie-cadence.h"
> +
> +/*
> + * SG2042 PCIe controller supports two ways to report MSI:
> + *
> + * - Method A, the PCIe controller implements an MSI interrupt controller
> + *   inside, and connect to PLIC upward through one interrupt line.
> + *   Provides memory-mapped MSI address, and by programming the upper 32
> + *   bits of the address to zero, it can be compatible with old PCIe devices
> + *   that only support 32-bit MSI address.
> + *
> + * - Method B, the PCIe controller connects to PLIC upward through an
> + *   independent MSI controller "sophgo,sg2042-msi" on the SOC. The MSI
> + *   controller provides multiple(up to 32) interrupt sources to PLIC.
> + *   Compared with the first method, the advantage is that the interrupt
> + *   source is expanded, but because for SG2042, the MSI address provided by
> + *   the MSI controller is fixed and only supports 64-bit address(> 2^32),
> + *   it is not compatible with old PCIe devices that only support 32-bit MSI
> + *   address.
> + *
> + * Method A & B can be configured in DTS, default is Method B.

How to configure them? I can only see "sophgo,sg2042-msi" in the binding.

> + */
> +
> +#define MAX_MSI_IRQS		8
> +#define MAX_MSI_IRQS_PER_CTRL	1
> +#define MAX_MSI_CTRLS		(MAX_MSI_IRQS / MAX_MSI_IRQS_PER_CTRL)
> +#define MSI_DEF_NUM_VECTORS	MAX_MSI_IRQS
> +#define BYTE_NUM_PER_MSI_VEC	4
> +
> +#define REG_CLEAR		0x0804
> +#define REG_STATUS		0x0810
> +#define REG_LINK0_MSI_ADDR_SIZE	0x085C
> +#define REG_LINK1_MSI_ADDR_SIZE	0x080C
> +#define REG_LINK0_MSI_ADDR_LOW	0x0860
> +#define REG_LINK0_MSI_ADDR_HIGH	0x0864
> +#define REG_LINK1_MSI_ADDR_LOW	0x0868
> +#define REG_LINK1_MSI_ADDR_HIGH	0x086C
> +
> +#define REG_CLEAR_LINK0_BIT	2
> +#define REG_CLEAR_LINK1_BIT	3
> +#define REG_STATUS_LINK0_BIT	2
> +#define REG_STATUS_LINK1_BIT	3
> +
> +#define REG_LINK0_MSI_ADDR_SIZE_MASK	GENMASK(15, 0)
> +#define REG_LINK1_MSI_ADDR_SIZE_MASK	GENMASK(31, 16)
> +
> +struct sg2042_pcie {
> +	struct cdns_pcie	*cdns_pcie;
> +
> +	struct regmap		*syscon;
> +
> +	u32			link_id;
> +
> +	struct irq_domain	*msi_domain;
> +
> +	int			msi_irq;
> +
> +	dma_addr_t		msi_phys;
> +	void			*msi_virt;
> +
> +	u32			num_applied_vecs; /* used to speed up ISR */
> +

Get rid of the newline between struct members, please.

> +	raw_spinlock_t		msi_lock;
> +	DECLARE_BITMAP(msi_irq_in_use, MAX_MSI_IRQS);
> +};
> +

[...]

> +static int sg2042_pcie_setup_msi(struct sg2042_pcie *pcie,
> +				 struct device_node *msi_node)
> +{
> +	struct device *dev = pcie->cdns_pcie->dev;
> +	struct fwnode_handle *fwnode = of_node_to_fwnode(dev->of_node);
> +	struct irq_domain *parent_domain;
> +	int ret = 0;
> +
> +	if (!of_property_read_bool(msi_node, "msi-controller"))
> +		return -ENODEV;
> +
> +	ret = of_irq_get_byname(msi_node, "msi");
> +	if (ret <= 0) {
> +		dev_err(dev, "%pOF: failed to get MSI irq\n", msi_node);
> +		return ret;
> +	}
> +	pcie->msi_irq = ret;
> +
> +	irq_set_chained_handler_and_data(pcie->msi_irq,
> +					 sg2042_pcie_msi_chained_isr, pcie);
> +
> +	parent_domain = irq_domain_create_linear(fwnode, MSI_DEF_NUM_VECTORS,
> +						 &sg2042_pcie_msi_domain_ops, pcie);
> +	if (!parent_domain) {
> +		dev_err(dev, "%pfw: Failed to create IRQ domain\n", fwnode);
> +		return -ENOMEM;
> +	}
> +	irq_domain_update_bus_token(parent_domain, DOMAIN_BUS_NEXUS);
> +

The MSI controller is wired to PLIC isn't it? If so, why can't you use
hierarchial MSI domain implementation as like other controller drivers?

> +	parent_domain->flags |= IRQ_DOMAIN_FLAG_MSI_PARENT;
> +	parent_domain->msi_parent_ops = &sg2042_pcie_msi_parent_ops;
> +
> +	pcie->msi_domain = parent_domain;
> +
> +	ret = sg2042_pcie_init_msi_data(pcie);
> +	if (ret) {
> +		dev_err(dev, "Failed to initialize MSI data!\n");
> +		return ret;
> +	}
> +
> +	return 0;
> +}
> +
> +static void sg2042_pcie_free_msi(struct sg2042_pcie *pcie)
> +{
> +	struct device *dev = pcie->cdns_pcie->dev;
> +
> +	if (pcie->msi_irq)
> +		irq_set_chained_handler_and_data(pcie->msi_irq, NULL, NULL);
> +
> +	if (pcie->msi_virt)
> +		dma_free_coherent(dev, BYTE_NUM_PER_MSI_VEC * MAX_MSI_IRQS,
> +				  pcie->msi_virt, pcie->msi_phys);
> +}
> +
> +/*
> + * SG2042 only support 4-byte aligned access, so for the rootbus (i.e. to read
> + * the Root Port itself, read32 is required. For non-rootbus (i.e. to read
> + * the PCIe peripheral registers, supports 1/2/4 byte aligned access, so
> + * directly using read should be fine.
> + *
> + * The same is true for write.
> + */
> +static int sg2042_pcie_config_read(struct pci_bus *bus, unsigned int devfn,
> +				   int where, int size, u32 *value)
> +{
> +	if (pci_is_root_bus(bus))
> +		return pci_generic_config_read32(bus, devfn, where, size,
> +						 value);
> +
> +	return pci_generic_config_read(bus, devfn, where, size, value);
> +}
> +
> +static int sg2042_pcie_config_write(struct pci_bus *bus, unsigned int devfn,
> +				    int where, int size, u32 value)
> +{
> +	if (pci_is_root_bus(bus))
> +		return pci_generic_config_write32(bus, devfn, where, size,
> +						  value);
> +
> +	return pci_generic_config_write(bus, devfn, where, size, value);
> +}
> +
> +static struct pci_ops sg2042_pcie_host_ops = {
> +	.map_bus	= cdns_pci_map_bus,
> +	.read		= sg2042_pcie_config_read,
> +	.write		= sg2042_pcie_config_write,
> +};
> +
> +/* Dummy ops which will be assigned to cdns_pcie.ops, which must be !NULL. */

Because cadence code driver doesn't check for !ops? Please fix it instead. And
the fix is trivial.

> +static const struct cdns_pcie_ops sg2042_cdns_pcie_ops = {};
> +
> +static int sg2042_pcie_probe(struct platform_device *pdev)
> +{
> +	struct device *dev = &pdev->dev;
> +	struct device_node *np = dev->of_node;
> +	struct pci_host_bridge *bridge;
> +	struct device_node *np_syscon;
> +	struct device_node *msi_node;
> +	struct cdns_pcie *cdns_pcie;
> +	struct sg2042_pcie *pcie;
> +	struct cdns_pcie_rc *rc;
> +	struct regmap *syscon;
> +	int ret;
> +
> +	pcie = devm_kzalloc(dev, sizeof(*pcie), GFP_KERNEL);
> +	if (!pcie)
> +		return -ENOMEM;
> +
> +	bridge = devm_pci_alloc_host_bridge(dev, sizeof(*rc));
> +	if (!bridge) {
> +		dev_err(dev, "Failed to alloc host bridge!\n");

Use dev_err_probe() throughout the probe function.

> +		return -ENOMEM;
> +	}
> +
> +	bridge->ops = &sg2042_pcie_host_ops;
> +
> +	rc = pci_host_bridge_priv(bridge);
> +	cdns_pcie = &rc->pcie;
> +	cdns_pcie->dev = dev;
> +	cdns_pcie->ops = &sg2042_cdns_pcie_ops;
> +	pcie->cdns_pcie = cdns_pcie;
> +
> +	np_syscon = of_parse_phandle(np, "sophgo,syscon-pcie-ctrl", 0);
> +	if (!np_syscon) {
> +		dev_err(dev, "Failed to get syscon node\n");
> +		return -ENOMEM;

-ENODEV

> +	}
> +	syscon = syscon_node_to_regmap(np_syscon);

You should drop the np reference count once you are done with it.

> +	if (IS_ERR(syscon)) {
> +		dev_err(dev, "Failed to get regmap for syscon\n");
> +		return -ENOMEM;

PTR_ERR(syscon)

> +	}
> +	pcie->syscon = syscon;
> +
> +	if (of_property_read_u32(np, "sophgo,link-id", &pcie->link_id)) {
> +		dev_err(dev, "Unable to parse sophgo,link-id\n");

"Unable to parse \"sophgo,link-id\"\n"

> +		return -EINVAL;
> +	}
> +
> +	platform_set_drvdata(pdev, pcie);
> +
> +	pm_runtime_enable(dev);
> +
> +	ret = pm_runtime_get_sync(dev);

Use pm_runtime_resume_and_get().

> +	if (ret < 0) {
> +		dev_err(dev, "pm_runtime_get_sync failed\n");
> +		goto err_get_sync;
> +	}
> +
> +	msi_node = of_parse_phandle(dev->of_node, "msi-parent", 0);
> +	if (!msi_node) {
> +		dev_err(dev, "Failed to get msi-parent!\n");
> +		return -1;

-ENODATA

> +	}
> +
> +	if (of_device_is_compatible(msi_node, "sophgo,sg2042-pcie-msi")) {
> +		ret = sg2042_pcie_setup_msi(pcie, msi_node);

Same as above. np leak here.

- Mani

-- 
மணிவண்ணன் சதாசிவம்

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH v3 1/5] dt-bindings: pci: Add Sophgo SG2042 PCIe host
  2025-01-19 11:44   ` Manivannan Sadhasivam
@ 2025-01-22 12:52     ` Chen Wang
  2025-01-22 17:21       ` Manivannan Sadhasivam
  0 siblings, 1 reply; 34+ messages in thread
From: Chen Wang @ 2025-01-22 12:52 UTC (permalink / raw)
  To: Manivannan Sadhasivam, Chen Wang
  Cc: kw, u.kleine-koenig, aou, arnd, bhelgaas, conor+dt, guoren,
	inochiama, krzk+dt, lee, lpieralisi, palmer, paul.walmsley,
	pbrobinson, robh, devicetree, linux-kernel, linux-pci,
	linux-riscv, chao.wei, xiaoguang.xing, fengchun.li, helgaas


On 2025/1/19 19:44, Manivannan Sadhasivam wrote:
> On Wed, Jan 15, 2025 at 03:06:37PM +0800, Chen Wang wrote:
>> From: Chen Wang <unicorn_wang@outlook.com>
>>
>> Add binding for Sophgo SG2042 PCIe host controller.
>>
>> Signed-off-by: Chen Wang <unicorn_wang@outlook.com>
>> ---
>>   .../bindings/pci/sophgo,sg2042-pcie-host.yaml | 147 ++++++++++++++++++
>>   1 file changed, 147 insertions(+)
>>   create mode 100644 Documentation/devicetree/bindings/pci/sophgo,sg2042-pcie-host.yaml

[......]

>> +examples:
>> +  - |
>> +    #include <dt-bindings/interrupt-controller/irq.h>
>> +
>> +    pcie@62000000 {
>> +      compatible = "sophgo,sg2042-pcie-host";
>> +      device_type = "pci";
>> +      reg = <0x62000000  0x00800000>,
>> +            <0x48000000  0x00001000>;
> Use single space between address and size.
ok, thanks.
>> +      reg-names = "reg", "cfg";
>> +      #address-cells = <3>;
>> +      #size-cells = <2>;
>> +      ranges = <0x81000000 0 0x00000000 0xde000000 0 0x00010000>,
>> +               <0x82000000 0 0xd0400000 0xd0400000 0 0x0d000000>;
> For sure you don't need to set 'relocatable' flag for both regions.
ok, I will correct this in next version.
>> +      bus-range = <0x00 0xff>;
>> +      vendor-id = <0x1f1c>;
>> +      device-id = <0x2042>;
> As Bjorn explained in v2, these properties need to be moved to PCI root port
> node. Your argument of a single root port node for a host bridge doesn't add as
> we have found that describing the root port properties in host bridge only
> creates issues.

Got it. I will try to change this in next version.

> Btw, we are migrating the existing single RP platforms too to root port node.
>
>> +      cdns,no-bar-match-nbits = <48>;
>> +      sophgo,link-id = <0>;
>> +      sophgo,syscon-pcie-ctrl = <&cdns_pcie1_ctrl>;
> Where is the num-lanes property?
Is this num-lanes a must-have property? The lane number of each link on 
the SG2042 is hard-coded in the firmware, so it seems meaningless to 
configure it.
>> +      msi-parent = <&msi_pcie>;
>> +      msi_pcie: msi {
> 'msi' is not a standard node name. 'interrupt-controller' is what usually used
> to describe the MSI node.
OK. I will corret this.
> Btw, is the MSI controller a separate IP inside the host bridge? If not, there
> would no need to add a separate node. Most of the host bridge IPs implementing
> MSI controller, do not use a separate node.

Yes, it's a separated IP inside the host bridge.

> - Mani

Thanks,

Chen


^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH v3 2/5] PCI: sg2042: Add Sophgo SG2042 PCIe driver
  2025-01-19 12:23   ` Manivannan Sadhasivam
@ 2025-01-22 13:28     ` Chen Wang
  2025-01-22 17:34       ` Manivannan Sadhasivam
  0 siblings, 1 reply; 34+ messages in thread
From: Chen Wang @ 2025-01-22 13:28 UTC (permalink / raw)
  To: Manivannan Sadhasivam, Chen Wang
  Cc: kw, u.kleine-koenig, aou, arnd, bhelgaas, conor+dt, guoren,
	inochiama, krzk+dt, lee, lpieralisi, palmer, paul.walmsley,
	pbrobinson, robh, devicetree, linux-kernel, linux-pci,
	linux-riscv, chao.wei, xiaoguang.xing, fengchun.li, helgaas


On 2025/1/19 20:23, Manivannan Sadhasivam wrote:
> On Wed, Jan 15, 2025 at 03:06:57PM +0800, Chen Wang wrote:
>> From: Chen Wang <unicorn_wang@outlook.com>
>>
>> Add support for PCIe controller in SG2042 SoC. The controller
>> uses the Cadence PCIe core programmed by pcie-cadence*.c. The
>> PCIe controller will work in host mode only.
>>
> Please add more info about the IP. Like IP revision, PCIe Gen, lane count,
> etc...
ok.
>> Signed-off-by: Chen Wang <unicorn_wang@outlook.com>
>> ---
>>   drivers/pci/controller/cadence/Kconfig       |  13 +
>>   drivers/pci/controller/cadence/Makefile      |   1 +
>>   drivers/pci/controller/cadence/pcie-sg2042.c | 528 +++++++++++++++++++
>>   3 files changed, 542 insertions(+)
>>   create mode 100644 drivers/pci/controller/cadence/pcie-sg2042.c
>>
>> diff --git a/drivers/pci/controller/cadence/Kconfig b/drivers/pci/controller/cadence/Kconfig
>> index 8a0044bb3989..292eb2b20e9c 100644
>> --- a/drivers/pci/controller/cadence/Kconfig
>> +++ b/drivers/pci/controller/cadence/Kconfig
>> @@ -42,6 +42,18 @@ config PCIE_CADENCE_PLAT_EP
>>   	  endpoint mode. This PCIe controller may be embedded into many
>>   	  different vendors SoCs.
>>   
>> +config PCIE_SG2042
>> +	bool "Sophgo SG2042 PCIe controller (host mode)"
>> +	depends on ARCH_SOPHGO || COMPILE_TEST
>> +	depends on OF
>> +	select IRQ_MSI_LIB
>> +	select PCI_MSI
>> +	select PCIE_CADENCE_HOST
>> +	help
>> +	  Say Y here if you want to support the Sophgo SG2042 PCIe platform
>> +	  controller in host mode. Sophgo SG2042 PCIe controller uses Cadence
>> +	  PCIe core.
>> +
>>   config PCI_J721E
>>   	bool
>>   
>> @@ -67,4 +79,5 @@ config PCI_J721E_EP
>>   	  Say Y here if you want to support the TI J721E PCIe platform
>>   	  controller in endpoint mode. TI J721E PCIe controller uses Cadence PCIe
>>   	  core.
>> +
> Spurious newline.

oops, I will fix this.

[......]

>> +/*
>> + * SG2042 PCIe controller supports two ways to report MSI:
>> + *
>> + * - Method A, the PCIe controller implements an MSI interrupt controller
>> + *   inside, and connect to PLIC upward through one interrupt line.
>> + *   Provides memory-mapped MSI address, and by programming the upper 32
>> + *   bits of the address to zero, it can be compatible with old PCIe devices
>> + *   that only support 32-bit MSI address.
>> + *
>> + * - Method B, the PCIe controller connects to PLIC upward through an
>> + *   independent MSI controller "sophgo,sg2042-msi" on the SOC. The MSI
>> + *   controller provides multiple(up to 32) interrupt sources to PLIC.
>> + *   Compared with the first method, the advantage is that the interrupt
>> + *   source is expanded, but because for SG2042, the MSI address provided by
>> + *   the MSI controller is fixed and only supports 64-bit address(> 2^32),
>> + *   it is not compatible with old PCIe devices that only support 32-bit MSI
>> + *   address.
>> + *
>> + * Method A & B can be configured in DTS, default is Method B.
> How to configure them? I can only see "sophgo,sg2042-msi" in the binding.


The value of the msi-parent attribute is used in dts to distinguish 
them, for example:

```dts

msi: msi-controller@7030010300 {
     ......
};

pcie_rc0: pcie@7060000000 {
     msi-parent = <&msi>;
};

pcie_rc1: pcie@7062000000 {
     ......
     msi-parent = <&msi_pcie>;
     msi_pcie: interrupt-controller {
         ......
     };
};

```

Which means:

pcie_rc0 uses Method B

pcie_rc1 uses Method A.

[......]

>> +struct sg2042_pcie {
>> +	struct cdns_pcie	*cdns_pcie;
>> +
>> +	struct regmap		*syscon;
>> +
>> +	u32			link_id;
>> +
>> +	struct irq_domain	*msi_domain;
>> +
>> +	int			msi_irq;
>> +
>> +	dma_addr_t		msi_phys;
>> +	void			*msi_virt;
>> +
>> +	u32			num_applied_vecs; /* used to speed up ISR */
>> +
> Get rid of the newline between struct members, please.
ok
>> +	raw_spinlock_t		msi_lock;
>> +	DECLARE_BITMAP(msi_irq_in_use, MAX_MSI_IRQS);
>> +};
>> +
> [...]
>
>> +static int sg2042_pcie_setup_msi(struct sg2042_pcie *pcie,
>> +				 struct device_node *msi_node)
>> +{
>> +	struct device *dev = pcie->cdns_pcie->dev;
>> +	struct fwnode_handle *fwnode = of_node_to_fwnode(dev->of_node);
>> +	struct irq_domain *parent_domain;
>> +	int ret = 0;
>> +
>> +	if (!of_property_read_bool(msi_node, "msi-controller"))
>> +		return -ENODEV;
>> +
>> +	ret = of_irq_get_byname(msi_node, "msi");
>> +	if (ret <= 0) {
>> +		dev_err(dev, "%pOF: failed to get MSI irq\n", msi_node);
>> +		return ret;
>> +	}
>> +	pcie->msi_irq = ret;
>> +
>> +	irq_set_chained_handler_and_data(pcie->msi_irq,
>> +					 sg2042_pcie_msi_chained_isr, pcie);
>> +
>> +	parent_domain = irq_domain_create_linear(fwnode, MSI_DEF_NUM_VECTORS,
>> +						 &sg2042_pcie_msi_domain_ops, pcie);
>> +	if (!parent_domain) {
>> +		dev_err(dev, "%pfw: Failed to create IRQ domain\n", fwnode);
>> +		return -ENOMEM;
>> +	}
>> +	irq_domain_update_bus_token(parent_domain, DOMAIN_BUS_NEXUS);
>> +
> The MSI controller is wired to PLIC isn't it? If so, why can't you use
> hierarchial MSI domain implementation as like other controller drivers?

The method used here is somewhat similar to dw_pcie_allocate_domains() 
in drivers/pci/controller/dwc/pcie-designware-host.c. This MSI 
controller is about Method A, the PCIe controller implements an MSI 
interrupt controller inside, and connect to PLIC upward through only ONE 
interrupt line. Because MSI to PLIC is multiple to one, I use linear 
mode here and use chained ISR to handle the interrupts.

[......]

>> +/* Dummy ops which will be assigned to cdns_pcie.ops, which must be !NULL. */
> Because cadence code driver doesn't check for !ops? Please fix it instead. And
> the fix is trivial.

ok, I will fix this for cadence code together in next version.

[......]

>> +	bridge = devm_pci_alloc_host_bridge(dev, sizeof(*rc));
>> +	if (!bridge) {
>> +		dev_err(dev, "Failed to alloc host bridge!\n");
> Use dev_err_probe() throughout the probe function.
Got it.
>> +		return -ENOMEM;
>> +	}
>> +
>> +	bridge->ops = &sg2042_pcie_host_ops;
>> +
>> +	rc = pci_host_bridge_priv(bridge);
>> +	cdns_pcie = &rc->pcie;
>> +	cdns_pcie->dev = dev;
>> +	cdns_pcie->ops = &sg2042_cdns_pcie_ops;
>> +	pcie->cdns_pcie = cdns_pcie;
>> +
>> +	np_syscon = of_parse_phandle(np, "sophgo,syscon-pcie-ctrl", 0);
>> +	if (!np_syscon) {
>> +		dev_err(dev, "Failed to get syscon node\n");
>> +		return -ENOMEM;
> -ENODEV
Got.
>> +	}
>> +	syscon = syscon_node_to_regmap(np_syscon);
> You should drop the np reference count once you are done with it.
ok, I will double check this through the file.
>> +	if (IS_ERR(syscon)) {
>> +		dev_err(dev, "Failed to get regmap for syscon\n");
>> +		return -ENOMEM;
> PTR_ERR(syscon)

yes.


>> +	}
>> +	pcie->syscon = syscon;
>> +
>> +	if (of_property_read_u32(np, "sophgo,link-id", &pcie->link_id)) {
>> +		dev_err(dev, "Unable to parse sophgo,link-id\n");
> "Unable to parse \"sophgo,link-id\"\n"
ok.
>> +		return -EINVAL;
>> +	}
>> +
>> +	platform_set_drvdata(pdev, pcie);
>> +
>> +	pm_runtime_enable(dev);
>> +
>> +	ret = pm_runtime_get_sync(dev);
> Use pm_runtime_resume_and_get().
Got it.
>> +	if (ret < 0) {
>> +		dev_err(dev, "pm_runtime_get_sync failed\n");
>> +		goto err_get_sync;
>> +	}
>> +
>> +	msi_node = of_parse_phandle(dev->of_node, "msi-parent", 0);
>> +	if (!msi_node) {
>> +		dev_err(dev, "Failed to get msi-parent!\n");
>> +		return -1;
> -ENODATA
Thanks, this should be fixed.
>
>> +	}
>> +
>> +	if (of_device_is_compatible(msi_node, "sophgo,sg2042-pcie-msi")) {
>> +		ret = sg2042_pcie_setup_msi(pcie, msi_node);
> Same as above. np leak here.
ok, will check this through the file.
>
> - Mani

Thanks,

Chen


^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH v3 1/5] dt-bindings: pci: Add Sophgo SG2042 PCIe host
  2025-01-22 12:52     ` Chen Wang
@ 2025-01-22 17:21       ` Manivannan Sadhasivam
  2025-01-26  0:29         ` Chen Wang
  0 siblings, 1 reply; 34+ messages in thread
From: Manivannan Sadhasivam @ 2025-01-22 17:21 UTC (permalink / raw)
  To: Chen Wang
  Cc: Chen Wang, kw, u.kleine-koenig, aou, arnd, bhelgaas, conor+dt,
	guoren, inochiama, krzk+dt, lee, lpieralisi, palmer,
	paul.walmsley, pbrobinson, robh, devicetree, linux-kernel,
	linux-pci, linux-riscv, chao.wei, xiaoguang.xing, fengchun.li,
	helgaas

On Wed, Jan 22, 2025 at 08:52:39PM +0800, Chen Wang wrote:
> 
> On 2025/1/19 19:44, Manivannan Sadhasivam wrote:
> > On Wed, Jan 15, 2025 at 03:06:37PM +0800, Chen Wang wrote:
> > > From: Chen Wang <unicorn_wang@outlook.com>
> > > 
> > > Add binding for Sophgo SG2042 PCIe host controller.
> > > 
> > > Signed-off-by: Chen Wang <unicorn_wang@outlook.com>
> > > ---
> > >   .../bindings/pci/sophgo,sg2042-pcie-host.yaml | 147 ++++++++++++++++++
> > >   1 file changed, 147 insertions(+)
> > >   create mode 100644 Documentation/devicetree/bindings/pci/sophgo,sg2042-pcie-host.yaml
> 
> [......]
> 
> > > +examples:
> > > +  - |
> > > +    #include <dt-bindings/interrupt-controller/irq.h>
> > > +
> > > +    pcie@62000000 {
> > > +      compatible = "sophgo,sg2042-pcie-host";
> > > +      device_type = "pci";
> > > +      reg = <0x62000000  0x00800000>,
> > > +            <0x48000000  0x00001000>;
> > Use single space between address and size.
> ok, thanks.
> > > +      reg-names = "reg", "cfg";
> > > +      #address-cells = <3>;
> > > +      #size-cells = <2>;
> > > +      ranges = <0x81000000 0 0x00000000 0xde000000 0 0x00010000>,
> > > +               <0x82000000 0 0xd0400000 0xd0400000 0 0x0d000000>;
> > For sure you don't need to set 'relocatable' flag for both regions.
> ok, I will correct this in next version.
> > > +      bus-range = <0x00 0xff>;
> > > +      vendor-id = <0x1f1c>;
> > > +      device-id = <0x2042>;
> > As Bjorn explained in v2, these properties need to be moved to PCI root port
> > node. Your argument of a single root port node for a host bridge doesn't add as
> > we have found that describing the root port properties in host bridge only
> > creates issues.
> 
> Got it. I will try to change this in next version.
> 
> > Btw, we are migrating the existing single RP platforms too to root port node.
> > 
> > > +      cdns,no-bar-match-nbits = <48>;
> > > +      sophgo,link-id = <0>;
> > > +      sophgo,syscon-pcie-ctrl = <&cdns_pcie1_ctrl>;
> > Where is the num-lanes property?
> Is this num-lanes a must-have property? The lane number of each link on the
> SG2042 is hard-coded in the firmware, so it seems meaningless to configure
> it.

It is not a requirement, but most of the controllers define it so that the
drivers can use it to set Max Link Width, Link speed etc.... You can skip it if
you want.

> > > +      msi-parent = <&msi_pcie>;
> > > +      msi_pcie: msi {
> > 'msi' is not a standard node name. 'interrupt-controller' is what usually used
> > to describe the MSI node.
> OK. I will corret this.

There is also 'msi-controller' node name used commonly, but it is not
documented in the devicetree spec. Maybe you can use it instead and I'll add it
to the spec since MSI and interrupt controllers are two distinct controllers.

> > Btw, is the MSI controller a separate IP inside the host bridge? If not, there
> > would no need to add a separate node. Most of the host bridge IPs implementing
> > MSI controller, do not use a separate node.
> 
> Yes, it's a separated IP inside the host bridge.
> 

Ok.

-- 
மணிவண்ணன் சதாசிவம்

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH v3 2/5] PCI: sg2042: Add Sophgo SG2042 PCIe driver
  2025-01-22 13:28     ` Chen Wang
@ 2025-01-22 17:34       ` Manivannan Sadhasivam
  2025-01-23 12:12         ` Marc Zyngier
  2025-02-17  8:22         ` Chen Wang
  0 siblings, 2 replies; 34+ messages in thread
From: Manivannan Sadhasivam @ 2025-01-22 17:34 UTC (permalink / raw)
  To: Chen Wang, maz
  Cc: Chen Wang, kw, u.kleine-koenig, aou, arnd, bhelgaas, conor+dt,
	guoren, inochiama, krzk+dt, lee, lpieralisi, palmer,
	paul.walmsley, pbrobinson, robh, devicetree, linux-kernel,
	linux-pci, linux-riscv, chao.wei, xiaoguang.xing, fengchun.li,
	helgaas

+ Marc (for the IRQCHIP implementation review)

On Wed, Jan 22, 2025 at 09:28:12PM +0800, Chen Wang wrote:
> 
> On 2025/1/19 20:23, Manivannan Sadhasivam wrote:
> > On Wed, Jan 15, 2025 at 03:06:57PM +0800, Chen Wang wrote:
> > > From: Chen Wang <unicorn_wang@outlook.com>
> > > 
> > > Add support for PCIe controller in SG2042 SoC. The controller
> > > uses the Cadence PCIe core programmed by pcie-cadence*.c. The
> > > PCIe controller will work in host mode only.
> > > 
> > Please add more info about the IP. Like IP revision, PCIe Gen, lane count,
> > etc...
> ok.
> > > Signed-off-by: Chen Wang <unicorn_wang@outlook.com>
> > > ---
> > >   drivers/pci/controller/cadence/Kconfig       |  13 +
> > >   drivers/pci/controller/cadence/Makefile      |   1 +
> > >   drivers/pci/controller/cadence/pcie-sg2042.c | 528 +++++++++++++++++++
> > >   3 files changed, 542 insertions(+)
> > >   create mode 100644 drivers/pci/controller/cadence/pcie-sg2042.c
> > > 
> > > diff --git a/drivers/pci/controller/cadence/Kconfig b/drivers/pci/controller/cadence/Kconfig
> > > index 8a0044bb3989..292eb2b20e9c 100644
> > > --- a/drivers/pci/controller/cadence/Kconfig
> > > +++ b/drivers/pci/controller/cadence/Kconfig
> > > @@ -42,6 +42,18 @@ config PCIE_CADENCE_PLAT_EP
> > >   	  endpoint mode. This PCIe controller may be embedded into many
> > >   	  different vendors SoCs.
> > > +config PCIE_SG2042
> > > +	bool "Sophgo SG2042 PCIe controller (host mode)"
> > > +	depends on ARCH_SOPHGO || COMPILE_TEST
> > > +	depends on OF
> > > +	select IRQ_MSI_LIB
> > > +	select PCI_MSI
> > > +	select PCIE_CADENCE_HOST
> > > +	help
> > > +	  Say Y here if you want to support the Sophgo SG2042 PCIe platform
> > > +	  controller in host mode. Sophgo SG2042 PCIe controller uses Cadence
> > > +	  PCIe core.
> > > +
> > >   config PCI_J721E
> > >   	bool
> > > @@ -67,4 +79,5 @@ config PCI_J721E_EP
> > >   	  Say Y here if you want to support the TI J721E PCIe platform
> > >   	  controller in endpoint mode. TI J721E PCIe controller uses Cadence PCIe
> > >   	  core.
> > > +
> > Spurious newline.
> 
> oops, I will fix this.
> 
> [......]
> 
> > > +/*
> > > + * SG2042 PCIe controller supports two ways to report MSI:
> > > + *
> > > + * - Method A, the PCIe controller implements an MSI interrupt controller
> > > + *   inside, and connect to PLIC upward through one interrupt line.
> > > + *   Provides memory-mapped MSI address, and by programming the upper 32
> > > + *   bits of the address to zero, it can be compatible with old PCIe devices
> > > + *   that only support 32-bit MSI address.
> > > + *
> > > + * - Method B, the PCIe controller connects to PLIC upward through an
> > > + *   independent MSI controller "sophgo,sg2042-msi" on the SOC. The MSI
> > > + *   controller provides multiple(up to 32) interrupt sources to PLIC.
> > > + *   Compared with the first method, the advantage is that the interrupt
> > > + *   source is expanded, but because for SG2042, the MSI address provided by
> > > + *   the MSI controller is fixed and only supports 64-bit address(> 2^32),
> > > + *   it is not compatible with old PCIe devices that only support 32-bit MSI
> > > + *   address.
> > > + *
> > > + * Method A & B can be configured in DTS, default is Method B.
> > How to configure them? I can only see "sophgo,sg2042-msi" in the binding.
> 
> 
> The value of the msi-parent attribute is used in dts to distinguish them,
> for example:
> 
> ```dts
> 
> msi: msi-controller@7030010300 {
>     ......
> };
> 
> pcie_rc0: pcie@7060000000 {
>     msi-parent = <&msi>;
> };
> 
> pcie_rc1: pcie@7062000000 {
>     ......
>     msi-parent = <&msi_pcie>;
>     msi_pcie: interrupt-controller {
>         ......
>     };
> };
> 
> ```
> 
> Which means:
> 
> pcie_rc0 uses Method B
> 
> pcie_rc1 uses Method A.
> 

Ok. you mentioned 'default method' which is not accurate since the choice
obviously depends on DT. Maybe you should say, 'commonly used method'? But both
the binding and dts patches make use of in-built MSI controller only (method A).

In general, for MSI implementations inside the PCIe IP, we don't usually add a
dedicated devicetree node since the IP is the same. But in your reply to the my
question on the bindings patch, you said it is a separate IP. I'm confused now.

> [......]
> 
> > > +struct sg2042_pcie {
> > > +	struct cdns_pcie	*cdns_pcie;
> > > +
> > > +	struct regmap		*syscon;
> > > +
> > > +	u32			link_id;
> > > +
> > > +	struct irq_domain	*msi_domain;
> > > +
> > > +	int			msi_irq;
> > > +
> > > +	dma_addr_t		msi_phys;
> > > +	void			*msi_virt;
> > > +
> > > +	u32			num_applied_vecs; /* used to speed up ISR */
> > > +
> > Get rid of the newline between struct members, please.
> ok
> > > +	raw_spinlock_t		msi_lock;
> > > +	DECLARE_BITMAP(msi_irq_in_use, MAX_MSI_IRQS);
> > > +};
> > > +
> > [...]
> > 
> > > +static int sg2042_pcie_setup_msi(struct sg2042_pcie *pcie,
> > > +				 struct device_node *msi_node)
> > > +{
> > > +	struct device *dev = pcie->cdns_pcie->dev;
> > > +	struct fwnode_handle *fwnode = of_node_to_fwnode(dev->of_node);
> > > +	struct irq_domain *parent_domain;
> > > +	int ret = 0;
> > > +
> > > +	if (!of_property_read_bool(msi_node, "msi-controller"))
> > > +		return -ENODEV;
> > > +
> > > +	ret = of_irq_get_byname(msi_node, "msi");
> > > +	if (ret <= 0) {
> > > +		dev_err(dev, "%pOF: failed to get MSI irq\n", msi_node);
> > > +		return ret;
> > > +	}
> > > +	pcie->msi_irq = ret;
> > > +
> > > +	irq_set_chained_handler_and_data(pcie->msi_irq,
> > > +					 sg2042_pcie_msi_chained_isr, pcie);
> > > +
> > > +	parent_domain = irq_domain_create_linear(fwnode, MSI_DEF_NUM_VECTORS,
> > > +						 &sg2042_pcie_msi_domain_ops, pcie);
> > > +	if (!parent_domain) {
> > > +		dev_err(dev, "%pfw: Failed to create IRQ domain\n", fwnode);
> > > +		return -ENOMEM;
> > > +	}
> > > +	irq_domain_update_bus_token(parent_domain, DOMAIN_BUS_NEXUS);
> > > +
> > The MSI controller is wired to PLIC isn't it? If so, why can't you use
> > hierarchial MSI domain implementation as like other controller drivers?
> 
> The method used here is somewhat similar to dw_pcie_allocate_domains() in
> drivers/pci/controller/dwc/pcie-designware-host.c. This MSI controller is
> about Method A, the PCIe controller implements an MSI interrupt controller
> inside, and connect to PLIC upward through only ONE interrupt line. Because
> MSI to PLIC is multiple to one, I use linear mode here and use chained ISR
> to handle the interrupts.
> 

Hmm, ok. I'm not an IRQCHIP expert, but I'll defer to Marc to review the IRQCHIP
implementation part.

- Mani

-- 
மணிவண்ணன் சதாசிவம்

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH v3 2/5] PCI: sg2042: Add Sophgo SG2042 PCIe driver
  2025-01-15  7:06 ` [PATCH v3 2/5] PCI: sg2042: Add Sophgo SG2042 PCIe driver Chen Wang
  2025-01-19 12:23   ` Manivannan Sadhasivam
@ 2025-01-22 21:33   ` Bjorn Helgaas
  2025-02-17  8:36     ` Chen Wang
  1 sibling, 1 reply; 34+ messages in thread
From: Bjorn Helgaas @ 2025-01-22 21:33 UTC (permalink / raw)
  To: Chen Wang
  Cc: kw, u.kleine-koenig, aou, arnd, bhelgaas, unicorn_wang, conor+dt,
	guoren, inochiama, krzk+dt, lee, lpieralisi,
	manivannan.sadhasivam, palmer, paul.walmsley, pbrobinson, robh,
	devicetree, linux-kernel, linux-pci, linux-riscv, chao.wei,
	xiaoguang.xing, fengchun.li

On Wed, Jan 15, 2025 at 03:06:57PM +0800, Chen Wang wrote:
> From: Chen Wang <unicorn_wang@outlook.com>
> 
> Add support for PCIe controller in SG2042 SoC. The controller
> uses the Cadence PCIe core programmed by pcie-cadence*.c. The
> PCIe controller will work in host mode only.

> + * pcie-sg2042 - PCIe controller driver for Sophgo SG2042 SoC

I'm guessing this is the first of a *family* of Sophgo SoCs, so
"sg2042" looks like it might be a little too specific if there will be
things like "sg3042" etc added in the future.

> +#include "../../../irqchip/irq-msi-lib.h"

I know you're using this path because you're relying on Marc's
work in progress [1].

But I don't want to carry around an #include like this in drivers/pci
while we're waiting for that, so I think you should use the existing
published MSI model until after Marc's update is merged.  Otherwise we
might end up with this ugly path here and no real path to migrate to
the published, supported one.

[1] https://lore.kernel.org/linux-riscv/20241204124549.607054-2-maz@kernel.org/

> + * SG2042 PCIe controller supports two ways to report MSI:
> + *
> + * - Method A, the PCIe controller implements an MSI interrupt controller
> + *   inside, and connect to PLIC upward through one interrupt line.
> + *   Provides memory-mapped MSI address, and by programming the upper 32
> + *   bits of the address to zero, it can be compatible with old PCIe devices
> + *   that only support 32-bit MSI address.
> + *
> + * - Method B, the PCIe controller connects to PLIC upward through an
> + *   independent MSI controller "sophgo,sg2042-msi" on the SOC. The MSI
> + *   controller provides multiple(up to 32) interrupt sources to PLIC.

Maybe expand "PLIC" the first time?

s/SOC/SoC/ to match previous uses, e.g., in commit log
s/multiple(up to 32)/up to 32/

> + *   Compared with the first method, the advantage is that the interrupt
> + *   source is expanded, but because for SG2042, the MSI address provided by
> + *   the MSI controller is fixed and only supports 64-bit address(> 2^32),
> + *   it is not compatible with old PCIe devices that only support 32-bit MSI
> + *   address.

"Supporting 64-bit address" means supporting any address from 0 to
2^64 - 1, but I don't think that's what you mean here.

I think what you mean here is that with Method B, the MSI address is
fixed and it can only be above 4GB.

> +#define REG_CLEAR_LINK0_BIT	2
> +#define REG_CLEAR_LINK1_BIT	3
> +#define REG_STATUS_LINK0_BIT	2
> +#define REG_STATUS_LINK1_BIT	3

> +static void sg2042_pcie_msi_clear_status(struct sg2042_pcie *pcie)
> +{
> +	u32 status, clr_msi_in_bit;
> +
> +	if (pcie->link_id == 1)
> +		clr_msi_in_bit = BIT(REG_CLEAR_LINK1_BIT);
> +	else
> +		clr_msi_in_bit = BIT(REG_CLEAR_LINK0_BIT);

Why not put the BIT() in the #defines for REG_CLEAR_LINK0_BIT,
REG_STATUS_LINK0_BIT, ...?  Then this code is slightly simpler, and
you can use similar style in sg2042_pcie_msi_chained_isr() instead of
shifting there.

> +	regmap_read(pcie->syscon, REG_CLEAR, &status);
> +	status |= clr_msi_in_bit;
> +	regmap_write(pcie->syscon, REG_CLEAR, status);

> +static void sg2042_pcie_msi_irq_compose_msi_msg(struct irq_data *d,
> +						struct msi_msg *msg)
> +{
> +	struct sg2042_pcie *pcie = irq_data_get_irq_chip_data(d);
> +	struct device *dev = pcie->cdns_pcie->dev;
> +
> +	msg->address_lo = lower_32_bits(pcie->msi_phys) + BYTE_NUM_PER_MSI_VEC * d->hwirq;
> +	msg->address_hi = upper_32_bits(pcie->msi_phys);

This seems a little suspect because adding "BYTE_NUM_PER_MSI_VEC *
d->hwirq" could cause overflow into the upper 32 bits.  I think you
should add first, then take the lower/upper 32 bits of the 64-bit
result.

> +	if (d->hwirq > pcie->num_applied_vecs)
> +		pcie->num_applied_vecs = d->hwirq;

"num_applied_vecs" is a bit of a misnomer; it's actually the *max*
hwirq.

> +static const struct irq_domain_ops sg2042_pcie_msi_domain_ops = {
> +	.alloc	= sg2042_pcie_irq_domain_alloc,
> +	.free	= sg2042_pcie_irq_domain_free,

Mention "msi" in these function names, e.g.,
sg2042_pcie_msi_domain_alloc().

> +static int sg2042_pcie_init_msi_data(struct sg2042_pcie *pcie)
> +{
> ...
> +	/* Program the MSI address and size */
> +	if (pcie->link_id == 1) {
> +		regmap_write(pcie->syscon, REG_LINK1_MSI_ADDR_LOW,
> +			     lower_32_bits(pcie->msi_phys));
> +		regmap_write(pcie->syscon, REG_LINK1_MSI_ADDR_HIGH,
> +			     upper_32_bits(pcie->msi_phys));
> +
> +		regmap_read(pcie->syscon, REG_LINK1_MSI_ADDR_SIZE, &value);
> +		value = (value & REG_LINK1_MSI_ADDR_SIZE_MASK) | MAX_MSI_IRQS;
> +		regmap_write(pcie->syscon, REG_LINK1_MSI_ADDR_SIZE, value);
> +	} else {
> +		regmap_write(pcie->syscon, REG_LINK0_MSI_ADDR_LOW,
> +			     lower_32_bits(pcie->msi_phys));
> +		regmap_write(pcie->syscon, REG_LINK0_MSI_ADDR_HIGH,
> +			     upper_32_bits(pcie->msi_phys));
> +
> +		regmap_read(pcie->syscon, REG_LINK0_MSI_ADDR_SIZE, &value);
> +		value = (value & REG_LINK0_MSI_ADDR_SIZE_MASK) | (MAX_MSI_IRQS << 16);
> +		regmap_write(pcie->syscon, REG_LINK0_MSI_ADDR_SIZE, value);
> +	}

Would be nicer to set temporaries to the link_id-dependent values (as
you did elsewhere) so it's obvious that the code is identical, e.g.,

  if (pcie->link_id == 1) {
    msi_addr = REG_LINK1_MSI_ADDR_LOW;
    msi_addr_size = REG_LINK1_MSI_ADDR_SIZE;
    msi_addr_size_mask = REG_LINK1_MSI_ADDR_SIZE;
  } else {
    ...
  }

  regmap_write(pcie->syscon, msi_addr, lower_32_bits(pcie->msi_phys));
  regmap_write(pcie->syscon, msi_addr + 4, upper_32_bits(pcie->msi_phys));
  ...

> +
> +	return 0;
> +}
> +
> +static irqreturn_t sg2042_pcie_msi_handle_irq(struct sg2042_pcie *pcie)

Which driver are you using as a template for function names and code
structure?  There are probably a dozen different names for functions
that iterate like this around a call to generic_handle_domain_irq(),
but you've managed to come up with a new one.  If you can pick a
similar name to copy, it makes it easier to compare drivers and find
and fix defects across them.

> +{
> +	u32 i, pos;
> +	unsigned long val;
> +	u32 status, num_vectors;
> +	irqreturn_t ret = IRQ_NONE;
> +
> +	num_vectors = pcie->num_applied_vecs;
> +	for (i = 0; i <= num_vectors; i++) {
> +		status = readl((void *)(pcie->msi_virt + i * BYTE_NUM_PER_MSI_VEC));
> +		if (!status)
> +			continue;
> +
> +		ret = IRQ_HANDLED;
> +		val = status;

I don't think you need both val and status.

> +		pos = 0;
> +		while ((pos = find_next_bit(&val, MAX_MSI_IRQS_PER_CTRL,
> +					    pos)) != MAX_MSI_IRQS_PER_CTRL) {

Most drivers use for_each_set_bit() here.

> +			generic_handle_domain_irq(pcie->msi_domain,
> +						  (i * MAX_MSI_IRQS_PER_CTRL) +
> +						  pos);
> +			pos++;
> +		}
> +		writel(0, ((void *)(pcie->msi_virt) + i * BYTE_NUM_PER_MSI_VEC));
> +	}
> +	return ret;
> +}

> +static int sg2042_pcie_setup_msi(struct sg2042_pcie *pcie,
> +				 struct device_node *msi_node)
> +{
> +	struct device *dev = pcie->cdns_pcie->dev;
> +	struct fwnode_handle *fwnode = of_node_to_fwnode(dev->of_node);
> +	struct irq_domain *parent_domain;
> +	int ret = 0;

Pointless initialization of "ret".

> +	if (!of_property_read_bool(msi_node, "msi-controller"))
> +		return -ENODEV;
> +
> +	ret = of_irq_get_byname(msi_node, "msi");
> +	if (ret <= 0) {
> +		dev_err(dev, "%pOF: failed to get MSI irq\n", msi_node);
> +		return ret;
> +	}
> +	pcie->msi_irq = ret;
> +
> +	irq_set_chained_handler_and_data(pcie->msi_irq,
> +					 sg2042_pcie_msi_chained_isr, pcie);
> +
> +	parent_domain = irq_domain_create_linear(fwnode, MSI_DEF_NUM_VECTORS,
> +						 &sg2042_pcie_msi_domain_ops, pcie);

Wrap to fit in 80 columns like 99% of the rest of this file.

Bjorn

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH v3 1/5] dt-bindings: pci: Add Sophgo SG2042 PCIe host
  2025-01-15  7:06 ` [PATCH v3 1/5] dt-bindings: pci: Add Sophgo SG2042 PCIe host Chen Wang
  2025-01-19 11:44   ` Manivannan Sadhasivam
@ 2025-01-22 22:21   ` Bjorn Helgaas
  2025-01-26  2:27     ` Chen Wang
  1 sibling, 1 reply; 34+ messages in thread
From: Bjorn Helgaas @ 2025-01-22 22:21 UTC (permalink / raw)
  To: Chen Wang
  Cc: kw, u.kleine-koenig, aou, arnd, bhelgaas, unicorn_wang, conor+dt,
	guoren, inochiama, krzk+dt, lee, lpieralisi,
	manivannan.sadhasivam, palmer, paul.walmsley, pbrobinson, robh,
	devicetree, linux-kernel, linux-pci, linux-riscv, chao.wei,
	xiaoguang.xing, fengchun.li

On Wed, Jan 15, 2025 at 03:06:37PM +0800, Chen Wang wrote:
> From: Chen Wang <unicorn_wang@outlook.com>
> 
> Add binding for Sophgo SG2042 PCIe host controller.

> +  sophgo,link-id:
> +    $ref: /schemas/types.yaml#/definitions/uint32
> +    description: |
> +      SG2042 uses Cadence IP, every IP is composed of 2 cores (called link0
> +      & link1 as Cadence's term). Each core corresponds to a host bridge,
> +      and each host bridge has only one root port. Their configuration
> +      registers are completely independent. SG2042 integrates two Cadence IPs,
> +      so there can actually be up to four host bridges. "sophgo,link-id" is
> +      used to identify which core/link the PCIe host bridge node corresponds to.

IIUC, the registers of Cadence IP 1 and IP 2 are completely
independent, and if you describe both of them, you would have separate
"pcie@62000000" stanzas with separate 'reg' and 'ranges' properties.

From the driver, it does not look like the registers for Link0 and
Link1 are independent, since the driver claims the
"sophgo,sg2042-pcie-host", which includes two Cores, and it tests
pcie->link_id to select the correct register address and bit mask.

"sophgo,link-id" corresponds to Cadence documentation, but I think it
is somewhat misleading in the binding because a PCIe "Link" refers to
the downstream side of a Root Port.  If we use "link-id" to identify
either Core0 or Core1 of a Cadence IP, it sort of bakes in the
idea that there can never be more than one Root Port per Core.

Since each Core is the root of a separate PCI hierarchy, it seems like
maybe there should be a stanza for the Core so there's a place where
per-hierarchy things like "linux,pci-domain" properties could go,
e.g.,

  pcie@62000000 {		// IP 1, single-link mode
    compatible = "sophgo,sg2042-pcie-host";
    reg = <...>;
    ranges = <...>;

    core0 {
      sophgo,core-id = <0>;
      linux,pci-domain = <0>;

      port {
        num-lanes = <4>;	// all lanes
      };
    };
  };

  pcie@82000000 {		// IP 2, dual-link mode
    compatible = "sophgo,sg2042-pcie-host";
    reg = <...>;
    ranges = <...>;

    core0 {
      sophgo,core-id = <0>;
      linux,pci-domain = <1>;

      port {
        num-lanes = <2>;	// half of lanes
      };
    };

    core1 {
      sophgo,core-id = <1>;
      linux,pci-domain = <2>;

      port {
        num-lanes = <2>;	// half of lanes
      };
    };
  };

> +      The Cadence IP has two modes of operation, selected by a strap pin.
> +
> +      In the single-link mode, the Cadence PCIe core instance associated
> +      with Link0 is connected to all the lanes and the Cadence PCIe core
> +      instance associated with Link1 is inactive.
> +
> +      In the dual-link mode, the Cadence PCIe core instance associated
> +      with Link0 is connected to the lower half of the lanes and the
> +      Cadence PCIe core instance associated with Link1 is connected to
> +      the upper half of the lanes.
> +
> +      SG2042 contains 2 Cadence IPs and configures the Cores as below:
> +
> +                     +-- Core (Link0) <---> pcie_rc0  +-----------------+
> +                     |                                |                 |
> +      Cadence IP 1 --+                                | cdns_pcie0_ctrl |
> +                     |                                |                 |
> +                     +-- Core (Link1) <---> disabled  +-----------------+
> +
> +                     +-- Core (Link0) <---> pcie_rc1  +-----------------+
> +                     |                                |                 |
> +      Cadence IP 2 --+                                | cdns_pcie1_ctrl |
> +                     |                                |                 |
> +                     +-- Core (Link1) <---> pcie_rc2  +-----------------+
> +
> +      pcie_rcX is PCIe node ("sophgo,sg2042-pcie-host") defined in DTS.
> +
> +      Sophgo defines some new register files to add support for their MSI
> +      controller inside PCIe. These new register files are defined in DTS as
> +      syscon node ("sophgo,sg2042-pcie-ctrl"), i.e. "cdns_pcie0_ctrl" /
> +      "cdns_pcie1_ctrl". cdns_pcieX_ctrl contains some registers shared by
> +      pcie_rcX, even two RC (Link)s may share different bits of the same
> +      register. For example, cdns_pcie1_ctrl contains registers shared by
> +      link0 & link1 for Cadence IP 2.
> +
> +      "sophgo,link-id" is defined to distinguish the two RC's in one Cadence IP,
> +      so we can know what registers (bits) we should use.

> +examples:
> +  - |
> +    #include <dt-bindings/interrupt-controller/irq.h>
> +
> +    pcie@62000000 {
> +      compatible = "sophgo,sg2042-pcie-host";
> +      device_type = "pci";
> +      reg = <0x62000000  0x00800000>,
> +            <0x48000000  0x00001000>;
> +      reg-names = "reg", "cfg";
> +      #address-cells = <3>;
> +      #size-cells = <2>;
> +      ranges = <0x81000000 0 0x00000000 0xde000000 0 0x00010000>,
> +               <0x82000000 0 0xd0400000 0xd0400000 0 0x0d000000>;
> +      bus-range = <0x00 0xff>;
> +      vendor-id = <0x1f1c>;
> +      device-id = <0x2042>;
> +      cdns,no-bar-match-nbits = <48>;
> +      sophgo,link-id = <0>;
> +      sophgo,syscon-pcie-ctrl = <&cdns_pcie1_ctrl>;
> +      msi-parent = <&msi_pcie>;
> +      msi_pcie: msi {
> +        compatible = "sophgo,sg2042-pcie-msi";
> +        msi-controller;
> +        interrupt-parent = <&intc>;
> +        interrupts = <123 IRQ_TYPE_LEVEL_HIGH>;
> +        interrupt-names = "msi";
> +      };
> +    };

It would be helpful for me if the example showed how both link-id 0
and link-id 1 would be used (or whatever they end up being named).
I assume both have to be somewhere in the same pcie@62000000 device to
make this work.

Bjorn

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH v3 2/5] PCI: sg2042: Add Sophgo SG2042 PCIe driver
  2025-01-22 17:34       ` Manivannan Sadhasivam
@ 2025-01-23 12:12         ` Marc Zyngier
  2025-02-07 17:49           ` Manivannan Sadhasivam
  2025-02-17  8:22         ` Chen Wang
  1 sibling, 1 reply; 34+ messages in thread
From: Marc Zyngier @ 2025-01-23 12:12 UTC (permalink / raw)
  To: Manivannan Sadhasivam
  Cc: Chen Wang, Chen Wang, kw, u.kleine-koenig, aou, arnd, bhelgaas,
	conor+dt, guoren, inochiama, krzk+dt, lee, lpieralisi, palmer,
	paul.walmsley, pbrobinson, robh, devicetree, linux-kernel,
	linux-pci, linux-riscv, chao.wei, xiaoguang.xing, fengchun.li,
	helgaas

On Wed, 22 Jan 2025 17:34:51 +0000,
Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org> wrote:
> 
> + Marc (for the IRQCHIP implementation review)
> 
> On Wed, Jan 22, 2025 at 09:28:12PM +0800, Chen Wang wrote:
> > 
> > > > +static int sg2042_pcie_setup_msi(struct sg2042_pcie *pcie,
> > > > +				 struct device_node *msi_node)
> > > > +{
> > > > +	struct device *dev = pcie->cdns_pcie->dev;
> > > > +	struct fwnode_handle *fwnode = of_node_to_fwnode(dev->of_node);
> > > > +	struct irq_domain *parent_domain;
> > > > +	int ret = 0;
> > > > +
> > > > +	if (!of_property_read_bool(msi_node, "msi-controller"))
> > > > +		return -ENODEV;
> > > > +
> > > > +	ret = of_irq_get_byname(msi_node, "msi");
> > > > +	if (ret <= 0) {
> > > > +		dev_err(dev, "%pOF: failed to get MSI irq\n", msi_node);
> > > > +		return ret;
> > > > +	}
> > > > +	pcie->msi_irq = ret;
> > > > +
> > > > +	irq_set_chained_handler_and_data(pcie->msi_irq,
> > > > +					 sg2042_pcie_msi_chained_isr, pcie);
> > > > +
> > > > +	parent_domain = irq_domain_create_linear(fwnode, MSI_DEF_NUM_VECTORS,
> > > > +						 &sg2042_pcie_msi_domain_ops, pcie);
> > > > +	if (!parent_domain) {
> > > > +		dev_err(dev, "%pfw: Failed to create IRQ domain\n", fwnode);
> > > > +		return -ENOMEM;
> > > > +	}
> > > > +	irq_domain_update_bus_token(parent_domain, DOMAIN_BUS_NEXUS);
> > > > +
> > > The MSI controller is wired to PLIC isn't it? If so, why can't you use
> > > hierarchial MSI domain implementation as like other controller drivers?
> > 
> > The method used here is somewhat similar to dw_pcie_allocate_domains() in
> > drivers/pci/controller/dwc/pcie-designware-host.c. This MSI controller is
> > about Method A, the PCIe controller implements an MSI interrupt controller
> > inside, and connect to PLIC upward through only ONE interrupt line. Because
> > MSI to PLIC is multiple to one, I use linear mode here and use chained ISR
> > to handle the interrupts.
> > 
> 
> Hmm, ok. I'm not an IRQCHIP expert, but I'll defer to Marc to review the IRQCHIP
> implementation part.

I don't offer this service anymore, I'm afraid.

As for the "I create my own non-hierarchical IRQ domain", this is
something that happens for all completely mis-designed interrupt
controllers, MSI or not, that multiplex interrupts.

These implementations are stuck in the previous century, and seeing
this on modern designs, for a "server SoC", is really pathetic.

maybe you now understand why I don't offer this sort of reviewing
service anymore.

	M.

-- 
Without deviation from the norm, progress is not possible.

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH v3 1/5] dt-bindings: pci: Add Sophgo SG2042 PCIe host
  2025-01-22 17:21       ` Manivannan Sadhasivam
@ 2025-01-26  0:29         ` Chen Wang
  0 siblings, 0 replies; 34+ messages in thread
From: Chen Wang @ 2025-01-26  0:29 UTC (permalink / raw)
  To: Manivannan Sadhasivam
  Cc: Chen Wang, kw, u.kleine-koenig, aou, arnd, bhelgaas, conor+dt,
	guoren, inochiama, krzk+dt, lee, lpieralisi, palmer,
	paul.walmsley, pbrobinson, robh, devicetree, linux-kernel,
	linux-pci, linux-riscv, chao.wei, xiaoguang.xing, fengchun.li,
	helgaas


On 2025/1/23 1:21, Manivannan Sadhasivam wrote:
> On Wed, Jan 22, 2025 at 08:52:39PM +0800, Chen Wang wrote:
>> On 2025/1/19 19:44, Manivannan Sadhasivam wrote:
[......]
>>>> +      msi-parent = <&msi_pcie>;
>>>> +      msi_pcie: msi {
>>> 'msi' is not a standard node name. 'interrupt-controller' is what usually used
>>> to describe the MSI node.
>> OK. I will corret this.
> There is also 'msi-controller' node name used commonly, but it is not
> documented in the devicetree spec. Maybe you can use it instead and I'll add it
> to the spec since MSI and interrupt controllers are two distinct controllers.

Yes, "msi-controller" is better, I will use this, thanks.

Chen

[......]


^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH v3 1/5] dt-bindings: pci: Add Sophgo SG2042 PCIe host
  2025-01-22 22:21   ` Bjorn Helgaas
@ 2025-01-26  2:27     ` Chen Wang
  2025-02-03  2:35       ` Chen Wang
  2025-02-11 23:34       ` Bjorn Helgaas
  0 siblings, 2 replies; 34+ messages in thread
From: Chen Wang @ 2025-01-26  2:27 UTC (permalink / raw)
  To: Bjorn Helgaas, Chen Wang
  Cc: kw, u.kleine-koenig, aou, arnd, bhelgaas, conor+dt, guoren,
	inochiama, krzk+dt, lee, lpieralisi, manivannan.sadhasivam,
	palmer, paul.walmsley, pbrobinson, robh, devicetree, linux-kernel,
	linux-pci, linux-riscv, chao.wei, xiaoguang.xing, fengchun.li

hello~

On 2025/1/23 6:21, Bjorn Helgaas wrote:
> On Wed, Jan 15, 2025 at 03:06:37PM +0800, Chen Wang wrote:
>> From: Chen Wang <unicorn_wang@outlook.com>
>>
>> Add binding for Sophgo SG2042 PCIe host controller.
>> +  sophgo,link-id:
>> +    $ref: /schemas/types.yaml#/definitions/uint32
>> +    description: |
>> +      SG2042 uses Cadence IP, every IP is composed of 2 cores (called link0
>> +      & link1 as Cadence's term). Each core corresponds to a host bridge,
>> +      and each host bridge has only one root port. Their configuration
>> +      registers are completely independent. SG2042 integrates two Cadence IPs,
>> +      so there can actually be up to four host bridges. "sophgo,link-id" is
>> +      used to identify which core/link the PCIe host bridge node corresponds to.
> IIUC, the registers of Cadence IP 1 and IP 2 are completely
> independent, and if you describe both of them, you would have separate
> "pcie@62000000" stanzas with separate 'reg' and 'ranges' properties.

To be precise, for two cores of a cadence IP, each core has a separate 
set of configuration registers, that is, the configuration of each core 
is completely independent. This is also what I meant in the binding by 
"Each core corresponds to a host bridge, and each host bridge has only 
one root port. Their configuration registers are completely 
independent.". Maybe the "Their" here is a bit unclear. My original 
intention was to refer to the core. I can improve this description next 
time.

>  From the driver, it does not look like the registers for Link0 and
> Link1 are independent, since the driver claims the
> "sophgo,sg2042-pcie-host", which includes two Cores, and it tests
> pcie->link_id to select the correct register address and bit mask.
In the driver code, one "sophgo,sg2042-pcie-host" corresponds to one 
core, not two. So, you can see in patch 4 of this pathset [1], 3 pcie 
host-bridge nodes are defined, pcie_rc0 ~ pcie_rc2, each corresponding 
to one core.

[1]:https://lore.kernel.org/linux-riscv/4a1f23e5426bfb56cad9c07f90d4efaad5eab976.1736923025.git.unicorn_wang@outlook.com/


I also need to explain that link0 and link1 are actually completely 
independent in PCIE processing, but when sophgo implements the internal 
msi controller for PCIE, its design is not good enough, and the 
registers for processing msi are not made separately for link0 and 
link1, but mixed together, which is what I said 
cdns_pcie0_ctrl/cdns_pcie1_ctrl. In these two new register files added 
by sophgo (only involving MSI processing), take the second cadence IP as 
an example, some registers are used to control the msi controller of 
pcie_rc1 (corresponding to link0), and some registers are used to 
control the msi controller of pcie_rc2 (corresponding to link1). In a 
more complicated situation, some bits in a register are used to control 
pcie_rc1, and some bits are used to control pcie_rc2. This is why I have 
to add the link_id attribute to know whether the current PCIe host 
bridge corresponds to link0 or link1, so that when processing the msi 
controller related to this pcie host bridge, we can find the 
corresponding register or even the bit of a register in cdns_pcieX_ctrl.


> "sophgo,link-id" corresponds to Cadence documentation, but I think it
> is somewhat misleading in the binding because a PCIe "Link" refers to
> the downstream side of a Root Port.  If we use "link-id" to identify
> either Core0 or Core1 of a Cadence IP, it sort of bakes in the
> idea that there can never be more than one Root Port per Core.
The fact is that for the cadence IP used by sg2042, only one root port 
is supported per core.
>
> Since each Core is the root of a separate PCI hierarchy, it seems like
> maybe there should be a stanza for the Core so there's a place where
> per-hierarchy things like "linux,pci-domain" properties could go,
> e.g.,
>
>    pcie@62000000 {		// IP 1, single-link mode
>      compatible = "sophgo,sg2042-pcie-host";
>      reg = <...>;
>      ranges = <...>;
>
>      core0 {
>        sophgo,core-id = <0>;
>        linux,pci-domain = <0>;
>
>        port {
>          num-lanes = <4>;	// all lanes
>        };
>      };
>    };
>
>    pcie@82000000 {		// IP 2, dual-link mode
>      compatible = "sophgo,sg2042-pcie-host";
>      reg = <...>;
>      ranges = <...>;
>
>      core0 {
>        sophgo,core-id = <0>;
>        linux,pci-domain = <1>;
>
>        port {
>          num-lanes = <2>;	// half of lanes
>        };
>      };
>
>      core1 {
>        sophgo,core-id = <1>;
>        linux,pci-domain = <2>;
>
>        port {
>          num-lanes = <2>;	// half of lanes
>        };
>      };
>    };

Based on the above analysis, I think the introduction of a three-layer 
structure (pcie-core-port) looks a bit too complicated for candence IP. 
In fact, the source of the discussion at the beginning of this issue was 
whether some attributes should be placed under the host bridge or the 
root port. I suggest that adding the root port layer on the basis of the 
existing patch may be enough. What do you think?

e.g.,

pcie_rc0: pcie@7060000000 {
     compatible = "sophgo,sg2042-pcie-host";
     ...... // host bride level properties
     sophgo,link-id = <0>;
     port {
         // port level properties
         vendor-id = <0x1f1c>;
         device-id = <0x2042>;
         num-lanes = <4>;
     }
};

pcie_rc1: pcie@7062000000 {
     compatible = "sophgo,sg2042-pcie-host";
     ...... // host bride level properties
     sophgo,link-id = <0>;
     port {
         // port level properties
         vendor-id = <0x1f1c>;
         device-id = <0x2042>;
         num-lanes = <2>;
     };
};

pcie_rc2: pcie@7062800000 {
     compatible = "sophgo,sg2042-pcie-host";
     ...... // host bride level properties
     sophgo,link-id = <0>;
     port {
         // port level properties
         vendor-id = <0x1f1c>;
         device-id = <0x2042>;
         num-lanes = <2>;
     }
};

[......]

>> +examples:
>> +  - |
>> +    #include <dt-bindings/interrupt-controller/irq.h>
>> +
>> +    pcie@62000000 {
>> +      compatible = "sophgo,sg2042-pcie-host";
>> +      device_type = "pci";
>> +      reg = <0x62000000  0x00800000>,
>> +            <0x48000000  0x00001000>;
>> +      reg-names = "reg", "cfg";
>> +      #address-cells = <3>;
>> +      #size-cells = <2>;
>> +      ranges = <0x81000000 0 0x00000000 0xde000000 0 0x00010000>,
>> +               <0x82000000 0 0xd0400000 0xd0400000 0 0x0d000000>;
>> +      bus-range = <0x00 0xff>;
>> +      vendor-id = <0x1f1c>;
>> +      device-id = <0x2042>;
>> +      cdns,no-bar-match-nbits = <48>;
>> +      sophgo,link-id = <0>;
>> +      sophgo,syscon-pcie-ctrl = <&cdns_pcie1_ctrl>;
>> +      msi-parent = <&msi_pcie>;
>> +      msi_pcie: msi {
>> +        compatible = "sophgo,sg2042-pcie-msi";
>> +        msi-controller;
>> +        interrupt-parent = <&intc>;
>> +        interrupts = <123 IRQ_TYPE_LEVEL_HIGH>;
>> +        interrupt-names = "msi";
>> +      };
>> +    };
> It would be helpful for me if the example showed how both link-id 0
> and link-id 1 would be used (or whatever they end up being named).
> I assume both have to be somewhere in the same pcie@62000000 device to
> make this work.
>
> Bjorn

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH v3 1/5] dt-bindings: pci: Add Sophgo SG2042 PCIe host
  2025-01-26  2:27     ` Chen Wang
@ 2025-02-03  2:35       ` Chen Wang
  2025-02-11 23:34       ` Bjorn Helgaas
  1 sibling, 0 replies; 34+ messages in thread
From: Chen Wang @ 2025-02-03  2:35 UTC (permalink / raw)
  To: Bjorn Helgaas, Chen Wang
  Cc: kw, u.kleine-koenig, aou, arnd, bhelgaas, conor+dt, guoren,
	inochiama, krzk+dt, lee, lpieralisi, manivannan.sadhasivam,
	palmer, paul.walmsley, pbrobinson, robh, devicetree, linux-kernel,
	linux-pci, linux-riscv, chao.wei, xiaoguang.xing, fengchun.li

hi, Bjorn, what do you think of my input?

Regards,

Chen

On 2025/1/26 10:27, Chen Wang wrote:
> hello~
>
> On 2025/1/23 6:21, Bjorn Helgaas wrote:
>> On Wed, Jan 15, 2025 at 03:06:37PM +0800, Chen Wang wrote:
>>> From: Chen Wang <unicorn_wang@outlook.com>
>>>
>>> Add binding for Sophgo SG2042 PCIe host controller.
>>> +  sophgo,link-id:
>>> +    $ref: /schemas/types.yaml#/definitions/uint32
>>> +    description: |
>>> +      SG2042 uses Cadence IP, every IP is composed of 2 cores 
>>> (called link0
>>> +      & link1 as Cadence's term). Each core corresponds to a host 
>>> bridge,
>>> +      and each host bridge has only one root port. Their configuration
>>> +      registers are completely independent. SG2042 integrates two 
>>> Cadence IPs,
>>> +      so there can actually be up to four host bridges. 
>>> "sophgo,link-id" is
>>> +      used to identify which core/link the PCIe host bridge node 
>>> corresponds to.
>> IIUC, the registers of Cadence IP 1 and IP 2 are completely
>> independent, and if you describe both of them, you would have separate
>> "pcie@62000000" stanzas with separate 'reg' and 'ranges' properties.
>
> To be precise, for two cores of a cadence IP, each core has a separate 
> set of configuration registers, that is, the configuration of each 
> core is completely independent. This is also what I meant in the 
> binding by "Each core corresponds to a host bridge, and each host 
> bridge has only one root port. Their configuration registers are 
> completely independent.". Maybe the "Their" here is a bit unclear. My 
> original intention was to refer to the core. I can improve this 
> description next time.
>
>>  From the driver, it does not look like the registers for Link0 and
>> Link1 are independent, since the driver claims the
>> "sophgo,sg2042-pcie-host", which includes two Cores, and it tests
>> pcie->link_id to select the correct register address and bit mask.
> In the driver code, one "sophgo,sg2042-pcie-host" corresponds to one 
> core, not two. So, you can see in patch 4 of this pathset [1], 3 pcie 
> host-bridge nodes are defined, pcie_rc0 ~ pcie_rc2, each corresponding 
> to one core.
>
> [1]:https://lore.kernel.org/linux-riscv/4a1f23e5426bfb56cad9c07f90d4efaad5eab976.1736923025.git.unicorn_wang@outlook.com/ 
>
>
>
> I also need to explain that link0 and link1 are actually completely 
> independent in PCIE processing, but when sophgo implements the 
> internal msi controller for PCIE, its design is not good enough, and 
> the registers for processing msi are not made separately for link0 and 
> link1, but mixed together, which is what I said 
> cdns_pcie0_ctrl/cdns_pcie1_ctrl. In these two new register files added 
> by sophgo (only involving MSI processing), take the second cadence IP 
> as an example, some registers are used to control the msi controller 
> of pcie_rc1 (corresponding to link0), and some registers are used to 
> control the msi controller of pcie_rc2 (corresponding to link1). In a 
> more complicated situation, some bits in a register are used to 
> control pcie_rc1, and some bits are used to control pcie_rc2. This is 
> why I have to add the link_id attribute to know whether the current 
> PCIe host bridge corresponds to link0 or link1, so that when 
> processing the msi controller related to this pcie host bridge, we can 
> find the corresponding register or even the bit of a register in 
> cdns_pcieX_ctrl.
>
>
>> "sophgo,link-id" corresponds to Cadence documentation, but I think it
>> is somewhat misleading in the binding because a PCIe "Link" refers to
>> the downstream side of a Root Port.  If we use "link-id" to identify
>> either Core0 or Core1 of a Cadence IP, it sort of bakes in the
>> idea that there can never be more than one Root Port per Core.
> The fact is that for the cadence IP used by sg2042, only one root port 
> is supported per core.
>>
>> Since each Core is the root of a separate PCI hierarchy, it seems like
>> maybe there should be a stanza for the Core so there's a place where
>> per-hierarchy things like "linux,pci-domain" properties could go,
>> e.g.,
>>
>>    pcie@62000000 {        // IP 1, single-link mode
>>      compatible = "sophgo,sg2042-pcie-host";
>>      reg = <...>;
>>      ranges = <...>;
>>
>>      core0 {
>>        sophgo,core-id = <0>;
>>        linux,pci-domain = <0>;
>>
>>        port {
>>          num-lanes = <4>;    // all lanes
>>        };
>>      };
>>    };
>>
>>    pcie@82000000 {        // IP 2, dual-link mode
>>      compatible = "sophgo,sg2042-pcie-host";
>>      reg = <...>;
>>      ranges = <...>;
>>
>>      core0 {
>>        sophgo,core-id = <0>;
>>        linux,pci-domain = <1>;
>>
>>        port {
>>          num-lanes = <2>;    // half of lanes
>>        };
>>      };
>>
>>      core1 {
>>        sophgo,core-id = <1>;
>>        linux,pci-domain = <2>;
>>
>>        port {
>>          num-lanes = <2>;    // half of lanes
>>        };
>>      };
>>    };
>
> Based on the above analysis, I think the introduction of a three-layer 
> structure (pcie-core-port) looks a bit too complicated for candence 
> IP. In fact, the source of the discussion at the beginning of this 
> issue was whether some attributes should be placed under the host 
> bridge or the root port. I suggest that adding the root port layer on 
> the basis of the existing patch may be enough. What do you think?
>
> e.g.,
>
> pcie_rc0: pcie@7060000000 {
>     compatible = "sophgo,sg2042-pcie-host";
>     ...... // host bride level properties
>     sophgo,link-id = <0>;
>     port {
>         // port level properties
>         vendor-id = <0x1f1c>;
>         device-id = <0x2042>;
>         num-lanes = <4>;
>     }
> };
>
> pcie_rc1: pcie@7062000000 {
>     compatible = "sophgo,sg2042-pcie-host";
>     ...... // host bride level properties
>     sophgo,link-id = <0>;
>     port {
>         // port level properties
>         vendor-id = <0x1f1c>;
>         device-id = <0x2042>;
>         num-lanes = <2>;
>     };
> };
>
> pcie_rc2: pcie@7062800000 {
>     compatible = "sophgo,sg2042-pcie-host";
>     ...... // host bride level properties
>     sophgo,link-id = <0>;
>     port {
>         // port level properties
>         vendor-id = <0x1f1c>;
>         device-id = <0x2042>;
>         num-lanes = <2>;
>     }
> };
>
> [......]
>
>>> +examples:
>>> +  - |
>>> +    #include <dt-bindings/interrupt-controller/irq.h>
>>> +
>>> +    pcie@62000000 {
>>> +      compatible = "sophgo,sg2042-pcie-host";
>>> +      device_type = "pci";
>>> +      reg = <0x62000000  0x00800000>,
>>> +            <0x48000000  0x00001000>;
>>> +      reg-names = "reg", "cfg";
>>> +      #address-cells = <3>;
>>> +      #size-cells = <2>;
>>> +      ranges = <0x81000000 0 0x00000000 0xde000000 0 0x00010000>,
>>> +               <0x82000000 0 0xd0400000 0xd0400000 0 0x0d000000>;
>>> +      bus-range = <0x00 0xff>;
>>> +      vendor-id = <0x1f1c>;
>>> +      device-id = <0x2042>;
>>> +      cdns,no-bar-match-nbits = <48>;
>>> +      sophgo,link-id = <0>;
>>> +      sophgo,syscon-pcie-ctrl = <&cdns_pcie1_ctrl>;
>>> +      msi-parent = <&msi_pcie>;
>>> +      msi_pcie: msi {
>>> +        compatible = "sophgo,sg2042-pcie-msi";
>>> +        msi-controller;
>>> +        interrupt-parent = <&intc>;
>>> +        interrupts = <123 IRQ_TYPE_LEVEL_HIGH>;
>>> +        interrupt-names = "msi";
>>> +      };
>>> +    };
>> It would be helpful for me if the example showed how both link-id 0
>> and link-id 1 would be used (or whatever they end up being named).
>> I assume both have to be somewhere in the same pcie@62000000 device to
>> make this work.
>>
>> Bjorn
>
> _______________________________________________
> linux-riscv mailing list
> linux-riscv@lists.infradead.org
> http://lists.infradead.org/mailman/listinfo/linux-riscv

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH v3 2/5] PCI: sg2042: Add Sophgo SG2042 PCIe driver
  2025-01-23 12:12         ` Marc Zyngier
@ 2025-02-07 17:49           ` Manivannan Sadhasivam
  0 siblings, 0 replies; 34+ messages in thread
From: Manivannan Sadhasivam @ 2025-02-07 17:49 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: Chen Wang, Chen Wang, kw, u.kleine-koenig, aou, arnd, bhelgaas,
	conor+dt, guoren, inochiama, krzk+dt, lee, lpieralisi, palmer,
	paul.walmsley, pbrobinson, robh, devicetree, linux-kernel,
	linux-pci, linux-riscv, chao.wei, xiaoguang.xing, fengchun.li,
	helgaas

On Thu, Jan 23, 2025 at 12:12:03PM +0000, Marc Zyngier wrote:
> On Wed, 22 Jan 2025 17:34:51 +0000,
> Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org> wrote:
> > 
> > + Marc (for the IRQCHIP implementation review)
> > 
> > On Wed, Jan 22, 2025 at 09:28:12PM +0800, Chen Wang wrote:
> > > 
> > > > > +static int sg2042_pcie_setup_msi(struct sg2042_pcie *pcie,
> > > > > +				 struct device_node *msi_node)
> > > > > +{
> > > > > +	struct device *dev = pcie->cdns_pcie->dev;
> > > > > +	struct fwnode_handle *fwnode = of_node_to_fwnode(dev->of_node);
> > > > > +	struct irq_domain *parent_domain;
> > > > > +	int ret = 0;
> > > > > +
> > > > > +	if (!of_property_read_bool(msi_node, "msi-controller"))
> > > > > +		return -ENODEV;
> > > > > +
> > > > > +	ret = of_irq_get_byname(msi_node, "msi");
> > > > > +	if (ret <= 0) {
> > > > > +		dev_err(dev, "%pOF: failed to get MSI irq\n", msi_node);
> > > > > +		return ret;
> > > > > +	}
> > > > > +	pcie->msi_irq = ret;
> > > > > +
> > > > > +	irq_set_chained_handler_and_data(pcie->msi_irq,
> > > > > +					 sg2042_pcie_msi_chained_isr, pcie);
> > > > > +
> > > > > +	parent_domain = irq_domain_create_linear(fwnode, MSI_DEF_NUM_VECTORS,
> > > > > +						 &sg2042_pcie_msi_domain_ops, pcie);
> > > > > +	if (!parent_domain) {
> > > > > +		dev_err(dev, "%pfw: Failed to create IRQ domain\n", fwnode);
> > > > > +		return -ENOMEM;
> > > > > +	}
> > > > > +	irq_domain_update_bus_token(parent_domain, DOMAIN_BUS_NEXUS);
> > > > > +
> > > > The MSI controller is wired to PLIC isn't it? If so, why can't you use
> > > > hierarchial MSI domain implementation as like other controller drivers?
> > > 
> > > The method used here is somewhat similar to dw_pcie_allocate_domains() in
> > > drivers/pci/controller/dwc/pcie-designware-host.c. This MSI controller is
> > > about Method A, the PCIe controller implements an MSI interrupt controller
> > > inside, and connect to PLIC upward through only ONE interrupt line. Because
> > > MSI to PLIC is multiple to one, I use linear mode here and use chained ISR
> > > to handle the interrupts.
> > > 
> > 
> > Hmm, ok. I'm not an IRQCHIP expert, but I'll defer to Marc to review the IRQCHIP
> > implementation part.
> 
> I don't offer this service anymore, I'm afraid.
> 
> As for the "I create my own non-hierarchical IRQ domain", this is
> something that happens for all completely mis-designed interrupt
> controllers, MSI or not, that multiplex interrupts.
> 
> These implementations are stuck in the previous century, and seeing
> this on modern designs, for a "server SoC", is really pathetic.
> 
> maybe you now understand why I don't offer this sort of reviewing
> service anymore.
> 

Ok, I can understand your pain as a maintainer. I'll try my best to review these
implementations as I have no other choice :)

- Mani

-- 
மணிவண்ணன் சதாசிவம்

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: (subset) [PATCH v3 3/5] dt-bindings: mfd: syscon: Add sg2042 pcie ctrl compatible
  2025-01-15  7:07 ` [PATCH v3 3/5] dt-bindings: mfd: syscon: Add sg2042 pcie ctrl compatible Chen Wang
@ 2025-02-11 14:33   ` Lee Jones
  2025-02-12  0:48     ` Chen Wang
  0 siblings, 1 reply; 34+ messages in thread
From: Lee Jones @ 2025-02-11 14:33 UTC (permalink / raw)
  To: kw, u.kleine-koenig, aou, arnd, bhelgaas, unicorn_wang, conor+dt,
	guoren, inochiama, krzk+dt, lee, lpieralisi,
	manivannan.sadhasivam, palmer, paul.walmsley, pbrobinson, robh,
	devicetree, linux-kernel, linux-pci, linux-riscv, chao.wei,
	xiaoguang.xing, fengchun.li, helgaas, Chen Wang

On Wed, 15 Jan 2025 15:07:14 +0800, Chen Wang wrote:
> Document SOPHGO SG2042 compatible for PCIe control registers.
> These registers are shared by PCIe controller nodes.
> 
> 

Applied, thanks!

[3/5] dt-bindings: mfd: syscon: Add sg2042 pcie ctrl compatible
      commit: 28df3b1a6aeced4c77a70adc12b4d7b0b69e2ea6

--
Lee Jones [李琼斯]


^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH v3 1/5] dt-bindings: pci: Add Sophgo SG2042 PCIe host
  2025-01-26  2:27     ` Chen Wang
  2025-02-03  2:35       ` Chen Wang
@ 2025-02-11 23:34       ` Bjorn Helgaas
  2025-02-12  1:50         ` Chen Wang
  1 sibling, 1 reply; 34+ messages in thread
From: Bjorn Helgaas @ 2025-02-11 23:34 UTC (permalink / raw)
  To: Chen Wang
  Cc: Chen Wang, kw, u.kleine-koenig, aou, arnd, bhelgaas, conor+dt,
	guoren, inochiama, krzk+dt, lee, lpieralisi,
	manivannan.sadhasivam, palmer, paul.walmsley, pbrobinson, robh,
	devicetree, linux-kernel, linux-pci, linux-riscv, chao.wei,
	xiaoguang.xing, fengchun.li

On Sun, Jan 26, 2025 at 10:27:27AM +0800, Chen Wang wrote:
> On 2025/1/23 6:21, Bjorn Helgaas wrote:
> > On Wed, Jan 15, 2025 at 03:06:37PM +0800, Chen Wang wrote:
> > > From: Chen Wang <unicorn_wang@outlook.com>
> > > 
> > > Add binding for Sophgo SG2042 PCIe host controller.
> > > +  sophgo,link-id:
> > > +    $ref: /schemas/types.yaml#/definitions/uint32
> > > +    description: |
> > > +      SG2042 uses Cadence IP, every IP is composed of 2 cores (called link0
> > > +      & link1 as Cadence's term). Each core corresponds to a host bridge,
> > > +      and each host bridge has only one root port. Their configuration
> > > +      registers are completely independent. SG2042 integrates two Cadence IPs,
> > > +      so there can actually be up to four host bridges. "sophgo,link-id" is
> > > +      used to identify which core/link the PCIe host bridge node corresponds to.
> > IIUC, the registers of Cadence IP 1 and IP 2 are completely
> > independent, and if you describe both of them, you would have separate
> > "pcie@62000000" stanzas with separate 'reg' and 'ranges' properties.
> 
> To be precise, for two cores of a cadence IP, each core has a separate set
> of configuration registers, that is, the configuration of each core is
> completely independent. This is also what I meant in the binding by "Each
> core corresponds to a host bridge, and each host bridge has only one root
> port. Their configuration registers are completely independent.". Maybe the
> "Their" here is a bit unclear. My original intention was to refer to the
> core. I can improve this description next time.
> 
> >  From the driver, it does not look like the registers for Link0 and
> > Link1 are independent, since the driver claims the
> > "sophgo,sg2042-pcie-host", which includes two Cores, and it tests
> > pcie->link_id to select the correct register address and bit mask.
> In the driver code, one "sophgo,sg2042-pcie-host" corresponds to one core,
> not two. So, you can see in patch 4 of this pathset [1], 3 pcie host-bridge
> nodes are defined, pcie_rc0 ~ pcie_rc2, each corresponding to one core.
> 
> [1]:https://lore.kernel.org/linux-riscv/4a1f23e5426bfb56cad9c07f90d4efaad5eab976.1736923025.git.unicorn_wang@outlook.com/
> 
> I also need to explain that link0 and link1 are actually completely
> independent in PCIE processing, but when sophgo implements the internal msi
> controller for PCIE, its design is not good enough, and the registers for
> processing msi are not made separately for link0 and link1, but mixed
> together, which is what I said cdns_pcie0_ctrl/cdns_pcie1_ctrl. In these two
> new register files added by sophgo (only involving MSI processing), take the
> second cadence IP as an example, some registers are used to control the msi
> controller of pcie_rc1 (corresponding to link0), and some registers are used
> to control the msi controller of pcie_rc2 (corresponding to link1). In a
> more complicated situation, some bits in a register are used to control
> pcie_rc1, and some bits are used to control pcie_rc2. This is why I have to
> add the link_id attribute to know whether the current PCIe host bridge
> corresponds to link0 or link1, so that when processing the msi controller
> related to this pcie host bridge, we can find the corresponding register or
> even the bit of a register in cdns_pcieX_ctrl.
> 
> > "sophgo,link-id" corresponds to Cadence documentation, but I think it
> > is somewhat misleading in the binding because a PCIe "Link" refers to
> > the downstream side of a Root Port.  If we use "link-id" to identify
> > either Core0 or Core1 of a Cadence IP, it sort of bakes in the
> > idea that there can never be more than one Root Port per Core.
>
> The fact is that for the cadence IP used by sg2042, only one root port is
> supported per core.

1) That's true today but may not be true forever.

2) Even if there's only one root port forever, "link" already means
something specific in PCIe, and this usage means something different,
so it's a little confusing.  Maybe a comment to say that this refers
to a "Core", not a PCIe link, is the best we can do.

> ...
> Based on the above analysis, I think the introduction of a three-layer
> structure (pcie-core-port) looks a bit too complicated for candence IP. In
> fact, the source of the discussion at the beginning of this issue was
> whether some attributes should be placed under the host bridge or the root
> port. I suggest that adding the root port layer on the basis of the existing
> patch may be enough. What do you think?
> 
> e.g.,
> 
> pcie_rc0: pcie@7060000000 {
>     compatible = "sophgo,sg2042-pcie-host";
>     ...... // host bride level properties
>     sophgo,link-id = <0>;
>     port {
>         // port level properties
>         vendor-id = <0x1f1c>;
>         device-id = <0x2042>;
>         num-lanes = <4>;
>     }
> };
> 
> pcie_rc1: pcie@7062000000 {
>     compatible = "sophgo,sg2042-pcie-host";
>     ...... // host bride level properties
>     sophgo,link-id = <0>;
>     port {
>         // port level properties
>         vendor-id = <0x1f1c>;
>         device-id = <0x2042>;
>         num-lanes = <2>;
>     };
> };
> 
> pcie_rc2: pcie@7062800000 {
>     compatible = "sophgo,sg2042-pcie-host";
>     ...... // host bride level properties
>     sophgo,link-id = <0>;
>     port {
>         // port level properties
>         vendor-id = <0x1f1c>;
>         device-id = <0x2042>;
>         num-lanes = <2>;
>     }
> };

Where does linux,pci-domain go?

Can you show how link-id 0 and link-id 1 would both be used?  I assume
they need to be connected somehow, since IIUC there's some register
shared between them?

Bjorn

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: (subset) [PATCH v3 3/5] dt-bindings: mfd: syscon: Add sg2042 pcie ctrl compatible
  2025-02-11 14:33   ` (subset) " Lee Jones
@ 2025-02-12  0:48     ` Chen Wang
  2025-02-20 16:00       ` Lee Jones
  0 siblings, 1 reply; 34+ messages in thread
From: Chen Wang @ 2025-02-12  0:48 UTC (permalink / raw)
  To: Lee Jones
  Cc: kw, u.kleine-koenig, aou, arnd, bhelgaas, conor+dt, guoren,
	inochiama, krzk+dt, lpieralisi, manivannan.sadhasivam, palmer,
	paul.walmsley, pbrobinson, robh, devicetree, linux-kernel,
	linux-pci, linux-riscv, chao.wei, xiaoguang.xing, fengchun.li,
	helgaas, Chen Wang

Hello, Lee

I would request that this patch not be merged yet, because it is related 
to PCIe changes, and the PCIe changes (bindings and dts) have not been 
confirmed yet.

Although this patch is small and will not affect other builds, it is 
best to submit it together with the PCIe patch after it is confirmed.

Sorry for the trouble.

Best regards

Chen

On 2025/2/11 22:33, Lee Jones wrote:
> On Wed, 15 Jan 2025 15:07:14 +0800, Chen Wang wrote:
>> Document SOPHGO SG2042 compatible for PCIe control registers.
>> These registers are shared by PCIe controller nodes.
>>
>>
> Applied, thanks!
>
> [3/5] dt-bindings: mfd: syscon: Add sg2042 pcie ctrl compatible
>        commit: 28df3b1a6aeced4c77a70adc12b4d7b0b69e2ea6
>
> --
> Lee Jones [李琼斯]
>

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH v3 1/5] dt-bindings: pci: Add Sophgo SG2042 PCIe host
  2025-02-11 23:34       ` Bjorn Helgaas
@ 2025-02-12  1:50         ` Chen Wang
  2025-02-12  4:25           ` Bjorn Helgaas
  0 siblings, 1 reply; 34+ messages in thread
From: Chen Wang @ 2025-02-12  1:50 UTC (permalink / raw)
  To: Bjorn Helgaas
  Cc: Chen Wang, kw, u.kleine-koenig, aou, arnd, bhelgaas, conor+dt,
	guoren, inochiama, krzk+dt, lee, lpieralisi,
	manivannan.sadhasivam, palmer, paul.walmsley, pbrobinson, robh,
	devicetree, linux-kernel, linux-pci, linux-riscv, chao.wei,
	xiaoguang.xing, fengchun.li

Hello~

On 2025/2/12 7:34, Bjorn Helgaas wrote:
> On Sun, Jan 26, 2025 at 10:27:27AM +0800, Chen Wang wrote:
>> On 2025/1/23 6:21, Bjorn Helgaas wrote:
>>> On Wed, Jan 15, 2025 at 03:06:37PM +0800, Chen Wang wrote:
>>>> From: Chen Wang <unicorn_wang@outlook.com>
>>>>
>>>> Add binding for Sophgo SG2042 PCIe host controller.
>>>> +  sophgo,link-id:
>>>> +    $ref: /schemas/types.yaml#/definitions/uint32
>>>> +    description: |
>>>> +      SG2042 uses Cadence IP, every IP is composed of 2 cores (called link0
>>>> +      & link1 as Cadence's term). Each core corresponds to a host bridge,
>>>> +      and each host bridge has only one root port. Their configuration
>>>> +      registers are completely independent. SG2042 integrates two Cadence IPs,
>>>> +      so there can actually be up to four host bridges. "sophgo,link-id" is
>>>> +      used to identify which core/link the PCIe host bridge node corresponds to.
>>> IIUC, the registers of Cadence IP 1 and IP 2 are completely
>>> independent, and if you describe both of them, you would have separate
>>> "pcie@62000000" stanzas with separate 'reg' and 'ranges' properties.
>> To be precise, for two cores of a cadence IP, each core has a separate set
>> of configuration registers, that is, the configuration of each core is
>> completely independent. This is also what I meant in the binding by "Each
>> core corresponds to a host bridge, and each host bridge has only one root
>> port. Their configuration registers are completely independent.". Maybe the
>> "Their" here is a bit unclear. My original intention was to refer to the
>> core. I can improve this description next time.
>>
>>>   From the driver, it does not look like the registers for Link0 and
>>> Link1 are independent, since the driver claims the
>>> "sophgo,sg2042-pcie-host", which includes two Cores, and it tests
>>> pcie->link_id to select the correct register address and bit mask.
>> In the driver code, one "sophgo,sg2042-pcie-host" corresponds to one core,
>> not two. So, you can see in patch 4 of this pathset [1], 3 pcie host-bridge
>> nodes are defined, pcie_rc0 ~ pcie_rc2, each corresponding to one core.
>>
>> [1]:https://lore.kernel.org/linux-riscv/4a1f23e5426bfb56cad9c07f90d4efaad5eab976.1736923025.git.unicorn_wang@outlook.com/
>>
>> I also need to explain that link0 and link1 are actually completely
>> independent in PCIE processing, but when sophgo implements the internal msi
>> controller for PCIE, its design is not good enough, and the registers for
>> processing msi are not made separately for link0 and link1, but mixed
>> together, which is what I said cdns_pcie0_ctrl/cdns_pcie1_ctrl. In these two
>> new register files added by sophgo (only involving MSI processing), take the
>> second cadence IP as an example, some registers are used to control the msi
>> controller of pcie_rc1 (corresponding to link0), and some registers are used
>> to control the msi controller of pcie_rc2 (corresponding to link1). In a
>> more complicated situation, some bits in a register are used to control
>> pcie_rc1, and some bits are used to control pcie_rc2. This is why I have to
>> add the link_id attribute to know whether the current PCIe host bridge
>> corresponds to link0 or link1, so that when processing the msi controller
>> related to this pcie host bridge, we can find the corresponding register or
>> even the bit of a register in cdns_pcieX_ctrl.
>>
>>> "sophgo,link-id" corresponds to Cadence documentation, but I think it
>>> is somewhat misleading in the binding because a PCIe "Link" refers to
>>> the downstream side of a Root Port.  If we use "link-id" to identify
>>> either Core0 or Core1 of a Cadence IP, it sort of bakes in the
>>> idea that there can never be more than one Root Port per Core.
>> The fact is that for the cadence IP used by sg2042, only one root port is
>> supported per core.
> 1) That's true today but may not be true forever.
>
> 2) Even if there's only one root port forever, "link" already means
> something specific in PCIe, and this usage means something different,
> so it's a little confusing.  Maybe a comment to say that this refers
> to a "Core", not a PCIe link, is the best we can do.
How about using "sophgo,core-id", as I said in the binding description, 
"every IP is composed of 2 cores (called link0 & link1 as Cadence's 
term).". This avoids the conflict with the concept "link " in the PCIe 
specification, what do you think?
>> ...
>> Based on the above analysis, I think the introduction of a three-layer
>> structure (pcie-core-port) looks a bit too complicated for candence IP. In
>> fact, the source of the discussion at the beginning of this issue was
>> whether some attributes should be placed under the host bridge or the root
>> port. I suggest that adding the root port layer on the basis of the existing
>> patch may be enough. What do you think?
>>
>> e.g.,
>>
>> pcie_rc0: pcie@7060000000 {
>>      compatible = "sophgo,sg2042-pcie-host";
>>      ...... // host bride level properties
>>      sophgo,link-id = <0>;
>>      port {
>>          // port level properties
>>          vendor-id = <0x1f1c>;
>>          device-id = <0x2042>;
>>          num-lanes = <4>;
>>      }
>> };
>>
>> pcie_rc1: pcie@7062000000 {
>>      compatible = "sophgo,sg2042-pcie-host";
>>      ...... // host bride level properties
>>      sophgo,link-id = <0>;
>>      port {
>>          // port level properties
>>          vendor-id = <0x1f1c>;
>>          device-id = <0x2042>;
>>          num-lanes = <2>;
>>      };
>> };
>>
>> pcie_rc2: pcie@7062800000 {
>>      compatible = "sophgo,sg2042-pcie-host";
>>      ...... // host bride level properties
>>      sophgo,link-id = <0>;
>>      port {
>>          // port level properties
>>          vendor-id = <0x1f1c>;
>>          device-id = <0x2042>;
>>          num-lanes = <2>;
>>      }
>> };
> Where does linux,pci-domain go?
>
> Can you show how link-id 0 and link-id 1 would both be used?  I assume
> they need to be connected somehow, since IIUC there's some register
> shared between them?
>
> Bjorn

Oh, sorry, I made a typo when I was giving the example. I wrote all the 
link-id values ​​as 0. I rewrote it as follows. I changed 
"sophgo,link-id" to "sophgo,core-id", and added "linux,pci-domain".

e.g.,

pcie_rc0: pcie@7060000000 {

     compatible = "sophgo,sg2042-pcie-host";
     ...... // host bride level properties
     linux,pci-domain = <0>;
     sophgo,core-id = <0>;
     port {
         // port level properties
         vendor-id = <0x1f1c>;
         device-id = <0x2042>;
         num-lanes = <4>;
     }
};

pcie_rc1: pcie@7062000000 {
     compatible = "sophgo,sg2042-pcie-host";
     ...... // host bride level properties
     linux,pci-domain = <1>;
     sophgo,core-id = <0>;
     port {
         // port level properties
         vendor-id = <0x1f1c>;
         device-id = <0x2042>;
         num-lanes = <2>;
     };
};

pcie_rc2: pcie@7062800000 {
     compatible = "sophgo,sg2042-pcie-host";
     ...... // host bride level properties
     linux,pci-domain = <2>;
     sophgo,core-id = <1>;
     port {
         // port level properties
         vendor-id = <0x1f1c>;
         device-id = <0x2042>;
         num-lanes = <2>;
     }

};

pcie_rc1 and pcie_rc2 share registers in cdns_pcie1_ctrl. By using 
different "sophgo,core-id" values, they can distinguish and access the 
registers they need in cdns_pcie1_ctrl.

Regards,

Chen


^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH v3 1/5] dt-bindings: pci: Add Sophgo SG2042 PCIe host
  2025-02-12  1:50         ` Chen Wang
@ 2025-02-12  4:25           ` Bjorn Helgaas
  2025-02-12  5:54             ` Chen Wang
  0 siblings, 1 reply; 34+ messages in thread
From: Bjorn Helgaas @ 2025-02-12  4:25 UTC (permalink / raw)
  To: Chen Wang
  Cc: Chen Wang, kw, u.kleine-koenig, aou, arnd, bhelgaas, conor+dt,
	guoren, inochiama, krzk+dt, lee, lpieralisi,
	manivannan.sadhasivam, palmer, paul.walmsley, pbrobinson, robh,
	devicetree, linux-kernel, linux-pci, linux-riscv, chao.wei,
	xiaoguang.xing, fengchun.li

On Wed, Feb 12, 2025 at 09:50:11AM +0800, Chen Wang wrote:
> On 2025/2/12 7:34, Bjorn Helgaas wrote:
> > On Sun, Jan 26, 2025 at 10:27:27AM +0800, Chen Wang wrote:
> > > On 2025/1/23 6:21, Bjorn Helgaas wrote:
> > > > On Wed, Jan 15, 2025 at 03:06:37PM +0800, Chen Wang wrote:
> > > > > From: Chen Wang <unicorn_wang@outlook.com>
> > > > > 
> > > > > Add binding for Sophgo SG2042 PCIe host controller.
> ...

> > > > "sophgo,link-id" corresponds to Cadence documentation, but I
> > > > think it is somewhat misleading in the binding because a PCIe
> > > > "Link" refers to the downstream side of a Root Port.  If we
> > > > use "link-id" to identify either Core0 or Core1 of a Cadence
> > > > IP, it sort of bakes in the idea that there can never be more
> > > > than one Root Port per Core.
> > >
> > > The fact is that for the cadence IP used by sg2042, only one
> > > root port is supported per core.
> >
> > 1) That's true today but may not be true forever.
> > 
> > 2) Even if there's only one root port forever, "link" already
> > means something specific in PCIe, and this usage means something
> > different, so it's a little confusing.  Maybe a comment to say
> > that this refers to a "Core", not a PCIe link, is the best we can
> > do.
>
> How about using "sophgo,core-id", as I said in the binding
> description, "every IP is composed of 2 cores (called link0 & link1
> as Cadence's term).".  This avoids the conflict with the concept
> "link " in the PCIe specification, what do you think?

I think that would be great.

> > > Based on the above analysis, I think the introduction of a
> > > three-layer structure (pcie-core-port) looks a bit too
> > > complicated for candence IP. In fact, the source of the
> > > discussion at the beginning of this issue was whether some
> > > attributes should be placed under the host bridge or the root
> > > port. I suggest that adding the root port layer on the basis of
> > > the existing patch may be enough. What do you think?
> > > 
> > > e.g.,
> > > 
> > > pcie_rc0: pcie@7060000000 {
> > >      compatible = "sophgo,sg2042-pcie-host";
> > >      ...... // host bride level properties
> > >      sophgo,link-id = <0>;
> > >      port {
> > >          // port level properties
> > >          vendor-id = <0x1f1c>;
> > >          device-id = <0x2042>;
> > >          num-lanes = <4>;
> > >      }
> > > };
> > > 
> > > pcie_rc1: pcie@7062000000 {
> > >      compatible = "sophgo,sg2042-pcie-host";
> > >      ...... // host bride level properties
> > >      sophgo,link-id = <0>;
> > >      port {
> > >          // port level properties
> > >          vendor-id = <0x1f1c>;
> > >          device-id = <0x2042>;
> > >          num-lanes = <2>;
> > >      };
> > > };
> > > 
> > > pcie_rc2: pcie@7062800000 {
> > >      compatible = "sophgo,sg2042-pcie-host";
> > >      ...... // host bride level properties
> > >      sophgo,link-id = <0>;
> > >      port {
> > >          // port level properties
> > >          vendor-id = <0x1f1c>;
> > >          device-id = <0x2042>;
> > >          num-lanes = <2>;
> > >      }
> > > };
> >
> > Where does linux,pci-domain go?
> > 
> > Can you show how link-id 0 and link-id 1 would both be used?  I
> > assume they need to be connected somehow, since IIUC there's some
> > register shared between them?
> 
> Oh, sorry, I made a typo when I was giving the example. I wrote all
> the link-id values ​​as 0. I rewrote it as follows. I
> changed "sophgo,link-id" to "sophgo,core-id", and added
> "linux,pci-domain".
> 
> e.g.,
> 
> pcie_rc0: pcie@7060000000 {
> 
>     compatible = "sophgo,sg2042-pcie-host";
>     ...... // host bride level properties
>     linux,pci-domain = <0>;
>     sophgo,core-id = <0>;
>     port {
>         // port level properties
>         vendor-id = <0x1f1c>;
>         device-id = <0x2042>;
>         num-lanes = <4>;
>     }
> };
> 
> pcie_rc1: pcie@7062000000 {
>     compatible = "sophgo,sg2042-pcie-host";
>     ...... // host bride level properties
>     linux,pci-domain = <1>;
>     sophgo,core-id = <0>;
>     port {
>         // port level properties
>         vendor-id = <0x1f1c>;
>         device-id = <0x2042>;
>         num-lanes = <2>;
>     };
> };
> 
> pcie_rc2: pcie@7062800000 {
>     compatible = "sophgo,sg2042-pcie-host";
>     ...... // host bride level properties
>     linux,pci-domain = <2>;
>     sophgo,core-id = <1>;
>     port {
>         // port level properties
>         vendor-id = <0x1f1c>;
>         device-id = <0x2042>;
>         num-lanes = <2>;
>     }
> 
> };
> 
> pcie_rc1 and pcie_rc2 share registers in cdns_pcie1_ctrl. By using
> different "sophgo,core-id" values, they can distinguish and access
> the registers they need in cdns_pcie1_ctrl.

Where does cdns_pcie1_ctrl fit in this example?  Does that enclose
both pcie_rc1 and pcie_rc2?

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH v3 1/5] dt-bindings: pci: Add Sophgo SG2042 PCIe host
  2025-02-12  4:25           ` Bjorn Helgaas
@ 2025-02-12  5:54             ` Chen Wang
  2025-02-17  8:40               ` Chen Wang
  2025-02-19 18:22               ` Bjorn Helgaas
  0 siblings, 2 replies; 34+ messages in thread
From: Chen Wang @ 2025-02-12  5:54 UTC (permalink / raw)
  To: Bjorn Helgaas
  Cc: Chen Wang, kw, u.kleine-koenig, aou, arnd, bhelgaas, conor+dt,
	guoren, inochiama, krzk+dt, lee, lpieralisi,
	manivannan.sadhasivam, palmer, paul.walmsley, pbrobinson, robh,
	devicetree, linux-kernel, linux-pci, linux-riscv, chao.wei,
	xiaoguang.xing, fengchun.li


On 2025/2/12 12:25, Bjorn Helgaas wrote:
[......]
>> pcie_rc1 and pcie_rc2 share registers in cdns_pcie1_ctrl. By using
>> different "sophgo,core-id" values, they can distinguish and access
>> the registers they need in cdns_pcie1_ctrl.
> Where does cdns_pcie1_ctrl fit in this example?  Does that enclose
> both pcie_rc1 and pcie_rc2?

cdns_pcie1_ctrl is defined as a syscon node,  which contains registers 
shared by pcie_rc1 and pcie_rc2. In the binding yaml file, I drew a 
diagram to describe the relationship between them, copy here for your 
quick reference:

+                     +-- Core (Link0) <---> pcie_rc1  +-----------------+
+                     |                                |                 |
+      Cadence IP 2 --+                                | cdns_pcie1_ctrl |
+                     |                                |                 |
+                     +-- Core (Link1) <---> pcie_rc2  +-----------------+

The following is an example with cdns_pcie1_ctrl added. For simplicity, 
I deleted pcie_rc0.

pcie_rc1: pcie@7062000000 {
     compatible = "sophgo,sg2042-pcie-host";
     ...... // host bride level properties
     linux,pci-domain = <1>;
     sophgo,core-id = <0>;
     sophgo,syscon-pcie-ctrl = <&cdns_pcie1_ctrl>;
     port {
         // port level properties
         vendor-id = <0x1f1c>;
         device-id = <0x2042>;
         num-lanes = <2>;
     };
};

pcie_rc2: pcie@7062800000 {
     compatible = "sophgo,sg2042-pcie-host";
     ...... // host bride level properties
     linux,pci-domain = <2>;
     sophgo,core-id = <1>;
     sophgo,syscon-pcie-ctrl = <&cdns_pcie1_ctrl>;
     port {
         // port level properties
         vendor-id = <0x1f1c>;
         device-id = <0x2042>;
         num-lanes = <2>;
     }

};

cdns_pcie1_ctrl: syscon@7063800000 {
     compatible = "sophgo,sg2042-pcie-ctrl", "syscon";
     reg = <0x70 0x63800000 0x0 0x800000>;
};


^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH v3 2/5] PCI: sg2042: Add Sophgo SG2042 PCIe driver
  2025-01-22 17:34       ` Manivannan Sadhasivam
  2025-01-23 12:12         ` Marc Zyngier
@ 2025-02-17  8:22         ` Chen Wang
  2025-02-19 17:57           ` Manivannan Sadhasivam
  1 sibling, 1 reply; 34+ messages in thread
From: Chen Wang @ 2025-02-17  8:22 UTC (permalink / raw)
  To: Manivannan Sadhasivam, maz
  Cc: Chen Wang, kw, u.kleine-koenig, aou, arnd, bhelgaas, conor+dt,
	guoren, inochiama, krzk+dt, lee, lpieralisi, palmer,
	paul.walmsley, pbrobinson, robh, devicetree, linux-kernel,
	linux-pci, linux-riscv, chao.wei, xiaoguang.xing, fengchun.li,
	helgaas


On 2025/1/23 1:34, Manivannan Sadhasivam wrote:

[......]
>>>> +/*
>>>> + * SG2042 PCIe controller supports two ways to report MSI:
>>>> + *
>>>> + * - Method A, the PCIe controller implements an MSI interrupt controller
>>>> + *   inside, and connect to PLIC upward through one interrupt line.
>>>> + *   Provides memory-mapped MSI address, and by programming the upper 32
>>>> + *   bits of the address to zero, it can be compatible with old PCIe devices
>>>> + *   that only support 32-bit MSI address.
>>>> + *
>>>> + * - Method B, the PCIe controller connects to PLIC upward through an
>>>> + *   independent MSI controller "sophgo,sg2042-msi" on the SOC. The MSI
>>>> + *   controller provides multiple(up to 32) interrupt sources to PLIC.
>>>> + *   Compared with the first method, the advantage is that the interrupt
>>>> + *   source is expanded, but because for SG2042, the MSI address provided by
>>>> + *   the MSI controller is fixed and only supports 64-bit address(> 2^32),
>>>> + *   it is not compatible with old PCIe devices that only support 32-bit MSI
>>>> + *   address.
>>>> + *
>>>> + * Method A & B can be configured in DTS, default is Method B.
>>> How to configure them? I can only see "sophgo,sg2042-msi" in the binding.
>>
>> The value of the msi-parent attribute is used in dts to distinguish them,
>> for example:
>>
>> ```dts
>>
>> msi: msi-controller@7030010300 {
>>      ......
>> };
>>
>> pcie_rc0: pcie@7060000000 {
>>      msi-parent = <&msi>;
>> };
>>
>> pcie_rc1: pcie@7062000000 {
>>      ......
>>      msi-parent = <&msi_pcie>;
>>      msi_pcie: interrupt-controller {
>>          ......
>>      };
>> };
>>
>> ```
>>
>> Which means:
>>
>> pcie_rc0 uses Method B
>>
>> pcie_rc1 uses Method A.
>>
> Ok. you mentioned 'default method' which is not accurate since the choice
> obviously depends on DT. Maybe you should say, 'commonly used method'? But both
> the binding and dts patches make use of in-built MSI controller only (method A).

"commonly used method" looks ok to me.

Binding example only shows the case for Method A, due to I think the 
writing of case for Method A  covers the writing of case for Method B.

DTS patches use both Method A and B. You can see patch 4 of this 
patchset, pcie_rc1 uses Method A, pcie_rc0 & pcie_rc2 use Method B.

> In general, for MSI implementations inside the PCIe IP, we don't usually add a
> dedicated devicetree node since the IP is the same. But in your reply to the my
> question on the bindings patch, you said it is a separate IP. I'm confused now.

I learned the writing of DTS from "brcm,iproc-pcie", see 
arch/arm/boot/dts/broadcom/bcm-cygnus.dtsi for example. Wouldn't it be 
clearer to embed an msi controller in topo?

And regarding what you said, "we don't usually add a dedicated 
devicetree node", do you have any example I can refer to?

Thanks,

Chen

[......]



^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH v3 2/5] PCI: sg2042: Add Sophgo SG2042 PCIe driver
  2025-01-22 21:33   ` Bjorn Helgaas
@ 2025-02-17  8:36     ` Chen Wang
  0 siblings, 0 replies; 34+ messages in thread
From: Chen Wang @ 2025-02-17  8:36 UTC (permalink / raw)
  To: Bjorn Helgaas, Chen Wang
  Cc: kw, u.kleine-koenig, aou, arnd, bhelgaas, conor+dt, guoren,
	inochiama, krzk+dt, lee, lpieralisi, manivannan.sadhasivam,
	palmer, paul.walmsley, pbrobinson, robh, devicetree, linux-kernel,
	linux-pci, linux-riscv, chao.wei, xiaoguang.xing, fengchun.li


On 2025/1/23 5:33, Bjorn Helgaas wrote:
> On Wed, Jan 15, 2025 at 03:06:57PM +0800, Chen Wang wrote:
>> From: Chen Wang <unicorn_wang@outlook.com>
>>
>> Add support for PCIe controller in SG2042 SoC. The controller
>> uses the Cadence PCIe core programmed by pcie-cadence*.c. The
>> PCIe controller will work in host mode only.
>> + * pcie-sg2042 - PCIe controller driver for Sophgo SG2042 SoC
> I'm guessing this is the first of a *family* of Sophgo SoCs, so
> "sg2042" looks like it might be a little too specific if there will be
> things like "sg3042" etc added in the future.
As I know, SG2042 will be the only one SoC using Cadence IP from Sophgo. 
They have switched to other IP(DWC) later.
>> +#include "../../../irqchip/irq-msi-lib.h"
> I know you're using this path because you're relying on Marc's
> work in progress [1].
>
> But I don't want to carry around an #include like this in drivers/pci
> while we're waiting for that, so I think you should use the existing
> published MSI model until after Marc's update is merged.  Otherwise we
> might end up with this ugly path here and no real path to migrate to
> the published, supported one.
>
> [1] https://lore.kernel.org/linux-riscv/20241204124549.607054-2-maz@kernel.org/
OK, I will switch back to use the existing published MSI model.
>> + * SG2042 PCIe controller supports two ways to report MSI:
>> + *
>> + * - Method A, the PCIe controller implements an MSI interrupt controller
>> + *   inside, and connect to PLIC upward through one interrupt line.
>> + *   Provides memory-mapped MSI address, and by programming the upper 32
>> + *   bits of the address to zero, it can be compatible with old PCIe devices
>> + *   that only support 32-bit MSI address.
>> + *
>> + * - Method B, the PCIe controller connects to PLIC upward through an
>> + *   independent MSI controller "sophgo,sg2042-msi" on the SOC. The MSI
>> + *   controller provides multiple(up to 32) interrupt sources to PLIC.
> Maybe expand "PLIC" the first time?
Sure.
>
> s/SOC/SoC/ to match previous uses, e.g., in commit log
> s/multiple(up to 32)/up to 32/
ok
>> + *   Compared with the first method, the advantage is that the interrupt
>> + *   source is expanded, but because for SG2042, the MSI address provided by
>> + *   the MSI controller is fixed and only supports 64-bit address(> 2^32),
>> + *   it is not compatible with old PCIe devices that only support 32-bit MSI
>> + *   address.
> "Supporting 64-bit address" means supporting any address from 0 to
> 2^64 - 1, but I don't think that's what you mean here.
>
> I think what you mean here is that with Method B, the MSI address is
> fixed and it can only be above 4GB.
Yes, I will fix it.
>> +#define REG_CLEAR_LINK0_BIT	2
>> +#define REG_CLEAR_LINK1_BIT	3
>> +#define REG_STATUS_LINK0_BIT	2
>> +#define REG_STATUS_LINK1_BIT	3
>> +static void sg2042_pcie_msi_clear_status(struct sg2042_pcie *pcie)
>> +{
>> +	u32 status, clr_msi_in_bit;
>> +
>> +	if (pcie->link_id == 1)
>> +		clr_msi_in_bit = BIT(REG_CLEAR_LINK1_BIT);
>> +	else
>> +		clr_msi_in_bit = BIT(REG_CLEAR_LINK0_BIT);
> Why not put the BIT() in the #defines for REG_CLEAR_LINK0_BIT,
> REG_STATUS_LINK0_BIT, ...?  Then this code is slightly simpler, and
> you can use similar style in sg2042_pcie_msi_chained_isr() instead of
> shifting there.
Ok, I will check this out in new version.
>> +	regmap_read(pcie->syscon, REG_CLEAR, &status);
>> +	status |= clr_msi_in_bit;
>> +	regmap_write(pcie->syscon, REG_CLEAR, status);
>> +static void sg2042_pcie_msi_irq_compose_msi_msg(struct irq_data *d,
>> +						struct msi_msg *msg)
>> +{
>> +	struct sg2042_pcie *pcie = irq_data_get_irq_chip_data(d);
>> +	struct device *dev = pcie->cdns_pcie->dev;
>> +
>> +	msg->address_lo = lower_32_bits(pcie->msi_phys) + BYTE_NUM_PER_MSI_VEC * d->hwirq;
>> +	msg->address_hi = upper_32_bits(pcie->msi_phys);
> This seems a little suspect because adding "BYTE_NUM_PER_MSI_VEC *
> d->hwirq" could cause overflow into the upper 32 bits.  I think you
> should add first, then take the lower/upper 32 bits of the 64-bit
> result.
OK, I will check this out in new version.
>> +	if (d->hwirq > pcie->num_applied_vecs)
>> +		pcie->num_applied_vecs = d->hwirq;
> "num_applied_vecs" is a bit of a misnomer; it's actually the *max*
> hwirq.
"max_applied_vecs"?
>
>> +static const struct irq_domain_ops sg2042_pcie_msi_domain_ops = {
>> +	.alloc	= sg2042_pcie_irq_domain_alloc,
>> +	.free	= sg2042_pcie_irq_domain_free,
> Mention "msi" in these function names, e.g.,
> sg2042_pcie_msi_domain_alloc().
ok
>
>> +static int sg2042_pcie_init_msi_data(struct sg2042_pcie *pcie)
>> +{
>> ...
>> +	/* Program the MSI address and size */
>> +	if (pcie->link_id == 1) {
>> +		regmap_write(pcie->syscon, REG_LINK1_MSI_ADDR_LOW,
>> +			     lower_32_bits(pcie->msi_phys));
>> +		regmap_write(pcie->syscon, REG_LINK1_MSI_ADDR_HIGH,
>> +			     upper_32_bits(pcie->msi_phys));
>> +
>> +		regmap_read(pcie->syscon, REG_LINK1_MSI_ADDR_SIZE, &value);
>> +		value = (value & REG_LINK1_MSI_ADDR_SIZE_MASK) | MAX_MSI_IRQS;
>> +		regmap_write(pcie->syscon, REG_LINK1_MSI_ADDR_SIZE, value);
>> +	} else {
>> +		regmap_write(pcie->syscon, REG_LINK0_MSI_ADDR_LOW,
>> +			     lower_32_bits(pcie->msi_phys));
>> +		regmap_write(pcie->syscon, REG_LINK0_MSI_ADDR_HIGH,
>> +			     upper_32_bits(pcie->msi_phys));
>> +
>> +		regmap_read(pcie->syscon, REG_LINK0_MSI_ADDR_SIZE, &value);
>> +		value = (value & REG_LINK0_MSI_ADDR_SIZE_MASK) | (MAX_MSI_IRQS << 16);
>> +		regmap_write(pcie->syscon, REG_LINK0_MSI_ADDR_SIZE, value);
>> +	}
> Would be nicer to set temporaries to the link_id-dependent values (as
> you did elsewhere) so it's obvious that the code is identical, e.g.,
>
>    if (pcie->link_id == 1) {
>      msi_addr = REG_LINK1_MSI_ADDR_LOW;
>      msi_addr_size = REG_LINK1_MSI_ADDR_SIZE;
>      msi_addr_size_mask = REG_LINK1_MSI_ADDR_SIZE;
>    } else {
>      ...
>    }
>
>    regmap_write(pcie->syscon, msi_addr, lower_32_bits(pcie->msi_phys));
>    regmap_write(pcie->syscon, msi_addr + 4, upper_32_bits(pcie->msi_phys));
>    ...
Ok,I will check this out.
>> +
>> +	return 0;
>> +}
>> +
>> +static irqreturn_t sg2042_pcie_msi_handle_irq(struct sg2042_pcie *pcie)
> Which driver are you using as a template for function names and code
> structure?  There are probably a dozen different names for functions
> that iterate like this around a call to generic_handle_domain_irq(),
> but you've managed to come up with a new one.  If you can pick a
> similar name to copy, it makes it easier to compare drivers and find
> and fix defects across them.
>
>> +{
>> +	u32 i, pos;
>> +	unsigned long val;
>> +	u32 status, num_vectors;
>> +	irqreturn_t ret = IRQ_NONE;
>> +
>> +	num_vectors = pcie->num_applied_vecs;
>> +	for (i = 0; i <= num_vectors; i++) {
>> +		status = readl((void *)(pcie->msi_virt + i * BYTE_NUM_PER_MSI_VEC));
>> +		if (!status)
>> +			continue;
>> +
>> +		ret = IRQ_HANDLED;
>> +		val = status;
> I don't think you need both val and status.
Yes, I will fix this.
>
>> +		pos = 0;
>> +		while ((pos = find_next_bit(&val, MAX_MSI_IRQS_PER_CTRL,
>> +					    pos)) != MAX_MSI_IRQS_PER_CTRL) {
> Most drivers use for_each_set_bit() here.
Ok, I will check this out.
>> +			generic_handle_domain_irq(pcie->msi_domain,
>> +						  (i * MAX_MSI_IRQS_PER_CTRL) +
>> +						  pos);
>> +			pos++;
>> +		}
>> +		writel(0, ((void *)(pcie->msi_virt) + i * BYTE_NUM_PER_MSI_VEC));
>> +	}
>> +	return ret;
>> +}
>> +static int sg2042_pcie_setup_msi(struct sg2042_pcie *pcie,
>> +				 struct device_node *msi_node)
>> +{
>> +	struct device *dev = pcie->cdns_pcie->dev;
>> +	struct fwnode_handle *fwnode = of_node_to_fwnode(dev->of_node);
>> +	struct irq_domain *parent_domain;
>> +	int ret = 0;
> Pointless initialization of "ret".
Yes, I will fix this.
>> +	if (!of_property_read_bool(msi_node, "msi-controller"))
>> +		return -ENODEV;
>> +
>> +	ret = of_irq_get_byname(msi_node, "msi");
>> +	if (ret <= 0) {
>> +		dev_err(dev, "%pOF: failed to get MSI irq\n", msi_node);
>> +		return ret;
>> +	}
>> +	pcie->msi_irq = ret;
>> +
>> +	irq_set_chained_handler_and_data(pcie->msi_irq,
>> +					 sg2042_pcie_msi_chained_isr, pcie);
>> +
>> +	parent_domain = irq_domain_create_linear(fwnode, MSI_DEF_NUM_VECTORS,
>> +						 &sg2042_pcie_msi_domain_ops, pcie);
> Wrap to fit in 80 columns like 99% of the rest of this file.

Ok, I will check this out.

Thanks,

Chen

>
> Bjorn

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH v3 1/5] dt-bindings: pci: Add Sophgo SG2042 PCIe host
  2025-02-12  5:54             ` Chen Wang
@ 2025-02-17  8:40               ` Chen Wang
  2025-02-19 18:22               ` Bjorn Helgaas
  1 sibling, 0 replies; 34+ messages in thread
From: Chen Wang @ 2025-02-17  8:40 UTC (permalink / raw)
  To: Bjorn Helgaas
  Cc: Chen Wang, kw, u.kleine-koenig, aou, arnd, bhelgaas, conor+dt,
	guoren, inochiama, krzk+dt, lee, lpieralisi,
	manivannan.sadhasivam, palmer, paul.walmsley, pbrobinson, robh,
	devicetree, linux-kernel, linux-pci, linux-riscv, chao.wei,
	xiaoguang.xing, fengchun.li

Hello, Bjorn, what do you think of my input?

Regards,

Chen

On 2025/2/12 13:54, Chen Wang wrote:
>
> On 2025/2/12 12:25, Bjorn Helgaas wrote:
> [......]
>>> pcie_rc1 and pcie_rc2 share registers in cdns_pcie1_ctrl. By using
>>> different "sophgo,core-id" values, they can distinguish and access
>>> the registers they need in cdns_pcie1_ctrl.
>> Where does cdns_pcie1_ctrl fit in this example?  Does that enclose
>> both pcie_rc1 and pcie_rc2?
>
> cdns_pcie1_ctrl is defined as a syscon node,  which contains registers 
> shared by pcie_rc1 and pcie_rc2. In the binding yaml file, I drew a 
> diagram to describe the relationship between them, copy here for your 
> quick reference:
>
> +                     +-- Core (Link0) <---> pcie_rc1 +-----------------+
> +                     | |                 |
> +      Cadence IP 2 --+                                | 
> cdns_pcie1_ctrl |
> +                     | |                 |
> +                     +-- Core (Link1) <---> pcie_rc2 +-----------------+
>
> The following is an example with cdns_pcie1_ctrl added. For 
> simplicity, I deleted pcie_rc0.
>
> pcie_rc1: pcie@7062000000 {
>     compatible = "sophgo,sg2042-pcie-host";
>     ...... // host bride level properties
>     linux,pci-domain = <1>;
>     sophgo,core-id = <0>;
>     sophgo,syscon-pcie-ctrl = <&cdns_pcie1_ctrl>;
>     port {
>         // port level properties
>         vendor-id = <0x1f1c>;
>         device-id = <0x2042>;
>         num-lanes = <2>;
>     };
> };
>
> pcie_rc2: pcie@7062800000 {
>     compatible = "sophgo,sg2042-pcie-host";
>     ...... // host bride level properties
>     linux,pci-domain = <2>;
>     sophgo,core-id = <1>;
>     sophgo,syscon-pcie-ctrl = <&cdns_pcie1_ctrl>;
>     port {
>         // port level properties
>         vendor-id = <0x1f1c>;
>         device-id = <0x2042>;
>         num-lanes = <2>;
>     }
>
> };
>
> cdns_pcie1_ctrl: syscon@7063800000 {
>     compatible = "sophgo,sg2042-pcie-ctrl", "syscon";
>     reg = <0x70 0x63800000 0x0 0x800000>;
> };
>
>
> _______________________________________________
> linux-riscv mailing list
> linux-riscv@lists.infradead.org
> http://lists.infradead.org/mailman/listinfo/linux-riscv

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH v3 2/5] PCI: sg2042: Add Sophgo SG2042 PCIe driver
  2025-02-17  8:22         ` Chen Wang
@ 2025-02-19 17:57           ` Manivannan Sadhasivam
  0 siblings, 0 replies; 34+ messages in thread
From: Manivannan Sadhasivam @ 2025-02-19 17:57 UTC (permalink / raw)
  To: Chen Wang
  Cc: Manivannan Sadhasivam, maz, Chen Wang, kw, u.kleine-koenig, aou,
	arnd, bhelgaas, conor+dt, guoren, inochiama, krzk+dt, lee,
	lpieralisi, palmer, paul.walmsley, pbrobinson, robh, devicetree,
	linux-kernel, linux-pci, linux-riscv, chao.wei, xiaoguang.xing,
	fengchun.li, helgaas

On Mon, Feb 17, 2025 at 04:22:08PM +0800, Chen Wang wrote:
> 
> On 2025/1/23 1:34, Manivannan Sadhasivam wrote:
> 
> [......]
> > > > > +/*
> > > > > + * SG2042 PCIe controller supports two ways to report MSI:
> > > > > + *
> > > > > + * - Method A, the PCIe controller implements an MSI interrupt controller
> > > > > + *   inside, and connect to PLIC upward through one interrupt line.
> > > > > + *   Provides memory-mapped MSI address, and by programming the upper 32
> > > > > + *   bits of the address to zero, it can be compatible with old PCIe devices
> > > > > + *   that only support 32-bit MSI address.
> > > > > + *
> > > > > + * - Method B, the PCIe controller connects to PLIC upward through an
> > > > > + *   independent MSI controller "sophgo,sg2042-msi" on the SOC. The MSI
> > > > > + *   controller provides multiple(up to 32) interrupt sources to PLIC.
> > > > > + *   Compared with the first method, the advantage is that the interrupt
> > > > > + *   source is expanded, but because for SG2042, the MSI address provided by
> > > > > + *   the MSI controller is fixed and only supports 64-bit address(> 2^32),
> > > > > + *   it is not compatible with old PCIe devices that only support 32-bit MSI
> > > > > + *   address.
> > > > > + *
> > > > > + * Method A & B can be configured in DTS, default is Method B.
> > > > How to configure them? I can only see "sophgo,sg2042-msi" in the binding.
> > > 
> > > The value of the msi-parent attribute is used in dts to distinguish them,
> > > for example:
> > > 
> > > ```dts
> > > 
> > > msi: msi-controller@7030010300 {
> > >      ......
> > > };
> > > 
> > > pcie_rc0: pcie@7060000000 {
> > >      msi-parent = <&msi>;
> > > };
> > > 
> > > pcie_rc1: pcie@7062000000 {
> > >      ......
> > >      msi-parent = <&msi_pcie>;
> > >      msi_pcie: interrupt-controller {
> > >          ......
> > >      };
> > > };
> > > 
> > > ```
> > > 
> > > Which means:
> > > 
> > > pcie_rc0 uses Method B
> > > 
> > > pcie_rc1 uses Method A.
> > > 
> > Ok. you mentioned 'default method' which is not accurate since the choice
> > obviously depends on DT. Maybe you should say, 'commonly used method'? But both
> > the binding and dts patches make use of in-built MSI controller only (method A).
> 
> "commonly used method" looks ok to me.
> 
> Binding example only shows the case for Method A, due to I think the writing
> of case for Method A  covers the writing of case for Method B.
> 
> DTS patches use both Method A and B. You can see patch 4 of this patchset,
> pcie_rc1 uses Method A, pcie_rc0 & pcie_rc2 use Method B.
> 
> > In general, for MSI implementations inside the PCIe IP, we don't usually add a
> > dedicated devicetree node since the IP is the same. But in your reply to the my
> > question on the bindings patch, you said it is a separate IP. I'm confused now.
> 
> I learned the writing of DTS from "brcm,iproc-pcie", see
> arch/arm/boot/dts/broadcom/bcm-cygnus.dtsi for example. Wouldn't it be
> clearer to embed an msi controller in topo?
> 
> And regarding what you said, "we don't usually add a dedicated devicetree
> node", do you have any example I can refer to?
> 

You can refer all DWC glue drivers under drivers/pci/controller/dwc/ and their
corresponding DT bindings.

- Mani

-- 
மணிவண்ணன் சதாசிவம்

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH v3 1/5] dt-bindings: pci: Add Sophgo SG2042 PCIe host
  2025-02-12  5:54             ` Chen Wang
  2025-02-17  8:40               ` Chen Wang
@ 2025-02-19 18:22               ` Bjorn Helgaas
  2025-02-21  3:29                 ` Chen Wang
  1 sibling, 1 reply; 34+ messages in thread
From: Bjorn Helgaas @ 2025-02-19 18:22 UTC (permalink / raw)
  To: Chen Wang
  Cc: Chen Wang, kw, u.kleine-koenig, aou, arnd, bhelgaas, conor+dt,
	guoren, inochiama, krzk+dt, lee, lpieralisi,
	manivannan.sadhasivam, palmer, paul.walmsley, pbrobinson, robh,
	devicetree, linux-kernel, linux-pci, linux-riscv, chao.wei,
	xiaoguang.xing, fengchun.li

On Wed, Feb 12, 2025 at 01:54:11PM +0800, Chen Wang wrote:
> On 2025/2/12 12:25, Bjorn Helgaas wrote:
> [......]
> > > pcie_rc1 and pcie_rc2 share registers in cdns_pcie1_ctrl. By using
> > > different "sophgo,core-id" values, they can distinguish and access
> > > the registers they need in cdns_pcie1_ctrl.
> > Where does cdns_pcie1_ctrl fit in this example?  Does that enclose
> > both pcie_rc1 and pcie_rc2?
> 
> cdns_pcie1_ctrl is defined as a syscon node,  which contains registers
> shared by pcie_rc1 and pcie_rc2. In the binding yaml file, I drew a diagram
> to describe the relationship between them, copy here for your quick
> reference:
> 
> +                     +-- Core (Link0) <---> pcie_rc1  +-----------------+
> +                     |                                |                 |
> +      Cadence IP 2 --+                                | cdns_pcie1_ctrl |
> +                     |                                |                 |
> +                     +-- Core (Link1) <---> pcie_rc2  +-----------------+
> 
> The following is an example with cdns_pcie1_ctrl added. For simplicity, I
> deleted pcie_rc0.

Looks good.  It would be nice if there were some naming similarity or
comment or other hint to connect sophgo,core-id with the syscon node.

> pcie_rc1: pcie@7062000000 {
>     compatible = "sophgo,sg2042-pcie-host";
>     ...... // host bride level properties
>     linux,pci-domain = <1>;
>     sophgo,core-id = <0>;
>     sophgo,syscon-pcie-ctrl = <&cdns_pcie1_ctrl>;
>     port {
>         // port level properties
>         vendor-id = <0x1f1c>;
>         device-id = <0x2042>;
>         num-lanes = <2>;
>     };
> };
> 
> pcie_rc2: pcie@7062800000 {
>     compatible = "sophgo,sg2042-pcie-host";
>     ...... // host bride level properties
>     linux,pci-domain = <2>;
>     sophgo,core-id = <1>;
>     sophgo,syscon-pcie-ctrl = <&cdns_pcie1_ctrl>;
>     port {
>         // port level properties
>         vendor-id = <0x1f1c>;
>         device-id = <0x2042>;
>         num-lanes = <2>;
>     }
> 
> };
> 
> cdns_pcie1_ctrl: syscon@7063800000 {
>     compatible = "sophgo,sg2042-pcie-ctrl", "syscon";
>     reg = <0x70 0x63800000 0x0 0x800000>;
> };
> 

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: (subset) [PATCH v3 3/5] dt-bindings: mfd: syscon: Add sg2042 pcie ctrl compatible
  2025-02-12  0:48     ` Chen Wang
@ 2025-02-20 16:00       ` Lee Jones
  0 siblings, 0 replies; 34+ messages in thread
From: Lee Jones @ 2025-02-20 16:00 UTC (permalink / raw)
  To: Chen Wang
  Cc: kw, u.kleine-koenig, aou, arnd, bhelgaas, conor+dt, guoren,
	inochiama, krzk+dt, lpieralisi, manivannan.sadhasivam, palmer,
	paul.walmsley, pbrobinson, robh, devicetree, linux-kernel,
	linux-pci, linux-riscv, chao.wei, xiaoguang.xing, fengchun.li,
	helgaas, Chen Wang

On Wed, 12 Feb 2025, Chen Wang wrote:

> Hello, Lee
> 
> I would request that this patch not be merged yet, because it is related to
> PCIe changes, and the PCIe changes (bindings and dts) have not been
> confirmed yet.
> 
> Although this patch is small and will not affect other builds, it is best to
> submit it together with the PCIe patch after it is confirmed.
> 
> Sorry for the trouble.

Unapplied, thanks.

-- 
Lee Jones [李琼斯]

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH v3 1/5] dt-bindings: pci: Add Sophgo SG2042 PCIe host
  2025-02-19 18:22               ` Bjorn Helgaas
@ 2025-02-21  3:29                 ` Chen Wang
  2025-02-21 22:13                   ` Bjorn Helgaas
  0 siblings, 1 reply; 34+ messages in thread
From: Chen Wang @ 2025-02-21  3:29 UTC (permalink / raw)
  To: Bjorn Helgaas
  Cc: Chen Wang, kw, u.kleine-koenig, aou, arnd, bhelgaas, conor+dt,
	guoren, inochiama, krzk+dt, lee, lpieralisi,
	manivannan.sadhasivam, palmer, paul.walmsley, pbrobinson, robh,
	devicetree, linux-kernel, linux-pci, linux-riscv, chao.wei,
	xiaoguang.xing, fengchun.li


On 2025/2/20 2:22, Bjorn Helgaas wrote:
> On Wed, Feb 12, 2025 at 01:54:11PM +0800, Chen Wang wrote:
>> On 2025/2/12 12:25, Bjorn Helgaas wrote:
>> [......]
>>>> pcie_rc1 and pcie_rc2 share registers in cdns_pcie1_ctrl. By using
>>>> different "sophgo,core-id" values, they can distinguish and access
>>>> the registers they need in cdns_pcie1_ctrl.
>>> Where does cdns_pcie1_ctrl fit in this example?  Does that enclose
>>> both pcie_rc1 and pcie_rc2?
>> cdns_pcie1_ctrl is defined as a syscon node,  which contains registers
>> shared by pcie_rc1 and pcie_rc2. In the binding yaml file, I drew a diagram
>> to describe the relationship between them, copy here for your quick
>> reference:
>>
>> +                     +-- Core (Link0) <---> pcie_rc1  +-----------------+
>> +                     |                                |                 |
>> +      Cadence IP 2 --+                                | cdns_pcie1_ctrl |
>> +                     |                                |                 |
>> +                     +-- Core (Link1) <---> pcie_rc2  +-----------------+
>>
>> The following is an example with cdns_pcie1_ctrl added. For simplicity, I
>> deleted pcie_rc0.
> Looks good.  It would be nice if there were some naming similarity or
> comment or other hint to connect sophgo,core-id with the syscon node.
>
>> pcie_rc1: pcie@7062000000 {
>>      compatible = "sophgo,sg2042-pcie-host";
>>      ...... // host bride level properties
>>      linux,pci-domain = <1>;
>>      sophgo,core-id = <0>;
>>      sophgo,syscon-pcie-ctrl = <&cdns_pcie1_ctrl>;
>>      port {
>>          // port level properties
>>          vendor-id = <0x1f1c>;
>>          device-id = <0x2042>;
>>          num-lanes = <2>;
>>      };
>> };
>>
>> pcie_rc2: pcie@7062800000 {
>>      compatible = "sophgo,sg2042-pcie-host";
>>      ...... // host bride level properties
>>      linux,pci-domain = <2>;
>>      sophgo,core-id = <1>;
>>      sophgo,syscon-pcie-ctrl = <&cdns_pcie1_ctrl>;
>>      port {
>>          // port level properties
>>          vendor-id = <0x1f1c>;
>>          device-id = <0x2042>;
>>          num-lanes = <2>;
>>      }
>>
>> };
>>
>> cdns_pcie1_ctrl: syscon@7063800000 {
>>      compatible = "sophgo,sg2042-pcie-ctrl", "syscon";
>>      reg = <0x70 0x63800000 0x0 0x800000>;
>> };

Hi, Bjorn ,

I find dtb check will report error due to "port" is not a evaulated 
property for pcie host. Should we add a vendror specific property for this?

Or do you have any example for reference?

Thanks,

Chen



^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH v3 1/5] dt-bindings: pci: Add Sophgo SG2042 PCIe host
  2025-02-21  3:29                 ` Chen Wang
@ 2025-02-21 22:13                   ` Bjorn Helgaas
  2025-02-24  6:27                     ` Manivannan Sadhasivam
  0 siblings, 1 reply; 34+ messages in thread
From: Bjorn Helgaas @ 2025-02-21 22:13 UTC (permalink / raw)
  To: Chen Wang, Rob Herring
  Cc: Chen Wang, kw, u.kleine-koenig, aou, arnd, bhelgaas, conor+dt,
	guoren, inochiama, krzk+dt, lee, lpieralisi,
	manivannan.sadhasivam, palmer, paul.walmsley, pbrobinson,
	devicetree, linux-kernel, linux-pci, linux-riscv, chao.wei,
	xiaoguang.xing, fengchun.li

[cc->to: Rob]

On Fri, Feb 21, 2025 at 11:29:20AM +0800, Chen Wang wrote:
> On 2025/2/20 2:22, Bjorn Helgaas wrote:
> > On Wed, Feb 12, 2025 at 01:54:11PM +0800, Chen Wang wrote:
> > > On 2025/2/12 12:25, Bjorn Helgaas wrote:
> > > [......]
> > > > > pcie_rc1 and pcie_rc2 share registers in cdns_pcie1_ctrl. By using
> > > > > different "sophgo,core-id" values, they can distinguish and access
> > > > > the registers they need in cdns_pcie1_ctrl.
> > > > Where does cdns_pcie1_ctrl fit in this example?  Does that enclose
> > > > both pcie_rc1 and pcie_rc2?
> > > cdns_pcie1_ctrl is defined as a syscon node,  which contains registers
> > > shared by pcie_rc1 and pcie_rc2. In the binding yaml file, I drew a diagram
> > > to describe the relationship between them, copy here for your quick
> > > reference:
> > > 
> > > +                     +-- Core (Link0) <---> pcie_rc1  +-----------------+
> > > +                     |                                |                 |
> > > +      Cadence IP 2 --+                                | cdns_pcie1_ctrl |
> > > +                     |                                |                 |
> > > +                     +-- Core (Link1) <---> pcie_rc2  +-----------------+
> > > 
> > > The following is an example with cdns_pcie1_ctrl added. For simplicity, I
> > > deleted pcie_rc0.
> >
> > Looks good.  It would be nice if there were some naming similarity or
> > comment or other hint to connect sophgo,core-id with the syscon node.
> > 
> > > pcie_rc1: pcie@7062000000 {
> > >      compatible = "sophgo,sg2042-pcie-host";
> > >      ...... // host bride level properties
> > >      linux,pci-domain = <1>;
> > >      sophgo,core-id = <0>;
> > >      sophgo,syscon-pcie-ctrl = <&cdns_pcie1_ctrl>;
> > >      port {
> > >          // port level properties
> > >          vendor-id = <0x1f1c>;
> > >          device-id = <0x2042>;
> > >          num-lanes = <2>;
> > >      };
> > > };
> > > 
> > > pcie_rc2: pcie@7062800000 {
> > >      compatible = "sophgo,sg2042-pcie-host";
> > >      ...... // host bride level properties
> > >      linux,pci-domain = <2>;
> > >      sophgo,core-id = <1>;
> > >      sophgo,syscon-pcie-ctrl = <&cdns_pcie1_ctrl>;
> > >      port {
> > >          // port level properties
> > >          vendor-id = <0x1f1c>;
> > >          device-id = <0x2042>;
> > >          num-lanes = <2>;
> > >      }
> > > 
> > > };
> > > 
> > > cdns_pcie1_ctrl: syscon@7063800000 {
> > >      compatible = "sophgo,sg2042-pcie-ctrl", "syscon";
> > >      reg = <0x70 0x63800000 0x0 0x800000>;
> > > };
> 
> I find dtb check will report error due to "port" is not a evaulated property
> for pcie host. Should we add a vendror specific property for this?
> 
> Or do you have any example for reference?

Sorry, I don't know enough about dtb to answer this.  Maybe Rob?

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH v3 1/5] dt-bindings: pci: Add Sophgo SG2042 PCIe host
  2025-02-21 22:13                   ` Bjorn Helgaas
@ 2025-02-24  6:27                     ` Manivannan Sadhasivam
  0 siblings, 0 replies; 34+ messages in thread
From: Manivannan Sadhasivam @ 2025-02-24  6:27 UTC (permalink / raw)
  To: Bjorn Helgaas, Chen Wang
  Cc: Chen Wang, Rob Herring, Chen Wang, kw, u.kleine-koenig, aou, arnd,
	bhelgaas, conor+dt, guoren, inochiama, krzk+dt, lee, lpieralisi,
	palmer, paul.walmsley, pbrobinson, devicetree, linux-kernel,
	linux-pci, linux-riscv, chao.wei, xiaoguang.xing, fengchun.li

On Fri, Feb 21, 2025 at 04:13:30PM -0600, Bjorn Helgaas wrote:
> [cc->to: Rob]
> 
> On Fri, Feb 21, 2025 at 11:29:20AM +0800, Chen Wang wrote:
> > On 2025/2/20 2:22, Bjorn Helgaas wrote:
> > > On Wed, Feb 12, 2025 at 01:54:11PM +0800, Chen Wang wrote:
> > > > On 2025/2/12 12:25, Bjorn Helgaas wrote:
> > > > [......]
> > > > > > pcie_rc1 and pcie_rc2 share registers in cdns_pcie1_ctrl. By using
> > > > > > different "sophgo,core-id" values, they can distinguish and access
> > > > > > the registers they need in cdns_pcie1_ctrl.
> > > > > Where does cdns_pcie1_ctrl fit in this example?  Does that enclose
> > > > > both pcie_rc1 and pcie_rc2?
> > > > cdns_pcie1_ctrl is defined as a syscon node,  which contains registers
> > > > shared by pcie_rc1 and pcie_rc2. In the binding yaml file, I drew a diagram
> > > > to describe the relationship between them, copy here for your quick
> > > > reference:
> > > > 
> > > > +                     +-- Core (Link0) <---> pcie_rc1  +-----------------+
> > > > +                     |                                |                 |
> > > > +      Cadence IP 2 --+                                | cdns_pcie1_ctrl |
> > > > +                     |                                |                 |
> > > > +                     +-- Core (Link1) <---> pcie_rc2  +-----------------+
> > > > 
> > > > The following is an example with cdns_pcie1_ctrl added. For simplicity, I
> > > > deleted pcie_rc0.
> > >
> > > Looks good.  It would be nice if there were some naming similarity or
> > > comment or other hint to connect sophgo,core-id with the syscon node.
> > > 
> > > > pcie_rc1: pcie@7062000000 {
> > > >      compatible = "sophgo,sg2042-pcie-host";
> > > >      ...... // host bride level properties
> > > >      linux,pci-domain = <1>;
> > > >      sophgo,core-id = <0>;
> > > >      sophgo,syscon-pcie-ctrl = <&cdns_pcie1_ctrl>;
> > > >      port {
> > > >          // port level properties
> > > >          vendor-id = <0x1f1c>;
> > > >          device-id = <0x2042>;
> > > >          num-lanes = <2>;
> > > >      };
> > > > };
> > > > 
> > > > pcie_rc2: pcie@7062800000 {
> > > >      compatible = "sophgo,sg2042-pcie-host";
> > > >      ...... // host bride level properties
> > > >      linux,pci-domain = <2>;
> > > >      sophgo,core-id = <1>;
> > > >      sophgo,syscon-pcie-ctrl = <&cdns_pcie1_ctrl>;
> > > >      port {
> > > >          // port level properties
> > > >          vendor-id = <0x1f1c>;
> > > >          device-id = <0x2042>;
> > > >          num-lanes = <2>;
> > > >      }
> > > > 
> > > > };
> > > > 
> > > > cdns_pcie1_ctrl: syscon@7063800000 {
> > > >      compatible = "sophgo,sg2042-pcie-ctrl", "syscon";
> > > >      reg = <0x70 0x63800000 0x0 0x800000>;
> > > > };
> > 
> > I find dtb check will report error due to "port" is not a evaulated property
> > for pcie host. Should we add a vendror specific property for this?
> > 
> > Or do you have any example for reference?
> 

'port' is not a valid node name. It should be 'pcie' and should have the unit
address corresponding to the bridge BDF. Please refer DT nodes of other
platforms:

https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/arch/arm64/boot/dts/qcom/sm8450.dtsi#n2071

- Mani

-- 
மணிவண்ணன் சதாசிவம்

^ permalink raw reply	[flat|nested] 34+ messages in thread

end of thread, other threads:[~2025-02-24  6:28 UTC | newest]

Thread overview: 34+ messages (download: mbox.gz follow: Atom feed
-- links below jump to the message on this page --
2025-01-15  7:05 [PATCH v3 0/5] Add PCIe support to Sophgo SG2042 SoC Chen Wang
2025-01-15  7:06 ` [PATCH v3 1/5] dt-bindings: pci: Add Sophgo SG2042 PCIe host Chen Wang
2025-01-19 11:44   ` Manivannan Sadhasivam
2025-01-22 12:52     ` Chen Wang
2025-01-22 17:21       ` Manivannan Sadhasivam
2025-01-26  0:29         ` Chen Wang
2025-01-22 22:21   ` Bjorn Helgaas
2025-01-26  2:27     ` Chen Wang
2025-02-03  2:35       ` Chen Wang
2025-02-11 23:34       ` Bjorn Helgaas
2025-02-12  1:50         ` Chen Wang
2025-02-12  4:25           ` Bjorn Helgaas
2025-02-12  5:54             ` Chen Wang
2025-02-17  8:40               ` Chen Wang
2025-02-19 18:22               ` Bjorn Helgaas
2025-02-21  3:29                 ` Chen Wang
2025-02-21 22:13                   ` Bjorn Helgaas
2025-02-24  6:27                     ` Manivannan Sadhasivam
2025-01-15  7:06 ` [PATCH v3 2/5] PCI: sg2042: Add Sophgo SG2042 PCIe driver Chen Wang
2025-01-19 12:23   ` Manivannan Sadhasivam
2025-01-22 13:28     ` Chen Wang
2025-01-22 17:34       ` Manivannan Sadhasivam
2025-01-23 12:12         ` Marc Zyngier
2025-02-07 17:49           ` Manivannan Sadhasivam
2025-02-17  8:22         ` Chen Wang
2025-02-19 17:57           ` Manivannan Sadhasivam
2025-01-22 21:33   ` Bjorn Helgaas
2025-02-17  8:36     ` Chen Wang
2025-01-15  7:07 ` [PATCH v3 3/5] dt-bindings: mfd: syscon: Add sg2042 pcie ctrl compatible Chen Wang
2025-02-11 14:33   ` (subset) " Lee Jones
2025-02-12  0:48     ` Chen Wang
2025-02-20 16:00       ` Lee Jones
2025-01-15  7:07 ` [PATCH v3 4/5] riscv: sophgo: dts: add pcie controllers for SG2042 Chen Wang
2025-01-15  7:07 ` [PATCH v3 5/5] riscv: sophgo: dts: enable pcie for PioneerBox Chen Wang

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).