linux-arm-kernel.lists.infradead.org archive mirror
 help / color / mirror / Atom feed
* [PATCH v2 0/7] New DRM accel driver for Rockchip's RKNN NPU
@ 2025-02-25  7:55 Tomeu Vizoso
  2025-02-25  7:55 ` [PATCH v2 1/7] dt-bindings: npu: rockchip,rknn: Add bindings Tomeu Vizoso
                   ` (6 more replies)
  0 siblings, 7 replies; 25+ messages in thread
From: Tomeu Vizoso @ 2025-02-25  7:55 UTC (permalink / raw)
  To: Rob Herring, Krzysztof Kozlowski, Conor Dooley, Heiko Stuebner,
	Oded Gabbay, Jonathan Corbet, Maarten Lankhorst, Maxime Ripard,
	Thomas Zimmermann, David Airlie, Simona Vetter, Sumit Semwal,
	Christian König, Sebastian Reichel, Jeffrey Hugo
  Cc: devicetree, linux-arm-kernel, linux-rockchip, linux-kernel,
	dri-devel, linux-doc, linux-media, linaro-mm-sig, Tomeu Vizoso

This series adds a new driver for the NPU that Rockchip includes in its
newer SoCs, developed by them on the NVDLA base.

In its current form, it supports the specific NPU in the RK3588 SoC.

The userspace driver is part of Mesa and an initial draft can be found at:

https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/29698

Signed-off-by: Tomeu Vizoso <tomeu@tomeuvizoso.net>
---
Changes in v2:
- Drop patch adding the rk3588 compatible to rockchip-iommu (Sebastian Reichel)
- Drop patch adding support for multiple power domains to rockchip-iommu (Sebastian Reichel)
- Link to v1: https://lore.kernel.org/r/20240612-6-10-rocket-v1-0-060e48eea250@tomeuvizoso.net

---
Tomeu Vizoso (7):
      dt-bindings: npu: rockchip,rknn: Add bindings
      arm64: dts: rockchip: Add nodes for NPU and its MMU to rk3588s
      arm64: dts: rockchip: Enable the NPU on quartzpro64
      accel/rocket: Add a new driver for Rockchip's NPU
      accel/rocket: Add IOCTL for BO creation
      accel/rocket: Add job submission IOCTL
      accel/rocket: Add IOCTLs for synchronizing memory accesses

 Documentation/accel/index.rst                      |    1 +
 Documentation/accel/rocket/index.rst               |   19 +
 .../bindings/npu/rockchip,rknn-core.yaml           |  152 +
 MAINTAINERS                                        |    8 +
 arch/arm64/boot/dts/rockchip/rk3588-base.dtsi      |   76 +
 .../arm64/boot/dts/rockchip/rk3588-quartzpro64.dts |   30 +
 drivers/accel/Kconfig                              |    1 +
 drivers/accel/Makefile                             |    1 +
 drivers/accel/rocket/Kconfig                       |   25 +
 drivers/accel/rocket/Makefile                      |   10 +
 drivers/accel/rocket/rocket_core.c                 |   77 +
 drivers/accel/rocket/rocket_core.h                 |   43 +
 drivers/accel/rocket/rocket_device.c               |   35 +
 drivers/accel/rocket/rocket_device.h               |   33 +
 drivers/accel/rocket/rocket_drv.c                  |  295 ++
 drivers/accel/rocket/rocket_drv.h                  |   17 +
 drivers/accel/rocket/rocket_gem.c                  |  216 +
 drivers/accel/rocket/rocket_gem.h                  |   32 +
 drivers/accel/rocket/rocket_job.c                  |  710 ++++
 drivers/accel/rocket/rocket_job.h                  |   50 +
 drivers/accel/rocket/rocket_registers.h            | 4425 ++++++++++++++++++++
 include/uapi/drm/rocket_accel.h                    |  116 +
 22 files changed, 6372 insertions(+)
---
base-commit: 585e191534efe95712df88a22eaa8d51228bcb43
change-id: 20240612-6-10-rocket-9316defc14c7

Best regards,
-- 
Tomeu Vizoso <tomeu@tomeuvizoso.net>



^ permalink raw reply	[flat|nested] 25+ messages in thread

* [PATCH v2 1/7] dt-bindings: npu: rockchip,rknn: Add bindings
  2025-02-25  7:55 [PATCH v2 0/7] New DRM accel driver for Rockchip's RKNN NPU Tomeu Vizoso
@ 2025-02-25  7:55 ` Tomeu Vizoso
  2025-02-25 16:02   ` Rob Herring
  2025-04-25 18:50   ` Nicolas Frattaroli
  2025-02-25  7:55 ` [PATCH v2 2/7] arm64: dts: rockchip: Add nodes for NPU and its MMU to rk3588s Tomeu Vizoso
                   ` (5 subsequent siblings)
  6 siblings, 2 replies; 25+ messages in thread
From: Tomeu Vizoso @ 2025-02-25  7:55 UTC (permalink / raw)
  To: Rob Herring, Krzysztof Kozlowski, Conor Dooley, Heiko Stuebner,
	Oded Gabbay, Jonathan Corbet, Maarten Lankhorst, Maxime Ripard,
	Thomas Zimmermann, David Airlie, Simona Vetter, Sumit Semwal,
	Christian König, Sebastian Reichel, Jeffrey Hugo
  Cc: devicetree, linux-arm-kernel, linux-rockchip, linux-kernel,
	dri-devel, linux-doc, linux-media, linaro-mm-sig, Tomeu Vizoso

Add the bindings for the Neural Processing Unit IP from Rockchip.

v2:
- Adapt to new node structure (one node per core, each with its own
  IOMMU)
- Several misc. fixes from Sebastian Reichel

Signed-off-by: Tomeu Vizoso <tomeu@tomeuvizoso.net>
Signed-off-by: Sebastian Reichel <sebastian.reichel@collabora.com>
---
 .../bindings/npu/rockchip,rknn-core.yaml           | 152 +++++++++++++++++++++
 1 file changed, 152 insertions(+)

diff --git a/Documentation/devicetree/bindings/npu/rockchip,rknn-core.yaml b/Documentation/devicetree/bindings/npu/rockchip,rknn-core.yaml
new file mode 100644
index 0000000000000000000000000000000000000000..e8d0afe4a7d1c4f166cf13a9f4aa7c1901362a3f
--- /dev/null
+++ b/Documentation/devicetree/bindings/npu/rockchip,rknn-core.yaml
@@ -0,0 +1,152 @@
+# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
+%YAML 1.2
+---
+$id: http://devicetree.org/schemas/npu/rockchip,rknn-core.yaml#
+$schema: http://devicetree.org/meta-schemas/core.yaml#
+
+title: Neural Processing Unit IP from Rockchip
+
+maintainers:
+  - Tomeu Vizoso <tomeu@tomeuvizoso.net>
+
+description:
+  Rockchip IP for accelerating inference of neural networks, based on NVIDIA's
+  open source NVDLA IP.
+
+properties:
+  $nodename:
+    pattern: '^npu-core@[a-f0-9]+$'
+
+  compatible:
+    oneOf:
+      - items:
+          - enum:
+              - rockchip,rk3588-rknn-core-top
+          - const: rockchip,rknn-core-top
+      - items:
+          - enum:
+              - rockchip,rk3588-rknn-core
+          - const: rockchip,rknn-core
+
+  reg:
+    maxItems: 1
+
+  clocks:
+    minItems: 2
+    maxItems: 4
+
+  clock-names:
+    items:
+      - const: aclk
+      - const: hclk
+      - const: npu
+      - const: pclk
+    minItems: 2
+
+  interrupts:
+    maxItems: 1
+
+  iommus:
+    maxItems: 1
+
+  npu-supply: true
+
+  power-domains:
+    maxItems: 1
+
+  resets:
+    maxItems: 2
+
+  reset-names:
+    items:
+      - const: srst_a
+      - const: srst_h
+
+  sram-supply: true
+
+required:
+  - compatible
+  - reg
+  - clocks
+  - clock-names
+  - interrupts
+  - iommus
+  - npu-supply
+  - power-domains
+  - resets
+  - reset-names
+  - sram-supply
+
+allOf:
+  - if:
+      properties:
+        compatible:
+          contains:
+            enum:
+              - rockchip,rknn-core-top
+    then:
+      properties:
+        clocks:
+          minItems: 4
+
+        clock-names:
+          minItems: 4
+  - if:
+      properties:
+        compatible:
+          contains:
+            enum:
+              - rockchip,rknn-core
+    then:
+      properties:
+        clocks:
+          maxItems: 2
+        clock-names:
+          maxItems: 2
+
+additionalProperties: false
+
+examples:
+  - |
+    #include <dt-bindings/clock/rockchip,rk3588-cru.h>
+    #include <dt-bindings/interrupt-controller/irq.h>
+    #include <dt-bindings/interrupt-controller/arm-gic.h>
+    #include <dt-bindings/power/rk3588-power.h>
+    #include <dt-bindings/reset/rockchip,rk3588-cru.h>
+
+    bus {
+      #address-cells = <2>;
+      #size-cells = <2>;
+
+      rknn_core_top: npu-core@fdab0000 {
+        compatible = "rockchip,rk3588-rknn-core-top", "rockchip,rknn-core-top";
+        reg = <0x0 0xfdab0000 0x0 0x9000>;
+        assigned-clocks = <&scmi_clk SCMI_CLK_NPU>;
+        assigned-clock-rates = <200000000>;
+        clocks = <&cru ACLK_NPU0>, <&cru HCLK_NPU0>,
+                 <&scmi_clk SCMI_CLK_NPU>, <&cru PCLK_NPU_ROOT>;
+        clock-names = "aclk", "hclk", "npu", "pclk";
+        interrupts = <GIC_SPI 110 IRQ_TYPE_LEVEL_HIGH 0>;
+        iommus = <&rknn_mmu_top>;
+        npu-supply = <&vdd_npu_s0>;
+        power-domains = <&power RK3588_PD_NPUTOP>;
+        resets = <&cru SRST_A_RKNN0>, <&cru SRST_H_RKNN0>;
+        reset-names = "srst_a", "srst_h";
+        sram-supply = <&vdd_npu_mem_s0>;
+      };
+
+      rknn_core_1: npu-core@fdac0000 {
+        compatible = "rockchip,rk3588-rknn-core", "rockchip,rknn-core";
+        reg = <0x0 0xfdac0000 0x0 0x9000>;
+        clocks = <&cru ACLK_NPU1>, <&cru HCLK_NPU1>;
+        clock-names = "aclk", "hclk";
+        interrupts = <GIC_SPI 111 IRQ_TYPE_LEVEL_HIGH 0>;
+        iommus = <&rknn_mmu_1>;
+        npu-supply = <&vdd_npu_s0>;
+        power-domains = <&power RK3588_PD_NPU1>;
+        resets = <&cru SRST_A_RKNN1>, <&cru SRST_H_RKNN1>;
+        reset-names = "srst_a", "srst_h";
+        sram-supply = <&vdd_npu_mem_s0>;
+      };
+    };
+...

-- 
2.48.1



^ permalink raw reply related	[flat|nested] 25+ messages in thread

* [PATCH v2 2/7] arm64: dts: rockchip: Add nodes for NPU and its MMU to rk3588s
  2025-02-25  7:55 [PATCH v2 0/7] New DRM accel driver for Rockchip's RKNN NPU Tomeu Vizoso
  2025-02-25  7:55 ` [PATCH v2 1/7] dt-bindings: npu: rockchip,rknn: Add bindings Tomeu Vizoso
@ 2025-02-25  7:55 ` Tomeu Vizoso
  2025-02-25  7:55 ` [PATCH v2 3/7] arm64: dts: rockchip: Enable the NPU on quartzpro64 Tomeu Vizoso
                   ` (4 subsequent siblings)
  6 siblings, 0 replies; 25+ messages in thread
From: Tomeu Vizoso @ 2025-02-25  7:55 UTC (permalink / raw)
  To: Rob Herring, Krzysztof Kozlowski, Conor Dooley, Heiko Stuebner,
	Oded Gabbay, Jonathan Corbet, Maarten Lankhorst, Maxime Ripard,
	Thomas Zimmermann, David Airlie, Simona Vetter, Sumit Semwal,
	Christian König, Sebastian Reichel, Jeffrey Hugo
  Cc: devicetree, linux-arm-kernel, linux-rockchip, linux-kernel,
	dri-devel, linux-doc, linux-media, linaro-mm-sig, Tomeu Vizoso

See Chapter 36 "RKNN" from the RK3588 TRM (Part 1).

This is a derivative of NVIDIA's NVDLA, but with its own front-end
processor.

The IP is divided in three cores, programmed independently. The first
core though is special, requiring to be powered on before any of the
others can be used.

The IOMMU of the first core is also special in that it has two subunits
(read/write?) that need to be programmed in sync.

v2:
- Have one device for each NPU core (Sebastian Reichel)
- Have one device for each IOMMU (Sebastian Reichel)
- Correctly sort nodes (Diederik de Haas)
- Add rockchip,iommu compatible to IOMMU nodes (Sebastian Reichel)

Signed-off-by: Tomeu Vizoso <tomeu@tomeuvizoso.net>
---
 arch/arm64/boot/dts/rockchip/rk3588-base.dtsi | 76 +++++++++++++++++++++++++++
 1 file changed, 76 insertions(+)

diff --git a/arch/arm64/boot/dts/rockchip/rk3588-base.dtsi b/arch/arm64/boot/dts/rockchip/rk3588-base.dtsi
index 8cfa30837ce72581d0b513a8274ab0177eb5ae15..2680ed854e0c2ba5de167740ef18fcee167016fe 100644
--- a/arch/arm64/boot/dts/rockchip/rk3588-base.dtsi
+++ b/arch/arm64/boot/dts/rockchip/rk3588-base.dtsi
@@ -1125,6 +1125,82 @@ power-domain@RK3588_PD_SDMMC {
 		};
 	};
 
+	rknn_core_top: npu-core@fdab0000 {
+		compatible = "rockchip,rk3588-rknn-core-top", "rockchip,rknn-core-top";
+		reg = <0x0 0xfdab0000 0x0 0x9000>;
+		interrupts = <GIC_SPI 110 IRQ_TYPE_LEVEL_HIGH 0>;
+		clocks = <&scmi_clk SCMI_CLK_NPU>, <&cru PCLK_NPU_ROOT>,
+			 <&cru ACLK_NPU0>, <&cru HCLK_NPU0>;
+		clock-names = "aclk", "hclk", "npu", "pclk";
+		assigned-clocks = <&scmi_clk SCMI_CLK_NPU>;
+		assigned-clock-rates = <200000000>;
+		resets = <&cru SRST_A_RKNN0>, <&cru SRST_H_RKNN0>;
+		reset-names = "srst_a", "srst_h";
+		power-domains = <&power RK3588_PD_NPUTOP>;
+		iommus = <&rknn_mmu_top>;
+		status = "disabled";
+	};
+
+	rknn_mmu_top: iommu@fdab9000 {
+		compatible = "rockchip,rk3588-iommu", "rockchip,rk3568-iommu";
+		reg = <0x0 0xfdab9000 0x0 0x100>,
+		      <0x0 0xfdaba000 0x0 0x100>;
+		interrupts = <GIC_SPI 110 IRQ_TYPE_LEVEL_HIGH 0>;
+		clocks = <&cru ACLK_NPU0>, <&cru HCLK_NPU0>;
+		clock-names = "aclk", "iface";
+		#iommu-cells = <0>;
+		power-domains = <&power RK3588_PD_NPUTOP>;
+		status = "disabled";
+	};
+
+	rknn_core_1: npu-core@fdac0000 {
+		compatible = "rockchip,rk3588-rknn-core", "rockchip,rknn-core";
+		reg = <0x0 0xfdac0000 0x0 0x9000>;
+		interrupts = <GIC_SPI 111 IRQ_TYPE_LEVEL_HIGH 0>;
+		clocks = <&cru ACLK_NPU1>, <&cru HCLK_NPU1>;
+		clock-names = "aclk", "hclk";
+		resets = <&cru SRST_A_RKNN1>, <&cru SRST_H_RKNN1>;
+		reset-names = "srst_a", "srst_h";
+		power-domains = <&power RK3588_PD_NPU1>;
+		iommus = <&rknn_mmu_1>;
+		status = "disabled";
+	};
+
+	rknn_mmu_1: iommu@fdac9000 {
+		compatible = "rockchip,rk3588-iommu", "rockchip,rk3568-iommu";
+		reg = <0x0 0xfdaca000 0x0 0x100>;
+		interrupts = <GIC_SPI 111 IRQ_TYPE_LEVEL_HIGH 0>;
+		clocks = <&cru ACLK_NPU1>, <&cru HCLK_NPU1>;
+		clock-names = "aclk", "iface";
+		#iommu-cells = <0>;
+		power-domains = <&power RK3588_PD_NPU1>;
+		status = "disabled";
+	};
+
+	rknn_core_2: npu-core@fdad0000 {
+		compatible = "rockchip,rk3588-rknn-core", "rockchip,rknn-core";
+		reg = <0x0 0xfdad0000 0x0 0x9000>;
+		interrupts = <GIC_SPI 112 IRQ_TYPE_LEVEL_HIGH 0>;
+		clocks = <&cru ACLK_NPU2>, <&cru HCLK_NPU2>;
+		clock-names = "aclk", "hclk";
+		resets = <&cru SRST_A_RKNN2>, <&cru SRST_H_RKNN2>;
+		reset-names = "srst_a", "srst_h";
+		power-domains = <&power RK3588_PD_NPU2>;
+		iommus = <&rknn_mmu_2>;
+		status = "disabled";
+	};
+
+	rknn_mmu_2: iommu@fdad9000 {
+		compatible = "rockchip,rk3588-iommu", "rockchip,rk3568-iommu";
+		reg = <0x0 0xfdada000 0x0 0x100>;
+		interrupts = <GIC_SPI 112 IRQ_TYPE_LEVEL_HIGH 0>;
+		clocks = <&cru ACLK_NPU2>, <&cru HCLK_NPU2>;
+		clock-names = "aclk", "iface";
+		#iommu-cells = <0>;
+		power-domains = <&power RK3588_PD_NPU2>;
+		status = "disabled";
+	};
+
 	vpu121: video-codec@fdb50000 {
 		compatible = "rockchip,rk3588-vpu121", "rockchip,rk3568-vpu";
 		reg = <0x0 0xfdb50000 0x0 0x800>;

-- 
2.48.1



^ permalink raw reply related	[flat|nested] 25+ messages in thread

* [PATCH v2 3/7] arm64: dts: rockchip: Enable the NPU on quartzpro64
  2025-02-25  7:55 [PATCH v2 0/7] New DRM accel driver for Rockchip's RKNN NPU Tomeu Vizoso
  2025-02-25  7:55 ` [PATCH v2 1/7] dt-bindings: npu: rockchip,rknn: Add bindings Tomeu Vizoso
  2025-02-25  7:55 ` [PATCH v2 2/7] arm64: dts: rockchip: Add nodes for NPU and its MMU to rk3588s Tomeu Vizoso
@ 2025-02-25  7:55 ` Tomeu Vizoso
  2025-02-25  7:55 ` [PATCH v2 5/7] accel/rocket: Add IOCTL for BO creation Tomeu Vizoso
                   ` (3 subsequent siblings)
  6 siblings, 0 replies; 25+ messages in thread
From: Tomeu Vizoso @ 2025-02-25  7:55 UTC (permalink / raw)
  To: Rob Herring, Krzysztof Kozlowski, Conor Dooley, Heiko Stuebner,
	Oded Gabbay, Jonathan Corbet, Maarten Lankhorst, Maxime Ripard,
	Thomas Zimmermann, David Airlie, Simona Vetter, Sumit Semwal,
	Christian König, Sebastian Reichel, Jeffrey Hugo
  Cc: devicetree, linux-arm-kernel, linux-rockchip, linux-kernel,
	dri-devel, linux-doc, linux-media, linaro-mm-sig, Tomeu Vizoso

Enable the nodes added in a previous commit to the rk3588s device tree.

v2:
- Split nodes (Sebastian Reichel)
- Sort nodes (Sebastian Reichel)
- Add board regulators (Sebastian Reichel)

Signed-off-by: Tomeu Vizoso <tomeu@tomeuvizoso.net>
---
 .../arm64/boot/dts/rockchip/rk3588-quartzpro64.dts | 30 ++++++++++++++++++++++
 1 file changed, 30 insertions(+)

diff --git a/arch/arm64/boot/dts/rockchip/rk3588-quartzpro64.dts b/arch/arm64/boot/dts/rockchip/rk3588-quartzpro64.dts
index 088cfade6f6f14b6383ab844fa174c69fa711fc0..5f6b87dc46361eea93ea2a1fa373cb9ecdb7bbce 100644
--- a/arch/arm64/boot/dts/rockchip/rk3588-quartzpro64.dts
+++ b/arch/arm64/boot/dts/rockchip/rk3588-quartzpro64.dts
@@ -411,6 +411,36 @@ &pcie3x4 {
 	status = "okay";
 };
 
+&rknn_core_top {
+	npu-supply = <&vdd_npu_s0>;
+	sram-supply = <&vdd_npu_mem_s0>;
+	status = "okay";
+};
+
+&rknn_core_1 {
+	npu-supply = <&vdd_npu_s0>;
+	sram-supply = <&vdd_npu_mem_s0>;
+	status = "okay";
+};
+
+&rknn_core_2 {
+	npu-supply = <&vdd_npu_s0>;
+	sram-supply = <&vdd_npu_mem_s0>;
+	status = "okay";
+};
+
+&rknn_mmu_top {
+	status = "okay";
+};
+
+&rknn_mmu_1 {
+	status = "okay";
+};
+
+&rknn_mmu_2 {
+	status = "okay";
+};
+
 &saradc {
 	vref-supply = <&vcc_1v8_s0>;
 	status = "okay";

-- 
2.48.1



^ permalink raw reply related	[flat|nested] 25+ messages in thread

* [PATCH v2 5/7] accel/rocket: Add IOCTL for BO creation
  2025-02-25  7:55 [PATCH v2 0/7] New DRM accel driver for Rockchip's RKNN NPU Tomeu Vizoso
                   ` (2 preceding siblings ...)
  2025-02-25  7:55 ` [PATCH v2 3/7] arm64: dts: rockchip: Enable the NPU on quartzpro64 Tomeu Vizoso
@ 2025-02-25  7:55 ` Tomeu Vizoso
  2025-02-25  8:35   ` Thomas Zimmermann
  2025-03-21 15:56   ` Jeffrey Hugo
  2025-02-25  7:55 ` [PATCH v2 6/7] accel/rocket: Add job submission IOCTL Tomeu Vizoso
                   ` (2 subsequent siblings)
  6 siblings, 2 replies; 25+ messages in thread
From: Tomeu Vizoso @ 2025-02-25  7:55 UTC (permalink / raw)
  To: Rob Herring, Krzysztof Kozlowski, Conor Dooley, Heiko Stuebner,
	Oded Gabbay, Jonathan Corbet, Maarten Lankhorst, Maxime Ripard,
	Thomas Zimmermann, David Airlie, Simona Vetter, Sumit Semwal,
	Christian König, Sebastian Reichel, Jeffrey Hugo
  Cc: devicetree, linux-arm-kernel, linux-rockchip, linux-kernel,
	dri-devel, linux-doc, linux-media, linaro-mm-sig, Tomeu Vizoso

This uses the SHMEM DRM helpers and we map right away to the CPU and NPU
sides, as all buffers are expected to be accessed from both.

v2:
- Sync the IOMMUs for the other cores when mapping and unmapping.

Signed-off-by: Tomeu Vizoso <tomeu@tomeuvizoso.net>
---
 drivers/accel/rocket/Makefile        |   3 +-
 drivers/accel/rocket/rocket_device.c |   4 +
 drivers/accel/rocket/rocket_device.h |   2 +
 drivers/accel/rocket/rocket_drv.c    |   7 +-
 drivers/accel/rocket/rocket_gem.c    | 141 +++++++++++++++++++++++++++++++++++
 drivers/accel/rocket/rocket_gem.h    |  27 +++++++
 include/uapi/drm/rocket_accel.h      |  43 +++++++++++
 7 files changed, 225 insertions(+), 2 deletions(-)

diff --git a/drivers/accel/rocket/Makefile b/drivers/accel/rocket/Makefile
index 73a7280d260c068d37ad3048824f710482333540..875cac2243d902694e0d5d05e60b4ae551a633c4 100644
--- a/drivers/accel/rocket/Makefile
+++ b/drivers/accel/rocket/Makefile
@@ -5,4 +5,5 @@ obj-$(CONFIG_DRM_ACCEL_ROCKET) := rocket.o
 rocket-y := \
 	rocket_core.o \
 	rocket_device.o \
-	rocket_drv.o
+	rocket_drv.o \
+	rocket_gem.o
diff --git a/drivers/accel/rocket/rocket_device.c b/drivers/accel/rocket/rocket_device.c
index ce3b533f15c1011d8a7a23dd8132e907cc334c58..9af36357caba7148dcac764c8222699f3b572d60 100644
--- a/drivers/accel/rocket/rocket_device.c
+++ b/drivers/accel/rocket/rocket_device.c
@@ -1,6 +1,7 @@
 // SPDX-License-Identifier: GPL-2.0
 /* Copyright 2024 Tomeu Vizoso <tomeu@tomeuvizoso.net> */
 
+#include "linux/mutex.h"
 #include <linux/clk.h>
 
 #include "rocket_device.h"
@@ -10,6 +11,8 @@ int rocket_device_init(struct rocket_device *rdev)
 	struct device *dev = rdev->cores[0].dev;
 	int err;
 
+	mutex_init(&rdev->iommu_lock);
+
 	rdev->clk_npu = devm_clk_get(dev, "npu");
 	rdev->pclk = devm_clk_get(dev, "pclk");
 
@@ -26,4 +29,5 @@ int rocket_device_init(struct rocket_device *rdev)
 void rocket_device_fini(struct rocket_device *rdev)
 {
 	rocket_core_fini(&rdev->cores[0]);
+	mutex_destroy(&rdev->iommu_lock);
 }
diff --git a/drivers/accel/rocket/rocket_device.h b/drivers/accel/rocket/rocket_device.h
index 466edba9102c5dc5dfac5d3fcc1c904f206eaebb..c6152569fdd9e5587c8e8d7b0d7c2e2a77af6000 100644
--- a/drivers/accel/rocket/rocket_device.h
+++ b/drivers/accel/rocket/rocket_device.h
@@ -14,6 +14,8 @@ struct rocket_device {
 	struct clk *clk_npu;
 	struct clk *pclk;
 
+	struct mutex iommu_lock;
+
 	struct rocket_core *cores;
 	unsigned int num_cores;
 };
diff --git a/drivers/accel/rocket/rocket_drv.c b/drivers/accel/rocket/rocket_drv.c
index c22d965f20f1239a36b1d823d5fe5f372713555d..e5612b52952fa7a0cd0af02aef314984bc483b05 100644
--- a/drivers/accel/rocket/rocket_drv.c
+++ b/drivers/accel/rocket/rocket_drv.c
@@ -6,6 +6,7 @@
 #include <drm/drm_gem.h>
 #include <drm/drm_ioctl.h>
 #include <drm/drm_of.h>
+#include <drm/rocket_accel.h>
 #include <linux/clk.h>
 #include <linux/component.h>
 #include <linux/dma-mapping.h>
@@ -14,6 +15,7 @@
 #include <linux/pm_runtime.h>
 
 #include "rocket_drv.h"
+#include "rocket_gem.h"
 
 static int
 rocket_open(struct drm_device *dev, struct drm_file *file)
@@ -42,6 +44,8 @@ rocket_postclose(struct drm_device *dev, struct drm_file *file)
 static const struct drm_ioctl_desc rocket_drm_driver_ioctls[] = {
 #define ROCKET_IOCTL(n, func) \
 	DRM_IOCTL_DEF_DRV(ROCKET_##n, rocket_ioctl_##func, 0)
+
+	ROCKET_IOCTL(CREATE_BO, create_bo),
 };
 
 DEFINE_DRM_ACCEL_FOPS(rocket_accel_driver_fops);
@@ -51,9 +55,10 @@ DEFINE_DRM_ACCEL_FOPS(rocket_accel_driver_fops);
  * - 1.0 - initial interface
  */
 static const struct drm_driver rocket_drm_driver = {
-	.driver_features	= DRIVER_COMPUTE_ACCEL,
+	.driver_features	= DRIVER_COMPUTE_ACCEL | DRIVER_GEM,
 	.open			= rocket_open,
 	.postclose		= rocket_postclose,
+	.gem_create_object	= rocket_gem_create_object,
 	.ioctls			= rocket_drm_driver_ioctls,
 	.num_ioctls		= ARRAY_SIZE(rocket_drm_driver_ioctls),
 	.fops			= &rocket_accel_driver_fops,
diff --git a/drivers/accel/rocket/rocket_gem.c b/drivers/accel/rocket/rocket_gem.c
new file mode 100644
index 0000000000000000000000000000000000000000..d5337cf1e275c249a1491d0dd28e6b8ccd2ff2cb
--- /dev/null
+++ b/drivers/accel/rocket/rocket_gem.c
@@ -0,0 +1,141 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright 2024 Tomeu Vizoso <tomeu@tomeuvizoso.net> */
+
+#include <drm/drm_device.h>
+#include <drm/drm_utils.h>
+#include <drm/rocket_accel.h>
+#include <linux/dma-mapping.h>
+#include <linux/iommu.h>
+
+#include "rocket_device.h"
+#include "rocket_gem.h"
+
+static void rocket_gem_bo_free(struct drm_gem_object *obj)
+{
+	struct rocket_device *rdev = to_rocket_device(obj->dev);
+	struct rocket_gem_object *bo = to_rocket_bo(obj);
+	struct sg_table *sgt;
+
+	drm_WARN_ON(obj->dev, bo->base.pages_use_count > 1);
+
+	mutex_lock(&rdev->iommu_lock);
+
+	sgt = drm_gem_shmem_get_pages_sgt(&bo->base);
+
+	/* Unmap this object from the IOMMUs for cores > 0 */
+	for (unsigned int core = 1; core < rdev->num_cores; core++) {
+		struct iommu_domain *domain = iommu_get_domain_for_dev(rdev->cores[core].dev);
+		size_t unmapped = iommu_unmap(domain, sgt->sgl->dma_address, bo->size);
+
+		drm_WARN_ON(obj->dev, unmapped != bo->size);
+	}
+
+	/* This will unmap the pages from the IOMMU linked to core 0 */
+	drm_gem_shmem_free(&bo->base);
+
+	mutex_unlock(&rdev->iommu_lock);
+}
+
+static const struct drm_gem_object_funcs rocket_gem_funcs = {
+	.free = rocket_gem_bo_free,
+	.print_info = drm_gem_shmem_object_print_info,
+	.pin = drm_gem_shmem_object_pin,
+	.unpin = drm_gem_shmem_object_unpin,
+	.get_sg_table = drm_gem_shmem_object_get_sg_table,
+	.vmap = drm_gem_shmem_object_vmap,
+	.vunmap = drm_gem_shmem_object_vunmap,
+	.mmap = drm_gem_shmem_object_mmap,
+	.vm_ops = &drm_gem_shmem_vm_ops,
+};
+
+/**
+ * rocket_gem_create_object - Implementation of driver->gem_create_object.
+ * @dev: DRM device
+ * @size: Size in bytes of the memory the object will reference
+ *
+ * This lets the GEM helpers allocate object structs for us, and keep
+ * our BO stats correct.
+ */
+struct drm_gem_object *rocket_gem_create_object(struct drm_device *dev, size_t size)
+{
+	struct rocket_gem_object *obj;
+
+	obj = kzalloc(sizeof(*obj), GFP_KERNEL);
+	if (!obj)
+		return ERR_PTR(-ENOMEM);
+
+	obj->base.base.funcs = &rocket_gem_funcs;
+
+	return &obj->base.base;
+}
+
+int rocket_ioctl_create_bo(struct drm_device *dev, void *data, struct drm_file *file)
+{
+	struct drm_rocket_create_bo *args = data;
+	struct rocket_device *rdev = to_rocket_device(dev);
+	struct drm_gem_shmem_object *shmem_obj;
+	struct rocket_gem_object *rkt_obj;
+	struct drm_gem_object *gem_obj;
+	struct sg_table *sgt;
+	int ret;
+
+	shmem_obj = drm_gem_shmem_create(dev, args->size);
+	if (IS_ERR(shmem_obj))
+		return PTR_ERR(shmem_obj);
+
+	gem_obj = &shmem_obj->base;
+	rkt_obj = to_rocket_bo(gem_obj);
+
+	rkt_obj->size = args->size;
+	rkt_obj->offset = 0;
+	mutex_init(&rkt_obj->mutex);
+
+	ret = drm_gem_handle_create(file, gem_obj, &args->handle);
+	drm_gem_object_put(gem_obj);
+	if (ret)
+		goto err;
+
+	mutex_lock(&rdev->iommu_lock);
+
+	/* This will map the pages to the IOMMU linked to core 0 */
+	sgt = drm_gem_shmem_get_pages_sgt(shmem_obj);
+	if (IS_ERR(sgt)) {
+		ret = PTR_ERR(sgt);
+		goto err_unlock;
+	}
+
+	/* Map the pages to the IOMMUs linked to the other cores, so all cores can access this BO */
+	for (unsigned int core = 1; core < rdev->num_cores; core++) {
+
+		ret = iommu_map_sgtable(iommu_get_domain_for_dev(rdev->cores[core].dev),
+					sgt->sgl->dma_address,
+					sgt,
+					IOMMU_READ | IOMMU_WRITE);
+		if (ret < 0 || ret < args->size) {
+			DRM_ERROR("failed to map buffer: size=%d request_size=%u\n",
+				ret, args->size);
+			ret = -ENOMEM;
+			goto err_unlock;
+		}
+
+		/* iommu_map_sgtable might have aligned the size */
+		rkt_obj->size = ret;
+
+		dma_sync_sgtable_for_device(rdev->cores[core].dev, shmem_obj->sgt,
+					    DMA_BIDIRECTIONAL);
+	}
+
+	mutex_unlock(&rdev->iommu_lock);
+
+	args->offset = drm_vma_node_offset_addr(&gem_obj->vma_node);
+	args->dma_address = sg_dma_address(shmem_obj->sgt->sgl);
+
+	return 0;
+
+err_unlock:
+	mutex_unlock(&rdev->iommu_lock);
+err:
+	drm_gem_shmem_object_free(gem_obj);
+
+	return ret;
+}
diff --git a/drivers/accel/rocket/rocket_gem.h b/drivers/accel/rocket/rocket_gem.h
new file mode 100644
index 0000000000000000000000000000000000000000..19b0cf91ddd99bd126c1af30beb169d6101f6dee
--- /dev/null
+++ b/drivers/accel/rocket/rocket_gem.h
@@ -0,0 +1,27 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright 2024 Tomeu Vizoso <tomeu@tomeuvizoso.net> */
+
+#ifndef __ROCKET_GEM_H__
+#define __ROCKET_GEM_H__
+
+#include <drm/drm_gem_shmem_helper.h>
+
+struct rocket_gem_object {
+	struct drm_gem_shmem_object base;
+
+	struct mutex mutex;
+	size_t size;
+	u32 offset;
+};
+
+struct drm_gem_object *rocket_gem_create_object(struct drm_device *dev, size_t size);
+
+int rocket_ioctl_create_bo(struct drm_device *dev, void *data, struct drm_file *file);
+
+static inline
+struct  rocket_gem_object *to_rocket_bo(struct drm_gem_object *obj)
+{
+	return container_of(to_drm_gem_shmem_obj(obj), struct rocket_gem_object, base);
+}
+
+#endif
diff --git a/include/uapi/drm/rocket_accel.h b/include/uapi/drm/rocket_accel.h
new file mode 100644
index 0000000000000000000000000000000000000000..8338726a83c31b954608ca505cf78bcd70d3494b
--- /dev/null
+++ b/include/uapi/drm/rocket_accel.h
@@ -0,0 +1,43 @@
+/* SPDX-License-Identifier: MIT */
+/*
+ * Copyright © 2024 Tomeu Vizoso
+ */
+#ifndef _ROCKET_DRM_H_
+#define _ROCKET_DRM_H_
+
+#include "drm.h"
+
+#if defined(__cplusplus)
+extern "C" {
+#endif
+
+#define DRM_ROCKET_CREATE_BO			0x00
+
+#define DRM_IOCTL_ROCKET_CREATE_BO		DRM_IOWR(DRM_COMMAND_BASE + DRM_ROCKET_CREATE_BO, struct drm_rocket_create_bo)
+
+/**
+ * struct drm_rocket_create_bo - ioctl argument for creating Rocket BOs.
+ *
+ */
+struct drm_rocket_create_bo {
+	__u32 size;
+
+	/** Returned GEM handle for the BO. */
+	__u32 handle;
+
+	/**
+	 * Returned DMA address for the BO in the NPU address space.  This address
+	 * is private to the DRM fd and is valid for the lifetime of the GEM
+	 * handle.
+	 */
+	__u64 dma_address;
+
+	/** Offset into the drm node to use for subsequent mmap call. */
+	__u64 offset;
+};
+
+#if defined(__cplusplus)
+}
+#endif
+
+#endif /* _ROCKET_DRM_H_ */

-- 
2.48.1



^ permalink raw reply related	[flat|nested] 25+ messages in thread

* [PATCH v2 6/7] accel/rocket: Add job submission IOCTL
  2025-02-25  7:55 [PATCH v2 0/7] New DRM accel driver for Rockchip's RKNN NPU Tomeu Vizoso
                   ` (3 preceding siblings ...)
  2025-02-25  7:55 ` [PATCH v2 5/7] accel/rocket: Add IOCTL for BO creation Tomeu Vizoso
@ 2025-02-25  7:55 ` Tomeu Vizoso
  2025-02-25  8:44   ` Thomas Zimmermann
                     ` (2 more replies)
  2025-02-25  7:55 ` [PATCH v2 7/7] accel/rocket: Add IOCTLs for synchronizing memory accesses Tomeu Vizoso
       [not found] ` <20250225-6-10-rocket-v2-4-d4dbcfafc141@tomeuvizoso.net>
  6 siblings, 3 replies; 25+ messages in thread
From: Tomeu Vizoso @ 2025-02-25  7:55 UTC (permalink / raw)
  To: Rob Herring, Krzysztof Kozlowski, Conor Dooley, Heiko Stuebner,
	Oded Gabbay, Jonathan Corbet, Maarten Lankhorst, Maxime Ripard,
	Thomas Zimmermann, David Airlie, Simona Vetter, Sumit Semwal,
	Christian König, Sebastian Reichel, Jeffrey Hugo
  Cc: devicetree, linux-arm-kernel, linux-rockchip, linux-kernel,
	dri-devel, linux-doc, linux-media, linaro-mm-sig, Tomeu Vizoso

Using the DRM GPU scheduler infrastructure, with a scheduler for each
core.

Userspace can decide for a series of tasks to be executed sequentially
in the same core, so SRAM locality can be taken advantage of.

The job submission code was initially based on Panfrost.

v2:
- Remove hardcoded number of cores
- Misc. style fixes (Jeffrey Hugo)
- Repack IOCTL struct (Jeffrey Hugo)

Signed-off-by: Tomeu Vizoso <tomeu@tomeuvizoso.net>
---
 drivers/accel/rocket/Makefile        |   3 +-
 drivers/accel/rocket/rocket_core.c   |   6 +
 drivers/accel/rocket/rocket_core.h   |  14 +
 drivers/accel/rocket/rocket_device.c |   2 +
 drivers/accel/rocket/rocket_device.h |   2 +
 drivers/accel/rocket/rocket_drv.c    |  15 +
 drivers/accel/rocket/rocket_drv.h    |   4 +
 drivers/accel/rocket/rocket_job.c    | 710 +++++++++++++++++++++++++++++++++++
 drivers/accel/rocket/rocket_job.h    |  50 +++
 include/uapi/drm/rocket_accel.h      |  55 +++
 10 files changed, 860 insertions(+), 1 deletion(-)

diff --git a/drivers/accel/rocket/Makefile b/drivers/accel/rocket/Makefile
index 875cac2243d902694e0d5d05e60b4ae551a633c4..4d59036af8d9c213d3cac0559eb66e3ebb0320e7 100644
--- a/drivers/accel/rocket/Makefile
+++ b/drivers/accel/rocket/Makefile
@@ -6,4 +6,5 @@ rocket-y := \
 	rocket_core.o \
 	rocket_device.o \
 	rocket_drv.o \
-	rocket_gem.o
+	rocket_gem.o \
+	rocket_job.o
diff --git a/drivers/accel/rocket/rocket_core.c b/drivers/accel/rocket/rocket_core.c
index 09d966c826b5b1090a18cb24b3aa4aba286a12d4..2b522592693874eed90463e8f85653d5282ae5b8 100644
--- a/drivers/accel/rocket/rocket_core.c
+++ b/drivers/accel/rocket/rocket_core.c
@@ -6,6 +6,7 @@
 #include <linux/pm_runtime.h>
 
 #include "rocket_core.h"
+#include "rocket_job.h"
 #include "rocket_registers.h"
 
 static int rocket_clk_init(struct rocket_core *core)
@@ -48,6 +49,10 @@ int rocket_core_init(struct rocket_core *core)
 	if (IS_ERR(core->iomem))
 		return PTR_ERR(core->iomem);
 
+	err = rocket_job_init(core);
+	if (err)
+		return err;
+
 	pm_runtime_use_autosuspend(dev);
 	pm_runtime_set_autosuspend_delay(dev, 50); /* ~3 frames */
 	pm_runtime_enable(dev);
@@ -68,4 +73,5 @@ int rocket_core_init(struct rocket_core *core)
 void rocket_core_fini(struct rocket_core *core)
 {
 	pm_runtime_disable(core->dev);
+	rocket_job_fini(core);
 }
diff --git a/drivers/accel/rocket/rocket_core.h b/drivers/accel/rocket/rocket_core.h
index 2171eba7139ccc63fe24802dc81b4adb7f3abf31..045a46a2010a2ffd6122ed86c379e5fabc70365a 100644
--- a/drivers/accel/rocket/rocket_core.h
+++ b/drivers/accel/rocket/rocket_core.h
@@ -21,6 +21,20 @@ struct rocket_core {
 	void __iomem *iomem;
 	struct clk *a_clk;
 	struct clk *h_clk;
+
+	struct rocket_job *in_flight_job;
+
+	spinlock_t job_lock;
+
+	struct {
+		struct workqueue_struct *wq;
+		struct work_struct work;
+		atomic_t pending;
+	} reset;
+
+	struct drm_gpu_scheduler sched;
+	u64 fence_context;
+	u64 emit_seqno;
 };
 
 int rocket_core_init(struct rocket_core *core);
diff --git a/drivers/accel/rocket/rocket_device.c b/drivers/accel/rocket/rocket_device.c
index 9af36357caba7148dcac764c8222699f3b572d60..62c640e1e0200fe25b6834e45d71f6de139ff3ab 100644
--- a/drivers/accel/rocket/rocket_device.c
+++ b/drivers/accel/rocket/rocket_device.c
@@ -12,6 +12,7 @@ int rocket_device_init(struct rocket_device *rdev)
 	int err;
 
 	mutex_init(&rdev->iommu_lock);
+	mutex_init(&rdev->sched_lock);
 
 	rdev->clk_npu = devm_clk_get(dev, "npu");
 	rdev->pclk = devm_clk_get(dev, "pclk");
@@ -29,5 +30,6 @@ int rocket_device_init(struct rocket_device *rdev)
 void rocket_device_fini(struct rocket_device *rdev)
 {
 	rocket_core_fini(&rdev->cores[0]);
+	mutex_destroy(&rdev->sched_lock);
 	mutex_destroy(&rdev->iommu_lock);
 }
diff --git a/drivers/accel/rocket/rocket_device.h b/drivers/accel/rocket/rocket_device.h
index c6152569fdd9e5587c8e8d7b0d7c2e2a77af6000..4168ae8da2d38c2ea114b37c6e053b02611a0232 100644
--- a/drivers/accel/rocket/rocket_device.h
+++ b/drivers/accel/rocket/rocket_device.h
@@ -11,6 +11,8 @@
 struct rocket_device {
 	struct drm_device ddev;
 
+	struct mutex sched_lock;
+
 	struct clk *clk_npu;
 	struct clk *pclk;
 
diff --git a/drivers/accel/rocket/rocket_drv.c b/drivers/accel/rocket/rocket_drv.c
index e5612b52952fa7a0cd0af02aef314984bc483b05..a6b486e2d4f648d7b1d8831590b633bf661c7bc4 100644
--- a/drivers/accel/rocket/rocket_drv.c
+++ b/drivers/accel/rocket/rocket_drv.c
@@ -16,12 +16,14 @@
 
 #include "rocket_drv.h"
 #include "rocket_gem.h"
+#include "rocket_job.h"
 
 static int
 rocket_open(struct drm_device *dev, struct drm_file *file)
 {
 	struct rocket_device *rdev = to_rocket_device(dev);
 	struct rocket_file_priv *rocket_priv;
+	int ret;
 
 	rocket_priv = kzalloc(sizeof(*rocket_priv), GFP_KERNEL);
 	if (!rocket_priv)
@@ -30,7 +32,15 @@ rocket_open(struct drm_device *dev, struct drm_file *file)
 	rocket_priv->rdev = rdev;
 	file->driver_priv = rocket_priv;
 
+	ret = rocket_job_open(rocket_priv);
+	if (ret)
+		goto err_free;
+
 	return 0;
+
+err_free:
+	kfree(rocket_priv);
+	return ret;
 }
 
 static void
@@ -38,6 +48,7 @@ rocket_postclose(struct drm_device *dev, struct drm_file *file)
 {
 	struct rocket_file_priv *rocket_priv = file->driver_priv;
 
+	rocket_job_close(rocket_priv);
 	kfree(rocket_priv);
 }
 
@@ -46,6 +57,7 @@ static const struct drm_ioctl_desc rocket_drm_driver_ioctls[] = {
 	DRM_IOCTL_DEF_DRV(ROCKET_##n, rocket_ioctl_##func, 0)
 
 	ROCKET_IOCTL(CREATE_BO, create_bo),
+	ROCKET_IOCTL(SUBMIT, submit),
 };
 
 DEFINE_DRM_ACCEL_FOPS(rocket_accel_driver_fops);
@@ -245,6 +257,9 @@ static int rocket_device_runtime_suspend(struct device *dev)
 		if (dev != rdev->cores[core].dev)
 			continue;
 
+		if (!rocket_job_is_idle(&rdev->cores[core]))
+			return -EBUSY;
+
 		clk_disable_unprepare(rdev->cores[core].a_clk);
 		clk_disable_unprepare(rdev->cores[core].h_clk);
 
diff --git a/drivers/accel/rocket/rocket_drv.h b/drivers/accel/rocket/rocket_drv.h
index ccdd50c69d4c033eea18cb800407fdcfb3bf2e9b..54e21a61006057aee293496016e54b495a2f6d55 100644
--- a/drivers/accel/rocket/rocket_drv.h
+++ b/drivers/accel/rocket/rocket_drv.h
@@ -4,10 +4,14 @@
 #ifndef __ROCKET_DRV_H__
 #define __ROCKET_DRV_H__
 
+#include <drm/gpu_scheduler.h>
+
 #include "rocket_device.h"
 
 struct rocket_file_priv {
 	struct rocket_device *rdev;
+
+	struct drm_sched_entity sched_entity;
 };
 
 #endif
diff --git a/drivers/accel/rocket/rocket_job.c b/drivers/accel/rocket/rocket_job.c
new file mode 100644
index 0000000000000000000000000000000000000000..25b31f28e932aaee86173b9a0962932c9c640c03
--- /dev/null
+++ b/drivers/accel/rocket/rocket_job.c
@@ -0,0 +1,710 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright 2019 Linaro, Ltd, Rob Herring <robh@kernel.org> */
+/* Copyright 2019 Collabora ltd. */
+/* Copyright 2024 Tomeu Vizoso <tomeu@tomeuvizoso.net> */
+
+#include <drm/drm_file.h>
+#include <drm/drm_gem.h>
+#include <drm/rocket_accel.h>
+#include <linux/interrupt.h>
+#include <linux/platform_device.h>
+#include <linux/pm_runtime.h>
+
+#include "rocket_core.h"
+#include "rocket_device.h"
+#include "rocket_drv.h"
+#include "rocket_job.h"
+#include "rocket_registers.h"
+
+#define JOB_TIMEOUT_MS 500
+
+#define job_write(dev, reg, data) writel(data, dev->iomem + (reg))
+#define job_read(dev, reg) readl(dev->iomem + (reg))
+
+static struct rocket_job *
+to_rocket_job(struct drm_sched_job *sched_job)
+{
+	return container_of(sched_job, struct rocket_job, base);
+}
+
+struct rocket_fence {
+	struct dma_fence base;
+	struct drm_device *dev;
+	/* rocket seqno for signaled() test */
+	u64 seqno;
+	int queue;
+};
+
+static inline struct rocket_fence *
+to_rocket_fence(struct dma_fence *fence)
+{
+	return (struct rocket_fence *)fence;
+}
+
+static const char *rocket_fence_get_driver_name(struct dma_fence *fence)
+{
+	return "rocket";
+}
+
+static const char *rocket_fence_get_timeline_name(struct dma_fence *fence)
+{
+	return "rockchip-npu";
+}
+
+static const struct dma_fence_ops rocket_fence_ops = {
+	.get_driver_name = rocket_fence_get_driver_name,
+	.get_timeline_name = rocket_fence_get_timeline_name,
+};
+
+static struct dma_fence *rocket_fence_create(struct rocket_core *core)
+{
+	struct rocket_device *rdev = core->rdev;
+	struct rocket_fence *fence;
+
+	fence = kzalloc(sizeof(*fence), GFP_KERNEL);
+	if (!fence)
+		return ERR_PTR(-ENOMEM);
+
+	fence->dev = &rdev->ddev;
+	fence->seqno = ++core->emit_seqno;
+	dma_fence_init(&fence->base, &rocket_fence_ops, &core->job_lock,
+		       core->fence_context, fence->seqno);
+
+	return &fence->base;
+}
+
+static int
+rocket_copy_tasks(struct drm_device *dev,
+		  struct drm_file *file_priv,
+		  struct drm_rocket_job *job,
+		  struct rocket_job *rjob)
+{
+	struct drm_rocket_task *tasks;
+	int ret = 0;
+	int i;
+
+	rjob->task_count = job->task_count;
+
+	if (!rjob->task_count)
+		return 0;
+
+	tasks = kvmalloc_array(rjob->task_count, sizeof(*tasks), GFP_KERNEL);
+	if (!tasks) {
+		ret = -ENOMEM;
+		DRM_DEBUG("Failed to allocate incoming tasks\n");
+		goto fail;
+	}
+
+	if (copy_from_user(tasks,
+			   (void __user *)(uintptr_t)job->tasks,
+			   rjob->task_count * sizeof(*tasks))) {
+		ret = -EFAULT;
+		DRM_DEBUG("Failed to copy incoming tasks\n");
+		goto fail;
+	}
+
+	rjob->tasks = kvmalloc_array(job->task_count, sizeof(*rjob->tasks), GFP_KERNEL);
+	if (!rjob->tasks) {
+		DRM_DEBUG("Failed to allocate task array\n");
+		ret = -ENOMEM;
+		goto fail;
+	}
+
+	for (i = 0; i < rjob->task_count; i++) {
+		if (tasks[i].regcmd_count == 0) {
+			ret = -EINVAL;
+			goto fail;
+		}
+		rjob->tasks[i].regcmd = tasks[i].regcmd;
+		rjob->tasks[i].regcmd_count = tasks[i].regcmd_count;
+	}
+
+fail:
+	kvfree(tasks);
+	return ret;
+}
+
+static void rocket_job_hw_submit(struct rocket_core *core, struct rocket_job *job)
+{
+	struct rocket_task *task;
+	bool task_pp_en = 1;
+	bool task_count = 1;
+
+	/* GO ! */
+
+	/* Don't queue the job if a reset is in progress */
+	if (!atomic_read(&core->reset.pending)) {
+
+		task = &job->tasks[job->next_task_idx];
+		job->next_task_idx++;   /* TODO: Do this only after a successful run? */
+
+		rocket_write(core, REG_PC_BASE_ADDRESS, 0x1);
+
+		rocket_write(core, REG_CNA_S_POINTER, 0xe + 0x10000000 * core->index);
+		rocket_write(core, REG_CORE_S_POINTER, 0xe + 0x10000000 * core->index);
+
+		rocket_write(core, REG_PC_BASE_ADDRESS, task->regcmd);
+		rocket_write(core, REG_PC_REGISTER_AMOUNTS, (task->regcmd_count + 1) / 2 - 1);
+
+		rocket_write(core, REG_PC_INTERRUPT_MASK,
+			     PC_INTERRUPT_MASK_DPU_0 | PC_INTERRUPT_MASK_DPU_1);
+		rocket_write(core, REG_PC_INTERRUPT_CLEAR,
+			     PC_INTERRUPT_CLEAR_DPU_0 | PC_INTERRUPT_CLEAR_DPU_1);
+
+		rocket_write(core, REG_PC_TASK_CON, ((0x6 | task_pp_en) << 12) | task_count);
+
+		rocket_write(core, REG_PC_TASK_DMA_BASE_ADDR, 0x0);
+
+		rocket_write(core, REG_PC_OPERATION_ENABLE, 0x1);
+
+		dev_dbg(core->dev,
+			"Submitted regcmd at 0x%llx to core %d",
+			task->regcmd, core->index);
+	}
+}
+
+static int rocket_acquire_object_fences(struct drm_gem_object **bos,
+					int bo_count,
+					struct drm_sched_job *job,
+					bool is_write)
+{
+	int i, ret;
+
+	for (i = 0; i < bo_count; i++) {
+		ret = dma_resv_reserve_fences(bos[i]->resv, 1);
+		if (ret)
+			return ret;
+
+		ret = drm_sched_job_add_implicit_dependencies(job, bos[i],
+							      is_write);
+		if (ret)
+			return ret;
+	}
+
+	return 0;
+}
+
+static void rocket_attach_object_fences(struct drm_gem_object **bos,
+					  int bo_count,
+					  struct dma_fence *fence)
+{
+	int i;
+
+	for (i = 0; i < bo_count; i++)
+		dma_resv_add_fence(bos[i]->resv, fence, DMA_RESV_USAGE_WRITE);
+}
+
+static int rocket_job_push(struct rocket_job *job)
+{
+	struct rocket_device *rdev = job->rdev;
+	struct drm_gem_object **bos;
+	struct ww_acquire_ctx acquire_ctx;
+	int ret = 0;
+
+	bos = kvmalloc_array(job->in_bo_count + job->out_bo_count, sizeof(void *),
+			     GFP_KERNEL);
+	memcpy(bos, job->in_bos, job->in_bo_count * sizeof(void *));
+	memcpy(&bos[job->in_bo_count], job->out_bos, job->out_bo_count * sizeof(void *));
+
+	ret = drm_gem_lock_reservations(bos, job->in_bo_count + job->out_bo_count, &acquire_ctx);
+	if (ret)
+		goto err;
+
+	mutex_lock(&rdev->sched_lock);
+	drm_sched_job_arm(&job->base);
+
+	job->inference_done_fence = dma_fence_get(&job->base.s_fence->finished);
+
+	ret = rocket_acquire_object_fences(job->in_bos, job->in_bo_count, &job->base, false);
+	if (ret) {
+		mutex_unlock(&rdev->sched_lock);
+		goto err_unlock;
+	}
+
+	ret = rocket_acquire_object_fences(job->out_bos, job->out_bo_count, &job->base, true);
+	if (ret) {
+		mutex_unlock(&rdev->sched_lock);
+		goto err_unlock;
+	}
+
+	kref_get(&job->refcount); /* put by scheduler job completion */
+
+	drm_sched_entity_push_job(&job->base);
+
+	mutex_unlock(&rdev->sched_lock);
+
+	rocket_attach_object_fences(job->out_bos, job->out_bo_count, job->inference_done_fence);
+
+err_unlock:
+	drm_gem_unlock_reservations(bos, job->in_bo_count + job->out_bo_count, &acquire_ctx);
+err:
+	kfree(bos);
+
+	return ret;
+}
+
+static void rocket_job_cleanup(struct kref *ref)
+{
+	struct rocket_job *job = container_of(ref, struct rocket_job,
+						refcount);
+	unsigned int i;
+
+	dma_fence_put(job->done_fence);
+	dma_fence_put(job->inference_done_fence);
+
+	if (job->in_bos) {
+		for (i = 0; i < job->in_bo_count; i++)
+			drm_gem_object_put(job->in_bos[i]);
+
+		kvfree(job->in_bos);
+	}
+
+	if (job->out_bos) {
+		for (i = 0; i < job->out_bo_count; i++)
+			drm_gem_object_put(job->out_bos[i]);
+
+		kvfree(job->out_bos);
+	}
+
+	kfree(job->tasks);
+
+	kfree(job);
+}
+
+static void rocket_job_put(struct rocket_job *job)
+{
+	kref_put(&job->refcount, rocket_job_cleanup);
+}
+
+static void rocket_job_free(struct drm_sched_job *sched_job)
+{
+	struct rocket_job *job = to_rocket_job(sched_job);
+
+	drm_sched_job_cleanup(sched_job);
+
+	rocket_job_put(job);
+}
+
+static struct rocket_core *sched_to_core(struct rocket_device *rdev,
+					 struct drm_gpu_scheduler *sched)
+{
+	unsigned int core;
+
+	for (core = 0; core < rdev->num_cores; core++) {
+		if (&rdev->cores[core].sched == sched)
+			return &rdev->cores[core];
+	}
+
+	return NULL;
+}
+
+static struct dma_fence *rocket_job_run(struct drm_sched_job *sched_job)
+{
+	struct rocket_job *job = to_rocket_job(sched_job);
+	struct rocket_device *rdev = job->rdev;
+	struct rocket_core *core = sched_to_core(rdev, sched_job->sched);
+	struct dma_fence *fence = NULL;
+	int ret;
+
+	if (unlikely(job->base.s_fence->finished.error))
+		return NULL;
+
+	/*
+	 * Nothing to execute: can happen if the job has finished while
+	 * we were resetting the GPU.
+	 */
+	if (job->next_task_idx == job->task_count)
+		return NULL;
+
+	fence = rocket_fence_create(core);
+	if (IS_ERR(fence))
+		return fence;
+
+	if (job->done_fence)
+		dma_fence_put(job->done_fence);
+	job->done_fence = dma_fence_get(fence);
+
+	ret = pm_runtime_get_sync(core->dev);
+	if (ret < 0)
+		return fence;
+
+	spin_lock(&core->job_lock);
+
+	core->in_flight_job = job;
+	rocket_job_hw_submit(core, job);
+
+	spin_unlock(&core->job_lock);
+
+	return fence;
+}
+
+static void rocket_job_handle_done(struct rocket_core *core,
+				   struct rocket_job *job)
+{
+	if (job->next_task_idx < job->task_count) {
+		rocket_job_hw_submit(core, job);
+		return;
+	}
+
+	core->in_flight_job = NULL;
+	dma_fence_signal_locked(job->done_fence);
+	pm_runtime_put_autosuspend(core->dev);
+}
+
+static void rocket_job_handle_irq(struct rocket_core *core)
+{
+	uint32_t status, raw_status;
+
+	pm_runtime_mark_last_busy(core->dev);
+
+	status = rocket_read(core, REG_PC_INTERRUPT_STATUS);
+	raw_status = rocket_read(core, REG_PC_INTERRUPT_RAW_STATUS);
+
+	rocket_write(core, REG_PC_OPERATION_ENABLE, 0x0);
+	rocket_write(core, REG_PC_INTERRUPT_CLEAR, 0x1ffff);
+
+	spin_lock(&core->job_lock);
+
+	if (core->in_flight_job)
+		rocket_job_handle_done(core, core->in_flight_job);
+
+	spin_unlock(&core->job_lock);
+}
+
+static void
+rocket_reset(struct rocket_core *core, struct drm_sched_job *bad)
+{
+	bool cookie;
+
+	if (!atomic_read(&core->reset.pending))
+		return;
+
+	/*
+	 * Stop the scheduler.
+	 *
+	 * FIXME: We temporarily get out of the dma_fence_signalling section
+	 * because the cleanup path generate lockdep splats when taking locks
+	 * to release job resources. We should rework the code to follow this
+	 * pattern:
+	 *
+	 *	try_lock
+	 *	if (locked)
+	 *		release
+	 *	else
+	 *		schedule_work_to_release_later
+	 */
+	drm_sched_stop(&core->sched, bad);
+
+	cookie = dma_fence_begin_signalling();
+
+	if (bad)
+		drm_sched_increase_karma(bad);
+
+	/*
+	 * Mask job interrupts and synchronize to make sure we won't be
+	 * interrupted during our reset.
+	 */
+	rocket_write(core, REG_PC_INTERRUPT_MASK, 0x0);
+	synchronize_irq(core->irq);
+
+	/* Handle the remaining interrupts before we reset. */
+	rocket_job_handle_irq(core);
+
+	/*
+	 * Remaining interrupts have been handled, but we might still have
+	 * stuck jobs. Let's make sure the PM counters stay balanced by
+	 * manually calling pm_runtime_put_noidle() and
+	 * rocket_devfreq_record_idle() for each stuck job.
+	 * Let's also make sure the cycle counting register's refcnt is
+	 * kept balanced to prevent it from running forever
+	 */
+	spin_lock(&core->job_lock);
+	if (core->in_flight_job)
+		pm_runtime_put_noidle(core->dev);
+
+	core->in_flight_job = NULL;
+	spin_unlock(&core->job_lock);
+
+	/* Proceed with reset now. */
+	pm_runtime_force_suspend(core->dev);
+	pm_runtime_force_resume(core->dev);
+
+	/* GPU has been reset, we can clear the reset pending bit. */
+	atomic_set(&core->reset.pending, 0);
+
+	/*
+	 * Now resubmit jobs that were previously queued but didn't have a
+	 * chance to finish.
+	 * FIXME: We temporarily get out of the DMA fence signalling section
+	 * while resubmitting jobs because the job submission logic will
+	 * allocate memory with the GFP_KERNEL flag which can trigger memory
+	 * reclaim and exposes a lock ordering issue.
+	 */
+	dma_fence_end_signalling(cookie);
+	drm_sched_resubmit_jobs(&core->sched);
+	cookie = dma_fence_begin_signalling();
+
+	/* Restart the scheduler */
+	drm_sched_start(&core->sched, 0);
+
+	dma_fence_end_signalling(cookie);
+}
+
+static enum drm_gpu_sched_stat rocket_job_timedout(struct drm_sched_job *sched_job)
+{
+	struct rocket_job *job = to_rocket_job(sched_job);
+	struct rocket_device *rdev = job->rdev;
+	struct rocket_core *core = sched_to_core(rdev, sched_job->sched);
+
+	/*
+	 * If the GPU managed to complete this jobs fence, the timeout is
+	 * spurious. Bail out.
+	 */
+	if (dma_fence_is_signaled(job->done_fence))
+		return DRM_GPU_SCHED_STAT_NOMINAL;
+
+	/*
+	 * Rocket IRQ handler may take a long time to process an interrupt
+	 * if there is another IRQ handler hogging the processing.
+	 * For example, the HDMI encoder driver might be stuck in the IRQ
+	 * handler for a significant time in a case of bad cable connection.
+	 * In order to catch such cases and not report spurious rocket
+	 * job timeouts, synchronize the IRQ handler and re-check the fence
+	 * status.
+	 */
+	synchronize_irq(core->irq);
+
+	if (dma_fence_is_signaled(job->done_fence)) {
+		dev_warn(core->dev, "unexpectedly high interrupt latency\n");
+		return DRM_GPU_SCHED_STAT_NOMINAL;
+	}
+
+	dev_err(core->dev, "gpu sched timeout");
+
+	atomic_set(&core->reset.pending, 1);
+	rocket_reset(core, sched_job);
+
+	return DRM_GPU_SCHED_STAT_NOMINAL;
+}
+
+static void rocket_reset_work(struct work_struct *work)
+{
+	struct rocket_core *core;
+
+	core = container_of(work, struct rocket_core, reset.work);
+	rocket_reset(core, NULL);
+}
+
+static const struct drm_sched_backend_ops rocket_sched_ops = {
+	.run_job = rocket_job_run,
+	.timedout_job = rocket_job_timedout,
+	.free_job = rocket_job_free
+};
+
+static irqreturn_t rocket_job_irq_handler_thread(int irq, void *data)
+{
+	struct rocket_core *core = data;
+
+	rocket_job_handle_irq(core);
+
+	return IRQ_HANDLED;
+}
+
+static irqreturn_t rocket_job_irq_handler(int irq, void *data)
+{
+	struct rocket_core *core = data;
+	uint32_t raw_status = rocket_read(core, REG_PC_INTERRUPT_RAW_STATUS);
+
+	WARN_ON(raw_status & PC_INTERRUPT_RAW_STATUS_DMA_READ_ERROR);
+	WARN_ON(raw_status & PC_INTERRUPT_RAW_STATUS_DMA_READ_ERROR);
+
+	if (!(raw_status & PC_INTERRUPT_RAW_STATUS_DPU_0 ||
+	      raw_status & PC_INTERRUPT_RAW_STATUS_DPU_1))
+		return IRQ_NONE;
+
+	rocket_write(core, REG_PC_INTERRUPT_MASK, 0x0);
+
+	return IRQ_WAKE_THREAD;
+}
+
+int rocket_job_init(struct rocket_core *core)
+{
+	int ret;
+
+	INIT_WORK(&core->reset.work, rocket_reset_work);
+	spin_lock_init(&core->job_lock);
+
+	core->irq = platform_get_irq(to_platform_device(core->dev), 0);
+	if (core->irq < 0)
+		return core->irq;
+
+	ret = devm_request_threaded_irq(core->dev, core->irq,
+					rocket_job_irq_handler,
+					rocket_job_irq_handler_thread,
+					IRQF_SHARED, KBUILD_MODNAME "-job",
+					core);
+	if (ret) {
+		dev_err(core->dev, "failed to request job irq");
+		return ret;
+	}
+
+	core->reset.wq = alloc_ordered_workqueue("rocket-reset-%d", 0, core->index);
+	if (!core->reset.wq)
+		return -ENOMEM;
+
+	core->fence_context = dma_fence_context_alloc(1);
+
+	ret = drm_sched_init(&core->sched,
+				&rocket_sched_ops, NULL,
+				DRM_SCHED_PRIORITY_COUNT,
+				1, 0,
+				msecs_to_jiffies(JOB_TIMEOUT_MS),
+				core->reset.wq,
+				NULL, "rocket", core->dev);
+	if (ret) {
+		dev_err(core->dev, "Failed to create scheduler: %d.", ret);
+		goto err_sched;
+	}
+
+	return 0;
+
+err_sched:
+	drm_sched_fini(&core->sched);
+
+	destroy_workqueue(core->reset.wq);
+	return ret;
+}
+
+void rocket_job_fini(struct rocket_core *core)
+{
+	drm_sched_fini(&core->sched);
+
+	cancel_work_sync(&core->reset.work);
+	destroy_workqueue(core->reset.wq);
+}
+
+int rocket_job_open(struct rocket_file_priv *rocket_priv)
+{
+	struct rocket_device *rdev = rocket_priv->rdev;
+	struct drm_gpu_scheduler **scheds = kmalloc_array(rdev->num_cores, sizeof(scheds),
+							  GFP_KERNEL);
+	unsigned int core;
+	int ret;
+
+	for (core = 0; core < rdev->num_cores; core++)
+		scheds[core] = &rdev->cores[core].sched;
+
+	ret = drm_sched_entity_init(&rocket_priv->sched_entity,
+				    DRM_SCHED_PRIORITY_NORMAL,
+				    scheds,
+				    rdev->num_cores, NULL);
+	if (WARN_ON(ret))
+		return ret;
+
+	return 0;
+}
+
+void rocket_job_close(struct rocket_file_priv *rocket_priv)
+{
+	struct drm_sched_entity *entity = &rocket_priv->sched_entity;
+
+	kfree(entity->sched_list);
+	drm_sched_entity_destroy(entity);
+}
+
+int rocket_job_is_idle(struct rocket_core *core)
+{
+	/* If there are any jobs in this HW queue, we're not idle */
+	if (atomic_read(&core->sched.credit_count))
+		return false;
+
+	return true;
+}
+
+static int rocket_ioctl_submit_job(struct drm_device *dev, struct drm_file *file,
+				   struct drm_rocket_job *job)
+{
+	struct rocket_device *rdev = to_rocket_device(dev);
+	struct rocket_file_priv *file_priv = file->driver_priv;
+	struct rocket_job *rjob = NULL;
+	int ret = 0;
+
+	if (job->task_count == 0)
+		return -EINVAL;
+
+	rjob = kzalloc(sizeof(*rjob), GFP_KERNEL);
+	if (!rjob)
+		return -ENOMEM;
+
+	kref_init(&rjob->refcount);
+
+	rjob->rdev = rdev;
+
+	ret = drm_sched_job_init(&rjob->base,
+				 &file_priv->sched_entity,
+				 1, NULL);
+	if (ret)
+		goto out_put_job;
+
+	ret = rocket_copy_tasks(dev, file, job, rjob);
+	if (ret)
+		goto out_cleanup_job;
+
+	ret = drm_gem_objects_lookup(file,
+				     (void __user *)(uintptr_t)job->in_bo_handles,
+				     job->in_bo_handle_count, &rjob->in_bos);
+	if (ret)
+		goto out_cleanup_job;
+
+	rjob->in_bo_count = job->in_bo_handle_count;
+
+	ret = drm_gem_objects_lookup(file,
+				     (void __user *)(uintptr_t)job->out_bo_handles,
+				     job->out_bo_handle_count, &rjob->out_bos);
+	if (ret)
+		goto out_cleanup_job;
+
+	rjob->out_bo_count = job->out_bo_handle_count;
+
+	ret = rocket_job_push(rjob);
+	if (ret)
+		goto out_cleanup_job;
+
+out_cleanup_job:
+	if (ret)
+		drm_sched_job_cleanup(&rjob->base);
+out_put_job:
+	rocket_job_put(rjob);
+
+	return ret;
+}
+
+int rocket_ioctl_submit(struct drm_device *dev, void *data, struct drm_file *file)
+{
+	struct drm_rocket_submit *args = data;
+	struct drm_rocket_job *jobs;
+	int ret = 0;
+	unsigned int i = 0;
+
+	jobs = kvmalloc_array(args->job_count, sizeof(*jobs), GFP_KERNEL);
+	if (!jobs) {
+		DRM_DEBUG("Failed to allocate incoming job array\n");
+		return -ENOMEM;
+	}
+
+	if (copy_from_user(jobs,
+			   (void __user *)(uintptr_t)args->jobs,
+			   args->job_count * sizeof(*jobs))) {
+		ret = -EFAULT;
+		DRM_DEBUG("Failed to copy incoming job array\n");
+		goto exit;
+	}
+
+	for (i = 0; i < args->job_count; i++)
+		rocket_ioctl_submit_job(dev, file, &jobs[i]);
+
+exit:
+	kfree(jobs);
+
+	return ret;
+}
diff --git a/drivers/accel/rocket/rocket_job.h b/drivers/accel/rocket/rocket_job.h
new file mode 100644
index 0000000000000000000000000000000000000000..93fa1f988c72adb7a405acbf08c1c9b87d22f9c5
--- /dev/null
+++ b/drivers/accel/rocket/rocket_job.h
@@ -0,0 +1,50 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright 2024 Tomeu Vizoso <tomeu@tomeuvizoso.net> */
+
+#ifndef __ROCKET_JOB_H__
+#define __ROCKET_JOB_H__
+
+#include <drm/drm_drv.h>
+#include <drm/gpu_scheduler.h>
+
+#include "rocket_core.h"
+#include "rocket_drv.h"
+
+struct rocket_task {
+	u64 regcmd;
+	u32 regcmd_count;
+};
+
+struct rocket_job {
+	struct drm_sched_job base;
+
+	struct rocket_device *rdev;
+
+	struct drm_gem_object **in_bos;
+	struct drm_gem_object **out_bos;
+
+	u32 in_bo_count;
+	u32 out_bo_count;
+
+	struct rocket_task *tasks;
+	u32 task_count;
+	u32 next_task_idx;
+
+	/* Fence to be signaled by drm-sched once its done with the job */
+	struct dma_fence *inference_done_fence;
+
+	/* Fence to be signaled by IRQ handler when the job is complete. */
+	struct dma_fence *done_fence;
+
+	struct kref refcount;
+};
+
+int rocket_ioctl_submit(struct drm_device *dev, void *data, struct drm_file *file);
+
+int rocket_job_init(struct rocket_core *core);
+void rocket_job_fini(struct rocket_core *core);
+int rocket_job_open(struct rocket_file_priv *rocket_priv);
+void rocket_job_close(struct rocket_file_priv *rocket_priv);
+int rocket_job_is_idle(struct rocket_core *core);
+
+#endif
diff --git a/include/uapi/drm/rocket_accel.h b/include/uapi/drm/rocket_accel.h
index 8338726a83c31b954608ca505cf78bcd70d3494b..eb886351134ebef62969b1e1182ccc174f88fe9d 100644
--- a/include/uapi/drm/rocket_accel.h
+++ b/include/uapi/drm/rocket_accel.h
@@ -12,8 +12,10 @@ extern "C" {
 #endif
 
 #define DRM_ROCKET_CREATE_BO			0x00
+#define DRM_ROCKET_SUBMIT			0x01
 
 #define DRM_IOCTL_ROCKET_CREATE_BO		DRM_IOWR(DRM_COMMAND_BASE + DRM_ROCKET_CREATE_BO, struct drm_rocket_create_bo)
+#define DRM_IOCTL_ROCKET_SUBMIT			DRM_IOW(DRM_COMMAND_BASE + DRM_ROCKET_SUBMIT, struct drm_rocket_submit)
 
 /**
  * struct drm_rocket_create_bo - ioctl argument for creating Rocket BOs.
@@ -36,6 +38,59 @@ struct drm_rocket_create_bo {
 	__u64 offset;
 };
 
+/**
+ * struct drm_rocket_task - A task to be run on the NPU
+ *
+ * A task is the smallest unit of work that can be run on the NPU.
+ */
+struct drm_rocket_task {
+	/** DMA address to NPU mapping of register command buffer */
+	__u64 regcmd;
+
+	/** Number of commands in the register command buffer */
+	__u32 regcmd_count;
+};
+
+/**
+ * struct drm_rocket_job - A job to be run on the NPU
+ *
+ * The kernel will schedule the execution of this job taking into account its
+ * dependencies with other jobs. All tasks in the same job will be executed
+ * sequentially on the same core, to benefit from memory residency in SRAM.
+ */
+struct drm_rocket_job {
+	/** Pointer to an array of struct drm_rocket_task. */
+	__u64 tasks;
+
+	/** Pointer to a u32 array of the BOs that are read by the job. */
+	__u64 in_bo_handles;
+
+	/** Pointer to a u32 array of the BOs that are written to by the job. */
+	__u64 out_bo_handles;
+
+	/** Number of tasks passed in. */
+	__u32 task_count;
+
+	/** Number of input BO handles passed in (size is that times 4). */
+	__u32 in_bo_handle_count;
+
+	/** Number of output BO handles passed in (size is that times 4). */
+	__u32 out_bo_handle_count;
+};
+
+/**
+ * struct drm_rocket_submit - ioctl argument for submitting commands to the NPU.
+ *
+ * The kernel will schedule the execution of these jobs in dependency order.
+ */
+struct drm_rocket_submit {
+	/** Pointer to an array of struct drm_rocket_job. */
+	__u64 jobs;
+
+	/** Number of jobs passed in. */
+	__u32 job_count;
+};
+
 #if defined(__cplusplus)
 }
 #endif

-- 
2.48.1



^ permalink raw reply related	[flat|nested] 25+ messages in thread

* [PATCH v2 7/7] accel/rocket: Add IOCTLs for synchronizing memory accesses
  2025-02-25  7:55 [PATCH v2 0/7] New DRM accel driver for Rockchip's RKNN NPU Tomeu Vizoso
                   ` (4 preceding siblings ...)
  2025-02-25  7:55 ` [PATCH v2 6/7] accel/rocket: Add job submission IOCTL Tomeu Vizoso
@ 2025-02-25  7:55 ` Tomeu Vizoso
  2025-03-21 16:15   ` Jeffrey Hugo
       [not found] ` <20250225-6-10-rocket-v2-4-d4dbcfafc141@tomeuvizoso.net>
  6 siblings, 1 reply; 25+ messages in thread
From: Tomeu Vizoso @ 2025-02-25  7:55 UTC (permalink / raw)
  To: Rob Herring, Krzysztof Kozlowski, Conor Dooley, Heiko Stuebner,
	Oded Gabbay, Jonathan Corbet, Maarten Lankhorst, Maxime Ripard,
	Thomas Zimmermann, David Airlie, Simona Vetter, Sumit Semwal,
	Christian König, Sebastian Reichel, Jeffrey Hugo
  Cc: devicetree, linux-arm-kernel, linux-rockchip, linux-kernel,
	dri-devel, linux-doc, linux-media, linaro-mm-sig, Tomeu Vizoso

The NPU cores have their own access to the memory bus, and this isn't
cache coherent with the CPUs.

Add IOCTLs so userspace can mark when the caches need to be flushed, and
also when a writer job needs to be waited for before the buffer can be
accessed from the CPU.

Initially based on the same IOCTLs from the Etnaviv driver.

v2:
- Don't break UABI by reordering the IOCTL IDs (Jeffrey Hugo)

Signed-off-by: Tomeu Vizoso <tomeu@tomeuvizoso.net>
---
 drivers/accel/rocket/rocket_drv.c |  2 ++
 drivers/accel/rocket/rocket_gem.c | 75 +++++++++++++++++++++++++++++++++++++++
 drivers/accel/rocket/rocket_gem.h |  5 +++
 include/uapi/drm/rocket_accel.h   | 18 ++++++++++
 4 files changed, 100 insertions(+)

diff --git a/drivers/accel/rocket/rocket_drv.c b/drivers/accel/rocket/rocket_drv.c
index a6b486e2d4f648d7b1d8831590b633bf661c7bc4..cc3531f66839b777e7abc1d41cb50cffd9685ea0 100644
--- a/drivers/accel/rocket/rocket_drv.c
+++ b/drivers/accel/rocket/rocket_drv.c
@@ -58,6 +58,8 @@ static const struct drm_ioctl_desc rocket_drm_driver_ioctls[] = {
 
 	ROCKET_IOCTL(CREATE_BO, create_bo),
 	ROCKET_IOCTL(SUBMIT, submit),
+	ROCKET_IOCTL(PREP_BO, prep_bo),
+	ROCKET_IOCTL(FINI_BO, fini_bo),
 };
 
 DEFINE_DRM_ACCEL_FOPS(rocket_accel_driver_fops);
diff --git a/drivers/accel/rocket/rocket_gem.c b/drivers/accel/rocket/rocket_gem.c
index d5337cf1e275c249a1491d0dd28e6b8ccd2ff2cb..6a0a7f6958c34bce4611cfdf033590029c3ac026 100644
--- a/drivers/accel/rocket/rocket_gem.c
+++ b/drivers/accel/rocket/rocket_gem.c
@@ -139,3 +139,78 @@ int rocket_ioctl_create_bo(struct drm_device *dev, void *data, struct drm_file *
 
 	return ret;
 }
+
+static inline enum dma_data_direction rocket_op_to_dma_dir(u32 op)
+{
+	if (op & ROCKET_PREP_READ)
+		return DMA_FROM_DEVICE;
+	else if (op & ROCKET_PREP_WRITE)
+		return DMA_TO_DEVICE;
+	else
+		return DMA_BIDIRECTIONAL;
+}
+
+int rocket_ioctl_prep_bo(struct drm_device *dev, void *data, struct drm_file *file)
+{
+	struct drm_rocket_prep_bo *args = data;
+	unsigned long timeout = drm_timeout_abs_to_jiffies(args->timeout_ns);
+	struct rocket_device *rdev = to_rocket_device(dev);
+	struct drm_gem_object *gem_obj;
+	struct drm_gem_shmem_object *shmem_obj;
+	bool write = !!(args->op & ROCKET_PREP_WRITE);
+	long ret = 0;
+
+	if (args->op & ~(ROCKET_PREP_READ | ROCKET_PREP_WRITE))
+		return -EINVAL;
+
+	gem_obj = drm_gem_object_lookup(file, args->handle);
+	if (!gem_obj)
+		return -ENOENT;
+
+	ret = dma_resv_wait_timeout(gem_obj->resv, dma_resv_usage_rw(write),
+				    true, timeout);
+	if (!ret)
+		ret = timeout ? -ETIMEDOUT : -EBUSY;
+
+	shmem_obj = &to_rocket_bo(gem_obj)->base;
+
+	for (unsigned int core = 1; core < rdev->num_cores; core++) {
+		dma_sync_sgtable_for_cpu(rdev->cores[core].dev, shmem_obj->sgt,
+					 rocket_op_to_dma_dir(args->op));
+	}
+
+	to_rocket_bo(gem_obj)->last_cpu_prep_op = args->op;
+
+	drm_gem_object_put(gem_obj);
+
+	return ret;
+}
+
+int rocket_ioctl_fini_bo(struct drm_device *dev, void *data, struct drm_file *file)
+{
+	struct drm_rocket_fini_bo *args = data;
+	struct drm_gem_object *gem_obj;
+	struct rocket_gem_object *rkt_obj;
+	struct drm_gem_shmem_object *shmem_obj;
+	struct rocket_device *rdev = to_rocket_device(dev);
+
+	gem_obj = drm_gem_object_lookup(file, args->handle);
+	if (!gem_obj)
+		return -ENOENT;
+
+	rkt_obj = to_rocket_bo(gem_obj);
+	shmem_obj = &rkt_obj->base;
+
+	WARN_ON(rkt_obj->last_cpu_prep_op == 0);
+
+	for (unsigned int core = 1; core < rdev->num_cores; core++) {
+		dma_sync_sgtable_for_device(rdev->cores[core].dev, shmem_obj->sgt,
+					    rocket_op_to_dma_dir(rkt_obj->last_cpu_prep_op));
+	}
+
+	rkt_obj->last_cpu_prep_op = 0;
+
+	drm_gem_object_put(gem_obj);
+
+	return 0;
+}
diff --git a/drivers/accel/rocket/rocket_gem.h b/drivers/accel/rocket/rocket_gem.h
index 19b0cf91ddd99bd126c1af30beb169d6101f6dee..1fd11441f5856c4b10ed77b63f34f157cd13e242 100644
--- a/drivers/accel/rocket/rocket_gem.h
+++ b/drivers/accel/rocket/rocket_gem.h
@@ -12,12 +12,17 @@ struct rocket_gem_object {
 	struct mutex mutex;
 	size_t size;
 	u32 offset;
+	u32 last_cpu_prep_op;
 };
 
 struct drm_gem_object *rocket_gem_create_object(struct drm_device *dev, size_t size);
 
 int rocket_ioctl_create_bo(struct drm_device *dev, void *data, struct drm_file *file);
 
+int rocket_ioctl_prep_bo(struct drm_device *dev, void *data, struct drm_file *file);
+
+int rocket_ioctl_fini_bo(struct drm_device *dev, void *data, struct drm_file *file);
+
 static inline
 struct  rocket_gem_object *to_rocket_bo(struct drm_gem_object *obj)
 {
diff --git a/include/uapi/drm/rocket_accel.h b/include/uapi/drm/rocket_accel.h
index eb886351134ebef62969b1e1182ccc174f88fe9d..ad6589884880126a248fa646aab7c4034600c11c 100644
--- a/include/uapi/drm/rocket_accel.h
+++ b/include/uapi/drm/rocket_accel.h
@@ -13,9 +13,13 @@ extern "C" {
 
 #define DRM_ROCKET_CREATE_BO			0x00
 #define DRM_ROCKET_SUBMIT			0x01
+#define DRM_ROCKET_PREP_BO			0x02
+#define DRM_ROCKET_FINI_BO			0x03
 
 #define DRM_IOCTL_ROCKET_CREATE_BO		DRM_IOWR(DRM_COMMAND_BASE + DRM_ROCKET_CREATE_BO, struct drm_rocket_create_bo)
 #define DRM_IOCTL_ROCKET_SUBMIT			DRM_IOW(DRM_COMMAND_BASE + DRM_ROCKET_SUBMIT, struct drm_rocket_submit)
+#define DRM_IOCTL_ROCKET_PREP_BO		DRM_IOW(DRM_COMMAND_BASE + DRM_ROCKET_PREP_BO, struct drm_rocket_prep_bo)
+#define DRM_IOCTL_ROCKET_FINI_BO		DRM_IOW(DRM_COMMAND_BASE + DRM_ROCKET_FINI_BO, struct drm_rocket_fini_bo)
 
 /**
  * struct drm_rocket_create_bo - ioctl argument for creating Rocket BOs.
@@ -38,6 +42,20 @@ struct drm_rocket_create_bo {
 	__u64 offset;
 };
 
+#define ROCKET_PREP_READ        0x01
+#define ROCKET_PREP_WRITE       0x02
+
+struct drm_rocket_prep_bo {
+	__u32 handle;		/* in */
+	__u32 op;		/* in, mask of ROCKET_PREP_x */
+	__s64 timeout_ns;	/* in */
+};
+
+struct drm_rocket_fini_bo {
+	__u32 handle;		/* in */
+	__u32 flags;		/* in, placeholder for now, no defined values */
+};
+
 /**
  * struct drm_rocket_task - A task to be run on the NPU
  *

-- 
2.48.1



^ permalink raw reply related	[flat|nested] 25+ messages in thread

* Re: [PATCH v2 5/7] accel/rocket: Add IOCTL for BO creation
  2025-02-25  7:55 ` [PATCH v2 5/7] accel/rocket: Add IOCTL for BO creation Tomeu Vizoso
@ 2025-02-25  8:35   ` Thomas Zimmermann
  2025-03-21 15:56   ` Jeffrey Hugo
  1 sibling, 0 replies; 25+ messages in thread
From: Thomas Zimmermann @ 2025-02-25  8:35 UTC (permalink / raw)
  To: Tomeu Vizoso, Rob Herring, Krzysztof Kozlowski, Conor Dooley,
	Heiko Stuebner, Oded Gabbay, Jonathan Corbet, Maarten Lankhorst,
	Maxime Ripard, David Airlie, Simona Vetter, Sumit Semwal,
	Christian König, Sebastian Reichel, Jeffrey Hugo
  Cc: devicetree, linux-arm-kernel, linux-rockchip, linux-kernel,
	dri-devel, linux-doc, linux-media, linaro-mm-sig

Hi

Am 25.02.25 um 08:55 schrieb Tomeu Vizoso:
> This uses the SHMEM DRM helpers and we map right away to the CPU and NPU
> sides, as all buffers are expected to be accessed from both.
>
> v2:
> - Sync the IOMMUs for the other cores when mapping and unmapping.
>
> Signed-off-by: Tomeu Vizoso <tomeu@tomeuvizoso.net>
> ---
>   drivers/accel/rocket/Makefile        |   3 +-
>   drivers/accel/rocket/rocket_device.c |   4 +
>   drivers/accel/rocket/rocket_device.h |   2 +
>   drivers/accel/rocket/rocket_drv.c    |   7 +-
>   drivers/accel/rocket/rocket_gem.c    | 141 +++++++++++++++++++++++++++++++++++
>   drivers/accel/rocket/rocket_gem.h    |  27 +++++++
>   include/uapi/drm/rocket_accel.h      |  43 +++++++++++
>   7 files changed, 225 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/accel/rocket/Makefile b/drivers/accel/rocket/Makefile
> index 73a7280d260c068d37ad3048824f710482333540..875cac2243d902694e0d5d05e60b4ae551a633c4 100644
> --- a/drivers/accel/rocket/Makefile
> +++ b/drivers/accel/rocket/Makefile
> @@ -5,4 +5,5 @@ obj-$(CONFIG_DRM_ACCEL_ROCKET) := rocket.o
>   rocket-y := \
>   	rocket_core.o \
>   	rocket_device.o \
> -	rocket_drv.o
> +	rocket_drv.o \
> +	rocket_gem.o
> diff --git a/drivers/accel/rocket/rocket_device.c b/drivers/accel/rocket/rocket_device.c
> index ce3b533f15c1011d8a7a23dd8132e907cc334c58..9af36357caba7148dcac764c8222699f3b572d60 100644
> --- a/drivers/accel/rocket/rocket_device.c
> +++ b/drivers/accel/rocket/rocket_device.c
> @@ -1,6 +1,7 @@
>   // SPDX-License-Identifier: GPL-2.0
>   /* Copyright 2024 Tomeu Vizoso <tomeu@tomeuvizoso.net> */
>   
> +#include "linux/mutex.h"

Include with angle brackets please.

>   #include <linux/clk.h>
>   
>   #include "rocket_device.h"
> @@ -10,6 +11,8 @@ int rocket_device_init(struct rocket_device *rdev)
>   	struct device *dev = rdev->cores[0].dev;
>   	int err;
>   
> +	mutex_init(&rdev->iommu_lock);

In DRM, we have drmm_mutex_init() in drm/drm_managed.h. Managed cleanup 
is generally preferred.

> +
>   	rdev->clk_npu = devm_clk_get(dev, "npu");
>   	rdev->pclk = devm_clk_get(dev, "pclk");
>   
> @@ -26,4 +29,5 @@ int rocket_device_init(struct rocket_device *rdev)
>   void rocket_device_fini(struct rocket_device *rdev)
>   {
>   	rocket_core_fini(&rdev->cores[0]);
> +	mutex_destroy(&rdev->iommu_lock);
>   }
> diff --git a/drivers/accel/rocket/rocket_device.h b/drivers/accel/rocket/rocket_device.h
> index 466edba9102c5dc5dfac5d3fcc1c904f206eaebb..c6152569fdd9e5587c8e8d7b0d7c2e2a77af6000 100644
> --- a/drivers/accel/rocket/rocket_device.h
> +++ b/drivers/accel/rocket/rocket_device.h
> @@ -14,6 +14,8 @@ struct rocket_device {
>   	struct clk *clk_npu;
>   	struct clk *pclk;
>   
> +	struct mutex iommu_lock;
> +
>   	struct rocket_core *cores;
>   	unsigned int num_cores;
>   };
> diff --git a/drivers/accel/rocket/rocket_drv.c b/drivers/accel/rocket/rocket_drv.c
> index c22d965f20f1239a36b1d823d5fe5f372713555d..e5612b52952fa7a0cd0af02aef314984bc483b05 100644
> --- a/drivers/accel/rocket/rocket_drv.c
> +++ b/drivers/accel/rocket/rocket_drv.c
> @@ -6,6 +6,7 @@
>   #include <drm/drm_gem.h>
>   #include <drm/drm_ioctl.h>
>   #include <drm/drm_of.h>
> +#include <drm/rocket_accel.h>
>   #include <linux/clk.h>
>   #include <linux/component.h>
>   #include <linux/dma-mapping.h>
> @@ -14,6 +15,7 @@
>   #include <linux/pm_runtime.h>
>   
>   #include "rocket_drv.h"
> +#include "rocket_gem.h"
>   
>   static int
>   rocket_open(struct drm_device *dev, struct drm_file *file)
> @@ -42,6 +44,8 @@ rocket_postclose(struct drm_device *dev, struct drm_file *file)
>   static const struct drm_ioctl_desc rocket_drm_driver_ioctls[] = {
>   #define ROCKET_IOCTL(n, func) \
>   	DRM_IOCTL_DEF_DRV(ROCKET_##n, rocket_ioctl_##func, 0)
> +
> +	ROCKET_IOCTL(CREATE_BO, create_bo),
>   };
>   
>   DEFINE_DRM_ACCEL_FOPS(rocket_accel_driver_fops);
> @@ -51,9 +55,10 @@ DEFINE_DRM_ACCEL_FOPS(rocket_accel_driver_fops);
>    * - 1.0 - initial interface
>    */
>   static const struct drm_driver rocket_drm_driver = {
> -	.driver_features	= DRIVER_COMPUTE_ACCEL,
> +	.driver_features	= DRIVER_COMPUTE_ACCEL | DRIVER_GEM,
>   	.open			= rocket_open,
>   	.postclose		= rocket_postclose,
> +	.gem_create_object	= rocket_gem_create_object,
>   	.ioctls			= rocket_drm_driver_ioctls,
>   	.num_ioctls		= ARRAY_SIZE(rocket_drm_driver_ioctls),
>   	.fops			= &rocket_accel_driver_fops,
> diff --git a/drivers/accel/rocket/rocket_gem.c b/drivers/accel/rocket/rocket_gem.c
> new file mode 100644
> index 0000000000000000000000000000000000000000..d5337cf1e275c249a1491d0dd28e6b8ccd2ff2cb
> --- /dev/null
> +++ b/drivers/accel/rocket/rocket_gem.c
> @@ -0,0 +1,141 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/* Copyright 2024 Tomeu Vizoso <tomeu@tomeuvizoso.net> */
> +
> +#include <drm/drm_device.h>
> +#include <drm/drm_utils.h>
> +#include <drm/rocket_accel.h>
> +#include <linux/dma-mapping.h>
> +#include <linux/iommu.h>
> +
> +#include "rocket_device.h"
> +#include "rocket_gem.h"
> +
> +static void rocket_gem_bo_free(struct drm_gem_object *obj)
> +{
> +	struct rocket_device *rdev = to_rocket_device(obj->dev);
> +	struct rocket_gem_object *bo = to_rocket_bo(obj);
> +	struct sg_table *sgt;
> +
> +	drm_WARN_ON(obj->dev, bo->base.pages_use_count > 1);
> +
> +	mutex_lock(&rdev->iommu_lock);
> +
> +	sgt = drm_gem_shmem_get_pages_sgt(&bo->base);
> +
> +	/* Unmap this object from the IOMMUs for cores > 0 */
> +	for (unsigned int core = 1; core < rdev->num_cores; core++) {
> +		struct iommu_domain *domain = iommu_get_domain_for_dev(rdev->cores[core].dev);
> +		size_t unmapped = iommu_unmap(domain, sgt->sgl->dma_address, bo->size);
> +
> +		drm_WARN_ON(obj->dev, unmapped != bo->size);
> +	}
> +
> +	/* This will unmap the pages from the IOMMU linked to core 0 */
> +	drm_gem_shmem_free(&bo->base);
> +
> +	mutex_unlock(&rdev->iommu_lock);
> +}
> +
> +static const struct drm_gem_object_funcs rocket_gem_funcs = {
> +	.free = rocket_gem_bo_free,
> +	.print_info = drm_gem_shmem_object_print_info,
> +	.pin = drm_gem_shmem_object_pin,
> +	.unpin = drm_gem_shmem_object_unpin,
> +	.get_sg_table = drm_gem_shmem_object_get_sg_table,
> +	.vmap = drm_gem_shmem_object_vmap,
> +	.vunmap = drm_gem_shmem_object_vunmap,
> +	.mmap = drm_gem_shmem_object_mmap,
> +	.vm_ops = &drm_gem_shmem_vm_ops,
> +};
> +
> +/**
> + * rocket_gem_create_object - Implementation of driver->gem_create_object.
> + * @dev: DRM device
> + * @size: Size in bytes of the memory the object will reference
> + *
> + * This lets the GEM helpers allocate object structs for us, and keep
> + * our BO stats correct.
> + */
> +struct drm_gem_object *rocket_gem_create_object(struct drm_device *dev, size_t size)
> +{
> +	struct rocket_gem_object *obj;
> +
> +	obj = kzalloc(sizeof(*obj), GFP_KERNEL);
> +	if (!obj)
> +		return ERR_PTR(-ENOMEM);
> +
> +	obj->base.base.funcs = &rocket_gem_funcs;
> +
> +	return &obj->base.base;
> +}
> +
> +int rocket_ioctl_create_bo(struct drm_device *dev, void *data, struct drm_file *file)
> +{
> +	struct drm_rocket_create_bo *args = data;
> +	struct rocket_device *rdev = to_rocket_device(dev);
> +	struct drm_gem_shmem_object *shmem_obj;
> +	struct rocket_gem_object *rkt_obj;
> +	struct drm_gem_object *gem_obj;
> +	struct sg_table *sgt;
> +	int ret;
> +
> +	shmem_obj = drm_gem_shmem_create(dev, args->size);
> +	if (IS_ERR(shmem_obj))
> +		return PTR_ERR(shmem_obj);
> +
> +	gem_obj = &shmem_obj->base;
> +	rkt_obj = to_rocket_bo(gem_obj);
> +
> +	rkt_obj->size = args->size;
> +	rkt_obj->offset = 0;
> +	mutex_init(&rkt_obj->mutex);
> +
> +	ret = drm_gem_handle_create(file, gem_obj, &args->handle);
> +	drm_gem_object_put(gem_obj);
> +	if (ret)
> +		goto err;
> +
> +	mutex_lock(&rdev->iommu_lock);
> +
> +	/* This will map the pages to the IOMMU linked to core 0 */
> +	sgt = drm_gem_shmem_get_pages_sgt(shmem_obj);
> +	if (IS_ERR(sgt)) {
> +		ret = PTR_ERR(sgt);
> +		goto err_unlock;
> +	}
> +
> +	/* Map the pages to the IOMMUs linked to the other cores, so all cores can access this BO */
> +	for (unsigned int core = 1; core < rdev->num_cores; core++) {
> +
> +		ret = iommu_map_sgtable(iommu_get_domain_for_dev(rdev->cores[core].dev),
> +					sgt->sgl->dma_address,
> +					sgt,
> +					IOMMU_READ | IOMMU_WRITE);
> +		if (ret < 0 || ret < args->size) {
> +			DRM_ERROR("failed to map buffer: size=%d request_size=%u\n",
> +				ret, args->size);
> +			ret = -ENOMEM;
> +			goto err_unlock;
> +		}
> +
> +		/* iommu_map_sgtable might have aligned the size */
> +		rkt_obj->size = ret;
> +
> +		dma_sync_sgtable_for_device(rdev->cores[core].dev, shmem_obj->sgt,
> +					    DMA_BIDIRECTIONAL);
> +	}
> +
> +	mutex_unlock(&rdev->iommu_lock);
> +
> +	args->offset = drm_vma_node_offset_addr(&gem_obj->vma_node);
> +	args->dma_address = sg_dma_address(shmem_obj->sgt->sgl);
> +
> +	return 0;
> +
> +err_unlock:
> +	mutex_unlock(&rdev->iommu_lock);
> +err:
> +	drm_gem_shmem_object_free(gem_obj);
> +
> +	return ret;
> +}
> diff --git a/drivers/accel/rocket/rocket_gem.h b/drivers/accel/rocket/rocket_gem.h
> new file mode 100644
> index 0000000000000000000000000000000000000000..19b0cf91ddd99bd126c1af30beb169d6101f6dee
> --- /dev/null
> +++ b/drivers/accel/rocket/rocket_gem.h
> @@ -0,0 +1,27 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +/* Copyright 2024 Tomeu Vizoso <tomeu@tomeuvizoso.net> */
> +
> +#ifndef __ROCKET_GEM_H__
> +#define __ROCKET_GEM_H__
> +
> +#include <drm/drm_gem_shmem_helper.h>
> +
> +struct rocket_gem_object {
> +	struct drm_gem_shmem_object base;
> +
> +	struct mutex mutex;

You init this mutex, but never destroy it.

I strongly recommend to scratch all per-object locks and use the 
object's reservation lock instead (found in base.resv), if possible. 
With multiple locks and dma-buf buffer sharing, there's otherwise 
endless fun from getting the locking order right.

> +	size_t size;
> +	u32 offset;
> +};
> +
> +struct drm_gem_object *rocket_gem_create_object(struct drm_device *dev, size_t size);
> +
> +int rocket_ioctl_create_bo(struct drm_device *dev, void *data, struct drm_file *file);
> +
> +static inline
> +struct  rocket_gem_object *to_rocket_bo(struct drm_gem_object *obj)
> +{
> +	return container_of(to_drm_gem_shmem_obj(obj), struct rocket_gem_object, base);
> +}
> +
> +#endif
> diff --git a/include/uapi/drm/rocket_accel.h b/include/uapi/drm/rocket_accel.h
> new file mode 100644
> index 0000000000000000000000000000000000000000..8338726a83c31b954608ca505cf78bcd70d3494b
> --- /dev/null
> +++ b/include/uapi/drm/rocket_accel.h
> @@ -0,0 +1,43 @@
> +/* SPDX-License-Identifier: MIT */
> +/*
> + * Copyright © 2024 Tomeu Vizoso
> + */
> +#ifndef _ROCKET_DRM_H_
> +#define _ROCKET_DRM_H_

For UAPI headers, it makes sense to use a more verbose include guard; 
say __DRM_UAPI_ROCKET_ACCEL_H__. Avoids possible conflicts.

Best regards
Thomas

> +
> +#include "drm.h"
> +
> +#if defined(__cplusplus)
> +extern "C" {
> +#endif
> +
> +#define DRM_ROCKET_CREATE_BO			0x00
> +
> +#define DRM_IOCTL_ROCKET_CREATE_BO		DRM_IOWR(DRM_COMMAND_BASE + DRM_ROCKET_CREATE_BO, struct drm_rocket_create_bo)
> +
> +/**
> + * struct drm_rocket_create_bo - ioctl argument for creating Rocket BOs.
> + *
> + */
> +struct drm_rocket_create_bo {
> +	__u32 size;
> +
> +	/** Returned GEM handle for the BO. */
> +	__u32 handle;
> +
> +	/**
> +	 * Returned DMA address for the BO in the NPU address space.  This address
> +	 * is private to the DRM fd and is valid for the lifetime of the GEM
> +	 * handle.
> +	 */
> +	__u64 dma_address;
> +
> +	/** Offset into the drm node to use for subsequent mmap call. */
> +	__u64 offset;
> +};
> +
> +#if defined(__cplusplus)
> +}
> +#endif
> +
> +#endif /* _ROCKET_DRM_H_ */
>

-- 
--
Thomas Zimmermann
Graphics Driver Developer
SUSE Software Solutions Germany GmbH
Frankenstrasse 146, 90461 Nuernberg, Germany
GF: Ivo Totev, Andrew Myers, Andrew McDonald, Boudien Moerman
HRB 36809 (AG Nuernberg)



^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [PATCH v2 6/7] accel/rocket: Add job submission IOCTL
  2025-02-25  7:55 ` [PATCH v2 6/7] accel/rocket: Add job submission IOCTL Tomeu Vizoso
@ 2025-02-25  8:44   ` Thomas Zimmermann
  2025-03-21 16:09   ` Jeff Hugo
  2025-04-29 10:05   ` Nicolas Frattaroli
  2 siblings, 0 replies; 25+ messages in thread
From: Thomas Zimmermann @ 2025-02-25  8:44 UTC (permalink / raw)
  To: Tomeu Vizoso, Rob Herring, Krzysztof Kozlowski, Conor Dooley,
	Heiko Stuebner, Oded Gabbay, Jonathan Corbet, Maarten Lankhorst,
	Maxime Ripard, David Airlie, Simona Vetter, Sumit Semwal,
	Christian König, Sebastian Reichel, Jeffrey Hugo
  Cc: devicetree, linux-arm-kernel, linux-rockchip, linux-kernel,
	dri-devel, linux-doc, linux-media, linaro-mm-sig

Hi

Am 25.02.25 um 08:55 schrieb Tomeu Vizoso:
> Using the DRM GPU scheduler infrastructure, with a scheduler for each
> core.
>
> Userspace can decide for a series of tasks to be executed sequentially
> in the same core, so SRAM locality can be taken advantage of.
>
> The job submission code was initially based on Panfrost.
>
> v2:
> - Remove hardcoded number of cores
> - Misc. style fixes (Jeffrey Hugo)
> - Repack IOCTL struct (Jeffrey Hugo)
>
> Signed-off-by: Tomeu Vizoso <tomeu@tomeuvizoso.net>
> ---
>   drivers/accel/rocket/Makefile        |   3 +-
>   drivers/accel/rocket/rocket_core.c   |   6 +
>   drivers/accel/rocket/rocket_core.h   |  14 +
>   drivers/accel/rocket/rocket_device.c |   2 +
>   drivers/accel/rocket/rocket_device.h |   2 +
>   drivers/accel/rocket/rocket_drv.c    |  15 +
>   drivers/accel/rocket/rocket_drv.h    |   4 +
>   drivers/accel/rocket/rocket_job.c    | 710 +++++++++++++++++++++++++++++++++++
>   drivers/accel/rocket/rocket_job.h    |  50 +++
>   include/uapi/drm/rocket_accel.h      |  55 +++
>   10 files changed, 860 insertions(+), 1 deletion(-)
>
> diff --git a/drivers/accel/rocket/Makefile b/drivers/accel/rocket/Makefile
> index 875cac2243d902694e0d5d05e60b4ae551a633c4..4d59036af8d9c213d3cac0559eb66e3ebb0320e7 100644
> --- a/drivers/accel/rocket/Makefile
> +++ b/drivers/accel/rocket/Makefile
> @@ -6,4 +6,5 @@ rocket-y := \
>   	rocket_core.o \
>   	rocket_device.o \
>   	rocket_drv.o \
> -	rocket_gem.o
> +	rocket_gem.o \
> +	rocket_job.o
> diff --git a/drivers/accel/rocket/rocket_core.c b/drivers/accel/rocket/rocket_core.c
> index 09d966c826b5b1090a18cb24b3aa4aba286a12d4..2b522592693874eed90463e8f85653d5282ae5b8 100644
> --- a/drivers/accel/rocket/rocket_core.c
> +++ b/drivers/accel/rocket/rocket_core.c
> @@ -6,6 +6,7 @@
>   #include <linux/pm_runtime.h>
>   
>   #include "rocket_core.h"
> +#include "rocket_job.h"
>   #include "rocket_registers.h"
>   
>   static int rocket_clk_init(struct rocket_core *core)
> @@ -48,6 +49,10 @@ int rocket_core_init(struct rocket_core *core)
>   	if (IS_ERR(core->iomem))
>   		return PTR_ERR(core->iomem);
>   
> +	err = rocket_job_init(core);
> +	if (err)
> +		return err;
> +
>   	pm_runtime_use_autosuspend(dev);
>   	pm_runtime_set_autosuspend_delay(dev, 50); /* ~3 frames */
>   	pm_runtime_enable(dev);
> @@ -68,4 +73,5 @@ int rocket_core_init(struct rocket_core *core)
>   void rocket_core_fini(struct rocket_core *core)
>   {
>   	pm_runtime_disable(core->dev);
> +	rocket_job_fini(core);
>   }
> diff --git a/drivers/accel/rocket/rocket_core.h b/drivers/accel/rocket/rocket_core.h
> index 2171eba7139ccc63fe24802dc81b4adb7f3abf31..045a46a2010a2ffd6122ed86c379e5fabc70365a 100644
> --- a/drivers/accel/rocket/rocket_core.h
> +++ b/drivers/accel/rocket/rocket_core.h
> @@ -21,6 +21,20 @@ struct rocket_core {
>   	void __iomem *iomem;
>   	struct clk *a_clk;
>   	struct clk *h_clk;
> +
> +	struct rocket_job *in_flight_job;
> +
> +	spinlock_t job_lock;
> +
> +	struct {
> +		struct workqueue_struct *wq;
> +		struct work_struct work;
> +		atomic_t pending;
> +	} reset;
> +
> +	struct drm_gpu_scheduler sched;
> +	u64 fence_context;
> +	u64 emit_seqno;
>   };
>   
>   int rocket_core_init(struct rocket_core *core);
> diff --git a/drivers/accel/rocket/rocket_device.c b/drivers/accel/rocket/rocket_device.c
> index 9af36357caba7148dcac764c8222699f3b572d60..62c640e1e0200fe25b6834e45d71f6de139ff3ab 100644
> --- a/drivers/accel/rocket/rocket_device.c
> +++ b/drivers/accel/rocket/rocket_device.c
> @@ -12,6 +12,7 @@ int rocket_device_init(struct rocket_device *rdev)
>   	int err;
>   
>   	mutex_init(&rdev->iommu_lock);
> +	mutex_init(&rdev->sched_lock);
>   
>   	rdev->clk_npu = devm_clk_get(dev, "npu");
>   	rdev->pclk = devm_clk_get(dev, "pclk");
> @@ -29,5 +30,6 @@ int rocket_device_init(struct rocket_device *rdev)
>   void rocket_device_fini(struct rocket_device *rdev)
>   {
>   	rocket_core_fini(&rdev->cores[0]);
> +	mutex_destroy(&rdev->sched_lock);
>   	mutex_destroy(&rdev->iommu_lock);
>   }
> diff --git a/drivers/accel/rocket/rocket_device.h b/drivers/accel/rocket/rocket_device.h
> index c6152569fdd9e5587c8e8d7b0d7c2e2a77af6000..4168ae8da2d38c2ea114b37c6e053b02611a0232 100644
> --- a/drivers/accel/rocket/rocket_device.h
> +++ b/drivers/accel/rocket/rocket_device.h
> @@ -11,6 +11,8 @@
>   struct rocket_device {
>   	struct drm_device ddev;
>   
> +	struct mutex sched_lock;
> +
>   	struct clk *clk_npu;
>   	struct clk *pclk;
>   
> diff --git a/drivers/accel/rocket/rocket_drv.c b/drivers/accel/rocket/rocket_drv.c
> index e5612b52952fa7a0cd0af02aef314984bc483b05..a6b486e2d4f648d7b1d8831590b633bf661c7bc4 100644
> --- a/drivers/accel/rocket/rocket_drv.c
> +++ b/drivers/accel/rocket/rocket_drv.c
> @@ -16,12 +16,14 @@
>   
>   #include "rocket_drv.h"
>   #include "rocket_gem.h"
> +#include "rocket_job.h"
>   
>   static int
>   rocket_open(struct drm_device *dev, struct drm_file *file)
>   {
>   	struct rocket_device *rdev = to_rocket_device(dev);
>   	struct rocket_file_priv *rocket_priv;
> +	int ret;
>   
>   	rocket_priv = kzalloc(sizeof(*rocket_priv), GFP_KERNEL);
>   	if (!rocket_priv)
> @@ -30,7 +32,15 @@ rocket_open(struct drm_device *dev, struct drm_file *file)
>   	rocket_priv->rdev = rdev;
>   	file->driver_priv = rocket_priv;
>   
> +	ret = rocket_job_open(rocket_priv);
> +	if (ret)
> +		goto err_free;
> +
>   	return 0;
> +
> +err_free:
> +	kfree(rocket_priv);
> +	return ret;
>   }
>   
>   static void
> @@ -38,6 +48,7 @@ rocket_postclose(struct drm_device *dev, struct drm_file *file)
>   {
>   	struct rocket_file_priv *rocket_priv = file->driver_priv;
>   
> +	rocket_job_close(rocket_priv);
>   	kfree(rocket_priv);
>   }
>   
> @@ -46,6 +57,7 @@ static const struct drm_ioctl_desc rocket_drm_driver_ioctls[] = {
>   	DRM_IOCTL_DEF_DRV(ROCKET_##n, rocket_ioctl_##func, 0)
>   
>   	ROCKET_IOCTL(CREATE_BO, create_bo),
> +	ROCKET_IOCTL(SUBMIT, submit),
>   };
>   
>   DEFINE_DRM_ACCEL_FOPS(rocket_accel_driver_fops);
> @@ -245,6 +257,9 @@ static int rocket_device_runtime_suspend(struct device *dev)
>   		if (dev != rdev->cores[core].dev)
>   			continue;
>   
> +		if (!rocket_job_is_idle(&rdev->cores[core]))
> +			return -EBUSY;
> +
>   		clk_disable_unprepare(rdev->cores[core].a_clk);
>   		clk_disable_unprepare(rdev->cores[core].h_clk);
>   
> diff --git a/drivers/accel/rocket/rocket_drv.h b/drivers/accel/rocket/rocket_drv.h
> index ccdd50c69d4c033eea18cb800407fdcfb3bf2e9b..54e21a61006057aee293496016e54b495a2f6d55 100644
> --- a/drivers/accel/rocket/rocket_drv.h
> +++ b/drivers/accel/rocket/rocket_drv.h
> @@ -4,10 +4,14 @@
>   #ifndef __ROCKET_DRV_H__
>   #define __ROCKET_DRV_H__
>   
> +#include <drm/gpu_scheduler.h>
> +
>   #include "rocket_device.h"
>   
>   struct rocket_file_priv {
>   	struct rocket_device *rdev;
> +
> +	struct drm_sched_entity sched_entity;
>   };
>   
>   #endif
> diff --git a/drivers/accel/rocket/rocket_job.c b/drivers/accel/rocket/rocket_job.c
> new file mode 100644
> index 0000000000000000000000000000000000000000..25b31f28e932aaee86173b9a0962932c9c640c03
> --- /dev/null
> +++ b/drivers/accel/rocket/rocket_job.c
> @@ -0,0 +1,710 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/* Copyright 2019 Linaro, Ltd, Rob Herring <robh@kernel.org> */
> +/* Copyright 2019 Collabora ltd. */
> +/* Copyright 2024 Tomeu Vizoso <tomeu@tomeuvizoso.net> */
> +
> +#include <drm/drm_file.h>
> +#include <drm/drm_gem.h>
> +#include <drm/rocket_accel.h>
> +#include <linux/interrupt.h>
> +#include <linux/platform_device.h>
> +#include <linux/pm_runtime.h>
> +
> +#include "rocket_core.h"
> +#include "rocket_device.h"
> +#include "rocket_drv.h"
> +#include "rocket_job.h"
> +#include "rocket_registers.h"
> +
> +#define JOB_TIMEOUT_MS 500
> +
> +#define job_write(dev, reg, data) writel(data, dev->iomem + (reg))
> +#define job_read(dev, reg) readl(dev->iomem + (reg))
> +
> +static struct rocket_job *
> +to_rocket_job(struct drm_sched_job *sched_job)
> +{
> +	return container_of(sched_job, struct rocket_job, base);
> +}
> +
> +struct rocket_fence {
> +	struct dma_fence base;
> +	struct drm_device *dev;
> +	/* rocket seqno for signaled() test */
> +	u64 seqno;
> +	int queue;
> +};
> +
> +static inline struct rocket_fence *
> +to_rocket_fence(struct dma_fence *fence)
> +{
> +	return (struct rocket_fence *)fence;

Proper upcast with container_of() please.

> +}
> +
> +static const char *rocket_fence_get_driver_name(struct dma_fence *fence)
> +{
> +	return "rocket";
> +}
> +
> +static const char *rocket_fence_get_timeline_name(struct dma_fence *fence)
> +{
> +	return "rockchip-npu";
> +}
> +
> +static const struct dma_fence_ops rocket_fence_ops = {
> +	.get_driver_name = rocket_fence_get_driver_name,
> +	.get_timeline_name = rocket_fence_get_timeline_name,
> +};
> +
> +static struct dma_fence *rocket_fence_create(struct rocket_core *core)
> +{
> +	struct rocket_device *rdev = core->rdev;
> +	struct rocket_fence *fence;
> +
> +	fence = kzalloc(sizeof(*fence), GFP_KERNEL);
> +	if (!fence)
> +		return ERR_PTR(-ENOMEM);
> +
> +	fence->dev = &rdev->ddev;
> +	fence->seqno = ++core->emit_seqno;
> +	dma_fence_init(&fence->base, &rocket_fence_ops, &core->job_lock,
> +		       core->fence_context, fence->seqno);
> +
> +	return &fence->base;
> +}
> +
> +static int
> +rocket_copy_tasks(struct drm_device *dev,
> +		  struct drm_file *file_priv,
> +		  struct drm_rocket_job *job,
> +		  struct rocket_job *rjob)
> +{
> +	struct drm_rocket_task *tasks;
> +	int ret = 0;
> +	int i;
> +
> +	rjob->task_count = job->task_count;
> +
> +	if (!rjob->task_count)
> +		return 0;
> +
> +	tasks = kvmalloc_array(rjob->task_count, sizeof(*tasks), GFP_KERNEL);
> +	if (!tasks) {
> +		ret = -ENOMEM;
> +		DRM_DEBUG("Failed to allocate incoming tasks\n");

These logging macros are deprecated. Please use drm_dbg() and similar 
functions instead. Here and everywhere else.

> +		goto fail;
> +	}
> +
> +	if (copy_from_user(tasks,
> +			   (void __user *)(uintptr_t)job->tasks,
> +			   rjob->task_count * sizeof(*tasks))) {
> +		ret = -EFAULT;
> +		DRM_DEBUG("Failed to copy incoming tasks\n");
> +		goto fail;
> +	}
> +
> +	rjob->tasks = kvmalloc_array(job->task_count, sizeof(*rjob->tasks), GFP_KERNEL);
> +	if (!rjob->tasks) {
> +		DRM_DEBUG("Failed to allocate task array\n");
> +		ret = -ENOMEM;
> +		goto fail;
> +	}
> +
> +	for (i = 0; i < rjob->task_count; i++) {
> +		if (tasks[i].regcmd_count == 0) {
> +			ret = -EINVAL;
> +			goto fail;
> +		}
> +		rjob->tasks[i].regcmd = tasks[i].regcmd;
> +		rjob->tasks[i].regcmd_count = tasks[i].regcmd_count;
> +	}
> +
> +fail:
> +	kvfree(tasks);
> +	return ret;
> +}
> +
> +static void rocket_job_hw_submit(struct rocket_core *core, struct rocket_job *job)
> +{
> +	struct rocket_task *task;
> +	bool task_pp_en = 1;
> +	bool task_count = 1;
> +
> +	/* GO ! */
> +
> +	/* Don't queue the job if a reset is in progress */
> +	if (!atomic_read(&core->reset.pending)) {
> +
> +		task = &job->tasks[job->next_task_idx];
> +		job->next_task_idx++;   /* TODO: Do this only after a successful run? */
> +
> +		rocket_write(core, REG_PC_BASE_ADDRESS, 0x1);

Just 'write' seems a little imprecise. Maybe 'writel' would be better.

Best regards
Thomas

> +
> +		rocket_write(core, REG_CNA_S_POINTER, 0xe + 0x10000000 * core->index);
> +		rocket_write(core, REG_CORE_S_POINTER, 0xe + 0x10000000 * core->index);
> +
> +		rocket_write(core, REG_PC_BASE_ADDRESS, task->regcmd);
> +		rocket_write(core, REG_PC_REGISTER_AMOUNTS, (task->regcmd_count + 1) / 2 - 1);
> +
> +		rocket_write(core, REG_PC_INTERRUPT_MASK,
> +			     PC_INTERRUPT_MASK_DPU_0 | PC_INTERRUPT_MASK_DPU_1);
> +		rocket_write(core, REG_PC_INTERRUPT_CLEAR,
> +			     PC_INTERRUPT_CLEAR_DPU_0 | PC_INTERRUPT_CLEAR_DPU_1);
> +
> +		rocket_write(core, REG_PC_TASK_CON, ((0x6 | task_pp_en) << 12) | task_count);
> +
> +		rocket_write(core, REG_PC_TASK_DMA_BASE_ADDR, 0x0);
> +
> +		rocket_write(core, REG_PC_OPERATION_ENABLE, 0x1);
> +
> +		dev_dbg(core->dev,
> +			"Submitted regcmd at 0x%llx to core %d",
> +			task->regcmd, core->index);
> +	}
> +}
> +
> +static int rocket_acquire_object_fences(struct drm_gem_object **bos,
> +					int bo_count,
> +					struct drm_sched_job *job,
> +					bool is_write)
> +{
> +	int i, ret;
> +
> +	for (i = 0; i < bo_count; i++) {
> +		ret = dma_resv_reserve_fences(bos[i]->resv, 1);
> +		if (ret)
> +			return ret;
> +
> +		ret = drm_sched_job_add_implicit_dependencies(job, bos[i],
> +							      is_write);
> +		if (ret)
> +			return ret;
> +	}
> +
> +	return 0;
> +}
> +
> +static void rocket_attach_object_fences(struct drm_gem_object **bos,
> +					  int bo_count,
> +					  struct dma_fence *fence)
> +{
> +	int i;
> +
> +	for (i = 0; i < bo_count; i++)
> +		dma_resv_add_fence(bos[i]->resv, fence, DMA_RESV_USAGE_WRITE);
> +}
> +
> +static int rocket_job_push(struct rocket_job *job)
> +{
> +	struct rocket_device *rdev = job->rdev;
> +	struct drm_gem_object **bos;
> +	struct ww_acquire_ctx acquire_ctx;
> +	int ret = 0;
> +
> +	bos = kvmalloc_array(job->in_bo_count + job->out_bo_count, sizeof(void *),
> +			     GFP_KERNEL);
> +	memcpy(bos, job->in_bos, job->in_bo_count * sizeof(void *));
> +	memcpy(&bos[job->in_bo_count], job->out_bos, job->out_bo_count * sizeof(void *));
> +
> +	ret = drm_gem_lock_reservations(bos, job->in_bo_count + job->out_bo_count, &acquire_ctx);
> +	if (ret)
> +		goto err;
> +
> +	mutex_lock(&rdev->sched_lock);
> +	drm_sched_job_arm(&job->base);
> +
> +	job->inference_done_fence = dma_fence_get(&job->base.s_fence->finished);
> +
> +	ret = rocket_acquire_object_fences(job->in_bos, job->in_bo_count, &job->base, false);
> +	if (ret) {
> +		mutex_unlock(&rdev->sched_lock);
> +		goto err_unlock;
> +	}
> +
> +	ret = rocket_acquire_object_fences(job->out_bos, job->out_bo_count, &job->base, true);
> +	if (ret) {
> +		mutex_unlock(&rdev->sched_lock);
> +		goto err_unlock;
> +	}
> +
> +	kref_get(&job->refcount); /* put by scheduler job completion */
> +
> +	drm_sched_entity_push_job(&job->base);
> +
> +	mutex_unlock(&rdev->sched_lock);
> +
> +	rocket_attach_object_fences(job->out_bos, job->out_bo_count, job->inference_done_fence);
> +
> +err_unlock:
> +	drm_gem_unlock_reservations(bos, job->in_bo_count + job->out_bo_count, &acquire_ctx);
> +err:
> +	kfree(bos);
> +
> +	return ret;
> +}
> +
> +static void rocket_job_cleanup(struct kref *ref)
> +{
> +	struct rocket_job *job = container_of(ref, struct rocket_job,
> +						refcount);
> +	unsigned int i;
> +
> +	dma_fence_put(job->done_fence);
> +	dma_fence_put(job->inference_done_fence);
> +
> +	if (job->in_bos) {
> +		for (i = 0; i < job->in_bo_count; i++)
> +			drm_gem_object_put(job->in_bos[i]);
> +
> +		kvfree(job->in_bos);
> +	}
> +
> +	if (job->out_bos) {
> +		for (i = 0; i < job->out_bo_count; i++)
> +			drm_gem_object_put(job->out_bos[i]);
> +
> +		kvfree(job->out_bos);
> +	}
> +
> +	kfree(job->tasks);
> +
> +	kfree(job);
> +}
> +
> +static void rocket_job_put(struct rocket_job *job)
> +{
> +	kref_put(&job->refcount, rocket_job_cleanup);
> +}
> +
> +static void rocket_job_free(struct drm_sched_job *sched_job)
> +{
> +	struct rocket_job *job = to_rocket_job(sched_job);
> +
> +	drm_sched_job_cleanup(sched_job);
> +
> +	rocket_job_put(job);
> +}
> +
> +static struct rocket_core *sched_to_core(struct rocket_device *rdev,
> +					 struct drm_gpu_scheduler *sched)
> +{
> +	unsigned int core;
> +
> +	for (core = 0; core < rdev->num_cores; core++) {
> +		if (&rdev->cores[core].sched == sched)
> +			return &rdev->cores[core];
> +	}
> +
> +	return NULL;
> +}
> +
> +static struct dma_fence *rocket_job_run(struct drm_sched_job *sched_job)
> +{
> +	struct rocket_job *job = to_rocket_job(sched_job);
> +	struct rocket_device *rdev = job->rdev;
> +	struct rocket_core *core = sched_to_core(rdev, sched_job->sched);
> +	struct dma_fence *fence = NULL;
> +	int ret;
> +
> +	if (unlikely(job->base.s_fence->finished.error))
> +		return NULL;
> +
> +	/*
> +	 * Nothing to execute: can happen if the job has finished while
> +	 * we were resetting the GPU.
> +	 */
> +	if (job->next_task_idx == job->task_count)
> +		return NULL;
> +
> +	fence = rocket_fence_create(core);
> +	if (IS_ERR(fence))
> +		return fence;
> +
> +	if (job->done_fence)
> +		dma_fence_put(job->done_fence);
> +	job->done_fence = dma_fence_get(fence);
> +
> +	ret = pm_runtime_get_sync(core->dev);
> +	if (ret < 0)
> +		return fence;
> +
> +	spin_lock(&core->job_lock);
> +
> +	core->in_flight_job = job;
> +	rocket_job_hw_submit(core, job);
> +
> +	spin_unlock(&core->job_lock);
> +
> +	return fence;
> +}
> +
> +static void rocket_job_handle_done(struct rocket_core *core,
> +				   struct rocket_job *job)
> +{
> +	if (job->next_task_idx < job->task_count) {
> +		rocket_job_hw_submit(core, job);
> +		return;
> +	}
> +
> +	core->in_flight_job = NULL;
> +	dma_fence_signal_locked(job->done_fence);
> +	pm_runtime_put_autosuspend(core->dev);
> +}
> +
> +static void rocket_job_handle_irq(struct rocket_core *core)
> +{
> +	uint32_t status, raw_status;
> +
> +	pm_runtime_mark_last_busy(core->dev);
> +
> +	status = rocket_read(core, REG_PC_INTERRUPT_STATUS);
> +	raw_status = rocket_read(core, REG_PC_INTERRUPT_RAW_STATUS);
> +
> +	rocket_write(core, REG_PC_OPERATION_ENABLE, 0x0);
> +	rocket_write(core, REG_PC_INTERRUPT_CLEAR, 0x1ffff);
> +
> +	spin_lock(&core->job_lock);
> +
> +	if (core->in_flight_job)
> +		rocket_job_handle_done(core, core->in_flight_job);
> +
> +	spin_unlock(&core->job_lock);
> +}
> +
> +static void
> +rocket_reset(struct rocket_core *core, struct drm_sched_job *bad)
> +{
> +	bool cookie;
> +
> +	if (!atomic_read(&core->reset.pending))
> +		return;
> +
> +	/*
> +	 * Stop the scheduler.
> +	 *
> +	 * FIXME: We temporarily get out of the dma_fence_signalling section
> +	 * because the cleanup path generate lockdep splats when taking locks
> +	 * to release job resources. We should rework the code to follow this
> +	 * pattern:
> +	 *
> +	 *	try_lock
> +	 *	if (locked)
> +	 *		release
> +	 *	else
> +	 *		schedule_work_to_release_later
> +	 */
> +	drm_sched_stop(&core->sched, bad);
> +
> +	cookie = dma_fence_begin_signalling();
> +
> +	if (bad)
> +		drm_sched_increase_karma(bad);
> +
> +	/*
> +	 * Mask job interrupts and synchronize to make sure we won't be
> +	 * interrupted during our reset.
> +	 */
> +	rocket_write(core, REG_PC_INTERRUPT_MASK, 0x0);
> +	synchronize_irq(core->irq);
> +
> +	/* Handle the remaining interrupts before we reset. */
> +	rocket_job_handle_irq(core);
> +
> +	/*
> +	 * Remaining interrupts have been handled, but we might still have
> +	 * stuck jobs. Let's make sure the PM counters stay balanced by
> +	 * manually calling pm_runtime_put_noidle() and
> +	 * rocket_devfreq_record_idle() for each stuck job.
> +	 * Let's also make sure the cycle counting register's refcnt is
> +	 * kept balanced to prevent it from running forever
> +	 */
> +	spin_lock(&core->job_lock);
> +	if (core->in_flight_job)
> +		pm_runtime_put_noidle(core->dev);
> +
> +	core->in_flight_job = NULL;
> +	spin_unlock(&core->job_lock);
> +
> +	/* Proceed with reset now. */
> +	pm_runtime_force_suspend(core->dev);
> +	pm_runtime_force_resume(core->dev);
> +
> +	/* GPU has been reset, we can clear the reset pending bit. */
> +	atomic_set(&core->reset.pending, 0);
> +
> +	/*
> +	 * Now resubmit jobs that were previously queued but didn't have a
> +	 * chance to finish.
> +	 * FIXME: We temporarily get out of the DMA fence signalling section
> +	 * while resubmitting jobs because the job submission logic will
> +	 * allocate memory with the GFP_KERNEL flag which can trigger memory
> +	 * reclaim and exposes a lock ordering issue.
> +	 */
> +	dma_fence_end_signalling(cookie);
> +	drm_sched_resubmit_jobs(&core->sched);
> +	cookie = dma_fence_begin_signalling();
> +
> +	/* Restart the scheduler */
> +	drm_sched_start(&core->sched, 0);
> +
> +	dma_fence_end_signalling(cookie);
> +}
> +
> +static enum drm_gpu_sched_stat rocket_job_timedout(struct drm_sched_job *sched_job)
> +{
> +	struct rocket_job *job = to_rocket_job(sched_job);
> +	struct rocket_device *rdev = job->rdev;
> +	struct rocket_core *core = sched_to_core(rdev, sched_job->sched);
> +
> +	/*
> +	 * If the GPU managed to complete this jobs fence, the timeout is
> +	 * spurious. Bail out.
> +	 */
> +	if (dma_fence_is_signaled(job->done_fence))
> +		return DRM_GPU_SCHED_STAT_NOMINAL;
> +
> +	/*
> +	 * Rocket IRQ handler may take a long time to process an interrupt
> +	 * if there is another IRQ handler hogging the processing.
> +	 * For example, the HDMI encoder driver might be stuck in the IRQ
> +	 * handler for a significant time in a case of bad cable connection.
> +	 * In order to catch such cases and not report spurious rocket
> +	 * job timeouts, synchronize the IRQ handler and re-check the fence
> +	 * status.
> +	 */
> +	synchronize_irq(core->irq);
> +
> +	if (dma_fence_is_signaled(job->done_fence)) {
> +		dev_warn(core->dev, "unexpectedly high interrupt latency\n");
> +		return DRM_GPU_SCHED_STAT_NOMINAL;
> +	}
> +
> +	dev_err(core->dev, "gpu sched timeout");
> +
> +	atomic_set(&core->reset.pending, 1);
> +	rocket_reset(core, sched_job);
> +
> +	return DRM_GPU_SCHED_STAT_NOMINAL;
> +}
> +
> +static void rocket_reset_work(struct work_struct *work)
> +{
> +	struct rocket_core *core;
> +
> +	core = container_of(work, struct rocket_core, reset.work);
> +	rocket_reset(core, NULL);
> +}
> +
> +static const struct drm_sched_backend_ops rocket_sched_ops = {
> +	.run_job = rocket_job_run,
> +	.timedout_job = rocket_job_timedout,
> +	.free_job = rocket_job_free
> +};
> +
> +static irqreturn_t rocket_job_irq_handler_thread(int irq, void *data)
> +{
> +	struct rocket_core *core = data;
> +
> +	rocket_job_handle_irq(core);
> +
> +	return IRQ_HANDLED;
> +}
> +
> +static irqreturn_t rocket_job_irq_handler(int irq, void *data)
> +{
> +	struct rocket_core *core = data;
> +	uint32_t raw_status = rocket_read(core, REG_PC_INTERRUPT_RAW_STATUS);
> +
> +	WARN_ON(raw_status & PC_INTERRUPT_RAW_STATUS_DMA_READ_ERROR);
> +	WARN_ON(raw_status & PC_INTERRUPT_RAW_STATUS_DMA_READ_ERROR);
> +
> +	if (!(raw_status & PC_INTERRUPT_RAW_STATUS_DPU_0 ||
> +	      raw_status & PC_INTERRUPT_RAW_STATUS_DPU_1))
> +		return IRQ_NONE;
> +
> +	rocket_write(core, REG_PC_INTERRUPT_MASK, 0x0);
> +
> +	return IRQ_WAKE_THREAD;
> +}
> +
> +int rocket_job_init(struct rocket_core *core)
> +{
> +	int ret;
> +
> +	INIT_WORK(&core->reset.work, rocket_reset_work);
> +	spin_lock_init(&core->job_lock);
> +
> +	core->irq = platform_get_irq(to_platform_device(core->dev), 0);
> +	if (core->irq < 0)
> +		return core->irq;
> +
> +	ret = devm_request_threaded_irq(core->dev, core->irq,
> +					rocket_job_irq_handler,
> +					rocket_job_irq_handler_thread,
> +					IRQF_SHARED, KBUILD_MODNAME "-job",
> +					core);
> +	if (ret) {
> +		dev_err(core->dev, "failed to request job irq");
> +		return ret;
> +	}
> +
> +	core->reset.wq = alloc_ordered_workqueue("rocket-reset-%d", 0, core->index);
> +	if (!core->reset.wq)
> +		return -ENOMEM;
> +
> +	core->fence_context = dma_fence_context_alloc(1);
> +
> +	ret = drm_sched_init(&core->sched,
> +				&rocket_sched_ops, NULL,
> +				DRM_SCHED_PRIORITY_COUNT,
> +				1, 0,
> +				msecs_to_jiffies(JOB_TIMEOUT_MS),
> +				core->reset.wq,
> +				NULL, "rocket", core->dev);
> +	if (ret) {
> +		dev_err(core->dev, "Failed to create scheduler: %d.", ret);
> +		goto err_sched;
> +	}
> +
> +	return 0;
> +
> +err_sched:
> +	drm_sched_fini(&core->sched);
> +
> +	destroy_workqueue(core->reset.wq);
> +	return ret;
> +}
> +
> +void rocket_job_fini(struct rocket_core *core)
> +{
> +	drm_sched_fini(&core->sched);
> +
> +	cancel_work_sync(&core->reset.work);
> +	destroy_workqueue(core->reset.wq);
> +}
> +
> +int rocket_job_open(struct rocket_file_priv *rocket_priv)
> +{
> +	struct rocket_device *rdev = rocket_priv->rdev;
> +	struct drm_gpu_scheduler **scheds = kmalloc_array(rdev->num_cores, sizeof(scheds),
> +							  GFP_KERNEL);
> +	unsigned int core;
> +	int ret;
> +
> +	for (core = 0; core < rdev->num_cores; core++)
> +		scheds[core] = &rdev->cores[core].sched;
> +
> +	ret = drm_sched_entity_init(&rocket_priv->sched_entity,
> +				    DRM_SCHED_PRIORITY_NORMAL,
> +				    scheds,
> +				    rdev->num_cores, NULL);
> +	if (WARN_ON(ret))
> +		return ret;
> +
> +	return 0;
> +}
> +
> +void rocket_job_close(struct rocket_file_priv *rocket_priv)
> +{
> +	struct drm_sched_entity *entity = &rocket_priv->sched_entity;
> +
> +	kfree(entity->sched_list);
> +	drm_sched_entity_destroy(entity);
> +}
> +
> +int rocket_job_is_idle(struct rocket_core *core)
> +{
> +	/* If there are any jobs in this HW queue, we're not idle */
> +	if (atomic_read(&core->sched.credit_count))
> +		return false;
> +
> +	return true;
> +}
> +
> +static int rocket_ioctl_submit_job(struct drm_device *dev, struct drm_file *file,
> +				   struct drm_rocket_job *job)
> +{
> +	struct rocket_device *rdev = to_rocket_device(dev);
> +	struct rocket_file_priv *file_priv = file->driver_priv;
> +	struct rocket_job *rjob = NULL;
> +	int ret = 0;
> +
> +	if (job->task_count == 0)
> +		return -EINVAL;
> +
> +	rjob = kzalloc(sizeof(*rjob), GFP_KERNEL);
> +	if (!rjob)
> +		return -ENOMEM;
> +
> +	kref_init(&rjob->refcount);
> +
> +	rjob->rdev = rdev;
> +
> +	ret = drm_sched_job_init(&rjob->base,
> +				 &file_priv->sched_entity,
> +				 1, NULL);
> +	if (ret)
> +		goto out_put_job;
> +
> +	ret = rocket_copy_tasks(dev, file, job, rjob);
> +	if (ret)
> +		goto out_cleanup_job;
> +
> +	ret = drm_gem_objects_lookup(file,
> +				     (void __user *)(uintptr_t)job->in_bo_handles,
> +				     job->in_bo_handle_count, &rjob->in_bos);
> +	if (ret)
> +		goto out_cleanup_job;
> +
> +	rjob->in_bo_count = job->in_bo_handle_count;
> +
> +	ret = drm_gem_objects_lookup(file,
> +				     (void __user *)(uintptr_t)job->out_bo_handles,
> +				     job->out_bo_handle_count, &rjob->out_bos);
> +	if (ret)
> +		goto out_cleanup_job;
> +
> +	rjob->out_bo_count = job->out_bo_handle_count;
> +
> +	ret = rocket_job_push(rjob);
> +	if (ret)
> +		goto out_cleanup_job;
> +
> +out_cleanup_job:
> +	if (ret)
> +		drm_sched_job_cleanup(&rjob->base);
> +out_put_job:
> +	rocket_job_put(rjob);
> +
> +	return ret;
> +}
> +
> +int rocket_ioctl_submit(struct drm_device *dev, void *data, struct drm_file *file)
> +{
> +	struct drm_rocket_submit *args = data;
> +	struct drm_rocket_job *jobs;
> +	int ret = 0;
> +	unsigned int i = 0;
> +
> +	jobs = kvmalloc_array(args->job_count, sizeof(*jobs), GFP_KERNEL);
> +	if (!jobs) {
> +		DRM_DEBUG("Failed to allocate incoming job array\n");
> +		return -ENOMEM;
> +	}
> +
> +	if (copy_from_user(jobs,
> +			   (void __user *)(uintptr_t)args->jobs,
> +			   args->job_count * sizeof(*jobs))) {
> +		ret = -EFAULT;
> +		DRM_DEBUG("Failed to copy incoming job array\n");
> +		goto exit;
> +	}
> +
> +	for (i = 0; i < args->job_count; i++)
> +		rocket_ioctl_submit_job(dev, file, &jobs[i]);
> +
> +exit:
> +	kfree(jobs);
> +
> +	return ret;
> +}
> diff --git a/drivers/accel/rocket/rocket_job.h b/drivers/accel/rocket/rocket_job.h
> new file mode 100644
> index 0000000000000000000000000000000000000000..93fa1f988c72adb7a405acbf08c1c9b87d22f9c5
> --- /dev/null
> +++ b/drivers/accel/rocket/rocket_job.h
> @@ -0,0 +1,50 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +/* Copyright 2024 Tomeu Vizoso <tomeu@tomeuvizoso.net> */
> +
> +#ifndef __ROCKET_JOB_H__
> +#define __ROCKET_JOB_H__
> +
> +#include <drm/drm_drv.h>
> +#include <drm/gpu_scheduler.h>
> +
> +#include "rocket_core.h"
> +#include "rocket_drv.h"
> +
> +struct rocket_task {
> +	u64 regcmd;
> +	u32 regcmd_count;
> +};
> +
> +struct rocket_job {
> +	struct drm_sched_job base;
> +
> +	struct rocket_device *rdev;
> +
> +	struct drm_gem_object **in_bos;
> +	struct drm_gem_object **out_bos;
> +
> +	u32 in_bo_count;
> +	u32 out_bo_count;
> +
> +	struct rocket_task *tasks;
> +	u32 task_count;
> +	u32 next_task_idx;
> +
> +	/* Fence to be signaled by drm-sched once its done with the job */
> +	struct dma_fence *inference_done_fence;
> +
> +	/* Fence to be signaled by IRQ handler when the job is complete. */
> +	struct dma_fence *done_fence;
> +
> +	struct kref refcount;
> +};
> +
> +int rocket_ioctl_submit(struct drm_device *dev, void *data, struct drm_file *file);
> +
> +int rocket_job_init(struct rocket_core *core);
> +void rocket_job_fini(struct rocket_core *core);
> +int rocket_job_open(struct rocket_file_priv *rocket_priv);
> +void rocket_job_close(struct rocket_file_priv *rocket_priv);
> +int rocket_job_is_idle(struct rocket_core *core);
> +
> +#endif
> diff --git a/include/uapi/drm/rocket_accel.h b/include/uapi/drm/rocket_accel.h
> index 8338726a83c31b954608ca505cf78bcd70d3494b..eb886351134ebef62969b1e1182ccc174f88fe9d 100644
> --- a/include/uapi/drm/rocket_accel.h
> +++ b/include/uapi/drm/rocket_accel.h
> @@ -12,8 +12,10 @@ extern "C" {
>   #endif
>   
>   #define DRM_ROCKET_CREATE_BO			0x00
> +#define DRM_ROCKET_SUBMIT			0x01
>   
>   #define DRM_IOCTL_ROCKET_CREATE_BO		DRM_IOWR(DRM_COMMAND_BASE + DRM_ROCKET_CREATE_BO, struct drm_rocket_create_bo)
> +#define DRM_IOCTL_ROCKET_SUBMIT			DRM_IOW(DRM_COMMAND_BASE + DRM_ROCKET_SUBMIT, struct drm_rocket_submit)
>   
>   /**
>    * struct drm_rocket_create_bo - ioctl argument for creating Rocket BOs.
> @@ -36,6 +38,59 @@ struct drm_rocket_create_bo {
>   	__u64 offset;
>   };
>   
> +/**
> + * struct drm_rocket_task - A task to be run on the NPU
> + *
> + * A task is the smallest unit of work that can be run on the NPU.
> + */
> +struct drm_rocket_task {
> +	/** DMA address to NPU mapping of register command buffer */
> +	__u64 regcmd;
> +
> +	/** Number of commands in the register command buffer */
> +	__u32 regcmd_count;
> +};
> +
> +/**
> + * struct drm_rocket_job - A job to be run on the NPU
> + *
> + * The kernel will schedule the execution of this job taking into account its
> + * dependencies with other jobs. All tasks in the same job will be executed
> + * sequentially on the same core, to benefit from memory residency in SRAM.
> + */
> +struct drm_rocket_job {
> +	/** Pointer to an array of struct drm_rocket_task. */
> +	__u64 tasks;
> +
> +	/** Pointer to a u32 array of the BOs that are read by the job. */
> +	__u64 in_bo_handles;
> +
> +	/** Pointer to a u32 array of the BOs that are written to by the job. */
> +	__u64 out_bo_handles;
> +
> +	/** Number of tasks passed in. */
> +	__u32 task_count;
> +
> +	/** Number of input BO handles passed in (size is that times 4). */
> +	__u32 in_bo_handle_count;
> +
> +	/** Number of output BO handles passed in (size is that times 4). */
> +	__u32 out_bo_handle_count;
> +};
> +
> +/**
> + * struct drm_rocket_submit - ioctl argument for submitting commands to the NPU.
> + *
> + * The kernel will schedule the execution of these jobs in dependency order.
> + */
> +struct drm_rocket_submit {
> +	/** Pointer to an array of struct drm_rocket_job. */
> +	__u64 jobs;
> +
> +	/** Number of jobs passed in. */
> +	__u32 job_count;
> +};
> +
>   #if defined(__cplusplus)
>   }
>   #endif
>

-- 
--
Thomas Zimmermann
Graphics Driver Developer
SUSE Software Solutions Germany GmbH
Frankenstrasse 146, 90461 Nuernberg, Germany
GF: Ivo Totev, Andrew Myers, Andrew McDonald, Boudien Moerman
HRB 36809 (AG Nuernberg)



^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [PATCH v2 1/7] dt-bindings: npu: rockchip,rknn: Add bindings
  2025-02-25  7:55 ` [PATCH v2 1/7] dt-bindings: npu: rockchip,rknn: Add bindings Tomeu Vizoso
@ 2025-02-25 16:02   ` Rob Herring
  2025-05-14 16:26     ` Tomeu Vizoso
  2025-04-25 18:50   ` Nicolas Frattaroli
  1 sibling, 1 reply; 25+ messages in thread
From: Rob Herring @ 2025-02-25 16:02 UTC (permalink / raw)
  To: Tomeu Vizoso
  Cc: Krzysztof Kozlowski, Conor Dooley, Heiko Stuebner, Oded Gabbay,
	Jonathan Corbet, Maarten Lankhorst, Maxime Ripard,
	Thomas Zimmermann, David Airlie, Simona Vetter, Sumit Semwal,
	Christian König, Sebastian Reichel, Jeffrey Hugo, devicetree,
	linux-arm-kernel, linux-rockchip, linux-kernel, dri-devel,
	linux-doc, linux-media, linaro-mm-sig

On Tue, Feb 25, 2025 at 08:55:47AM +0100, Tomeu Vizoso wrote:
> Add the bindings for the Neural Processing Unit IP from Rockchip.
> 
> v2:
> - Adapt to new node structure (one node per core, each with its own
>   IOMMU)
> - Several misc. fixes from Sebastian Reichel
> 
> Signed-off-by: Tomeu Vizoso <tomeu@tomeuvizoso.net>
> Signed-off-by: Sebastian Reichel <sebastian.reichel@collabora.com>
> ---
>  .../bindings/npu/rockchip,rknn-core.yaml           | 152 +++++++++++++++++++++
>  1 file changed, 152 insertions(+)
> 
> diff --git a/Documentation/devicetree/bindings/npu/rockchip,rknn-core.yaml b/Documentation/devicetree/bindings/npu/rockchip,rknn-core.yaml
> new file mode 100644
> index 0000000000000000000000000000000000000000..e8d0afe4a7d1c4f166cf13a9f4aa7c1901362a3f
> --- /dev/null
> +++ b/Documentation/devicetree/bindings/npu/rockchip,rknn-core.yaml
> @@ -0,0 +1,152 @@
> +# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
> +%YAML 1.2
> +---
> +$id: http://devicetree.org/schemas/npu/rockchip,rknn-core.yaml#
> +$schema: http://devicetree.org/meta-schemas/core.yaml#
> +
> +title: Neural Processing Unit IP from Rockchip
> +
> +maintainers:
> +  - Tomeu Vizoso <tomeu@tomeuvizoso.net>
> +
> +description:
> +  Rockchip IP for accelerating inference of neural networks, based on NVIDIA's
> +  open source NVDLA IP.
> +
> +properties:
> +  $nodename:
> +    pattern: '^npu-core@[a-f0-9]+$'
> +
> +  compatible:
> +    oneOf:
> +      - items:
> +          - enum:
> +              - rockchip,rk3588-rknn-core-top
> +          - const: rockchip,rknn-core-top

Drop the fallbacks unless you have some evidence that the IP is the 
same across a lot of SoCs. If you don't, then 
rockchip,rk3588-rknn-core-top can be the fallback whenever there are 
more compatible SoCs.

Or if there's version/feature registers that otherwise make it 
discoverable, then a common compatible is fine.

> +      - items:
> +          - enum:
> +              - rockchip,rk3588-rknn-core
> +          - const: rockchip,rknn-core

I don't understand the difference between core and core-top. That needs 
to be explained in the top-level description.

> +
> +  reg:
> +    maxItems: 1
> +
> +  clocks:
> +    minItems: 2
> +    maxItems: 4
> +
> +  clock-names:
> +    items:
> +      - const: aclk
> +      - const: hclk
> +      - const: npu
> +      - const: pclk
> +    minItems: 2
> +
> +  interrupts:
> +    maxItems: 1
> +
> +  iommus:
> +    maxItems: 1
> +
> +  npu-supply: true
> +
> +  power-domains:
> +    maxItems: 1
> +
> +  resets:
> +    maxItems: 2
> +
> +  reset-names:
> +    items:
> +      - const: srst_a
> +      - const: srst_h
> +
> +  sram-supply: true

Group supply properties together

> +
> +required:
> +  - compatible
> +  - reg
> +  - clocks
> +  - clock-names
> +  - interrupts
> +  - iommus
> +  - npu-supply
> +  - power-domains
> +  - resets
> +  - reset-names
> +  - sram-supply
> +
> +allOf:
> +  - if:
> +      properties:
> +        compatible:
> +          contains:
> +            enum:
> +              - rockchip,rknn-core-top
> +    then:
> +      properties:
> +        clocks:
> +          minItems: 4
> +
> +        clock-names:
> +          minItems: 4
> +  - if:
> +      properties:
> +        compatible:
> +          contains:
> +            enum:
> +              - rockchip,rknn-core
> +    then:
> +      properties:
> +        clocks:
> +          maxItems: 2
> +        clock-names:
> +          maxItems: 2
> +
> +additionalProperties: false
> +
> +examples:
> +  - |
> +    #include <dt-bindings/clock/rockchip,rk3588-cru.h>
> +    #include <dt-bindings/interrupt-controller/irq.h>
> +    #include <dt-bindings/interrupt-controller/arm-gic.h>
> +    #include <dt-bindings/power/rk3588-power.h>
> +    #include <dt-bindings/reset/rockchip,rk3588-cru.h>
> +
> +    bus {
> +      #address-cells = <2>;
> +      #size-cells = <2>;
> +
> +      rknn_core_top: npu-core@fdab0000 {

npu@...

> +        compatible = "rockchip,rk3588-rknn-core-top", "rockchip,rknn-core-top";
> +        reg = <0x0 0xfdab0000 0x0 0x9000>;
> +        assigned-clocks = <&scmi_clk SCMI_CLK_NPU>;
> +        assigned-clock-rates = <200000000>;
> +        clocks = <&cru ACLK_NPU0>, <&cru HCLK_NPU0>,
> +                 <&scmi_clk SCMI_CLK_NPU>, <&cru PCLK_NPU_ROOT>;
> +        clock-names = "aclk", "hclk", "npu", "pclk";
> +        interrupts = <GIC_SPI 110 IRQ_TYPE_LEVEL_HIGH 0>;
> +        iommus = <&rknn_mmu_top>;
> +        npu-supply = <&vdd_npu_s0>;
> +        power-domains = <&power RK3588_PD_NPUTOP>;
> +        resets = <&cru SRST_A_RKNN0>, <&cru SRST_H_RKNN0>;
> +        reset-names = "srst_a", "srst_h";
> +        sram-supply = <&vdd_npu_mem_s0>;
> +      };
> +
> +      rknn_core_1: npu-core@fdac0000 {
> +        compatible = "rockchip,rk3588-rknn-core", "rockchip,rknn-core";
> +        reg = <0x0 0xfdac0000 0x0 0x9000>;
> +        clocks = <&cru ACLK_NPU1>, <&cru HCLK_NPU1>;
> +        clock-names = "aclk", "hclk";
> +        interrupts = <GIC_SPI 111 IRQ_TYPE_LEVEL_HIGH 0>;
> +        iommus = <&rknn_mmu_1>;
> +        npu-supply = <&vdd_npu_s0>;
> +        power-domains = <&power RK3588_PD_NPU1>;
> +        resets = <&cru SRST_A_RKNN1>, <&cru SRST_H_RKNN1>;
> +        reset-names = "srst_a", "srst_h";
> +        sram-supply = <&vdd_npu_mem_s0>;
> +      };
> +    };
> +...
> 
> -- 
> 2.48.1
> 


^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [PATCH v2 4/7] accel/rocket: Add a new driver for Rockchip's NPU
       [not found] ` <20250225-6-10-rocket-v2-4-d4dbcfafc141@tomeuvizoso.net>
@ 2025-03-21 15:48   ` Jeff Hugo
  2025-04-25 18:22   ` Nicolas Frattaroli
  2025-04-29  9:39   ` Nicolas Frattaroli
  2 siblings, 0 replies; 25+ messages in thread
From: Jeff Hugo @ 2025-03-21 15:48 UTC (permalink / raw)
  To: Tomeu Vizoso, Rob Herring, Krzysztof Kozlowski, Conor Dooley,
	Heiko Stuebner, Oded Gabbay, Jonathan Corbet, Maarten Lankhorst,
	Maxime Ripard, Thomas Zimmermann, David Airlie, Simona Vetter,
	Sumit Semwal, Christian König, Sebastian Reichel
  Cc: devicetree, linux-arm-kernel, linux-rockchip, linux-kernel,
	dri-devel, linux-doc, linux-media, linaro-mm-sig

On 2/25/2025 12:55 AM, Tomeu Vizoso wrote:
>   
> diff --git a/Documentation/accel/rocket/index.rst b/Documentation/accel/rocket/index.rst
> new file mode 100644
> index 0000000000000000000000000000000000000000..ad33194dec0325d0dab362768fd349e8dc286970
> --- /dev/null
> +++ b/Documentation/accel/rocket/index.rst
> @@ -0,0 +1,19 @@
> +.. SPDX-License-Identifier: GPL-2.0-only
> +
> +=====================================
> + accel/rocket Rockchip NPU driver
> +=====================================
> +
> +The accel/rocket driver supports the Neural Processing Units (NPUs) inside some Rockchip SoCs such as the RK3588. Rockchip calls it RKNN and sometimes RKNPU.
> +
> +This NPU is closely based on the NVDLA IP released by NVIDIA as open hardware in 2018, along with open source kernel and userspace drivers.
> +
> +The frontend unit in Rockchip's NPU though is completely different from that in the open source IP, so this kernel driver is specific to Rockchip's version.
> +
> +The hardware is described in chapter 36 in the RK3588 TRM.
> +
> +This driver just powers the hardware on and off, allocates and maps buffers to the device and submits jobs to the frontend unit. Everything else is done in userspace, as a Gallium driver that is part of the Mesa3D project.

Nit: name the specific Gallium driver here?

> diff --git a/drivers/accel/rocket/Kconfig b/drivers/accel/rocket/Kconfig
> new file mode 100644
> index 0000000000000000000000000000000000000000..83a401129ab2dc2847ccc30c6364e8802f43648d
> --- /dev/null
> +++ b/drivers/accel/rocket/Kconfig
> @@ -0,0 +1,25 @@
> +# SPDX-License-Identifier: GPL-2.0

I'm curious, is there a specific reason for "GPL-2.0"?  This tag is 
depreciated and everywhere else in the accel subsystem "GPL-2.0-only" is 
used.

Same comment for the Makefile and everything else in this patch.

> +
> +config DRM_ACCEL_ROCKET
> +       tristate "Rocket (support for Rockchip NPUs)"
> +       depends on DRM
> +       depends on ARM64 || COMPILE_TEST
> +       depends on MMU
> +       select DRM_SCHED
> +       select IOMMU_SUPPORT
> +       select IOMMU_IO_PGTABLE_LPAE
> +       select DRM_GEM_SHMEM_HELPER
> +       help
> +	  Choose this option if you have a Rockchip SoC that contains a
> +	  compatible Neural Processing Unit (NPU), such as the RK3588. Called by
> +	  Rockchip either RKNN or RKNPU, it accelerates inference of neural
> +	  networks.
> +
> +	  The interface exposed to userspace is described in
> +	  include/uapi/drm/rocket_accel.h and is used by the userspace driver in
> +	  Mesa3D.

Nit: name the specific Mesa driver here?

> +
> +	  If unsure, say N.
> +
> +	  To compile this driver as a module, choose M here: the
> +	  module will be called rocket.
> diff --git a/drivers/accel/rocket/rocket_core.c b/drivers/accel/rocket/rocket_core.c
> new file mode 100644
> index 0000000000000000000000000000000000000000..09d966c826b5b1090a18cb24b3aa4aba286a12d4
> --- /dev/null
> +++ b/drivers/accel/rocket/rocket_core.c
> @@ -0,0 +1,71 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/* Copyright 2024 Tomeu Vizoso <tomeu@tomeuvizoso.net> */
> +
> +#include <linux/clk.h>
> +#include <linux/platform_device.h>
> +#include <linux/pm_runtime.h>

This list of includes seems insufficient. You use PTR_ERR but don't have 
the include for it (linux/err.h). Same for dev_err/dev_info. I'm 
guessing this comment applies elsewhere.

> +#include "rocket_core.h"
> +#include "rocket_registers.h"
> +
> +static int rocket_clk_init(struct rocket_core *core)
> +{
> +	struct device *dev = core->dev;
> +	int err;
> +
> +	core->a_clk = devm_clk_get(dev, "aclk");
> +	if (IS_ERR(core->a_clk)) {
> +		err = PTR_ERR(core->a_clk);
> +		dev_err(dev, "devm_clk_get_enabled failed %d for core %d\n", err, core->index);

Do you think it would be useful to mention "aclk" in the message? Seems 
like if this or the hclk get fails, you won't know just from the kernel log.

> +		return err;
> +	}
> +
> +	core->h_clk = devm_clk_get(dev, "hclk");
> +	if (IS_ERR(core->h_clk)) {
> +		err = PTR_ERR(core->h_clk);
> +		dev_err(dev, "devm_clk_get_enabled failed %d for core %d\n", err, core->index);
> +		clk_disable_unprepare(core->a_clk);
> +		return err;
> +	}
> +
> +	return 0;
> +}
> +
> +int rocket_core_init(struct rocket_core *core)
> +{
> +	struct device *dev = core->dev;
> +	struct platform_device *pdev = to_platform_device(dev);
> +	uint32_t version;
> +	int err = 0;
> +
> +	err = rocket_clk_init(core);
> +	if (err) {
> +		dev_err(dev, "clk init failed %d\n", err);

This feels redundant since rocket_clk_init() already printed an error 
message with more details.

> +		return err;
> +	}
> +
> +	core->iomem = devm_platform_ioremap_resource(pdev, 0);
> +	if (IS_ERR(core->iomem))
> +		return PTR_ERR(core->iomem);
> +
> +	pm_runtime_use_autosuspend(dev);
> +	pm_runtime_set_autosuspend_delay(dev, 50); /* ~3 frames */

Frames? That feels like a rendering term, but this driver is not doing 
rendering, right? Is there a better way to describe this?

> +	pm_runtime_enable(dev);
> +
> +	err = pm_runtime_get_sync(dev);
> +
> +	version = rocket_read(core, REG_PC_VERSION);
> +	version += rocket_read(core, REG_PC_VERSION_NUM) & 0xffff;
> +
> +	pm_runtime_mark_last_busy(dev);
> +	pm_runtime_put_autosuspend(dev);
> +
> +	dev_info(dev, "Rockchip NPU core %d version: %d\n", core->index, version);
> +
> +	return 0;
> +}
> +
> diff --git a/drivers/accel/rocket/rocket_device.c b/drivers/accel/rocket/rocket_device.c
> new file mode 100644
> index 0000000000000000000000000000000000000000..ce3b533f15c1011d8a7a23dd8132e907cc334c58
> --- /dev/null
> +++ b/drivers/accel/rocket/rocket_device.c
> @@ -0,0 +1,29 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/* Copyright 2024 Tomeu Vizoso <tomeu@tomeuvizoso.net> */
> +
> +#include <linux/clk.h>
> +
> +#include "rocket_device.h"
> +
> +int rocket_device_init(struct rocket_device *rdev)
> +{
> +	struct device *dev = rdev->cores[0].dev;
> +	int err;
> +
> +	rdev->clk_npu = devm_clk_get(dev, "npu");
> +	rdev->pclk = devm_clk_get(dev, "pclk");

Are these optional? It would appear that error handling is missing.

> +
> +	/* Initialize core 0 (top) */
> +	err = rocket_core_init(&rdev->cores[0]);
> +	if (err) {
> +		rocket_device_fini(rdev);
> +		return err;
> +	}
> +
> +	return 0;
> +}
> +
> +static void rocket_drm_unbind(struct device *dev)
> +{
> +	struct rocket_device *rdev = dev_get_drvdata(dev);
> +	struct drm_device *ddev = &rdev->ddev;
> +
> +	drm_dev_unregister(ddev);

You allocated this with devm, shouldn't that make this unnecessary?

> +
> +	component_unbind_all(dev, rdev);
> +
> +	rocket_device_fini(rdev);
> +}


^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [PATCH v2 5/7] accel/rocket: Add IOCTL for BO creation
  2025-02-25  7:55 ` [PATCH v2 5/7] accel/rocket: Add IOCTL for BO creation Tomeu Vizoso
  2025-02-25  8:35   ` Thomas Zimmermann
@ 2025-03-21 15:56   ` Jeffrey Hugo
  1 sibling, 0 replies; 25+ messages in thread
From: Jeffrey Hugo @ 2025-03-21 15:56 UTC (permalink / raw)
  To: Tomeu Vizoso, Rob Herring, Krzysztof Kozlowski, Conor Dooley,
	Heiko Stuebner, Oded Gabbay, Jonathan Corbet, Maarten Lankhorst,
	Maxime Ripard, Thomas Zimmermann, David Airlie, Simona Vetter,
	Sumit Semwal, Christian König, Sebastian Reichel
  Cc: devicetree, linux-arm-kernel, linux-rockchip, linux-kernel,
	dri-devel, linux-doc, linux-media, linaro-mm-sig

On 2/25/2025 12:55 AM, Tomeu Vizoso wrote:
> +/**
> + * rocket_gem_create_object - Implementation of driver->gem_create_object.
> + * @dev: DRM device
> + * @size: Size in bytes of the memory the object will reference
> + *
> + * This lets the GEM helpers allocate object structs for us, and keep
> + * our BO stats correct.
> + */

I would expect that this would throw a warning when making the kernel 
documentation since you are not describing the return value.




^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [PATCH v2 6/7] accel/rocket: Add job submission IOCTL
  2025-02-25  7:55 ` [PATCH v2 6/7] accel/rocket: Add job submission IOCTL Tomeu Vizoso
  2025-02-25  8:44   ` Thomas Zimmermann
@ 2025-03-21 16:09   ` Jeff Hugo
  2025-04-29 10:05   ` Nicolas Frattaroli
  2 siblings, 0 replies; 25+ messages in thread
From: Jeff Hugo @ 2025-03-21 16:09 UTC (permalink / raw)
  To: Tomeu Vizoso, Rob Herring, Krzysztof Kozlowski, Conor Dooley,
	Heiko Stuebner, Oded Gabbay, Jonathan Corbet, Maarten Lankhorst,
	Maxime Ripard, Thomas Zimmermann, David Airlie, Simona Vetter,
	Sumit Semwal, Christian König, Sebastian Reichel
  Cc: devicetree, linux-arm-kernel, linux-rockchip, linux-kernel,
	dri-devel, linux-doc, linux-media, linaro-mm-sig

On 2/25/2025 12:55 AM, Tomeu Vizoso wrote:
> +/**
> + * struct drm_rocket_task - A task to be run on the NPU
> + *
> + * A task is the smallest unit of work that can be run on the NPU.
> + */
> +struct drm_rocket_task {
> +	/** DMA address to NPU mapping of register command buffer */
> +	__u64 regcmd;
> +
> +	/** Number of commands in the register command buffer */
> +	__u32 regcmd_count;
> +};
> +
> +/**
> + * struct drm_rocket_job - A job to be run on the NPU
> + *
> + * The kernel will schedule the execution of this job taking into account its
> + * dependencies with other jobs. All tasks in the same job will be executed
> + * sequentially on the same core, to benefit from memory residency in SRAM.
> + */
> +struct drm_rocket_job {
> +	/** Pointer to an array of struct drm_rocket_task. */
> +	__u64 tasks;
> +
> +	/** Pointer to a u32 array of the BOs that are read by the job. */
> +	__u64 in_bo_handles;
> +
> +	/** Pointer to a u32 array of the BOs that are written to by the job. */
> +	__u64 out_bo_handles;
> +
> +	/** Number of tasks passed in. */
> +	__u32 task_count;
> +
> +	/** Number of input BO handles passed in (size is that times 4). */
> +	__u32 in_bo_handle_count;
> +
> +	/** Number of output BO handles passed in (size is that times 4). */
> +	__u32 out_bo_handle_count;
> +};
> +
> +/**
> + * struct drm_rocket_submit - ioctl argument for submitting commands to the NPU.
> + *
> + * The kernel will schedule the execution of these jobs in dependency order.
> + */
> +struct drm_rocket_submit {
> +	/** Pointer to an array of struct drm_rocket_job. */
> +	__u64 jobs;
> +
> +	/** Number of jobs passed in. */
> +	__u32 job_count;
> +};

These 3 structs will be different sizes in 32-bit env vs 64-bit env. Yes 
the driver depends on ARM64, but compat (32-bit userspace with 64-bit 
kernel) is still possible. They should all be padded out to 64-bit 
alignment.  When you do that, you should specify that the padding must 
be zero, and check for that in the driver so that you have the option to 
use the padding in the future.


^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [PATCH v2 7/7] accel/rocket: Add IOCTLs for synchronizing memory accesses
  2025-02-25  7:55 ` [PATCH v2 7/7] accel/rocket: Add IOCTLs for synchronizing memory accesses Tomeu Vizoso
@ 2025-03-21 16:15   ` Jeffrey Hugo
  0 siblings, 0 replies; 25+ messages in thread
From: Jeffrey Hugo @ 2025-03-21 16:15 UTC (permalink / raw)
  To: Tomeu Vizoso, Rob Herring, Krzysztof Kozlowski, Conor Dooley,
	Heiko Stuebner, Oded Gabbay, Jonathan Corbet, Maarten Lankhorst,
	Maxime Ripard, Thomas Zimmermann, David Airlie, Simona Vetter,
	Sumit Semwal, Christian König, Sebastian Reichel
  Cc: devicetree, linux-arm-kernel, linux-rockchip, linux-kernel,
	dri-devel, linux-doc, linux-media, linaro-mm-sig

On 2/25/2025 12:55 AM, Tomeu Vizoso wrote:
> +int rocket_ioctl_fini_bo(struct drm_device *dev, void *data, struct drm_file *file)
> +{
> +	struct drm_rocket_fini_bo *args = data;
> +	struct drm_gem_object *gem_obj;
> +	struct rocket_gem_object *rkt_obj;
> +	struct drm_gem_shmem_object *shmem_obj;
> +	struct rocket_device *rdev = to_rocket_device(dev);
> +
> +	gem_obj = drm_gem_object_lookup(file, args->handle);
> +	if (!gem_obj)
> +		return -ENOENT;
> +
> +	rkt_obj = to_rocket_bo(gem_obj);
> +	shmem_obj = &rkt_obj->base;
> +
> +	WARN_ON(rkt_obj->last_cpu_prep_op == 0);
> +
> +	for (unsigned int core = 1; core < rdev->num_cores; core++) {
> +		dma_sync_sgtable_for_device(rdev->cores[core].dev, shmem_obj->sgt,
> +					    rocket_op_to_dma_dir(rkt_obj->last_cpu_prep_op));
> +	}
> +
> +	rkt_obj->last_cpu_prep_op = 0;
> +
> +	drm_gem_object_put(gem_obj);
> +
> +	return 0;
> +}

flags must be 0, and you must check that here. You do not appear to be 
doing that. Otherwise, userspace may put a value in flags, which is 
ignored now, but later when you define flags for a purpose, existing 
userspace will be broken - a uapi violation.


^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [PATCH v2 4/7] accel/rocket: Add a new driver for Rockchip's NPU
       [not found] ` <20250225-6-10-rocket-v2-4-d4dbcfafc141@tomeuvizoso.net>
  2025-03-21 15:48   ` [PATCH v2 4/7] accel/rocket: Add a new driver for Rockchip's NPU Jeff Hugo
@ 2025-04-25 18:22   ` Nicolas Frattaroli
  2025-05-16  9:15     ` Tomeu Vizoso
  2025-04-29  9:39   ` Nicolas Frattaroli
  2 siblings, 1 reply; 25+ messages in thread
From: Nicolas Frattaroli @ 2025-04-25 18:22 UTC (permalink / raw)
  To: Rob Herring, Krzysztof Kozlowski, Conor Dooley, Heiko Stuebner,
	Oded Gabbay, Jonathan Corbet, Maarten Lankhorst, Maxime Ripard,
	Thomas Zimmermann, David Airlie, Simona Vetter, Sumit Semwal,
	Christian König, Sebastian Reichel, Jeffrey Hugo,
	linux-rockchip, Tomeu Vizoso
  Cc: devicetree, linux-arm-kernel, linux-kernel, dri-devel, linux-doc,
	linux-media, linaro-mm-sig

On Tuesday, 25 February 2025 08:55:50 Central European Summer Time Tomeu Vizoso wrote:
> This initial version supports the NPU as shipped in the RK3588 SoC and
> described in the first part of its TRM, in Chapter 36.
> 
> This NPU contains 3 independent cores that the driver can submit jobs
> to.
> 
> This commit adds just hardware initialization and power management.
> 
> v2:
> - Split cores and IOMMUs as independent devices (Sebastian Reichel)
> - Add some documentation (Jeffrey Hugo)
> - Be more explicit in the Kconfig documentation (Jeffrey Hugo)
> - Remove resets, as these haven't been found useful so far (Zenghui Yu)
> - Repack structs (Jeffrey Hugo)
> - Use DEFINE_DRM_ACCEL_FOPS (Jeffrey Hugo)
> - Use devm_drm_dev_alloc (Jeffrey Hugo)
> - Use probe log helper (Jeffrey Hugo)
> - Introduce UABI header in a later patch (Jeffrey Hugo)
> 
> Signed-off-by: Tomeu Vizoso <tomeu@tomeuvizoso.net>
> ---
>  Documentation/accel/index.rst           |    1 +
>  Documentation/accel/rocket/index.rst    |   19 +
>  MAINTAINERS                             |    8 +
>  drivers/accel/Kconfig                   |    1 +
>  drivers/accel/Makefile                  |    1 +
>  drivers/accel/rocket/Kconfig            |   25 +
>  drivers/accel/rocket/Makefile           |    8 +
>  drivers/accel/rocket/rocket_core.c      |   71 +
>  drivers/accel/rocket/rocket_core.h      |   29 +
>  drivers/accel/rocket/rocket_device.c    |   29 +
>  drivers/accel/rocket/rocket_device.h    |   29 +
>  drivers/accel/rocket/rocket_drv.c       |  273 ++
>  drivers/accel/rocket/rocket_drv.h       |   13 +
>  drivers/accel/rocket/rocket_registers.h | 4425 +++++++++++++++++++++++++++++++
>  14 files changed, 4932 insertions(+)
> 
> [...]
> diff --git a/drivers/accel/rocket/rocket_drv.c b/drivers/accel/rocket/rocket_drv.c
> new file mode 100644
> index 0000000000000000000000000000000000000000..c22d965f20f1239a36b1d823d5fe5f372713555d
> --- /dev/null
> +++ b/drivers/accel/rocket/rocket_drv.c
> @@ -0,0 +1,273 @@
> [...]
> +static int rocket_probe(struct platform_device *pdev)
> +{
> +	struct component_match *match = NULL;
> +	struct device_node *core_node;
> +
> +	if (fwnode_device_is_compatible(pdev->dev.fwnode, "rockchip,rk3588-rknn-core"))
> +		return component_add(&pdev->dev, &rocket_core_ops);
> +
> +	for_each_compatible_node(core_node, NULL, "rockchip,rk3588-rknn-core") {
> +		if (!of_device_is_available(core_node))
> +			continue;
> +
> +		drm_of_component_match_add(&pdev->dev, &match,
> +					   component_compare_of, core_node);
> +	}
> +
> +	return component_master_add_with_match(&pdev->dev, &rocket_drm_ops, match);
> +}

Hi Tomeu,

something I've noticed while playing with this: currently, it doesn't seem like
it's possible to support 1-core NPUs. rknn-core-top is a real core, but if no
rknn-core is enabled beside it, it'll call component_master_add_with_match with
match being NULL. This causes a kernel Oops.

I'm not sure what the proper fix is, since the component API doesn't seem to
really have a consideration for a master with no other components.

I ran into this when I was trying to debug why I get job timeouts followed by
a full SoC lock-up on RK3576 by running with only one of the two cores enabled.

As an aside note, my throwaway rocket-on-RK3576-hacking-branch is at [1] and
contains some changes you may want to consider for v3, e.g. [2] and [3]+[4]. In
[4], specifically the `domain-supply` part which means the NPU regulators don't
have to be always-on. Though feel free to pull in my entire ROCK 5B enablement
patch.

Kind regards,
Nicolas Frattaroli, who discovered that his cat is apparently 5% space heater
according to mobilenet while playing with this patch series.

[1]: https://gitlab.collabora.com/fratti/linux/-/commits/tomeu-npu?ref_type=heads
[2]: https://gitlab.collabora.com/fratti/linux/-/commit/73aba31a00b34c254be575b524da568e115d985d
[3]: https://gitlab.collabora.com/fratti/linux/-/commit/bd3a7bf5054c54c2915a9dc0396730d0f24b3b7c
[4]: https://gitlab.collabora.com/fratti/linux/-/commit/5da44d61b09c345309f76159574d447d071c295d





^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [PATCH v2 1/7] dt-bindings: npu: rockchip,rknn: Add bindings
  2025-02-25  7:55 ` [PATCH v2 1/7] dt-bindings: npu: rockchip,rknn: Add bindings Tomeu Vizoso
  2025-02-25 16:02   ` Rob Herring
@ 2025-04-25 18:50   ` Nicolas Frattaroli
  2025-05-14 15:18     ` Tomeu Vizoso
  1 sibling, 1 reply; 25+ messages in thread
From: Nicolas Frattaroli @ 2025-04-25 18:50 UTC (permalink / raw)
  To: Rob Herring, Krzysztof Kozlowski, Conor Dooley, Heiko Stuebner,
	Oded Gabbay, Jonathan Corbet, Maarten Lankhorst, Maxime Ripard,
	Thomas Zimmermann, David Airlie, Simona Vetter, Sumit Semwal,
	Christian König, Sebastian Reichel, Jeffrey Hugo,
	linux-rockchip, Tomeu Vizoso
  Cc: devicetree, linux-arm-kernel, linux-rockchip, linux-kernel,
	dri-devel, linux-doc, linux-media, linaro-mm-sig, Tomeu Vizoso

On Tuesday, 25 February 2025 08:55:47 Central European Summer Time Tomeu Vizoso wrote:
> Add the bindings for the Neural Processing Unit IP from Rockchip.
> 
> v2:
> - Adapt to new node structure (one node per core, each with its own
>   IOMMU)
> - Several misc. fixes from Sebastian Reichel
> 
> Signed-off-by: Tomeu Vizoso <tomeu@tomeuvizoso.net>
> Signed-off-by: Sebastian Reichel <sebastian.reichel@collabora.com>
> ---
>  .../bindings/npu/rockchip,rknn-core.yaml           | 152 +++++++++++++++++++++
>  1 file changed, 152 insertions(+)
> 
> diff --git a/Documentation/devicetree/bindings/npu/rockchip,rknn-core.yaml b/Documentation/devicetree/bindings/npu/rockchip,rknn-core.yaml
> new file mode 100644
> index 0000000000000000000000000000000000000000..e8d0afe4a7d1c4f166cf13a9f4aa7c1901362a3f
> --- /dev/null
> +++ b/Documentation/devicetree/bindings/npu/rockchip,rknn-core.yaml
> @@ -0,0 +1,152 @@
> +# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
> +%YAML 1.2
> +---
> +$id: http://devicetree.org/schemas/npu/rockchip,rknn-core.yaml#
> +$schema: http://devicetree.org/meta-schemas/core.yaml#
> +
> +title: Neural Processing Unit IP from Rockchip
> +
> +maintainers:
> +  - Tomeu Vizoso <tomeu@tomeuvizoso.net>
> +
> +description:
> +  Rockchip IP for accelerating inference of neural networks, based on NVIDIA's
> +  open source NVDLA IP.
> +
> +properties:
> +  $nodename:
> +    pattern: '^npu-core@[a-f0-9]+$'
> +
> +  compatible:
> +    oneOf:
> +      - items:
> +          - enum:
> +              - rockchip,rk3588-rknn-core-top
> +          - const: rockchip,rknn-core-top
> +      - items:
> +          - enum:
> +              - rockchip,rk3588-rknn-core
> +          - const: rockchip,rknn-core
> +
> +  reg:
> +    maxItems: 1

Hi Tomeu,

as you probably know, RK3576 has quite a similar NPU. This is why I'm currently
poking at this patch series. One of the differences I ran into was that the
IOMMU of each NPU core now sits within the reg address space range of the core
as described by the single reg item binding and assumed by the driver.

This seemed weird to me at first, since I would've guessed the cores would be
exactly the same, but I noticed that they kind of still are; the RK3588's NPU
also has a "hole" between 0x2000 and 0x2fff on each core, which is where RK3576
put its IOMMU.

This is some information I gleaned from the RK3588 TRM, specifically section
36.4.1 "Internal Address Mapping", which shows where each "part" of the NPU core
has its address space.

Right now we just represent this as a single reg item per core. I've played
with the idea of splitting this up into the distinct ranges the TRM lists and
giving each a reg-names entry, but this would require a major rework of the
driver from what I can tell, including to the auto-generated register header.

For now, my hack on RK3576 is to just ioremap the range defined by resource 
start to resource end inside rocket manually if I get -EBUSY trying to ioremap 
the resource proper. This is quite an ugly hack though, it means the IOMMU node 
still has its address overlapping with another node in the DT, and it also means 
we have an unavoidable error message printed into the kernel log. This is also
what the vendor driver seems to do.

What do you reckon is a reg setup in the binding that is both reasonable to
implement in the driver while accurately describing the hardware?

The RK3568, which uses a similar NPU design has the IOMMU at an offset of 0xb000 
from the core's start of PC, so probably after any core specifics but before the 
global registers if I hazard a guess.

For those without access to the TRM: splitting this up into multiple reg items
per core precisely the way the TRM does it would result in no less than 10 reg
items on RK3588, if I count correctly.

Kind regards,
Nicolas Frattaroli




^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [PATCH v2 4/7] accel/rocket: Add a new driver for Rockchip's NPU
       [not found] ` <20250225-6-10-rocket-v2-4-d4dbcfafc141@tomeuvizoso.net>
  2025-03-21 15:48   ` [PATCH v2 4/7] accel/rocket: Add a new driver for Rockchip's NPU Jeff Hugo
  2025-04-25 18:22   ` Nicolas Frattaroli
@ 2025-04-29  9:39   ` Nicolas Frattaroli
  2 siblings, 0 replies; 25+ messages in thread
From: Nicolas Frattaroli @ 2025-04-29  9:39 UTC (permalink / raw)
  To: Rob Herring, Krzysztof Kozlowski, Conor Dooley, Heiko Stuebner,
	Oded Gabbay, Jonathan Corbet, Maarten Lankhorst, Maxime Ripard,
	Thomas Zimmermann, David Airlie, Simona Vetter, Sumit Semwal,
	Christian König, Sebastian Reichel, Jeffrey Hugo,
	linux-rockchip
  Cc: devicetree, linux-arm-kernel, linux-rockchip, linux-kernel,
	dri-devel, linux-doc, linux-media, linaro-mm-sig, Tomeu Vizoso,
	Tomeu Vizoso

On Tuesday, 25 February 2025 08:55:50 Central European Summer Time Tomeu Vizoso wrote:
> This initial version supports the NPU as shipped in the RK3588 SoC and
> described in the first part of its TRM, in Chapter 36.
> 
> This NPU contains 3 independent cores that the driver can submit jobs
> to.
> 
> This commit adds just hardware initialization and power management.
> 
> v2:
> - Split cores and IOMMUs as independent devices (Sebastian Reichel)
> - Add some documentation (Jeffrey Hugo)
> - Be more explicit in the Kconfig documentation (Jeffrey Hugo)
> - Remove resets, as these haven't been found useful so far (Zenghui Yu)
> - Repack structs (Jeffrey Hugo)
> - Use DEFINE_DRM_ACCEL_FOPS (Jeffrey Hugo)
> - Use devm_drm_dev_alloc (Jeffrey Hugo)
> - Use probe log helper (Jeffrey Hugo)
> - Introduce UABI header in a later patch (Jeffrey Hugo)
> 
> Signed-off-by: Tomeu Vizoso <tomeu@tomeuvizoso.net>
> ---
>  Documentation/accel/index.rst           |    1 +
>  Documentation/accel/rocket/index.rst    |   19 +
>  MAINTAINERS                             |    8 +
>  drivers/accel/Kconfig                   |    1 +
>  drivers/accel/Makefile                  |    1 +
>  drivers/accel/rocket/Kconfig            |   25 +
>  drivers/accel/rocket/Makefile           |    8 +
>  drivers/accel/rocket/rocket_core.c      |   71 +
>  drivers/accel/rocket/rocket_core.h      |   29 +
>  drivers/accel/rocket/rocket_device.c    |   29 +
>  drivers/accel/rocket/rocket_device.h    |   29 +
>  drivers/accel/rocket/rocket_drv.c       |  273 ++
>  drivers/accel/rocket/rocket_drv.h       |   13 +
>  drivers/accel/rocket/rocket_registers.h | 4425 +++++++++++++++++++++++++++++++
>  14 files changed, 4932 insertions(+)

Hi Tomeu,

I've got some more comments on the driver, this time specific to some power
management stuff I've noticed.

> +++ b/drivers/accel/rocket/rocket_core.c
>
> [...]
>
> +int rocket_core_init(struct rocket_core *core)
> +{
> +	struct device *dev = core->dev;
> +	struct platform_device *pdev = to_platform_device(dev);
> +	uint32_t version;
> +	int err = 0;
> +
> +	err = rocket_clk_init(core);
> +	if (err) {
> +		dev_err(dev, "clk init failed %d\n", err);
> +		return err;
> +	}
> +
> +	core->iomem = devm_platform_ioremap_resource(pdev, 0);
> +	if (IS_ERR(core->iomem))
> +		return PTR_ERR(core->iomem);
> +
> +	pm_runtime_use_autosuspend(dev);

We're enabling autosuspend here, but don't use
  pm_runtime_dont_use_autosuspend(core->dev);
in rocket_core_fini. dont_use_autosuspend is only handled for us automagically
on driver unload if we use devm wrappers for pm_runtime_enable, so this is most
definitely an oversight.

> +	pm_runtime_set_autosuspend_delay(dev, 50); /* ~3 frames */

The 50 = 3 frames thing here seems suspect. 3 frames of what, and why? If it's
3 frames of something the hardware processed, then doesn't that depend on the
specific hardware and its clock rate, which may change? Plus, shouldn't auto-
suspend be blocked anyway when there's still a job processing? The RK3588 TRM
doesn't make a mention of "frame" in the RKNN section, so if this refers to a
specific workload then that will be another parameter.

> +	pm_runtime_enable(dev);
> +
> +	err = pm_runtime_get_sync(dev);

No error checking done here, so if a clock fails to enable, we just hang on the
read later if it was the register's clock. Though that particular error case is
never passed out from the runtime resume callback, which should probably be
fixed as well. 

> +
> +	version = rocket_read(core, REG_PC_VERSION);
> +	version += rocket_read(core, REG_PC_VERSION_NUM) & 0xffff;
> +
> +	pm_runtime_mark_last_busy(dev);
> +	pm_runtime_put_autosuspend(dev);
> +
> +	dev_info(dev, "Rockchip NPU core %d version: %d\n", core->index, version);
> +
> +	return 0;
> +}
> +
> +void rocket_core_fini(struct rocket_core *core)
> +{
> +	pm_runtime_disable(core->dev);
> +}
>
> [...]
>
> diff --git a/drivers/accel/rocket/rocket_drv.c b/drivers/accel/rocket/rocket_drv.c
> new file mode 100644
> index 0000000000000000000000000000000000000000..c22d965f20f1239a36b1d823d5fe5f372713555d
> --- /dev/null
> +++ b/drivers/accel/rocket/rocket_drv.c
>
> [...]
>
> +static int rocket_device_runtime_resume(struct device *dev)
> +{
> +	struct rocket_device *rdev = dev_get_drvdata(dev);
> +
> +	for (unsigned int core = 0; core < rdev->num_cores; core++) {
> +		if (dev != rdev->cores[core].dev)
> +			continue;
> +
> +		if (core == 0) {
> +			clk_prepare_enable(rdev->clk_npu);
> +			clk_prepare_enable(rdev->pclk);
> +		}
> +
> +		clk_prepare_enable(rdev->cores[core].a_clk);
> +		clk_prepare_enable(rdev->cores[core].h_clk);
> +	}
> +
> +	return 0;
> +}

Here is where we will probably want to check the return code of each
clk_prepare_enable, and potentially do quite ugly "always return hardware to
known state" handling if any of them fails to enable, i.e. unwind the enables
in the function exit before returning the error code.

Seems pointless because if a clock fails to enable it's a nuclear meltdown type
situation anyway, but especially when people are writing DTSes or porting things
to new SoCs, it can be nicer to have the driver fail rather than the whole SoC.

I do wish we had cleanup.h helpers for clock enables though...

> +
> +static int rocket_device_runtime_suspend(struct device *dev)
> +{
> +	struct rocket_device *rdev = dev_get_drvdata(dev);
> +
> +	for (unsigned int core = 0; core < rdev->num_cores; core++) {
> +		if (dev != rdev->cores[core].dev)
> +			continue;
> +
> +		clk_disable_unprepare(rdev->cores[core].a_clk);
> +		clk_disable_unprepare(rdev->cores[core].h_clk);
> +
> +		if (core == 0) {
> +			clk_disable_unprepare(rdev->pclk);
> +			clk_disable_unprepare(rdev->clk_npu);
> +		}
> +	}
> +
> +	return 0;
> +}
> +
> +EXPORT_GPL_DEV_PM_OPS(rocket_pm_ops) = {
> +	RUNTIME_PM_OPS(rocket_device_runtime_suspend, rocket_device_runtime_resume, NULL)
> +	SYSTEM_SLEEP_PM_OPS(pm_runtime_force_suspend, pm_runtime_force_resume)
> +};
> +
> +static struct platform_driver rocket_driver = {
> +	.probe = rocket_probe,
> +	.remove = rocket_remove,
> +	.driver	 = {
> +		.name = "rocket",
> +		.pm = pm_ptr(&rocket_pm_ops),
> +		.of_match_table = dt_match,
> +	},
> +};
> +module_platform_driver(rocket_driver);
> +
> +MODULE_LICENSE("GPL");
> +MODULE_DESCRIPTION("DRM driver for the Rockchip NPU IP");
> +MODULE_AUTHOR("Tomeu Vizoso");
>
> [...]

I'll send a second reply with PM comments on the job stuff in the patch that
adds it, since I found something peculiar there while experimenting on RK3576.

Kind regards,
Nicolas Frattaroli




^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [PATCH v2 6/7] accel/rocket: Add job submission IOCTL
  2025-02-25  7:55 ` [PATCH v2 6/7] accel/rocket: Add job submission IOCTL Tomeu Vizoso
  2025-02-25  8:44   ` Thomas Zimmermann
  2025-03-21 16:09   ` Jeff Hugo
@ 2025-04-29 10:05   ` Nicolas Frattaroli
  2 siblings, 0 replies; 25+ messages in thread
From: Nicolas Frattaroli @ 2025-04-29 10:05 UTC (permalink / raw)
  To: Rob Herring, Krzysztof Kozlowski, Conor Dooley, Heiko Stuebner,
	Oded Gabbay, Jonathan Corbet, Maarten Lankhorst, Maxime Ripard,
	Thomas Zimmermann, David Airlie, Simona Vetter, Sumit Semwal,
	Christian König, Sebastian Reichel, Jeffrey Hugo,
	linux-rockchip, Tomeu Vizoso
  Cc: devicetree, linux-arm-kernel, linux-rockchip, linux-kernel,
	dri-devel, linux-doc, linux-media, linaro-mm-sig, Tomeu Vizoso

On Tuesday, 25 February 2025 08:55:52 Central European Summer Time Tomeu Vizoso wrote:
> Using the DRM GPU scheduler infrastructure, with a scheduler for each
> core.
> 
> Userspace can decide for a series of tasks to be executed sequentially
> in the same core, so SRAM locality can be taken advantage of.
> 
> The job submission code was initially based on Panfrost.
> 
> v2:
> - Remove hardcoded number of cores
> - Misc. style fixes (Jeffrey Hugo)
> - Repack IOCTL struct (Jeffrey Hugo)
> 
> Signed-off-by: Tomeu Vizoso <tomeu@tomeuvizoso.net>
> ---
>  drivers/accel/rocket/Makefile        |   3 +-
>  drivers/accel/rocket/rocket_core.c   |   6 +
>  drivers/accel/rocket/rocket_core.h   |  14 +
>  drivers/accel/rocket/rocket_device.c |   2 +
>  drivers/accel/rocket/rocket_device.h |   2 +
>  drivers/accel/rocket/rocket_drv.c    |  15 +
>  drivers/accel/rocket/rocket_drv.h    |   4 +
>  drivers/accel/rocket/rocket_job.c    | 710 +++++++++++++++++++++++++++++++++++
>  drivers/accel/rocket/rocket_job.h    |  50 +++
>  include/uapi/drm/rocket_accel.h      |  55 +++
>  10 files changed, 860 insertions(+), 1 deletion(-)

Hi Tomeu,

some more power management things I've noticed here.

>
> [...]
>
> diff --git a/drivers/accel/rocket/rocket_job.c b/drivers/accel/rocket/rocket_job.c
> new file mode 100644
> index 0000000000000000000000000000000000000000..25b31f28e932aaee86173b9a0962932c9c640c03
> --- /dev/null
> +++ b/drivers/accel/rocket/rocket_job.c
>
> [...]
>
> +static void
> +rocket_reset(struct rocket_core *core, struct drm_sched_job *bad)
> +{
> +	bool cookie;
> +
> +	if (!atomic_read(&core->reset.pending))
> +		return;
> +
> +	/*
> +	 * Stop the scheduler.
> +	 *
> +	 * FIXME: We temporarily get out of the dma_fence_signalling section
> +	 * because the cleanup path generate lockdep splats when taking locks
> +	 * to release job resources. We should rework the code to follow this
> +	 * pattern:
> +	 *
> +	 *	try_lock
> +	 *	if (locked)
> +	 *		release
> +	 *	else
> +	 *		schedule_work_to_release_later
> +	 */
> +	drm_sched_stop(&core->sched, bad);
> +
> +	cookie = dma_fence_begin_signalling();
> +
> +	if (bad)
> +		drm_sched_increase_karma(bad);
> +
> +	/*
> +	 * Mask job interrupts and synchronize to make sure we won't be
> +	 * interrupted during our reset.
> +	 */
> +	rocket_write(core, REG_PC_INTERRUPT_MASK, 0x0);
> +	synchronize_irq(core->irq);
> +
> +	/* Handle the remaining interrupts before we reset. */
> +	rocket_job_handle_irq(core);
> +
> +	/*
> +	 * Remaining interrupts have been handled, but we might still have
> +	 * stuck jobs. Let's make sure the PM counters stay balanced by
> +	 * manually calling pm_runtime_put_noidle() and
> +	 * rocket_devfreq_record_idle() for each stuck job.
> +	 * Let's also make sure the cycle counting register's refcnt is
> +	 * kept balanced to prevent it from running forever
> +	 */
> +	spin_lock(&core->job_lock);
> +	if (core->in_flight_job)
> +		pm_runtime_put_noidle(core->dev);

This particular line of code caused me issues when I was experimenting with the
driver on RK3576. My current situation is that every job that gets submitted
times out because of some IRQs never arriving, which is besides the point, but
it did expose a power management bug here that I suspect may affect RK3588 as
well. After like 3 timeouts, we reset again and hang in the interrupt mask write
just a few lines above. This is because we somehow managed to get into a
situation where this function is called with pclk disabled, and this manual
balancing act may be part of it. Not doing the manual balancing at all here
doesn't fix it, I had to explicitly wrap the reset function in
pm_runtime_get_sync and pm_runtime_put_noidle to avoid having this function
execute with disabled clocks. That seems like the "wrong solution" though
because it means something already powered off our hardware too eagerly before
we got the reset done.

My gut instinct tells me that there's a bug here with multiple timed out jobs
being in flight, but I'm not 100% on it. Maybe you'll be able to reproduce it
on an RK3588 by artificially forcing all your jobs to time out with some very
low timeout value or by masking the interrupts.

> +
> +	core->in_flight_job = NULL;
> +	spin_unlock(&core->job_lock);
> +
> +	/* Proceed with reset now. */
> +	pm_runtime_force_suspend(core->dev);
> +	pm_runtime_force_resume(core->dev);

Do we need to guarantee some sort of exclusivity here so that nothing tries to
concurrently use the device while we're forcing it off and back on again? I'm
unfamiliar with the drm fence stuff, so I may be overthinking this.

> +
> +	/* GPU has been reset, we can clear the reset pending bit. */
> +	atomic_set(&core->reset.pending, 0);

Nitpick: should probably be "NPU" ;)

> +
> +	/*
> +	 * Now resubmit jobs that were previously queued but didn't have a
> +	 * chance to finish.
> +	 * FIXME: We temporarily get out of the DMA fence signalling section
> +	 * while resubmitting jobs because the job submission logic will
> +	 * allocate memory with the GFP_KERNEL flag which can trigger memory
> +	 * reclaim and exposes a lock ordering issue.
> +	 */
> +	dma_fence_end_signalling(cookie);
> +	drm_sched_resubmit_jobs(&core->sched);
> +	cookie = dma_fence_begin_signalling();
> +
> +	/* Restart the scheduler */
> +	drm_sched_start(&core->sched, 0);
> +
> +	dma_fence_end_signalling(cookie);
> +}
>
> [...]
>

Kind regards,
Nicolas Frattaroli




^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [PATCH v2 1/7] dt-bindings: npu: rockchip,rknn: Add bindings
  2025-04-25 18:50   ` Nicolas Frattaroli
@ 2025-05-14 15:18     ` Tomeu Vizoso
  2025-05-14 17:50       ` Nicolas Frattaroli
  0 siblings, 1 reply; 25+ messages in thread
From: Tomeu Vizoso @ 2025-05-14 15:18 UTC (permalink / raw)
  To: Nicolas Frattaroli
  Cc: Rob Herring, Krzysztof Kozlowski, Conor Dooley, Heiko Stuebner,
	Oded Gabbay, Jonathan Corbet, Maarten Lankhorst, Maxime Ripard,
	Thomas Zimmermann, David Airlie, Simona Vetter, Sumit Semwal,
	Christian König, Sebastian Reichel, Jeffrey Hugo,
	linux-rockchip, devicetree, linux-arm-kernel, linux-kernel,
	dri-devel, linux-doc, linux-media, linaro-mm-sig

Hi Nicolas,

Thanks for looking at this. Some thoughts below:

On Fri, Apr 25, 2025 at 8:50 PM Nicolas Frattaroli
<nicolas.frattaroli@collabora.com> wrote:
>
> On Tuesday, 25 February 2025 08:55:47 Central European Summer Time Tomeu Vizoso wrote:
> > Add the bindings for the Neural Processing Unit IP from Rockchip.
> >
> > v2:
> > - Adapt to new node structure (one node per core, each with its own
> >   IOMMU)
> > - Several misc. fixes from Sebastian Reichel
> >
> > Signed-off-by: Tomeu Vizoso <tomeu@tomeuvizoso.net>
> > Signed-off-by: Sebastian Reichel <sebastian.reichel@collabora.com>
> > ---
> >  .../bindings/npu/rockchip,rknn-core.yaml           | 152 +++++++++++++++++++++
> >  1 file changed, 152 insertions(+)
> >
> > diff --git a/Documentation/devicetree/bindings/npu/rockchip,rknn-core.yaml b/Documentation/devicetree/bindings/npu/rockchip,rknn-core.yaml
> > new file mode 100644
> > index 0000000000000000000000000000000000000000..e8d0afe4a7d1c4f166cf13a9f4aa7c1901362a3f
> > --- /dev/null
> > +++ b/Documentation/devicetree/bindings/npu/rockchip,rknn-core.yaml
> > @@ -0,0 +1,152 @@
> > +# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
> > +%YAML 1.2
> > +---
> > +$id: http://devicetree.org/schemas/npu/rockchip,rknn-core.yaml#
> > +$schema: http://devicetree.org/meta-schemas/core.yaml#
> > +
> > +title: Neural Processing Unit IP from Rockchip
> > +
> > +maintainers:
> > +  - Tomeu Vizoso <tomeu@tomeuvizoso.net>
> > +
> > +description:
> > +  Rockchip IP for accelerating inference of neural networks, based on NVIDIA's
> > +  open source NVDLA IP.
> > +
> > +properties:
> > +  $nodename:
> > +    pattern: '^npu-core@[a-f0-9]+$'
> > +
> > +  compatible:
> > +    oneOf:
> > +      - items:
> > +          - enum:
> > +              - rockchip,rk3588-rknn-core-top
> > +          - const: rockchip,rknn-core-top
> > +      - items:
> > +          - enum:
> > +              - rockchip,rk3588-rknn-core
> > +          - const: rockchip,rknn-core
> > +
> > +  reg:
> > +    maxItems: 1
>
> Hi Tomeu,
>
> as you probably know, RK3576 has quite a similar NPU. This is why I'm currently
> poking at this patch series. One of the differences I ran into was that the
> IOMMU of each NPU core now sits within the reg address space range of the core
> as described by the single reg item binding and assumed by the driver.

But this is not a difference, right?

> This seemed weird to me at first, since I would've guessed the cores would be
> exactly the same, but I noticed that they kind of still are; the RK3588's NPU
> also has a "hole" between 0x2000 and 0x2fff on each core, which is where RK3576
> put its IOMMU.

So this is the same in both RK3576 and RK3588, right?

> This is some information I gleaned from the RK3588 TRM, specifically section
> 36.4.1 "Internal Address Mapping", which shows where each "part" of the NPU core
> has its address space.
>
> Right now we just represent this as a single reg item per core. I've played
> with the idea of splitting this up into the distinct ranges the TRM lists and
> giving each a reg-names entry, but this would require a major rework of the
> driver from what I can tell, including to the auto-generated register header.
>
> For now, my hack on RK3576 is to just ioremap the range defined by resource
> start to resource end inside rocket manually if I get -EBUSY trying to ioremap
> the resource proper. This is quite an ugly hack though, it means the IOMMU node
> still has its address overlapping with another node in the DT, and it also means
> we have an unavoidable error message printed into the kernel log. This is also
> what the vendor driver seems to do.
>
> What do you reckon is a reg setup in the binding that is both reasonable to
> implement in the driver while accurately describing the hardware?

Guess we could go with some smaller granularity and have 3 register
areas per core, instead of 10:

- CORE: PC+CNA (0x0000 ~ 0x1fff)
- AUX: CORE+DPU+PPU+DDMA+SDMA (0x3000 ~ 0x9fff)
- GLOBAL (0xf000 ~ 0xf004)

So the IOMMU on all the known SoCs can have its own regmap. I have
chosen to call the first one CORE because these are the components
that are absolutely needed in any NPU that is oriented towards
convolutional networks (convolutions, basically). I have named the
second AUX because it contains hardware units that are optional and
are used to implement operations that may be common but that aren't as
computational expensive as convolutions and thus might be skipped in
lower-end versions of the IP.

What do you think?

Regards,

Tomeu

> The RK3568, which uses a similar NPU design has the IOMMU at an offset of 0xb000
> from the core's start of PC, so probably after any core specifics but before the
> global registers if I hazard a guess.
>
> For those without access to the TRM: splitting this up into multiple reg items
> per core precisely the way the TRM does it would result in no less than 10 reg
> items on RK3588, if I count correctly.
>
> Kind regards,
> Nicolas Frattaroli
>
>


^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [PATCH v2 1/7] dt-bindings: npu: rockchip,rknn: Add bindings
  2025-02-25 16:02   ` Rob Herring
@ 2025-05-14 16:26     ` Tomeu Vizoso
  0 siblings, 0 replies; 25+ messages in thread
From: Tomeu Vizoso @ 2025-05-14 16:26 UTC (permalink / raw)
  To: Rob Herring
  Cc: Krzysztof Kozlowski, Conor Dooley, Heiko Stuebner, Oded Gabbay,
	Jonathan Corbet, Maarten Lankhorst, Maxime Ripard,
	Thomas Zimmermann, David Airlie, Simona Vetter, Sumit Semwal,
	Christian König, Sebastian Reichel, Jeffrey Hugo, devicetree,
	linux-arm-kernel, linux-rockchip, linux-kernel, dri-devel,
	linux-doc, linux-media, linaro-mm-sig

Hi Rob,

On Tue, Feb 25, 2025 at 5:02 PM Rob Herring <robh@kernel.org> wrote:
>
> On Tue, Feb 25, 2025 at 08:55:47AM +0100, Tomeu Vizoso wrote:
> > Add the bindings for the Neural Processing Unit IP from Rockchip.
> >
> > v2:
> > - Adapt to new node structure (one node per core, each with its own
> >   IOMMU)
> > - Several misc. fixes from Sebastian Reichel
> >
> > Signed-off-by: Tomeu Vizoso <tomeu@tomeuvizoso.net>
> > Signed-off-by: Sebastian Reichel <sebastian.reichel@collabora.com>
> > ---
> >  .../bindings/npu/rockchip,rknn-core.yaml           | 152 +++++++++++++++++++++
> >  1 file changed, 152 insertions(+)
> >
> > diff --git a/Documentation/devicetree/bindings/npu/rockchip,rknn-core.yaml b/Documentation/devicetree/bindings/npu/rockchip,rknn-core.yaml
> > new file mode 100644
> > index 0000000000000000000000000000000000000000..e8d0afe4a7d1c4f166cf13a9f4aa7c1901362a3f
> > --- /dev/null
> > +++ b/Documentation/devicetree/bindings/npu/rockchip,rknn-core.yaml
> > @@ -0,0 +1,152 @@
> > +# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
> > +%YAML 1.2
> > +---
> > +$id: http://devicetree.org/schemas/npu/rockchip,rknn-core.yaml#
> > +$schema: http://devicetree.org/meta-schemas/core.yaml#
> > +
> > +title: Neural Processing Unit IP from Rockchip
> > +
> > +maintainers:
> > +  - Tomeu Vizoso <tomeu@tomeuvizoso.net>
> > +
> > +description:
> > +  Rockchip IP for accelerating inference of neural networks, based on NVIDIA's
> > +  open source NVDLA IP.
> > +
> > +properties:
> > +  $nodename:
> > +    pattern: '^npu-core@[a-f0-9]+$'
> > +
> > +  compatible:
> > +    oneOf:
> > +      - items:
> > +          - enum:
> > +              - rockchip,rk3588-rknn-core-top
> > +          - const: rockchip,rknn-core-top
>
> Drop the fallbacks unless you have some evidence that the IP is the
> same across a lot of SoCs. If you don't, then
> rockchip,rk3588-rknn-core-top can be the fallback whenever there are
> more compatible SoCs.
>
> Or if there's version/feature registers that otherwise make it
> discoverable, then a common compatible is fine.
>
> > +      - items:
> > +          - enum:
> > +              - rockchip,rk3588-rknn-core
> > +          - const: rockchip,rknn-core
>
> I don't understand the difference between core and core-top. That needs
> to be explained in the top-level description.
>
> > +
> > +  reg:
> > +    maxItems: 1
> > +
> > +  clocks:
> > +    minItems: 2
> > +    maxItems: 4
> > +
> > +  clock-names:
> > +    items:
> > +      - const: aclk
> > +      - const: hclk
> > +      - const: npu
> > +      - const: pclk
> > +    minItems: 2
> > +
> > +  interrupts:
> > +    maxItems: 1
> > +
> > +  iommus:
> > +    maxItems: 1
> > +
> > +  npu-supply: true
> > +
> > +  power-domains:
> > +    maxItems: 1
> > +
> > +  resets:
> > +    maxItems: 2
> > +
> > +  reset-names:
> > +    items:
> > +      - const: srst_a
> > +      - const: srst_h
> > +
> > +  sram-supply: true
>
> Group supply properties together
>
> > +
> > +required:
> > +  - compatible
> > +  - reg
> > +  - clocks
> > +  - clock-names
> > +  - interrupts
> > +  - iommus
> > +  - npu-supply
> > +  - power-domains
> > +  - resets
> > +  - reset-names
> > +  - sram-supply
> > +
> > +allOf:
> > +  - if:
> > +      properties:
> > +        compatible:
> > +          contains:
> > +            enum:
> > +              - rockchip,rknn-core-top
> > +    then:
> > +      properties:
> > +        clocks:
> > +          minItems: 4
> > +
> > +        clock-names:
> > +          minItems: 4
> > +  - if:
> > +      properties:
> > +        compatible:
> > +          contains:
> > +            enum:
> > +              - rockchip,rknn-core
> > +    then:
> > +      properties:
> > +        clocks:
> > +          maxItems: 2
> > +        clock-names:
> > +          maxItems: 2
> > +
> > +additionalProperties: false
> > +
> > +examples:
> > +  - |
> > +    #include <dt-bindings/clock/rockchip,rk3588-cru.h>
> > +    #include <dt-bindings/interrupt-controller/irq.h>
> > +    #include <dt-bindings/interrupt-controller/arm-gic.h>
> > +    #include <dt-bindings/power/rk3588-power.h>
> > +    #include <dt-bindings/reset/rockchip,rk3588-cru.h>
> > +
> > +    bus {
> > +      #address-cells = <2>;
> > +      #size-cells = <2>;
> > +
> > +      rknn_core_top: npu-core@fdab0000 {
>
> npu@...

Can you extend on why you would prefer to have npu@? As each node
corresponds to a core inside the NPU, I went with npu-core@.

Thanks,

Tomeu

> > +        compatible = "rockchip,rk3588-rknn-core-top", "rockchip,rknn-core-top";
> > +        reg = <0x0 0xfdab0000 0x0 0x9000>;
> > +        assigned-clocks = <&scmi_clk SCMI_CLK_NPU>;
> > +        assigned-clock-rates = <200000000>;
> > +        clocks = <&cru ACLK_NPU0>, <&cru HCLK_NPU0>,
> > +                 <&scmi_clk SCMI_CLK_NPU>, <&cru PCLK_NPU_ROOT>;
> > +        clock-names = "aclk", "hclk", "npu", "pclk";
> > +        interrupts = <GIC_SPI 110 IRQ_TYPE_LEVEL_HIGH 0>;
> > +        iommus = <&rknn_mmu_top>;
> > +        npu-supply = <&vdd_npu_s0>;
> > +        power-domains = <&power RK3588_PD_NPUTOP>;
> > +        resets = <&cru SRST_A_RKNN0>, <&cru SRST_H_RKNN0>;
> > +        reset-names = "srst_a", "srst_h";
> > +        sram-supply = <&vdd_npu_mem_s0>;
> > +      };
> > +
> > +      rknn_core_1: npu-core@fdac0000 {
> > +        compatible = "rockchip,rk3588-rknn-core", "rockchip,rknn-core";
> > +        reg = <0x0 0xfdac0000 0x0 0x9000>;
> > +        clocks = <&cru ACLK_NPU1>, <&cru HCLK_NPU1>;
> > +        clock-names = "aclk", "hclk";
> > +        interrupts = <GIC_SPI 111 IRQ_TYPE_LEVEL_HIGH 0>;
> > +        iommus = <&rknn_mmu_1>;
> > +        npu-supply = <&vdd_npu_s0>;
> > +        power-domains = <&power RK3588_PD_NPU1>;
> > +        resets = <&cru SRST_A_RKNN1>, <&cru SRST_H_RKNN1>;
> > +        reset-names = "srst_a", "srst_h";
> > +        sram-supply = <&vdd_npu_mem_s0>;
> > +      };
> > +    };
> > +...
> >
> > --
> > 2.48.1
> >


^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [PATCH v2 1/7] dt-bindings: npu: rockchip,rknn: Add bindings
  2025-05-14 15:18     ` Tomeu Vizoso
@ 2025-05-14 17:50       ` Nicolas Frattaroli
  2025-05-15  8:30         ` Tomeu Vizoso
  0 siblings, 1 reply; 25+ messages in thread
From: Nicolas Frattaroli @ 2025-05-14 17:50 UTC (permalink / raw)
  To: Tomeu Vizoso
  Cc: Rob Herring, Krzysztof Kozlowski, Conor Dooley, Heiko Stuebner,
	Oded Gabbay, Jonathan Corbet, Maarten Lankhorst, Maxime Ripard,
	Thomas Zimmermann, David Airlie, Simona Vetter, Sumit Semwal,
	Christian König, Sebastian Reichel, Jeffrey Hugo,
	linux-rockchip, devicetree, linux-arm-kernel, linux-kernel,
	dri-devel, linux-doc, linux-media, linaro-mm-sig

On Wednesday, 14 May 2025 17:18:22 Central European Summer Time Tomeu Vizoso wrote:
> Hi Nicolas,
> 
> Thanks for looking at this. Some thoughts below:
> 
> On Fri, Apr 25, 2025 at 8:50 PM Nicolas Frattaroli
> <nicolas.frattaroli@collabora.com> wrote:
> >
> > On Tuesday, 25 February 2025 08:55:47 Central European Summer Time Tomeu Vizoso wrote:
> > > Add the bindings for the Neural Processing Unit IP from Rockchip.
> > >
> > > v2:
> > > - Adapt to new node structure (one node per core, each with its own
> > >   IOMMU)
> > > - Several misc. fixes from Sebastian Reichel
> > >
> > > Signed-off-by: Tomeu Vizoso <tomeu@tomeuvizoso.net>
> > > Signed-off-by: Sebastian Reichel <sebastian.reichel@collabora.com>
> > > ---
> > >  .../bindings/npu/rockchip,rknn-core.yaml           | 152 +++++++++++++++++++++
> > >  1 file changed, 152 insertions(+)
> > >
> > > diff --git a/Documentation/devicetree/bindings/npu/rockchip,rknn-core.yaml b/Documentation/devicetree/bindings/npu/rockchip,rknn-core.yaml
> > > new file mode 100644
> > > index 0000000000000000000000000000000000000000..e8d0afe4a7d1c4f166cf13a9f4aa7c1901362a3f
> > > --- /dev/null
> > > +++ b/Documentation/devicetree/bindings/npu/rockchip,rknn-core.yaml
> > > @@ -0,0 +1,152 @@
> > > +# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
> > > +%YAML 1.2
> > > +---
> > > +$id: http://devicetree.org/schemas/npu/rockchip,rknn-core.yaml#
> > > +$schema: http://devicetree.org/meta-schemas/core.yaml#
> > > +
> > > +title: Neural Processing Unit IP from Rockchip
> > > +
> > > +maintainers:
> > > +  - Tomeu Vizoso <tomeu@tomeuvizoso.net>
> > > +
> > > +description:
> > > +  Rockchip IP for accelerating inference of neural networks, based on NVIDIA's
> > > +  open source NVDLA IP.
> > > +
> > > +properties:
> > > +  $nodename:
> > > +    pattern: '^npu-core@[a-f0-9]+$'
> > > +
> > > +  compatible:
> > > +    oneOf:
> > > +      - items:
> > > +          - enum:
> > > +              - rockchip,rk3588-rknn-core-top
> > > +          - const: rockchip,rknn-core-top
> > > +      - items:
> > > +          - enum:
> > > +              - rockchip,rk3588-rknn-core
> > > +          - const: rockchip,rknn-core
> > > +
> > > +  reg:
> > > +    maxItems: 1
> >
> > Hi Tomeu,
> >
> > as you probably know, RK3576 has quite a similar NPU. This is why I'm currently
> > poking at this patch series. One of the differences I ran into was that the
> > IOMMU of each NPU core now sits within the reg address space range of the core
> > as described by the single reg item binding and assumed by the driver.
> 
> But this is not a difference, right?

It is. E.g. on RK3588, you use reg = <0x0 0xfdab0000 0x0 0x9000>; for
rknn_core_top, and rknn_mmu_top then sits at 0xfdab9000, which is just
outside the reg range of the rknn_core_top node. That means acquiring the
iomem as a resource succeeds for you, whereas for me it fails.

> 
> > This seemed weird to me at first, since I would've guessed the cores would be
> > exactly the same, but I noticed that they kind of still are; the RK3588's NPU
> > also has a "hole" between 0x2000 and 0x2fff on each core, which is where RK3576
> > put its IOMMU.
> 
> So this is the same in both RK3576 and RK3588, right?

Yes, both RK3576 and RK3588 have a hole in the same area. RK3562 also has
the same hole. RK3568 doesn't have the offsets for the individual parts of
the NPU in the TRM, making all the relative register offsets the TRM then
goes on to document completely pointless as it omits what those offsets
are based on, so we don't know if it has a hole there. I vaguely recall
that it has the IOMMU either before or past the global range (not sure if
I wrote these findings down anywhere?), so if it has a hole at 0x2000
then it's unused like on the RK3588. I don't have access to the RV1106
Part 2 TRM where the NPU is described, so I don't know whether that has a
hole there unless we dig into the downstream code.

> 
> > This is some information I gleaned from the RK3588 TRM, specifically section
> > 36.4.1 "Internal Address Mapping", which shows where each "part" of the NPU core
> > has its address space.
> >
> > Right now we just represent this as a single reg item per core. I've played
> > with the idea of splitting this up into the distinct ranges the TRM lists and
> > giving each a reg-names entry, but this would require a major rework of the
> > driver from what I can tell, including to the auto-generated register header.
> >
> > For now, my hack on RK3576 is to just ioremap the range defined by resource
> > start to resource end inside rocket manually if I get -EBUSY trying to ioremap
> > the resource proper. This is quite an ugly hack though, it means the IOMMU node
> > still has its address overlapping with another node in the DT, and it also means
> > we have an unavoidable error message printed into the kernel log. This is also
> > what the vendor driver seems to do.
> >
> > What do you reckon is a reg setup in the binding that is both reasonable to
> > implement in the driver while accurately describing the hardware?
> 
> Guess we could go with some smaller granularity and have 3 register
> areas per core, instead of 10:
> 
> - CORE: PC+CNA (0x0000 ~ 0x1fff)
> - AUX: CORE+DPU+PPU+DDMA+SDMA (0x3000 ~ 0x9fff)
> - GLOBAL (0xf000 ~ 0xf004)
> 
> So the IOMMU on all the known SoCs can have its own regmap. I have
> chosen to call the first one CORE because these are the components
> that are absolutely needed in any NPU that is oriented towards
> convolutional networks (convolutions, basically). I have named the
> second AUX because it contains hardware units that are optional and
> are used to implement operations that may be common but that aren't as
> computational expensive as convolutions and thus might be skipped in
> lower-end versions of the IP.
> 
> What do you think?

I'm personally fine with this approach. I've floated a two-area approach
to Sebastian Reichel before who, as far as I can recall, expressed his
distaste for  it as it seemed like an arbitrary division. I do concur in
that, it seems very arbitrary, so it's hard to say whether the bindings
maintainers would let us get away with it if they get wind of it.
Unfortunately they are Cc'd on this E-Mail, so the cat is out of the bag
in this regard.

What speaks for the 3 register area split is that anything that brings
more holes and doubly mapped things into the AUX area is probably going
to be so radically different it'll ideally have its own binding anyway,
or needs more than just a compatible added to the binding.

I think as far as arbitrary splits goes, the one you propose is probably
the one most closely aligned with reality. Certain register areas do
seem like something they'd never move away from its corresponding
companion, whereas adding parts to the AUX area or removing from it is
probably going to be quite common. So it can essentially be treated as
the area where optional things will most likely land as you pointed out,
which then don't need more bindings fiddling to add those optional things
as explicitly named areas in the bindings as long as we treat it as just
one opaque area s far as the binding is concerned.

Also, unless there's some virtual combined sparse iomem API in the kernel
that I'm not aware of, that's probably the easiest path forward for the
driver as well.

> 
> Regards,
> 
> Tomeu

Kind regards,
Nicolas Frattaroli

> 
> > The RK3568, which uses a similar NPU design has the IOMMU at an offset of 0xb000
> > from the core's start of PC, so probably after any core specifics but before the
> > global registers if I hazard a guess.
> >
> > For those without access to the TRM: splitting this up into multiple reg items
> > per core precisely the way the TRM does it would result in no less than 10 reg
> > items on RK3588, if I count correctly.
> >
> > Kind regards,
> > Nicolas Frattaroli
> >
> >
> 






^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [PATCH v2 1/7] dt-bindings: npu: rockchip,rknn: Add bindings
  2025-05-14 17:50       ` Nicolas Frattaroli
@ 2025-05-15  8:30         ` Tomeu Vizoso
  2025-05-16 10:25           ` Nicolas Frattaroli
  0 siblings, 1 reply; 25+ messages in thread
From: Tomeu Vizoso @ 2025-05-15  8:30 UTC (permalink / raw)
  To: Nicolas Frattaroli
  Cc: Rob Herring, Krzysztof Kozlowski, Conor Dooley, Heiko Stuebner,
	Oded Gabbay, Jonathan Corbet, Maarten Lankhorst, Maxime Ripard,
	Thomas Zimmermann, David Airlie, Simona Vetter, Sumit Semwal,
	Christian König, Sebastian Reichel, Jeffrey Hugo,
	linux-rockchip, devicetree, linux-arm-kernel, linux-kernel,
	dri-devel, linux-doc, linux-media, linaro-mm-sig

On Wed, May 14, 2025 at 7:50 PM Nicolas Frattaroli
<nicolas.frattaroli@collabora.com> wrote:
>
> On Wednesday, 14 May 2025 17:18:22 Central European Summer Time Tomeu Vizoso wrote:
> > Hi Nicolas,
> >
> > Thanks for looking at this. Some thoughts below:
> >
> > On Fri, Apr 25, 2025 at 8:50 PM Nicolas Frattaroli
> > <nicolas.frattaroli@collabora.com> wrote:
> > >
> > > On Tuesday, 25 February 2025 08:55:47 Central European Summer Time Tomeu Vizoso wrote:
> > > > Add the bindings for the Neural Processing Unit IP from Rockchip.
> > > >
> > > > v2:
> > > > - Adapt to new node structure (one node per core, each with its own
> > > >   IOMMU)
> > > > - Several misc. fixes from Sebastian Reichel
> > > >
> > > > Signed-off-by: Tomeu Vizoso <tomeu@tomeuvizoso.net>
> > > > Signed-off-by: Sebastian Reichel <sebastian.reichel@collabora.com>
> > > > ---
> > > >  .../bindings/npu/rockchip,rknn-core.yaml           | 152 +++++++++++++++++++++
> > > >  1 file changed, 152 insertions(+)
> > > >
> > > > diff --git a/Documentation/devicetree/bindings/npu/rockchip,rknn-core.yaml b/Documentation/devicetree/bindings/npu/rockchip,rknn-core.yaml
> > > > new file mode 100644
> > > > index 0000000000000000000000000000000000000000..e8d0afe4a7d1c4f166cf13a9f4aa7c1901362a3f
> > > > --- /dev/null
> > > > +++ b/Documentation/devicetree/bindings/npu/rockchip,rknn-core.yaml
> > > > @@ -0,0 +1,152 @@
> > > > +# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
> > > > +%YAML 1.2
> > > > +---
> > > > +$id: http://devicetree.org/schemas/npu/rockchip,rknn-core.yaml#
> > > > +$schema: http://devicetree.org/meta-schemas/core.yaml#
> > > > +
> > > > +title: Neural Processing Unit IP from Rockchip
> > > > +
> > > > +maintainers:
> > > > +  - Tomeu Vizoso <tomeu@tomeuvizoso.net>
> > > > +
> > > > +description:
> > > > +  Rockchip IP for accelerating inference of neural networks, based on NVIDIA's
> > > > +  open source NVDLA IP.
> > > > +
> > > > +properties:
> > > > +  $nodename:
> > > > +    pattern: '^npu-core@[a-f0-9]+$'
> > > > +
> > > > +  compatible:
> > > > +    oneOf:
> > > > +      - items:
> > > > +          - enum:
> > > > +              - rockchip,rk3588-rknn-core-top
> > > > +          - const: rockchip,rknn-core-top
> > > > +      - items:
> > > > +          - enum:
> > > > +              - rockchip,rk3588-rknn-core
> > > > +          - const: rockchip,rknn-core
> > > > +
> > > > +  reg:
> > > > +    maxItems: 1
> > >
> > > Hi Tomeu,
> > >
> > > as you probably know, RK3576 has quite a similar NPU. This is why I'm currently
> > > poking at this patch series. One of the differences I ran into was that the
> > > IOMMU of each NPU core now sits within the reg address space range of the core
> > > as described by the single reg item binding and assumed by the driver.
> >
> > But this is not a difference, right?
>
> It is. E.g. on RK3588, you use reg = <0x0 0xfdab0000 0x0 0x9000>; for
> rknn_core_top, and rknn_mmu_top then sits at 0xfdab9000, which is just
> outside the reg range of the rknn_core_top node. That means acquiring the
> iomem as a resource succeeds for you, whereas for me it fails.

Ah, got it now, thanks.

> >
> > > This seemed weird to me at first, since I would've guessed the cores would be
> > > exactly the same, but I noticed that they kind of still are; the RK3588's NPU
> > > also has a "hole" between 0x2000 and 0x2fff on each core, which is where RK3576
> > > put its IOMMU.
> >
> > So this is the same in both RK3576 and RK3588, right?
>
> Yes, both RK3576 and RK3588 have a hole in the same area. RK3562 also has
> the same hole. RK3568 doesn't have the offsets for the individual parts of
> the NPU in the TRM, making all the relative register offsets the TRM then
> goes on to document completely pointless as it omits what those offsets
> are based on, so we don't know if it has a hole there. I vaguely recall
> that it has the IOMMU either before or past the global range (not sure if
> I wrote these findings down anywhere?), so if it has a hole at 0x2000
> then it's unused like on the RK3588. I don't have access to the RV1106
> Part 2 TRM where the NPU is described, so I don't know whether that has a
> hole there unless we dig into the downstream code.
>
> >
> > > This is some information I gleaned from the RK3588 TRM, specifically section
> > > 36.4.1 "Internal Address Mapping", which shows where each "part" of the NPU core
> > > has its address space.
> > >
> > > Right now we just represent this as a single reg item per core. I've played
> > > with the idea of splitting this up into the distinct ranges the TRM lists and
> > > giving each a reg-names entry, but this would require a major rework of the
> > > driver from what I can tell, including to the auto-generated register header.
> > >
> > > For now, my hack on RK3576 is to just ioremap the range defined by resource
> > > start to resource end inside rocket manually if I get -EBUSY trying to ioremap
> > > the resource proper. This is quite an ugly hack though, it means the IOMMU node
> > > still has its address overlapping with another node in the DT, and it also means
> > > we have an unavoidable error message printed into the kernel log. This is also
> > > what the vendor driver seems to do.
> > >
> > > What do you reckon is a reg setup in the binding that is both reasonable to
> > > implement in the driver while accurately describing the hardware?
> >
> > Guess we could go with some smaller granularity and have 3 register
> > areas per core, instead of 10:
> >
> > - CORE: PC+CNA (0x0000 ~ 0x1fff)
> > - AUX: CORE+DPU+PPU+DDMA+SDMA (0x3000 ~ 0x9fff)
> > - GLOBAL (0xf000 ~ 0xf004)
> >
> > So the IOMMU on all the known SoCs can have its own regmap. I have
> > chosen to call the first one CORE because these are the components
> > that are absolutely needed in any NPU that is oriented towards
> > convolutional networks (convolutions, basically). I have named the
> > second AUX because it contains hardware units that are optional and
> > are used to implement operations that may be common but that aren't as
> > computational expensive as convolutions and thus might be skipped in
> > lower-end versions of the IP.
> >
> > What do you think?
>
> I'm personally fine with this approach. I've floated a two-area approach
> to Sebastian Reichel before who, as far as I can recall, expressed his
> distaste for  it as it seemed like an arbitrary division. I do concur in
> that, it seems very arbitrary, so it's hard to say whether the bindings
> maintainers would let us get away with it if they get wind of it.
> Unfortunately they are Cc'd on this E-Mail, so the cat is out of the bag
> in this regard.

Actually, after thinking a bit more about it I'm leaning towards only
having the PC, CNA and CORE areas in the DT, as those are the only
ones that should be accessible from the CPU.

The registers for the other units should be set by the PC, as it reads
the command stream.

So three register areas that can be set to wherever Rockchip has
placed them, and we just ignore the others in the kernel, as we don't
have any business messing with them ourselves.

What do you think?

Thanks,

Tomeu

> What speaks for the 3 register area split is that anything that brings
> more holes and doubly mapped things into the AUX area is probably going
> to be so radically different it'll ideally have its own binding anyway,
> or needs more than just a compatible added to the binding.
>
> I think as far as arbitrary splits goes, the one you propose is probably
> the one most closely aligned with reality. Certain register areas do
> seem like something they'd never move away from its corresponding
> companion, whereas adding parts to the AUX area or removing from it is
> probably going to be quite common. So it can essentially be treated as
> the area where optional things will most likely land as you pointed out,
> which then don't need more bindings fiddling to add those optional things
> as explicitly named areas in the bindings as long as we treat it as just
> one opaque area s far as the binding is concerned.
>
> Also, unless there's some virtual combined sparse iomem API in the kernel
> that I'm not aware of, that's probably the easiest path forward for the
> driver as well.
>
> >
> > Regards,
> >
> > Tomeu
>
> Kind regards,
> Nicolas Frattaroli
>
> >
> > > The RK3568, which uses a similar NPU design has the IOMMU at an offset of 0xb000
> > > from the core's start of PC, so probably after any core specifics but before the
> > > global registers if I hazard a guess.
> > >
> > > For those without access to the TRM: splitting this up into multiple reg items
> > > per core precisely the way the TRM does it would result in no less than 10 reg
> > > items on RK3588, if I count correctly.
> > >
> > > Kind regards,
> > > Nicolas Frattaroli
> > >
> > >
> >
>
>
>
>


^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [PATCH v2 4/7] accel/rocket: Add a new driver for Rockchip's NPU
  2025-04-25 18:22   ` Nicolas Frattaroli
@ 2025-05-16  9:15     ` Tomeu Vizoso
  0 siblings, 0 replies; 25+ messages in thread
From: Tomeu Vizoso @ 2025-05-16  9:15 UTC (permalink / raw)
  To: Nicolas Frattaroli
  Cc: Rob Herring, Krzysztof Kozlowski, Conor Dooley, Heiko Stuebner,
	Oded Gabbay, Jonathan Corbet, Maarten Lankhorst, Maxime Ripard,
	Thomas Zimmermann, David Airlie, Simona Vetter, Sumit Semwal,
	Christian König, Sebastian Reichel, Jeffrey Hugo,
	linux-rockchip, devicetree, linux-arm-kernel, linux-kernel,
	dri-devel, linux-doc, linux-media, linaro-mm-sig

On Fri, Apr 25, 2025 at 8:22 PM Nicolas Frattaroli
<nicolas.frattaroli@collabora.com> wrote:
>
> On Tuesday, 25 February 2025 08:55:50 Central European Summer Time Tomeu Vizoso wrote:
> > This initial version supports the NPU as shipped in the RK3588 SoC and
> > described in the first part of its TRM, in Chapter 36.
> >
> > This NPU contains 3 independent cores that the driver can submit jobs
> > to.
> >
> > This commit adds just hardware initialization and power management.
> >
> > v2:
> > - Split cores and IOMMUs as independent devices (Sebastian Reichel)
> > - Add some documentation (Jeffrey Hugo)
> > - Be more explicit in the Kconfig documentation (Jeffrey Hugo)
> > - Remove resets, as these haven't been found useful so far (Zenghui Yu)
> > - Repack structs (Jeffrey Hugo)
> > - Use DEFINE_DRM_ACCEL_FOPS (Jeffrey Hugo)
> > - Use devm_drm_dev_alloc (Jeffrey Hugo)
> > - Use probe log helper (Jeffrey Hugo)
> > - Introduce UABI header in a later patch (Jeffrey Hugo)
> >
> > Signed-off-by: Tomeu Vizoso <tomeu@tomeuvizoso.net>
> > ---
> >  Documentation/accel/index.rst           |    1 +
> >  Documentation/accel/rocket/index.rst    |   19 +
> >  MAINTAINERS                             |    8 +
> >  drivers/accel/Kconfig                   |    1 +
> >  drivers/accel/Makefile                  |    1 +
> >  drivers/accel/rocket/Kconfig            |   25 +
> >  drivers/accel/rocket/Makefile           |    8 +
> >  drivers/accel/rocket/rocket_core.c      |   71 +
> >  drivers/accel/rocket/rocket_core.h      |   29 +
> >  drivers/accel/rocket/rocket_device.c    |   29 +
> >  drivers/accel/rocket/rocket_device.h    |   29 +
> >  drivers/accel/rocket/rocket_drv.c       |  273 ++
> >  drivers/accel/rocket/rocket_drv.h       |   13 +
> >  drivers/accel/rocket/rocket_registers.h | 4425 +++++++++++++++++++++++++++++++
> >  14 files changed, 4932 insertions(+)
> >
> > [...]
> > diff --git a/drivers/accel/rocket/rocket_drv.c b/drivers/accel/rocket/rocket_drv.c
> > new file mode 100644
> > index 0000000000000000000000000000000000000000..c22d965f20f1239a36b1d823d5fe5f372713555d
> > --- /dev/null
> > +++ b/drivers/accel/rocket/rocket_drv.c
> > @@ -0,0 +1,273 @@
> > [...]
> > +static int rocket_probe(struct platform_device *pdev)
> > +{
> > +     struct component_match *match = NULL;
> > +     struct device_node *core_node;
> > +
> > +     if (fwnode_device_is_compatible(pdev->dev.fwnode, "rockchip,rk3588-rknn-core"))
> > +             return component_add(&pdev->dev, &rocket_core_ops);
> > +
> > +     for_each_compatible_node(core_node, NULL, "rockchip,rk3588-rknn-core") {
> > +             if (!of_device_is_available(core_node))
> > +                     continue;
> > +
> > +             drm_of_component_match_add(&pdev->dev, &match,
> > +                                        component_compare_of, core_node);
> > +     }
> > +
> > +     return component_master_add_with_match(&pdev->dev, &rocket_drm_ops, match);
> > +}
>
> Hi Tomeu,
>
> something I've noticed while playing with this: currently, it doesn't seem like
> it's possible to support 1-core NPUs. rknn-core-top is a real core, but if no
> rknn-core is enabled beside it, it'll call component_master_add_with_match with
> match being NULL. This causes a kernel Oops.
>
> I'm not sure what the proper fix is, since the component API doesn't seem to
> really have a consideration for a master with no other components.

Yeah, I think we could add a code path for single-core NPUs that
doesn't make use of the component API at all.

> I ran into this when I was trying to debug why I get job timeouts followed by
> a full SoC lock-up on RK3576 by running with only one of the two cores enabled.
>
> As an aside note, my throwaway rocket-on-RK3576-hacking-branch is at [1] and
> contains some changes you may want to consider for v3, e.g. [2] and [3]+[4]. In
> [4], specifically the `domain-supply` part which means the NPU regulators don't
> have to be always-on. Though feel free to pull in my entire ROCK 5B enablement
> patch.

Ok, [2] I already had in my WIP branch. Will pick up [3] and [4],
though I cannot test them myself.

Regards,

Tomeu

> Kind regards,
> Nicolas Frattaroli, who discovered that his cat is apparently 5% space heater
> according to mobilenet while playing with this patch series.
>
> [1]: https://gitlab.collabora.com/fratti/linux/-/commits/tomeu-npu?ref_type=heads
> [2]: https://gitlab.collabora.com/fratti/linux/-/commit/73aba31a00b34c254be575b524da568e115d985d
> [3]: https://gitlab.collabora.com/fratti/linux/-/commit/bd3a7bf5054c54c2915a9dc0396730d0f24b3b7c
> [4]: https://gitlab.collabora.com/fratti/linux/-/commit/5da44d61b09c345309f76159574d447d071c295d
>
>
>


^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [PATCH v2 1/7] dt-bindings: npu: rockchip,rknn: Add bindings
  2025-05-15  8:30         ` Tomeu Vizoso
@ 2025-05-16 10:25           ` Nicolas Frattaroli
  2025-05-16 10:56             ` Tomeu Vizoso
  0 siblings, 1 reply; 25+ messages in thread
From: Nicolas Frattaroli @ 2025-05-16 10:25 UTC (permalink / raw)
  To: Tomeu Vizoso
  Cc: Rob Herring, Krzysztof Kozlowski, Conor Dooley, Heiko Stuebner,
	Oded Gabbay, Jonathan Corbet, Maarten Lankhorst, Maxime Ripard,
	Thomas Zimmermann, David Airlie, Simona Vetter, Sumit Semwal,
	Christian König, Sebastian Reichel, Jeffrey Hugo,
	linux-rockchip, devicetree, linux-arm-kernel, linux-kernel,
	dri-devel, linux-doc, linux-media, linaro-mm-sig

On Thursday, 15 May 2025 10:30:14 Central European Summer Time Tomeu Vizoso wrote:
> On Wed, May 14, 2025 at 7:50 PM Nicolas Frattaroli
> <nicolas.frattaroli@collabora.com> wrote:
> >
> > On Wednesday, 14 May 2025 17:18:22 Central European Summer Time Tomeu Vizoso wrote:
> > > Hi Nicolas,
> > >
> > > Thanks for looking at this. Some thoughts below:
> > >
> > > On Fri, Apr 25, 2025 at 8:50 PM Nicolas Frattaroli
> > > <nicolas.frattaroli@collabora.com> wrote:
> > > >
> > > > On Tuesday, 25 February 2025 08:55:47 Central European Summer Time Tomeu Vizoso wrote:
> > > > > Add the bindings for the Neural Processing Unit IP from Rockchip.
> > > > >
> > > > > v2:
> > > > > - Adapt to new node structure (one node per core, each with its own
> > > > >   IOMMU)
> > > > > - Several misc. fixes from Sebastian Reichel
> > > > >
> > > > > Signed-off-by: Tomeu Vizoso <tomeu@tomeuvizoso.net>
> > > > > Signed-off-by: Sebastian Reichel <sebastian.reichel@collabora.com>
> > > > > ---
> > > > >  .../bindings/npu/rockchip,rknn-core.yaml           | 152 +++++++++++++++++++++
> > > > >  1 file changed, 152 insertions(+)
> > > > >
> > > > > diff --git a/Documentation/devicetree/bindings/npu/rockchip,rknn-core.yaml b/Documentation/devicetree/bindings/npu/rockchip,rknn-core.yaml
> > > > > new file mode 100644
> > > > > index 0000000000000000000000000000000000000000..e8d0afe4a7d1c4f166cf13a9f4aa7c1901362a3f
> > > > > --- /dev/null
> > > > > +++ b/Documentation/devicetree/bindings/npu/rockchip,rknn-core.yaml
> > > > > @@ -0,0 +1,152 @@
> > > > > +# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
> > > > > +%YAML 1.2
> > > > > +---
> > > > > +$id: http://devicetree.org/schemas/npu/rockchip,rknn-core.yaml#
> > > > > +$schema: http://devicetree.org/meta-schemas/core.yaml#
> > > > > +
> > > > > +title: Neural Processing Unit IP from Rockchip
> > > > > +
> > > > > +maintainers:
> > > > > +  - Tomeu Vizoso <tomeu@tomeuvizoso.net>
> > > > > +
> > > > > +description:
> > > > > +  Rockchip IP for accelerating inference of neural networks, based on NVIDIA's
> > > > > +  open source NVDLA IP.
> > > > > +
> > > > > +properties:
> > > > > +  $nodename:
> > > > > +    pattern: '^npu-core@[a-f0-9]+$'
> > > > > +
> > > > > +  compatible:
> > > > > +    oneOf:
> > > > > +      - items:
> > > > > +          - enum:
> > > > > +              - rockchip,rk3588-rknn-core-top
> > > > > +          - const: rockchip,rknn-core-top
> > > > > +      - items:
> > > > > +          - enum:
> > > > > +              - rockchip,rk3588-rknn-core
> > > > > +          - const: rockchip,rknn-core
> > > > > +
> > > > > +  reg:
> > > > > +    maxItems: 1
> > > >
> > > > Hi Tomeu,
> > > >
> > > > as you probably know, RK3576 has quite a similar NPU. This is why I'm currently
> > > > poking at this patch series. One of the differences I ran into was that the
> > > > IOMMU of each NPU core now sits within the reg address space range of the core
> > > > as described by the single reg item binding and assumed by the driver.
> > >
> > > But this is not a difference, right?
> >
> > It is. E.g. on RK3588, you use reg = <0x0 0xfdab0000 0x0 0x9000>; for
> > rknn_core_top, and rknn_mmu_top then sits at 0xfdab9000, which is just
> > outside the reg range of the rknn_core_top node. That means acquiring the
> > iomem as a resource succeeds for you, whereas for me it fails.
> 
> Ah, got it now, thanks.
> 
> > >
> > > > This seemed weird to me at first, since I would've guessed the cores would be
> > > > exactly the same, but I noticed that they kind of still are; the RK3588's NPU
> > > > also has a "hole" between 0x2000 and 0x2fff on each core, which is where RK3576
> > > > put its IOMMU.
> > >
> > > So this is the same in both RK3576 and RK3588, right?
> >
> > Yes, both RK3576 and RK3588 have a hole in the same area. RK3562 also has
> > the same hole. RK3568 doesn't have the offsets for the individual parts of
> > the NPU in the TRM, making all the relative register offsets the TRM then
> > goes on to document completely pointless as it omits what those offsets
> > are based on, so we don't know if it has a hole there. I vaguely recall
> > that it has the IOMMU either before or past the global range (not sure if
> > I wrote these findings down anywhere?), so if it has a hole at 0x2000
> > then it's unused like on the RK3588. I don't have access to the RV1106
> > Part 2 TRM where the NPU is described, so I don't know whether that has a
> > hole there unless we dig into the downstream code.
> >
> > >
> > > > This is some information I gleaned from the RK3588 TRM, specifically section
> > > > 36.4.1 "Internal Address Mapping", which shows where each "part" of the NPU core
> > > > has its address space.
> > > >
> > > > Right now we just represent this as a single reg item per core. I've played
> > > > with the idea of splitting this up into the distinct ranges the TRM lists and
> > > > giving each a reg-names entry, but this would require a major rework of the
> > > > driver from what I can tell, including to the auto-generated register header.
> > > >
> > > > For now, my hack on RK3576 is to just ioremap the range defined by resource
> > > > start to resource end inside rocket manually if I get -EBUSY trying to ioremap
> > > > the resource proper. This is quite an ugly hack though, it means the IOMMU node
> > > > still has its address overlapping with another node in the DT, and it also means
> > > > we have an unavoidable error message printed into the kernel log. This is also
> > > > what the vendor driver seems to do.
> > > >
> > > > What do you reckon is a reg setup in the binding that is both reasonable to
> > > > implement in the driver while accurately describing the hardware?
> > >
> > > Guess we could go with some smaller granularity and have 3 register
> > > areas per core, instead of 10:
> > >
> > > - CORE: PC+CNA (0x0000 ~ 0x1fff)
> > > - AUX: CORE+DPU+PPU+DDMA+SDMA (0x3000 ~ 0x9fff)
> > > - GLOBAL (0xf000 ~ 0xf004)
> > >
> > > So the IOMMU on all the known SoCs can have its own regmap. I have
> > > chosen to call the first one CORE because these are the components
> > > that are absolutely needed in any NPU that is oriented towards
> > > convolutional networks (convolutions, basically). I have named the
> > > second AUX because it contains hardware units that are optional and
> > > are used to implement operations that may be common but that aren't as
> > > computational expensive as convolutions and thus might be skipped in
> > > lower-end versions of the IP.
> > >
> > > What do you think?
> >
> > I'm personally fine with this approach. I've floated a two-area approach
> > to Sebastian Reichel before who, as far as I can recall, expressed his
> > distaste for  it as it seemed like an arbitrary division. I do concur in
> > that, it seems very arbitrary, so it's hard to say whether the bindings
> > maintainers would let us get away with it if they get wind of it.
> > Unfortunately they are Cc'd on this E-Mail, so the cat is out of the bag
> > in this regard.
> 
> Actually, after thinking a bit more about it I'm leaning towards only
> having the PC, CNA and CORE areas in the DT, as those are the only
> ones that should be accessible from the CPU.

That does make sense to me. I've just checked the RK3576 specific reg
fiddling code I hacked in and it doesn't appear to be writing to any
other areas either.

> 
> The registers for the other units should be set by the PC, as it reads
> the command stream.
> 
> So three register areas that can be set to wherever Rockchip has
> placed them, and we just ignore the others in the kernel, as we don't
> have any business messing with them ourselves.
> 
> What do you think?

This seems like a good solution. Any further reg ranges that are used in
other variants (e.g. RK3562/RK3576 and maybe RV1106) introduce something
called "CBUF" and I'm not yet sure if that'll need any writes to its regs
from the driver, but if it does then it's easy to add another range for it
in the binding for just those compatibles.

> 
> Thanks,
> 
> Tomeu

Kind regards,
Nicolas Frattaroli

> 
> > What speaks for the 3 register area split is that anything that brings
> > more holes and doubly mapped things into the AUX area is probably going
> > to be so radically different it'll ideally have its own binding anyway,
> > or needs more than just a compatible added to the binding.
> >
> > I think as far as arbitrary splits goes, the one you propose is probably
> > the one most closely aligned with reality. Certain register areas do
> > seem like something they'd never move away from its corresponding
> > companion, whereas adding parts to the AUX area or removing from it is
> > probably going to be quite common. So it can essentially be treated as
> > the area where optional things will most likely land as you pointed out,
> > which then don't need more bindings fiddling to add those optional things
> > as explicitly named areas in the bindings as long as we treat it as just
> > one opaque area s far as the binding is concerned.
> >
> > Also, unless there's some virtual combined sparse iomem API in the kernel
> > that I'm not aware of, that's probably the easiest path forward for the
> > driver as well.
> >
> > >
> > > Regards,
> > >
> > > Tomeu
> >
> > Kind regards,
> > Nicolas Frattaroli
> >
> > >
> > > > The RK3568, which uses a similar NPU design has the IOMMU at an offset of 0xb000
> > > > from the core's start of PC, so probably after any core specifics but before the
> > > > global registers if I hazard a guess.
> > > >
> > > > For those without access to the TRM: splitting this up into multiple reg items
> > > > per core precisely the way the TRM does it would result in no less than 10 reg
> > > > items on RK3588, if I count correctly.
> > > >
> > > > Kind regards,
> > > > Nicolas Frattaroli
> > > >
> > > >
> > >
> >
> >
> >
> >
> 






^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [PATCH v2 1/7] dt-bindings: npu: rockchip,rknn: Add bindings
  2025-05-16 10:25           ` Nicolas Frattaroli
@ 2025-05-16 10:56             ` Tomeu Vizoso
  0 siblings, 0 replies; 25+ messages in thread
From: Tomeu Vizoso @ 2025-05-16 10:56 UTC (permalink / raw)
  To: Nicolas Frattaroli
  Cc: Rob Herring, Krzysztof Kozlowski, Conor Dooley, Heiko Stuebner,
	Oded Gabbay, Jonathan Corbet, Maarten Lankhorst, Maxime Ripard,
	Thomas Zimmermann, David Airlie, Simona Vetter, Sumit Semwal,
	Christian König, Sebastian Reichel, Jeffrey Hugo,
	linux-rockchip, devicetree, linux-arm-kernel, linux-kernel,
	dri-devel, linux-doc, linux-media, linaro-mm-sig

On Fri, May 16, 2025 at 12:25 PM Nicolas Frattaroli
<nicolas.frattaroli@collabora.com> wrote:
>
> On Thursday, 15 May 2025 10:30:14 Central European Summer Time Tomeu Vizoso wrote:
> > On Wed, May 14, 2025 at 7:50 PM Nicolas Frattaroli
> > <nicolas.frattaroli@collabora.com> wrote:
> > >
> > > On Wednesday, 14 May 2025 17:18:22 Central European Summer Time Tomeu Vizoso wrote:
> > > > Hi Nicolas,
> > > >
> > > > Thanks for looking at this. Some thoughts below:
> > > >
> > > > On Fri, Apr 25, 2025 at 8:50 PM Nicolas Frattaroli
> > > > <nicolas.frattaroli@collabora.com> wrote:
> > > > >
> > > > > On Tuesday, 25 February 2025 08:55:47 Central European Summer Time Tomeu Vizoso wrote:
> > > > > > Add the bindings for the Neural Processing Unit IP from Rockchip.
> > > > > >
> > > > > > v2:
> > > > > > - Adapt to new node structure (one node per core, each with its own
> > > > > >   IOMMU)
> > > > > > - Several misc. fixes from Sebastian Reichel
> > > > > >
> > > > > > Signed-off-by: Tomeu Vizoso <tomeu@tomeuvizoso.net>
> > > > > > Signed-off-by: Sebastian Reichel <sebastian.reichel@collabora.com>
> > > > > > ---
> > > > > >  .../bindings/npu/rockchip,rknn-core.yaml           | 152 +++++++++++++++++++++
> > > > > >  1 file changed, 152 insertions(+)
> > > > > >
> > > > > > diff --git a/Documentation/devicetree/bindings/npu/rockchip,rknn-core.yaml b/Documentation/devicetree/bindings/npu/rockchip,rknn-core.yaml
> > > > > > new file mode 100644
> > > > > > index 0000000000000000000000000000000000000000..e8d0afe4a7d1c4f166cf13a9f4aa7c1901362a3f
> > > > > > --- /dev/null
> > > > > > +++ b/Documentation/devicetree/bindings/npu/rockchip,rknn-core.yaml
> > > > > > @@ -0,0 +1,152 @@
> > > > > > +# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
> > > > > > +%YAML 1.2
> > > > > > +---
> > > > > > +$id: http://devicetree.org/schemas/npu/rockchip,rknn-core.yaml#
> > > > > > +$schema: http://devicetree.org/meta-schemas/core.yaml#
> > > > > > +
> > > > > > +title: Neural Processing Unit IP from Rockchip
> > > > > > +
> > > > > > +maintainers:
> > > > > > +  - Tomeu Vizoso <tomeu@tomeuvizoso.net>
> > > > > > +
> > > > > > +description:
> > > > > > +  Rockchip IP for accelerating inference of neural networks, based on NVIDIA's
> > > > > > +  open source NVDLA IP.
> > > > > > +
> > > > > > +properties:
> > > > > > +  $nodename:
> > > > > > +    pattern: '^npu-core@[a-f0-9]+$'
> > > > > > +
> > > > > > +  compatible:
> > > > > > +    oneOf:
> > > > > > +      - items:
> > > > > > +          - enum:
> > > > > > +              - rockchip,rk3588-rknn-core-top
> > > > > > +          - const: rockchip,rknn-core-top
> > > > > > +      - items:
> > > > > > +          - enum:
> > > > > > +              - rockchip,rk3588-rknn-core
> > > > > > +          - const: rockchip,rknn-core
> > > > > > +
> > > > > > +  reg:
> > > > > > +    maxItems: 1
> > > > >
> > > > > Hi Tomeu,
> > > > >
> > > > > as you probably know, RK3576 has quite a similar NPU. This is why I'm currently
> > > > > poking at this patch series. One of the differences I ran into was that the
> > > > > IOMMU of each NPU core now sits within the reg address space range of the core
> > > > > as described by the single reg item binding and assumed by the driver.
> > > >
> > > > But this is not a difference, right?
> > >
> > > It is. E.g. on RK3588, you use reg = <0x0 0xfdab0000 0x0 0x9000>; for
> > > rknn_core_top, and rknn_mmu_top then sits at 0xfdab9000, which is just
> > > outside the reg range of the rknn_core_top node. That means acquiring the
> > > iomem as a resource succeeds for you, whereas for me it fails.
> >
> > Ah, got it now, thanks.
> >
> > > >
> > > > > This seemed weird to me at first, since I would've guessed the cores would be
> > > > > exactly the same, but I noticed that they kind of still are; the RK3588's NPU
> > > > > also has a "hole" between 0x2000 and 0x2fff on each core, which is where RK3576
> > > > > put its IOMMU.
> > > >
> > > > So this is the same in both RK3576 and RK3588, right?
> > >
> > > Yes, both RK3576 and RK3588 have a hole in the same area. RK3562 also has
> > > the same hole. RK3568 doesn't have the offsets for the individual parts of
> > > the NPU in the TRM, making all the relative register offsets the TRM then
> > > goes on to document completely pointless as it omits what those offsets
> > > are based on, so we don't know if it has a hole there. I vaguely recall
> > > that it has the IOMMU either before or past the global range (not sure if
> > > I wrote these findings down anywhere?), so if it has a hole at 0x2000
> > > then it's unused like on the RK3588. I don't have access to the RV1106
> > > Part 2 TRM where the NPU is described, so I don't know whether that has a
> > > hole there unless we dig into the downstream code.
> > >
> > > >
> > > > > This is some information I gleaned from the RK3588 TRM, specifically section
> > > > > 36.4.1 "Internal Address Mapping", which shows where each "part" of the NPU core
> > > > > has its address space.
> > > > >
> > > > > Right now we just represent this as a single reg item per core. I've played
> > > > > with the idea of splitting this up into the distinct ranges the TRM lists and
> > > > > giving each a reg-names entry, but this would require a major rework of the
> > > > > driver from what I can tell, including to the auto-generated register header.
> > > > >
> > > > > For now, my hack on RK3576 is to just ioremap the range defined by resource
> > > > > start to resource end inside rocket manually if I get -EBUSY trying to ioremap
> > > > > the resource proper. This is quite an ugly hack though, it means the IOMMU node
> > > > > still has its address overlapping with another node in the DT, and it also means
> > > > > we have an unavoidable error message printed into the kernel log. This is also
> > > > > what the vendor driver seems to do.
> > > > >
> > > > > What do you reckon is a reg setup in the binding that is both reasonable to
> > > > > implement in the driver while accurately describing the hardware?
> > > >
> > > > Guess we could go with some smaller granularity and have 3 register
> > > > areas per core, instead of 10:
> > > >
> > > > - CORE: PC+CNA (0x0000 ~ 0x1fff)
> > > > - AUX: CORE+DPU+PPU+DDMA+SDMA (0x3000 ~ 0x9fff)
> > > > - GLOBAL (0xf000 ~ 0xf004)
> > > >
> > > > So the IOMMU on all the known SoCs can have its own regmap. I have
> > > > chosen to call the first one CORE because these are the components
> > > > that are absolutely needed in any NPU that is oriented towards
> > > > convolutional networks (convolutions, basically). I have named the
> > > > second AUX because it contains hardware units that are optional and
> > > > are used to implement operations that may be common but that aren't as
> > > > computational expensive as convolutions and thus might be skipped in
> > > > lower-end versions of the IP.
> > > >
> > > > What do you think?
> > >
> > > I'm personally fine with this approach. I've floated a two-area approach
> > > to Sebastian Reichel before who, as far as I can recall, expressed his
> > > distaste for  it as it seemed like an arbitrary division. I do concur in
> > > that, it seems very arbitrary, so it's hard to say whether the bindings
> > > maintainers would let us get away with it if they get wind of it.
> > > Unfortunately they are Cc'd on this E-Mail, so the cat is out of the bag
> > > in this regard.
> >
> > Actually, after thinking a bit more about it I'm leaning towards only
> > having the PC, CNA and CORE areas in the DT, as those are the only
> > ones that should be accessible from the CPU.
>
> That does make sense to me. I've just checked the RK3576 specific reg
> fiddling code I hacked in and it doesn't appear to be writing to any
> other areas either.

Cool.

> >
> > The registers for the other units should be set by the PC, as it reads
> > the command stream.
> >
> > So three register areas that can be set to wherever Rockchip has
> > placed them, and we just ignore the others in the kernel, as we don't
> > have any business messing with them ourselves.
> >
> > What do you think?
>
> This seems like a good solution. Any further reg ranges that are used in
> other variants (e.g. RK3562/RK3576 and maybe RV1106) introduce something
> called "CBUF" and I'm not yet sure if that'll need any writes to its regs
> from the driver, but if it does then it's easy to add another range for it
> in the binding for just those compatibles.

Do you have any further info on those CBUF regs? In the NVDLA
documentation, CBUF is the block that handles the Convolution Buffer,
but on the RK3588 it is part of the CNA block.

In any case, I don't think the kernel will have to do anything about it.

Cheers,

Tomeu

> >
> > Thanks,
> >
> > Tomeu
>
> Kind regards,
> Nicolas Frattaroli
>
> >
> > > What speaks for the 3 register area split is that anything that brings
> > > more holes and doubly mapped things into the AUX area is probably going
> > > to be so radically different it'll ideally have its own binding anyway,
> > > or needs more than just a compatible added to the binding.
> > >
> > > I think as far as arbitrary splits goes, the one you propose is probably
> > > the one most closely aligned with reality. Certain register areas do
> > > seem like something they'd never move away from its corresponding
> > > companion, whereas adding parts to the AUX area or removing from it is
> > > probably going to be quite common. So it can essentially be treated as
> > > the area where optional things will most likely land as you pointed out,
> > > which then don't need more bindings fiddling to add those optional things
> > > as explicitly named areas in the bindings as long as we treat it as just
> > > one opaque area s far as the binding is concerned.
> > >
> > > Also, unless there's some virtual combined sparse iomem API in the kernel
> > > that I'm not aware of, that's probably the easiest path forward for the
> > > driver as well.
> > >
> > > >
> > > > Regards,
> > > >
> > > > Tomeu
> > >
> > > Kind regards,
> > > Nicolas Frattaroli
> > >
> > > >
> > > > > The RK3568, which uses a similar NPU design has the IOMMU at an offset of 0xb000
> > > > > from the core's start of PC, so probably after any core specifics but before the
> > > > > global registers if I hazard a guess.
> > > > >
> > > > > For those without access to the TRM: splitting this up into multiple reg items
> > > > > per core precisely the way the TRM does it would result in no less than 10 reg
> > > > > items on RK3588, if I count correctly.
> > > > >
> > > > > Kind regards,
> > > > > Nicolas Frattaroli
> > > > >
> > > > >
> > > >
> > >
> > >
> > >
> > >
> >
>
>
>
>


^ permalink raw reply	[flat|nested] 25+ messages in thread

end of thread, other threads:[~2025-05-16 11:16 UTC | newest]

Thread overview: 25+ messages (download: mbox.gz follow: Atom feed
-- links below jump to the message on this page --
2025-02-25  7:55 [PATCH v2 0/7] New DRM accel driver for Rockchip's RKNN NPU Tomeu Vizoso
2025-02-25  7:55 ` [PATCH v2 1/7] dt-bindings: npu: rockchip,rknn: Add bindings Tomeu Vizoso
2025-02-25 16:02   ` Rob Herring
2025-05-14 16:26     ` Tomeu Vizoso
2025-04-25 18:50   ` Nicolas Frattaroli
2025-05-14 15:18     ` Tomeu Vizoso
2025-05-14 17:50       ` Nicolas Frattaroli
2025-05-15  8:30         ` Tomeu Vizoso
2025-05-16 10:25           ` Nicolas Frattaroli
2025-05-16 10:56             ` Tomeu Vizoso
2025-02-25  7:55 ` [PATCH v2 2/7] arm64: dts: rockchip: Add nodes for NPU and its MMU to rk3588s Tomeu Vizoso
2025-02-25  7:55 ` [PATCH v2 3/7] arm64: dts: rockchip: Enable the NPU on quartzpro64 Tomeu Vizoso
2025-02-25  7:55 ` [PATCH v2 5/7] accel/rocket: Add IOCTL for BO creation Tomeu Vizoso
2025-02-25  8:35   ` Thomas Zimmermann
2025-03-21 15:56   ` Jeffrey Hugo
2025-02-25  7:55 ` [PATCH v2 6/7] accel/rocket: Add job submission IOCTL Tomeu Vizoso
2025-02-25  8:44   ` Thomas Zimmermann
2025-03-21 16:09   ` Jeff Hugo
2025-04-29 10:05   ` Nicolas Frattaroli
2025-02-25  7:55 ` [PATCH v2 7/7] accel/rocket: Add IOCTLs for synchronizing memory accesses Tomeu Vizoso
2025-03-21 16:15   ` Jeffrey Hugo
     [not found] ` <20250225-6-10-rocket-v2-4-d4dbcfafc141@tomeuvizoso.net>
2025-03-21 15:48   ` [PATCH v2 4/7] accel/rocket: Add a new driver for Rockchip's NPU Jeff Hugo
2025-04-25 18:22   ` Nicolas Frattaroli
2025-05-16  9:15     ` Tomeu Vizoso
2025-04-29  9:39   ` Nicolas Frattaroli

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).