* [PATCH v3 01/10] drm/msm: Fix bv_fence being used as bv_rptr
2024-09-05 14:51 [PATCH v3 00/10] Preemption support for A7XX Antonino Maniscalco
@ 2024-09-05 14:51 ` Antonino Maniscalco
2024-09-05 14:51 ` [PATCH v3 02/10] drm/msm: Add a `preempt_record_size` field Antonino Maniscalco
` (9 subsequent siblings)
10 siblings, 0 replies; 32+ messages in thread
From: Antonino Maniscalco @ 2024-09-05 14:51 UTC (permalink / raw)
To: Rob Clark, Sean Paul, Konrad Dybcio, Abhinav Kumar,
Dmitry Baryshkov, Marijn Suijten, David Airlie, Daniel Vetter,
Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann,
Jonathan Corbet
Cc: linux-arm-msm, dri-devel, freedreno, linux-kernel, linux-doc,
Antonino Maniscalco, Akhil P Oommen, Neil Armstrong
The bv_fence field of rbmemptrs was being used incorrectly as the BV
rptr shadow pointer in some places.
Add a bv_rptr field and change the code to use that instead.
Signed-off-by: Antonino Maniscalco <antomani103@gmail.com>
Reviewed-by: Akhil P Oommen <quic_akhilpo@quicinc.com>
Tested-by: Neil Armstrong <neil.armstrong@linaro.org> # on SM8650-QRD
---
drivers/gpu/drm/msm/adreno/a6xx_gpu.c | 2 +-
drivers/gpu/drm/msm/msm_ringbuffer.h | 1 +
2 files changed, 2 insertions(+), 1 deletion(-)
diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
index bcaec86ac67a..32a4faa93d7f 100644
--- a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
+++ b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
@@ -1132,7 +1132,7 @@ static int hw_init(struct msm_gpu *gpu)
/* ..which means "always" on A7xx, also for BV shadow */
if (adreno_is_a7xx(adreno_gpu)) {
gpu_write64(gpu, REG_A7XX_CP_BV_RB_RPTR_ADDR,
- rbmemptr(gpu->rb[0], bv_fence));
+ rbmemptr(gpu->rb[0], bv_rptr));
}
/* Always come up on rb 0 */
diff --git a/drivers/gpu/drm/msm/msm_ringbuffer.h b/drivers/gpu/drm/msm/msm_ringbuffer.h
index 0d6beb8cd39a..40791b2ade46 100644
--- a/drivers/gpu/drm/msm/msm_ringbuffer.h
+++ b/drivers/gpu/drm/msm/msm_ringbuffer.h
@@ -31,6 +31,7 @@ struct msm_rbmemptrs {
volatile uint32_t rptr;
volatile uint32_t fence;
/* Introduced on A7xx */
+ volatile uint32_t bv_rptr;
volatile uint32_t bv_fence;
volatile struct msm_gpu_submit_stats stats[MSM_GPU_SUBMIT_STATS_COUNT];
--
2.46.0
^ permalink raw reply related [flat|nested] 32+ messages in thread* [PATCH v3 02/10] drm/msm: Add a `preempt_record_size` field
2024-09-05 14:51 [PATCH v3 00/10] Preemption support for A7XX Antonino Maniscalco
2024-09-05 14:51 ` [PATCH v3 01/10] drm/msm: Fix bv_fence being used as bv_rptr Antonino Maniscalco
@ 2024-09-05 14:51 ` Antonino Maniscalco
2024-09-05 14:51 ` [PATCH v3 03/10] drm/msm: Add CONTEXT_SWITCH_CNTL bitfields Antonino Maniscalco
` (8 subsequent siblings)
10 siblings, 0 replies; 32+ messages in thread
From: Antonino Maniscalco @ 2024-09-05 14:51 UTC (permalink / raw)
To: Rob Clark, Sean Paul, Konrad Dybcio, Abhinav Kumar,
Dmitry Baryshkov, Marijn Suijten, David Airlie, Daniel Vetter,
Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann,
Jonathan Corbet
Cc: linux-arm-msm, dri-devel, freedreno, linux-kernel, linux-doc,
Antonino Maniscalco, Akhil P Oommen, Neil Armstrong
Adds a field to `adreno_info` to store the GPU specific preempt record
size.
Signed-off-by: Antonino Maniscalco <antomani103@gmail.com>
Reviewed-by: Akhil P Oommen <quic_akhilpo@quicinc.com>
Tested-by: Neil Armstrong <neil.armstrong@linaro.org> # on SM8650-QRD
---
drivers/gpu/drm/msm/adreno/a6xx_catalog.c | 4 ++++
drivers/gpu/drm/msm/adreno/adreno_gpu.h | 1 +
2 files changed, 5 insertions(+)
diff --git a/drivers/gpu/drm/msm/adreno/a6xx_catalog.c b/drivers/gpu/drm/msm/adreno/a6xx_catalog.c
index 68ba9aed5506..316f23ca9167 100644
--- a/drivers/gpu/drm/msm/adreno/a6xx_catalog.c
+++ b/drivers/gpu/drm/msm/adreno/a6xx_catalog.c
@@ -1190,6 +1190,7 @@ static const struct adreno_info a7xx_gpus[] = {
.protect = &a730_protect,
},
.address_space_size = SZ_16G,
+ .preempt_record_size = 2860 * SZ_1K,
}, {
.chip_ids = ADRENO_CHIP_IDS(0x43050a01), /* "C510v2" */
.family = ADRENO_7XX_GEN2,
@@ -1209,6 +1210,7 @@ static const struct adreno_info a7xx_gpus[] = {
.gmu_chipid = 0x7020100,
},
.address_space_size = SZ_16G,
+ .preempt_record_size = 4192 * SZ_1K,
}, {
.chip_ids = ADRENO_CHIP_IDS(0x43050c01), /* "C512v2" */
.family = ADRENO_7XX_GEN2,
@@ -1227,6 +1229,7 @@ static const struct adreno_info a7xx_gpus[] = {
.gmu_chipid = 0x7050001,
},
.address_space_size = SZ_256G,
+ .preempt_record_size = 4192 * SZ_1K,
}, {
.chip_ids = ADRENO_CHIP_IDS(0x43051401), /* "C520v2" */
.family = ADRENO_7XX_GEN3,
@@ -1245,6 +1248,7 @@ static const struct adreno_info a7xx_gpus[] = {
.gmu_chipid = 0x7090100,
},
.address_space_size = SZ_16G,
+ .preempt_record_size = 3572 * SZ_1K,
}
};
DECLARE_ADRENO_GPULIST(a7xx);
diff --git a/drivers/gpu/drm/msm/adreno/adreno_gpu.h b/drivers/gpu/drm/msm/adreno/adreno_gpu.h
index 1ab523a163a0..6b1888280a83 100644
--- a/drivers/gpu/drm/msm/adreno/adreno_gpu.h
+++ b/drivers/gpu/drm/msm/adreno/adreno_gpu.h
@@ -111,6 +111,7 @@ struct adreno_info {
* {SHRT_MAX, 0} sentinal.
*/
struct adreno_speedbin *speedbins;
+ u64 preempt_record_size;
};
#define ADRENO_CHIP_IDS(tbl...) (uint32_t[]) { tbl, 0 }
--
2.46.0
^ permalink raw reply related [flat|nested] 32+ messages in thread* [PATCH v3 03/10] drm/msm: Add CONTEXT_SWITCH_CNTL bitfields
2024-09-05 14:51 [PATCH v3 00/10] Preemption support for A7XX Antonino Maniscalco
2024-09-05 14:51 ` [PATCH v3 01/10] drm/msm: Fix bv_fence being used as bv_rptr Antonino Maniscalco
2024-09-05 14:51 ` [PATCH v3 02/10] drm/msm: Add a `preempt_record_size` field Antonino Maniscalco
@ 2024-09-05 14:51 ` Antonino Maniscalco
2024-09-05 14:51 ` [PATCH v3 04/10] drm/msm/A6xx: Implement preemption for A7XX targets Antonino Maniscalco
` (7 subsequent siblings)
10 siblings, 0 replies; 32+ messages in thread
From: Antonino Maniscalco @ 2024-09-05 14:51 UTC (permalink / raw)
To: Rob Clark, Sean Paul, Konrad Dybcio, Abhinav Kumar,
Dmitry Baryshkov, Marijn Suijten, David Airlie, Daniel Vetter,
Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann,
Jonathan Corbet
Cc: linux-arm-msm, dri-devel, freedreno, linux-kernel, linux-doc,
Antonino Maniscalco
Add missing bitfields to CONTEXT_SWITCH_CNTL in a6xx.xml.
Signed-off-by: Antonino Maniscalco <antomani103@gmail.com>
---
drivers/gpu/drm/msm/registers/adreno/a6xx.xml | 7 ++++++-
1 file changed, 6 insertions(+), 1 deletion(-)
diff --git a/drivers/gpu/drm/msm/registers/adreno/a6xx.xml b/drivers/gpu/drm/msm/registers/adreno/a6xx.xml
index 2dfe6913ab4f..fd31d1d7a11e 100644
--- a/drivers/gpu/drm/msm/registers/adreno/a6xx.xml
+++ b/drivers/gpu/drm/msm/registers/adreno/a6xx.xml
@@ -1337,7 +1337,12 @@ to upconvert to 32b float internally?
<reg32 offset="0x0" name="REG" type="a6x_cp_protect"/>
</array>
- <reg32 offset="0x08A0" name="CP_CONTEXT_SWITCH_CNTL"/>
+ <reg32 offset="0x08A0" name="CP_CONTEXT_SWITCH_CNTL">
+ <bitfield name="STOP" pos="0" type="boolean"/>
+ <bitfield name="LEVEL" low="6" high="7"/>
+ <bitfield name="USES_GMEM" pos="8" type="boolean"/>
+ <bitfield name="SKIP_SAVE_RESTORE" pos="9" type="boolean"/>
+ </reg32>
<reg64 offset="0x08A1" name="CP_CONTEXT_SWITCH_SMMU_INFO"/>
<reg64 offset="0x08A3" name="CP_CONTEXT_SWITCH_PRIV_NON_SECURE_RESTORE_ADDR"/>
<reg64 offset="0x08A5" name="CP_CONTEXT_SWITCH_PRIV_SECURE_RESTORE_ADDR"/>
--
2.46.0
^ permalink raw reply related [flat|nested] 32+ messages in thread* [PATCH v3 04/10] drm/msm/A6xx: Implement preemption for A7XX targets
2024-09-05 14:51 [PATCH v3 00/10] Preemption support for A7XX Antonino Maniscalco
` (2 preceding siblings ...)
2024-09-05 14:51 ` [PATCH v3 03/10] drm/msm: Add CONTEXT_SWITCH_CNTL bitfields Antonino Maniscalco
@ 2024-09-05 14:51 ` Antonino Maniscalco
2024-09-06 19:54 ` Akhil P Oommen
2024-09-05 14:51 ` [PATCH v3 05/10] drm/msm/A6xx: Sync relevant adreno_pm4.xml changes Antonino Maniscalco
` (6 subsequent siblings)
10 siblings, 1 reply; 32+ messages in thread
From: Antonino Maniscalco @ 2024-09-05 14:51 UTC (permalink / raw)
To: Rob Clark, Sean Paul, Konrad Dybcio, Abhinav Kumar,
Dmitry Baryshkov, Marijn Suijten, David Airlie, Daniel Vetter,
Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann,
Jonathan Corbet
Cc: linux-arm-msm, dri-devel, freedreno, linux-kernel, linux-doc,
Antonino Maniscalco, Sharat Masetty, Neil Armstrong
This patch implements preemption feature for A6xx targets, this allows
the GPU to switch to a higher priority ringbuffer if one is ready. A6XX
hardware as such supports multiple levels of preemption granularities,
ranging from coarse grained(ringbuffer level) to a more fine grained
such as draw-call level or a bin boundary level preemption. This patch
enables the basic preemption level, with more fine grained preemption
support to follow.
Signed-off-by: Sharat Masetty <smasetty@codeaurora.org>
Signed-off-by: Antonino Maniscalco <antomani103@gmail.com>
Tested-by: Neil Armstrong <neil.armstrong@linaro.org> # on SM8650-QRD
---
drivers/gpu/drm/msm/Makefile | 1 +
drivers/gpu/drm/msm/adreno/a6xx_gpu.c | 293 +++++++++++++++++++++-
drivers/gpu/drm/msm/adreno/a6xx_gpu.h | 161 ++++++++++++
drivers/gpu/drm/msm/adreno/a6xx_preempt.c | 391 ++++++++++++++++++++++++++++++
drivers/gpu/drm/msm/msm_ringbuffer.h | 7 +
5 files changed, 844 insertions(+), 9 deletions(-)
diff --git a/drivers/gpu/drm/msm/Makefile b/drivers/gpu/drm/msm/Makefile
index f5e2838c6a76..32e915109a59 100644
--- a/drivers/gpu/drm/msm/Makefile
+++ b/drivers/gpu/drm/msm/Makefile
@@ -23,6 +23,7 @@ adreno-y := \
adreno/a6xx_gpu.o \
adreno/a6xx_gmu.o \
adreno/a6xx_hfi.o \
+ adreno/a6xx_preempt.o \
adreno-$(CONFIG_DEBUG_FS) += adreno/a5xx_debugfs.o \
diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
index 32a4faa93d7f..ed0b138a2d66 100644
--- a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
+++ b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
@@ -16,6 +16,83 @@
#define GPU_PAS_ID 13
+/* IFPC & Preemption static powerup restore list */
+static const uint32_t a7xx_pwrup_reglist[] = {
+ REG_A6XX_UCHE_TRAP_BASE,
+ REG_A6XX_UCHE_TRAP_BASE + 1,
+ REG_A6XX_UCHE_WRITE_THRU_BASE,
+ REG_A6XX_UCHE_WRITE_THRU_BASE + 1,
+ REG_A6XX_UCHE_GMEM_RANGE_MIN,
+ REG_A6XX_UCHE_GMEM_RANGE_MIN + 1,
+ REG_A6XX_UCHE_GMEM_RANGE_MAX,
+ REG_A6XX_UCHE_GMEM_RANGE_MAX + 1,
+ REG_A6XX_UCHE_CACHE_WAYS,
+ REG_A6XX_UCHE_MODE_CNTL,
+ REG_A6XX_RB_NC_MODE_CNTL,
+ REG_A6XX_RB_CMP_DBG_ECO_CNTL,
+ REG_A7XX_GRAS_NC_MODE_CNTL,
+ REG_A6XX_RB_CONTEXT_SWITCH_GMEM_SAVE_RESTORE,
+ REG_A6XX_UCHE_GBIF_GX_CONFIG,
+ REG_A6XX_UCHE_CLIENT_PF,
+};
+
+static const uint32_t a7xx_ifpc_pwrup_reglist[] = {
+ REG_A6XX_TPL1_NC_MODE_CNTL,
+ REG_A6XX_SP_NC_MODE_CNTL,
+ REG_A6XX_CP_DBG_ECO_CNTL,
+ REG_A6XX_CP_PROTECT_CNTL,
+ REG_A6XX_CP_PROTECT(0),
+ REG_A6XX_CP_PROTECT(1),
+ REG_A6XX_CP_PROTECT(2),
+ REG_A6XX_CP_PROTECT(3),
+ REG_A6XX_CP_PROTECT(4),
+ REG_A6XX_CP_PROTECT(5),
+ REG_A6XX_CP_PROTECT(6),
+ REG_A6XX_CP_PROTECT(7),
+ REG_A6XX_CP_PROTECT(8),
+ REG_A6XX_CP_PROTECT(9),
+ REG_A6XX_CP_PROTECT(10),
+ REG_A6XX_CP_PROTECT(11),
+ REG_A6XX_CP_PROTECT(12),
+ REG_A6XX_CP_PROTECT(13),
+ REG_A6XX_CP_PROTECT(14),
+ REG_A6XX_CP_PROTECT(15),
+ REG_A6XX_CP_PROTECT(16),
+ REG_A6XX_CP_PROTECT(17),
+ REG_A6XX_CP_PROTECT(18),
+ REG_A6XX_CP_PROTECT(19),
+ REG_A6XX_CP_PROTECT(20),
+ REG_A6XX_CP_PROTECT(21),
+ REG_A6XX_CP_PROTECT(22),
+ REG_A6XX_CP_PROTECT(23),
+ REG_A6XX_CP_PROTECT(24),
+ REG_A6XX_CP_PROTECT(25),
+ REG_A6XX_CP_PROTECT(26),
+ REG_A6XX_CP_PROTECT(27),
+ REG_A6XX_CP_PROTECT(28),
+ REG_A6XX_CP_PROTECT(29),
+ REG_A6XX_CP_PROTECT(30),
+ REG_A6XX_CP_PROTECT(31),
+ REG_A6XX_CP_PROTECT(32),
+ REG_A6XX_CP_PROTECT(33),
+ REG_A6XX_CP_PROTECT(34),
+ REG_A6XX_CP_PROTECT(35),
+ REG_A6XX_CP_PROTECT(36),
+ REG_A6XX_CP_PROTECT(37),
+ REG_A6XX_CP_PROTECT(38),
+ REG_A6XX_CP_PROTECT(39),
+ REG_A6XX_CP_PROTECT(40),
+ REG_A6XX_CP_PROTECT(41),
+ REG_A6XX_CP_PROTECT(42),
+ REG_A6XX_CP_PROTECT(43),
+ REG_A6XX_CP_PROTECT(44),
+ REG_A6XX_CP_PROTECT(45),
+ REG_A6XX_CP_PROTECT(46),
+ REG_A6XX_CP_PROTECT(47),
+ REG_A6XX_CP_AHB_CNTL,
+};
+
+
static inline bool _a6xx_check_idle(struct msm_gpu *gpu)
{
struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu);
@@ -68,6 +145,8 @@ static void update_shadow_rptr(struct msm_gpu *gpu, struct msm_ringbuffer *ring)
static void a6xx_flush(struct msm_gpu *gpu, struct msm_ringbuffer *ring)
{
+ struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu);
+ struct a6xx_gpu *a6xx_gpu = to_a6xx_gpu(adreno_gpu);
uint32_t wptr;
unsigned long flags;
@@ -81,12 +160,26 @@ static void a6xx_flush(struct msm_gpu *gpu, struct msm_ringbuffer *ring)
/* Make sure to wrap wptr if we need to */
wptr = get_wptr(ring);
- spin_unlock_irqrestore(&ring->preempt_lock, flags);
-
/* Make sure everything is posted before making a decision */
mb();
- gpu_write(gpu, REG_A6XX_CP_RB_WPTR, wptr);
+ /* Update HW if this is the current ring and we are not in preempt*/
+ if (!a6xx_in_preempt(a6xx_gpu)) {
+ /*
+ * Order the reads of the preempt state and cur_ring. This
+ * matches the barrier after writing cur_ring.
+ */
+ rmb();
+
+ if (a6xx_gpu->cur_ring == ring)
+ gpu_write(gpu, REG_A6XX_CP_RB_WPTR, wptr);
+ else
+ ring->skip_inline_wptr = true;
+ } else {
+ ring->skip_inline_wptr = true;
+ }
+
+ spin_unlock_irqrestore(&ring->preempt_lock, flags);
}
static void get_stats_counter(struct msm_ringbuffer *ring, u32 counter,
@@ -138,12 +231,14 @@ static void a6xx_set_pagetable(struct a6xx_gpu *a6xx_gpu,
/*
* Write the new TTBR0 to the memstore. This is good for debugging.
+ * Needed for preemption
*/
- OUT_PKT7(ring, CP_MEM_WRITE, 4);
+ OUT_PKT7(ring, CP_MEM_WRITE, 5);
OUT_RING(ring, CP_MEM_WRITE_0_ADDR_LO(lower_32_bits(memptr)));
OUT_RING(ring, CP_MEM_WRITE_1_ADDR_HI(upper_32_bits(memptr)));
OUT_RING(ring, lower_32_bits(ttbr));
- OUT_RING(ring, (asid << 16) | upper_32_bits(ttbr));
+ OUT_RING(ring, upper_32_bits(ttbr));
+ OUT_RING(ring, ctx->seqno);
/*
* Sync both threads after switching pagetables and enable BR only
@@ -268,6 +363,43 @@ static void a6xx_submit(struct msm_gpu *gpu, struct msm_gem_submit *submit)
a6xx_flush(gpu, ring);
}
+static void a6xx_emit_set_pseudo_reg(struct msm_ringbuffer *ring,
+ struct a6xx_gpu *a6xx_gpu, struct msm_gpu_submitqueue *queue)
+{
+ u64 preempt_offset_priv_secure;
+
+ OUT_PKT7(ring, CP_SET_PSEUDO_REG, 15);
+
+ OUT_RING(ring, SMMU_INFO);
+ /* don't save SMMU, we write the record from the kernel instead */
+ OUT_RING(ring, 0);
+ OUT_RING(ring, 0);
+
+ /* privileged and non secure buffer save */
+ OUT_RING(ring, NON_SECURE_SAVE_ADDR);
+ OUT_RING(ring, lower_32_bits(
+ a6xx_gpu->preempt_iova[ring->id] + PREEMPT_OFFSET_PRIV_NON_SECURE));
+ OUT_RING(ring, upper_32_bits(
+ a6xx_gpu->preempt_iova[ring->id] + PREEMPT_OFFSET_PRIV_NON_SECURE));
+ OUT_RING(ring, SECURE_SAVE_ADDR);
+ preempt_offset_priv_secure =
+ PREEMPT_OFFSET_PRIV_SECURE(a6xx_gpu->base.info->preempt_record_size);
+ OUT_RING(ring, lower_32_bits(
+ a6xx_gpu->preempt_iova[ring->id] + preempt_offset_priv_secure));
+ OUT_RING(ring, upper_32_bits(
+ a6xx_gpu->preempt_iova[ring->id] + preempt_offset_priv_secure));
+
+ /* user context buffer save, seems to be unnused by fw */
+ OUT_RING(ring, NON_PRIV_SAVE_ADDR);
+ OUT_RING(ring, 0);
+ OUT_RING(ring, 0);
+
+ OUT_RING(ring, COUNTER);
+ /* seems OK to set to 0 to disable it */
+ OUT_RING(ring, 0);
+ OUT_RING(ring, 0);
+}
+
static void a7xx_submit(struct msm_gpu *gpu, struct msm_gem_submit *submit)
{
unsigned int index = submit->seqno % MSM_GPU_SUBMIT_STATS_COUNT;
@@ -283,6 +415,13 @@ static void a7xx_submit(struct msm_gpu *gpu, struct msm_gem_submit *submit)
OUT_PKT7(ring, CP_THREAD_CONTROL, 1);
OUT_RING(ring, CP_THREAD_CONTROL_0_SYNC_THREADS | CP_SET_THREAD_BR);
+ /*
+ * If preemption is enabled, then set the pseudo register for the save
+ * sequence
+ */
+ if (gpu->nr_rings > 1)
+ a6xx_emit_set_pseudo_reg(ring, a6xx_gpu, submit->queue);
+
a6xx_set_pagetable(a6xx_gpu, ring, submit->queue->ctx);
get_stats_counter(ring, REG_A7XX_RBBM_PERFCTR_CP(0),
@@ -376,6 +515,8 @@ static void a7xx_submit(struct msm_gpu *gpu, struct msm_gem_submit *submit)
OUT_RING(ring, upper_32_bits(rbmemptr(ring, bv_fence)));
OUT_RING(ring, submit->seqno);
+ a6xx_gpu->last_seqno[ring->id] = submit->seqno;
+
/* write the ringbuffer timestamp */
OUT_PKT7(ring, CP_EVENT_WRITE, 4);
OUT_RING(ring, CACHE_CLEAN | CP_EVENT_WRITE_0_IRQ | BIT(27));
@@ -389,10 +530,32 @@ static void a7xx_submit(struct msm_gpu *gpu, struct msm_gem_submit *submit)
OUT_PKT7(ring, CP_SET_MARKER, 1);
OUT_RING(ring, 0x100); /* IFPC enable */
+ /* If preemption is enabled */
+ if (gpu->nr_rings > 1) {
+ /* Yield the floor on command completion */
+ OUT_PKT7(ring, CP_CONTEXT_SWITCH_YIELD, 4);
+
+ /*
+ * If dword[2:1] are non zero, they specify an address for
+ * the CP to write the value of dword[3] to on preemption
+ * complete. Write 0 to skip the write
+ */
+ OUT_RING(ring, 0x00);
+ OUT_RING(ring, 0x00);
+ /* Data value - not used if the address above is 0 */
+ OUT_RING(ring, 0x01);
+ /* generate interrupt on preemption completion */
+ OUT_RING(ring, 0x00);
+ }
+
+
trace_msm_gpu_submit_flush(submit,
gpu_read64(gpu, REG_A6XX_CP_ALWAYS_ON_COUNTER));
a6xx_flush(gpu, ring);
+
+ /* Check to see if we need to start preemption */
+ a6xx_preempt_trigger(gpu);
}
static void a6xx_set_hwcg(struct msm_gpu *gpu, bool state)
@@ -588,6 +751,89 @@ static void a6xx_set_ubwc_config(struct msm_gpu *gpu)
adreno_gpu->ubwc_config.min_acc_len << 23 | hbb_lo << 21);
}
+static void a7xx_patch_pwrup_reglist(struct msm_gpu *gpu)
+{
+ struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu);
+ struct a6xx_gpu *a6xx_gpu = to_a6xx_gpu(adreno_gpu);
+ struct adreno_reglist_list reglist[2];
+ void *ptr = a6xx_gpu->pwrup_reglist_ptr;
+ struct cpu_gpu_lock *lock = ptr;
+ u32 *dest = (u32 *)&lock->regs[0];
+ int i, j;
+
+ lock->gpu_req = lock->cpu_req = lock->turn = 0;
+ lock->ifpc_list_len = ARRAY_SIZE(a7xx_ifpc_pwrup_reglist);
+ lock->preemption_list_len = ARRAY_SIZE(a7xx_pwrup_reglist);
+
+ /* Static IFPC-only registers */
+ reglist[0].regs = a7xx_ifpc_pwrup_reglist;
+ reglist[0].count = ARRAY_SIZE(a7xx_ifpc_pwrup_reglist);
+ lock->ifpc_list_len = reglist[0].count;
+
+ /* Static IFPC + preemption registers */
+ reglist[1].regs = a7xx_pwrup_reglist;
+ reglist[1].count = ARRAY_SIZE(a7xx_pwrup_reglist);
+ lock->preemption_list_len = reglist[1].count;
+
+ /*
+ * For each entry in each of the lists, write the offset and the current
+ * register value into the GPU buffer
+ */
+ for (i = 0; i < 2; i++) {
+ const u32 *r = reglist[i].regs;
+
+ for (j = 0; j < reglist[i].count; j++) {
+ *dest++ = r[j];
+ *dest++ = gpu_read(gpu, r[j]);
+ }
+ }
+
+ /*
+ * The overall register list is composed of
+ * 1. Static IFPC-only registers
+ * 2. Static IFPC + preemption registers
+ * 3. Dynamic IFPC + preemption registers (ex: perfcounter selects)
+ *
+ * The first two lists are static. Size of these lists are stored as
+ * number of pairs in ifpc_list_len and preemption_list_len
+ * respectively. With concurrent binning, Some of the perfcounter
+ * registers being virtualized, CP needs to know the pipe id to program
+ * the aperture inorder to restore the same. Thus, third list is a
+ * dynamic list with triplets as
+ * (<aperture, shifted 12 bits> <address> <data>), and the length is
+ * stored as number for triplets in dynamic_list_len.
+ */
+ lock->dynamic_list_len = 0;
+}
+
+static int a7xx_preempt_start(struct msm_gpu *gpu)
+{
+ struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu);
+ struct a6xx_gpu *a6xx_gpu = to_a6xx_gpu(adreno_gpu);
+ struct msm_ringbuffer *ring = gpu->rb[0];
+
+ if (gpu->nr_rings <= 1)
+ return 0;
+
+ /* Turn CP protection off */
+ OUT_PKT7(ring, CP_SET_PROTECTED_MODE, 1);
+ OUT_RING(ring, 0);
+
+ a6xx_emit_set_pseudo_reg(ring, a6xx_gpu, NULL);
+
+ /* Yield the floor on command completion */
+ OUT_PKT7(ring, CP_CONTEXT_SWITCH_YIELD, 4);
+ OUT_RING(ring, 0x00);
+ OUT_RING(ring, 0x00);
+ OUT_RING(ring, 0x01);
+ /* Generate interrupt on preemption completion */
+ OUT_RING(ring, 0x00);
+
+ a6xx_flush(gpu, ring);
+
+ return a6xx_idle(gpu, ring) ? 0 : -EINVAL;
+}
+
static int a6xx_cp_init(struct msm_gpu *gpu)
{
struct msm_ringbuffer *ring = gpu->rb[0];
@@ -619,6 +865,8 @@ static int a6xx_cp_init(struct msm_gpu *gpu)
static int a7xx_cp_init(struct msm_gpu *gpu)
{
+ struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu);
+ struct a6xx_gpu *a6xx_gpu = to_a6xx_gpu(adreno_gpu);
struct msm_ringbuffer *ring = gpu->rb[0];
u32 mask;
@@ -626,6 +874,8 @@ static int a7xx_cp_init(struct msm_gpu *gpu)
OUT_PKT7(ring, CP_THREAD_CONTROL, 1);
OUT_RING(ring, BIT(27));
+ a7xx_patch_pwrup_reglist(gpu);
+
OUT_PKT7(ring, CP_ME_INIT, 7);
/* Use multiple HW contexts */
@@ -656,11 +906,11 @@ static int a7xx_cp_init(struct msm_gpu *gpu)
/* *Don't* send a power up reg list for concurrent binning (TODO) */
/* Lo address */
- OUT_RING(ring, 0x00000000);
+ OUT_RING(ring, lower_32_bits(a6xx_gpu->pwrup_reglist_iova));
/* Hi address */
- OUT_RING(ring, 0x00000000);
+ OUT_RING(ring, upper_32_bits(a6xx_gpu->pwrup_reglist_iova));
/* BIT(31) set => read the regs from the list */
- OUT_RING(ring, 0x00000000);
+ OUT_RING(ring, BIT(31));
a6xx_flush(gpu, ring);
return a6xx_idle(gpu, ring) ? 0 : -EINVAL;
@@ -784,6 +1034,16 @@ static int a6xx_ucode_load(struct msm_gpu *gpu)
msm_gem_object_set_name(a6xx_gpu->shadow_bo, "shadow");
}
+ a6xx_gpu->pwrup_reglist_ptr = msm_gem_kernel_new(gpu->dev, PAGE_SIZE,
+ MSM_BO_WC | MSM_BO_MAP_PRIV,
+ gpu->aspace, &a6xx_gpu->pwrup_reglist_bo,
+ &a6xx_gpu->pwrup_reglist_iova);
+
+ if (IS_ERR(a6xx_gpu->pwrup_reglist_ptr))
+ return PTR_ERR(a6xx_gpu->pwrup_reglist_ptr);
+
+ msm_gem_object_set_name(a6xx_gpu->pwrup_reglist_bo, "pwrup_reglist");
+
return 0;
}
@@ -1127,6 +1387,8 @@ static int hw_init(struct msm_gpu *gpu)
if (a6xx_gpu->shadow_bo) {
gpu_write64(gpu, REG_A6XX_CP_RB_RPTR_ADDR,
shadowptr(a6xx_gpu, gpu->rb[0]));
+ for (unsigned int i = 0; i < gpu->nr_rings; i++)
+ a6xx_gpu->shadow[i] = 0;
}
/* ..which means "always" on A7xx, also for BV shadow */
@@ -1135,6 +1397,8 @@ static int hw_init(struct msm_gpu *gpu)
rbmemptr(gpu->rb[0], bv_rptr));
}
+ a6xx_preempt_hw_init(gpu);
+
/* Always come up on rb 0 */
a6xx_gpu->cur_ring = gpu->rb[0];
@@ -1180,6 +1444,10 @@ static int hw_init(struct msm_gpu *gpu)
out:
if (adreno_has_gmu_wrapper(adreno_gpu))
return ret;
+
+ /* Last step - yield the ringbuffer */
+ a7xx_preempt_start(gpu);
+
/*
* Tell the GMU that we are done touching the GPU and it can start power
* management
@@ -1557,8 +1825,13 @@ static irqreturn_t a6xx_irq(struct msm_gpu *gpu)
if (status & A6XX_RBBM_INT_0_MASK_SWFUSEVIOLATION)
a7xx_sw_fuse_violation_irq(gpu);
- if (status & A6XX_RBBM_INT_0_MASK_CP_CACHE_FLUSH_TS)
+ if (status & A6XX_RBBM_INT_0_MASK_CP_CACHE_FLUSH_TS) {
msm_gpu_retire(gpu);
+ a6xx_preempt_trigger(gpu);
+ }
+
+ if (status & A6XX_RBBM_INT_0_MASK_CP_SW)
+ a6xx_preempt_irq(gpu);
return IRQ_HANDLED;
}
@@ -2331,6 +2604,8 @@ struct msm_gpu *a6xx_gpu_init(struct drm_device *dev)
a6xx_fault_handler);
a6xx_calc_ubwc_config(adreno_gpu);
+ /* Set up the preemption specific bits and pieces for each ringbuffer */
+ a6xx_preempt_init(gpu);
return gpu;
}
diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gpu.h b/drivers/gpu/drm/msm/adreno/a6xx_gpu.h
index e3e5c53ae8af..da10060e38dc 100644
--- a/drivers/gpu/drm/msm/adreno/a6xx_gpu.h
+++ b/drivers/gpu/drm/msm/adreno/a6xx_gpu.h
@@ -12,6 +12,31 @@
extern bool hang_debug;
+struct cpu_gpu_lock {
+ uint32_t gpu_req;
+ uint32_t cpu_req;
+ uint32_t turn;
+ union {
+ struct {
+ uint16_t list_length;
+ uint16_t list_offset;
+ };
+ struct {
+ uint8_t ifpc_list_len;
+ uint8_t preemption_list_len;
+ uint16_t dynamic_list_len;
+ };
+ };
+ uint64_t regs[62];
+};
+
+struct adreno_reglist_list {
+ /** @reg: List of register **/
+ const u32 *regs;
+ /** @count: Number of registers in the list **/
+ u32 count;
+};
+
/**
* struct a6xx_info - a6xx specific information from device table
*
@@ -31,6 +56,20 @@ struct a6xx_gpu {
uint64_t sqe_iova;
struct msm_ringbuffer *cur_ring;
+ struct msm_ringbuffer *next_ring;
+
+ struct drm_gem_object *preempt_bo[MSM_GPU_MAX_RINGS];
+ void *preempt[MSM_GPU_MAX_RINGS];
+ uint64_t preempt_iova[MSM_GPU_MAX_RINGS];
+ uint32_t last_seqno[MSM_GPU_MAX_RINGS];
+
+ atomic_t preempt_state;
+ spinlock_t eval_lock;
+ struct timer_list preempt_timer;
+
+ unsigned int preempt_level;
+ bool uses_gmem;
+ bool skip_save_restore;
struct a6xx_gmu gmu;
@@ -38,6 +77,10 @@ struct a6xx_gpu {
uint64_t shadow_iova;
uint32_t *shadow;
+ struct drm_gem_object *pwrup_reglist_bo;
+ void *pwrup_reglist_ptr;
+ uint64_t pwrup_reglist_iova;
+
bool has_whereami;
void __iomem *llc_mmio;
@@ -49,6 +92,105 @@ struct a6xx_gpu {
#define to_a6xx_gpu(x) container_of(x, struct a6xx_gpu, base)
+/*
+ * In order to do lockless preemption we use a simple state machine to progress
+ * through the process.
+ *
+ * PREEMPT_NONE - no preemption in progress. Next state START.
+ * PREEMPT_START - The trigger is evaluating if preemption is possible. Next
+ * states: TRIGGERED, NONE
+ * PREEMPT_FINISH - An intermediate state before moving back to NONE. Next
+ * state: NONE.
+ * PREEMPT_TRIGGERED: A preemption has been executed on the hardware. Next
+ * states: FAULTED, PENDING
+ * PREEMPT_FAULTED: A preemption timed out (never completed). This will trigger
+ * recovery. Next state: N/A
+ * PREEMPT_PENDING: Preemption complete interrupt fired - the callback is
+ * checking the success of the operation. Next state: FAULTED, NONE.
+ */
+
+enum a6xx_preempt_state {
+ PREEMPT_NONE = 0,
+ PREEMPT_START,
+ PREEMPT_FINISH,
+ PREEMPT_TRIGGERED,
+ PREEMPT_FAULTED,
+ PREEMPT_PENDING,
+};
+
+/*
+ * struct a6xx_preempt_record is a shared buffer between the microcode and the
+ * CPU to store the state for preemption. The record itself is much larger
+ * (2112k) but most of that is used by the CP for storage.
+ *
+ * There is a preemption record assigned per ringbuffer. When the CPU triggers a
+ * preemption, it fills out the record with the useful information (wptr, ring
+ * base, etc) and the microcode uses that information to set up the CP following
+ * the preemption. When a ring is switched out, the CP will save the ringbuffer
+ * state back to the record. In this way, once the records are properly set up
+ * the CPU can quickly switch back and forth between ringbuffers by only
+ * updating a few registers (often only the wptr).
+ *
+ * These are the CPU aware registers in the record:
+ * @magic: Must always be 0xAE399D6EUL
+ * @info: Type of the record - written 0 by the CPU, updated by the CP
+ * @errno: preemption error record
+ * @data: Data field in YIELD and SET_MARKER packets, Written and used by CP
+ * @cntl: Value of RB_CNTL written by CPU, save/restored by CP
+ * @rptr: Value of RB_RPTR written by CPU, save/restored by CP
+ * @wptr: Value of RB_WPTR written by CPU, save/restored by CP
+ * @_pad: Reserved/padding
+ * @rptr_addr: Value of RB_RPTR_ADDR_LO|HI written by CPU, save/restored by CP
+ * @rbase: Value of RB_BASE written by CPU, save/restored by CP
+ * @counter: GPU address of the storage area for the preemption counters
+ */
+struct a6xx_preempt_record {
+ u32 magic;
+ u32 info;
+ u32 errno;
+ u32 data;
+ u32 cntl;
+ u32 rptr;
+ u32 wptr;
+ u32 _pad;
+ u64 rptr_addr;
+ u64 rbase;
+ u64 counter;
+ u64 bv_rptr_addr;
+};
+
+#define A6XX_PREEMPT_RECORD_MAGIC 0xAE399D6EUL
+
+#define PREEMPT_RECORD_SIZE_FALLBACK(size) \
+ ((size) == 0 ? 4192 * SZ_1K : (size))
+
+#define PREEMPT_OFFSET_SMMU_INFO 0
+#define PREEMPT_OFFSET_PRIV_NON_SECURE (PREEMPT_OFFSET_SMMU_INFO + 4096)
+#define PREEMPT_OFFSET_PRIV_SECURE(size) \
+ (PREEMPT_OFFSET_PRIV_NON_SECURE + PREEMPT_RECORD_SIZE_FALLBACK(size))
+#define PREEMPT_SIZE(size) \
+ (PREEMPT_OFFSET_PRIV_SECURE(size) + PREEMPT_RECORD_SIZE_FALLBACK(size))
+
+/*
+ * The preemption counter block is a storage area for the value of the
+ * preemption counters that are saved immediately before context switch. We
+ * append it on to the end of the allocation for the preemption record.
+ */
+#define A6XX_PREEMPT_COUNTER_SIZE (16 * 4)
+
+#define A6XX_PREEMPT_USER_RECORD_SIZE (192 * 1024)
+
+struct a7xx_cp_smmu_info {
+ u32 magic;
+ u32 _pad4;
+ u64 ttbr0;
+ u32 asid;
+ u32 context_idr;
+ u32 context_bank;
+};
+
+#define GEN7_CP_SMMU_INFO_MAGIC 0x241350d5UL
+
/*
* Given a register and a count, return a value to program into
* REG_CP_PROTECT_REG(n) - this will block both reads and writes for
@@ -106,6 +248,25 @@ int a6xx_gmu_init(struct a6xx_gpu *a6xx_gpu, struct device_node *node);
int a6xx_gmu_wrapper_init(struct a6xx_gpu *a6xx_gpu, struct device_node *node);
void a6xx_gmu_remove(struct a6xx_gpu *a6xx_gpu);
+void a6xx_preempt_init(struct msm_gpu *gpu);
+void a6xx_preempt_hw_init(struct msm_gpu *gpu);
+void a6xx_preempt_trigger(struct msm_gpu *gpu);
+void a6xx_preempt_irq(struct msm_gpu *gpu);
+void a6xx_preempt_fini(struct msm_gpu *gpu);
+int a6xx_preempt_submitqueue_setup(struct msm_gpu *gpu,
+ struct msm_gpu_submitqueue *queue);
+void a6xx_preempt_submitqueue_close(struct msm_gpu *gpu,
+ struct msm_gpu_submitqueue *queue);
+
+/* Return true if we are in a preempt state */
+static inline bool a6xx_in_preempt(struct a6xx_gpu *a6xx_gpu)
+{
+ int preempt_state = atomic_read(&a6xx_gpu->preempt_state);
+
+ return !(preempt_state == PREEMPT_NONE ||
+ preempt_state == PREEMPT_FINISH);
+}
+
void a6xx_gmu_set_freq(struct msm_gpu *gpu, struct dev_pm_opp *opp,
bool suspended);
unsigned long a6xx_gmu_get_freq(struct msm_gpu *gpu);
diff --git a/drivers/gpu/drm/msm/adreno/a6xx_preempt.c b/drivers/gpu/drm/msm/adreno/a6xx_preempt.c
new file mode 100644
index 000000000000..1caff76aca6e
--- /dev/null
+++ b/drivers/gpu/drm/msm/adreno/a6xx_preempt.c
@@ -0,0 +1,391 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) 2018, The Linux Foundation. All rights reserved. */
+/* Copyright (c) 2023 Collabora, Ltd. */
+/* Copyright (c) 2024 Valve Corporation */
+
+#include "msm_gem.h"
+#include "a6xx_gpu.h"
+#include "a6xx_gmu.xml.h"
+#include "msm_mmu.h"
+
+/*
+ * Try to transition the preemption state from old to new. Return
+ * true on success or false if the original state wasn't 'old'
+ */
+static inline bool try_preempt_state(struct a6xx_gpu *a6xx_gpu,
+ enum a6xx_preempt_state old, enum a6xx_preempt_state new)
+{
+ enum a6xx_preempt_state cur = atomic_cmpxchg(&a6xx_gpu->preempt_state,
+ old, new);
+
+ return (cur == old);
+}
+
+/*
+ * Force the preemption state to the specified state. This is used in cases
+ * where the current state is known and won't change
+ */
+static inline void set_preempt_state(struct a6xx_gpu *gpu,
+ enum a6xx_preempt_state new)
+{
+ /*
+ * preempt_state may be read by other cores trying to trigger a
+ * preemption or in the interrupt handler so barriers are needed
+ * before...
+ */
+ smp_mb__before_atomic();
+ atomic_set(&gpu->preempt_state, new);
+ /* ... and after*/
+ smp_mb__after_atomic();
+}
+
+/* Write the most recent wptr for the given ring into the hardware */
+static inline void update_wptr(struct msm_gpu *gpu, struct msm_ringbuffer *ring)
+{
+ unsigned long flags;
+ uint32_t wptr;
+
+ if (!ring)
+ return;
+
+ spin_lock_irqsave(&ring->preempt_lock, flags);
+
+ if (ring->skip_inline_wptr) {
+ wptr = get_wptr(ring);
+
+ gpu_write(gpu, REG_A6XX_CP_RB_WPTR, wptr);
+
+ ring->skip_inline_wptr = false;
+ }
+
+ spin_unlock_irqrestore(&ring->preempt_lock, flags);
+}
+
+/* Return the highest priority ringbuffer with something in it */
+static struct msm_ringbuffer *get_next_ring(struct msm_gpu *gpu)
+{
+ struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu);
+ struct a6xx_gpu *a6xx_gpu = to_a6xx_gpu(adreno_gpu);
+
+ unsigned long flags;
+ int i;
+
+ for (i = 0; i < gpu->nr_rings; i++) {
+ bool empty;
+ struct msm_ringbuffer *ring = gpu->rb[i];
+
+ spin_lock_irqsave(&ring->preempt_lock, flags);
+ empty = (get_wptr(ring) == gpu->funcs->get_rptr(gpu, ring));
+ if (!empty && ring == a6xx_gpu->cur_ring)
+ empty = ring->memptrs->fence == a6xx_gpu->last_seqno[i];
+ spin_unlock_irqrestore(&ring->preempt_lock, flags);
+
+ if (!empty)
+ return ring;
+ }
+
+ return NULL;
+}
+
+static void a6xx_preempt_timer(struct timer_list *t)
+{
+ struct a6xx_gpu *a6xx_gpu = from_timer(a6xx_gpu, t, preempt_timer);
+ struct msm_gpu *gpu = &a6xx_gpu->base.base;
+ struct drm_device *dev = gpu->dev;
+
+ if (!try_preempt_state(a6xx_gpu, PREEMPT_TRIGGERED, PREEMPT_FAULTED))
+ return;
+
+ dev_err(dev->dev, "%s: preemption timed out\n", gpu->name);
+ kthread_queue_work(gpu->worker, &gpu->recover_work);
+}
+
+void a6xx_preempt_irq(struct msm_gpu *gpu)
+{
+ uint32_t status;
+ struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu);
+ struct a6xx_gpu *a6xx_gpu = to_a6xx_gpu(adreno_gpu);
+ struct drm_device *dev = gpu->dev;
+
+ if (!try_preempt_state(a6xx_gpu, PREEMPT_TRIGGERED, PREEMPT_PENDING))
+ return;
+
+ /* Delete the preemption watchdog timer */
+ del_timer(&a6xx_gpu->preempt_timer);
+
+ /*
+ * The hardware should be setting the stop bit of CP_CONTEXT_SWITCH_CNTL
+ * to zero before firing the interrupt, but there is a non zero chance
+ * of a hardware condition or a software race that could set it again
+ * before we have a chance to finish. If that happens, log and go for
+ * recovery
+ */
+ status = gpu_read(gpu, REG_A6XX_CP_CONTEXT_SWITCH_CNTL);
+ if (unlikely(status & A6XX_CP_CONTEXT_SWITCH_CNTL_STOP)) {
+ DRM_DEV_ERROR(&gpu->pdev->dev,
+ "!!!!!!!!!!!!!!!! preemption faulted !!!!!!!!!!!!!! irq\n");
+ set_preempt_state(a6xx_gpu, PREEMPT_FAULTED);
+ dev_err(dev->dev, "%s: Preemption failed to complete\n",
+ gpu->name);
+ kthread_queue_work(gpu->worker, &gpu->recover_work);
+ return;
+ }
+
+ a6xx_gpu->cur_ring = a6xx_gpu->next_ring;
+ a6xx_gpu->next_ring = NULL;
+
+ /* Make sure the write to cur_ring is posted before the change in state */
+ wmb();
+
+ set_preempt_state(a6xx_gpu, PREEMPT_FINISH);
+
+ update_wptr(gpu, a6xx_gpu->cur_ring);
+
+ set_preempt_state(a6xx_gpu, PREEMPT_NONE);
+
+ /*
+ * Retrigger preemption to avoid a deadlock that might occur when preemption
+ * is skipped due to it being already in flight when requested.
+ */
+ a6xx_preempt_trigger(gpu);
+}
+
+void a6xx_preempt_hw_init(struct msm_gpu *gpu)
+{
+ struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu);
+ struct a6xx_gpu *a6xx_gpu = to_a6xx_gpu(adreno_gpu);
+ int i;
+
+ /* No preemption if we only have one ring */
+ if (gpu->nr_rings == 1)
+ return;
+
+ for (i = 0; i < gpu->nr_rings; i++) {
+ struct a6xx_preempt_record *record_ptr =
+ a6xx_gpu->preempt[i] + PREEMPT_OFFSET_PRIV_NON_SECURE;
+ record_ptr->wptr = 0;
+ record_ptr->rptr = 0;
+ record_ptr->rptr_addr = shadowptr(a6xx_gpu, gpu->rb[i]);
+ record_ptr->info = 0;
+ record_ptr->data = 0;
+ record_ptr->rbase = gpu->rb[i]->iova;
+ }
+
+ /* Write a 0 to signal that we aren't switching pagetables */
+ gpu_write64(gpu, REG_A6XX_CP_CONTEXT_SWITCH_SMMU_INFO, 0);
+
+ /* Enable the GMEM save/restore feature for preemption */
+ gpu_write(gpu, REG_A6XX_RB_CONTEXT_SWITCH_GMEM_SAVE_RESTORE, 0x1);
+
+ /* Reset the preemption state */
+ set_preempt_state(a6xx_gpu, PREEMPT_NONE);
+
+ spin_lock_init(&a6xx_gpu->eval_lock);
+
+ /* Always come up on rb 0 */
+ a6xx_gpu->cur_ring = gpu->rb[0];
+}
+
+void a6xx_preempt_trigger(struct msm_gpu *gpu)
+{
+ struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu);
+ struct a6xx_gpu *a6xx_gpu = to_a6xx_gpu(adreno_gpu);
+ u64 preempt_offset_priv_secure;
+ unsigned long flags;
+ struct msm_ringbuffer *ring;
+ unsigned int cntl;
+
+ if (gpu->nr_rings == 1)
+ return;
+
+ /*
+ * Lock to make sure another thread attempting preemption doesn't skip it
+ * while we are still evaluating the next ring. This makes sure the other
+ * thread does start preemption if we abort it and avoids a soft lock.
+ */
+ spin_lock_irqsave(&a6xx_gpu->eval_lock, flags);
+
+ /*
+ * Try to start preemption by moving from NONE to START. If
+ * unsuccessful, a preemption is already in flight
+ */
+ if (!try_preempt_state(a6xx_gpu, PREEMPT_NONE, PREEMPT_START)) {
+ spin_unlock_irqrestore(&a6xx_gpu->eval_lock, flags);
+ return;
+ }
+
+ cntl = A6XX_CP_CONTEXT_SWITCH_CNTL_LEVEL(a6xx_gpu->preempt_level);
+
+ if (a6xx_gpu->skip_save_restore)
+ cntl |= A6XX_CP_CONTEXT_SWITCH_CNTL_SKIP_SAVE_RESTORE;
+
+ if (a6xx_gpu->uses_gmem)
+ cntl |= A6XX_CP_CONTEXT_SWITCH_CNTL_USES_GMEM;
+
+ cntl |= A6XX_CP_CONTEXT_SWITCH_CNTL_STOP;
+
+ /* Get the next ring to preempt to */
+ ring = get_next_ring(gpu);
+
+ /*
+ * If no ring is populated or the highest priority ring is the current
+ * one do nothing except to update the wptr to the latest and greatest
+ */
+ if (!ring || (a6xx_gpu->cur_ring == ring)) {
+ set_preempt_state(a6xx_gpu, PREEMPT_FINISH);
+ update_wptr(gpu, a6xx_gpu->cur_ring);
+ set_preempt_state(a6xx_gpu, PREEMPT_NONE);
+ spin_unlock_irqrestore(&a6xx_gpu->eval_lock, flags);
+ return;
+ }
+
+ spin_unlock_irqrestore(&a6xx_gpu->eval_lock, flags);
+
+ spin_lock_irqsave(&ring->preempt_lock, flags);
+
+ struct a7xx_cp_smmu_info *smmu_info_ptr =
+ a6xx_gpu->preempt[ring->id] + PREEMPT_OFFSET_SMMU_INFO;
+ struct a6xx_preempt_record *record_ptr =
+ a6xx_gpu->preempt[ring->id] + PREEMPT_OFFSET_PRIV_NON_SECURE;
+ u64 ttbr0 = ring->memptrs->ttbr0;
+ u32 context_idr = ring->memptrs->context_idr;
+
+ smmu_info_ptr->ttbr0 = ttbr0;
+ smmu_info_ptr->context_idr = context_idr;
+ record_ptr->wptr = get_wptr(ring);
+
+ /*
+ * The GPU will write the wptr we set above when we preempt. Reset
+ * skip_inline_wptr to make sure that we don't write WPTR to the same
+ * thing twice. It's still possible subsequent submissions will update
+ * wptr again, in which case they will set the flag to true. This has
+ * to be protected by the lock for setting the flag and updating wptr
+ * to be atomic.
+ */
+ ring->skip_inline_wptr = false;
+
+ spin_unlock_irqrestore(&ring->preempt_lock, flags);
+
+ gpu_write64(gpu,
+ REG_A6XX_CP_CONTEXT_SWITCH_SMMU_INFO,
+ a6xx_gpu->preempt_iova[ring->id] + PREEMPT_OFFSET_SMMU_INFO);
+
+ gpu_write64(gpu,
+ REG_A6XX_CP_CONTEXT_SWITCH_PRIV_NON_SECURE_RESTORE_ADDR,
+ a6xx_gpu->preempt_iova[ring->id] + PREEMPT_OFFSET_PRIV_NON_SECURE);
+
+ preempt_offset_priv_secure =
+ PREEMPT_OFFSET_PRIV_SECURE(adreno_gpu->info->preempt_record_size);
+ gpu_write64(gpu,
+ REG_A6XX_CP_CONTEXT_SWITCH_PRIV_SECURE_RESTORE_ADDR,
+ a6xx_gpu->preempt_iova[ring->id] + preempt_offset_priv_secure);
+
+ a6xx_gpu->next_ring = ring;
+
+ /* Start a timer to catch a stuck preemption */
+ mod_timer(&a6xx_gpu->preempt_timer, jiffies + msecs_to_jiffies(10000));
+
+ /* Set the preemption state to triggered */
+ set_preempt_state(a6xx_gpu, PREEMPT_TRIGGERED);
+
+ /* Make sure any previous writes to WPTR are posted */
+ gpu_read(gpu, REG_A6XX_CP_RB_WPTR);
+
+ /* Make sure everything is written before hitting the button */
+ wmb();
+
+ /* Trigger the preemption */
+ gpu_write(gpu, REG_A6XX_CP_CONTEXT_SWITCH_CNTL, cntl);
+}
+
+static int preempt_init_ring(struct a6xx_gpu *a6xx_gpu,
+ struct msm_ringbuffer *ring)
+{
+ struct adreno_gpu *adreno_gpu = &a6xx_gpu->base;
+ struct msm_gpu *gpu = &adreno_gpu->base;
+ struct drm_gem_object *bo = NULL;
+ phys_addr_t ttbr;
+ u64 iova = 0;
+ void *ptr;
+ int asid;
+
+ ptr = msm_gem_kernel_new(gpu->dev,
+ PREEMPT_SIZE(adreno_gpu->info->preempt_record_size),
+ MSM_BO_WC | MSM_BO_MAP_PRIV, gpu->aspace, &bo, &iova);
+
+ memset(ptr, 0, PREEMPT_SIZE(adreno_gpu->info->preempt_record_size));
+
+ if (IS_ERR(ptr))
+ return PTR_ERR(ptr);
+
+ a6xx_gpu->preempt_bo[ring->id] = bo;
+ a6xx_gpu->preempt_iova[ring->id] = iova;
+ a6xx_gpu->preempt[ring->id] = ptr;
+
+ struct a7xx_cp_smmu_info *smmu_info_ptr = ptr + PREEMPT_OFFSET_SMMU_INFO;
+ struct a6xx_preempt_record *record_ptr = ptr + PREEMPT_OFFSET_PRIV_NON_SECURE;
+
+ msm_iommu_pagetable_params(gpu->aspace->mmu, &ttbr, &asid);
+
+ smmu_info_ptr->magic = GEN7_CP_SMMU_INFO_MAGIC;
+ smmu_info_ptr->ttbr0 = ttbr;
+ smmu_info_ptr->asid = 0xdecafbad;
+ smmu_info_ptr->context_idr = 0;
+
+ /* Set up the defaults on the preemption record */
+ record_ptr->magic = A6XX_PREEMPT_RECORD_MAGIC;
+ record_ptr->info = 0;
+ record_ptr->data = 0;
+ record_ptr->rptr = 0;
+ record_ptr->wptr = 0;
+ record_ptr->cntl = MSM_GPU_RB_CNTL_DEFAULT;
+ record_ptr->rbase = ring->iova;
+ record_ptr->counter = 0;
+ record_ptr->bv_rptr_addr = rbmemptr(ring, bv_rptr);
+
+ return 0;
+}
+
+void a6xx_preempt_fini(struct msm_gpu *gpu)
+{
+ struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu);
+ struct a6xx_gpu *a6xx_gpu = to_a6xx_gpu(adreno_gpu);
+ int i;
+
+ for (i = 0; i < gpu->nr_rings; i++)
+ msm_gem_kernel_put(a6xx_gpu->preempt_bo[i], gpu->aspace);
+}
+
+void a6xx_preempt_init(struct msm_gpu *gpu)
+{
+ struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu);
+ struct a6xx_gpu *a6xx_gpu = to_a6xx_gpu(adreno_gpu);
+ int i;
+
+ /* No preemption if we only have one ring */
+ if (gpu->nr_rings <= 1)
+ return;
+
+ for (i = 0; i < gpu->nr_rings; i++) {
+ if (preempt_init_ring(a6xx_gpu, gpu->rb[i]))
+ goto fail;
+ }
+
+ /* TODO: make this configurable? */
+ a6xx_gpu->preempt_level = 1;
+ a6xx_gpu->uses_gmem = 1;
+ a6xx_gpu->skip_save_restore = 1;
+
+ timer_setup(&a6xx_gpu->preempt_timer, a6xx_preempt_timer, 0);
+
+ return;
+fail:
+ /*
+ * On any failure our adventure is over. Clean up and
+ * set nr_rings to 1 to force preemption off
+ */
+ a6xx_preempt_fini(gpu);
+ gpu->nr_rings = 1;
+
+ return;
+}
diff --git a/drivers/gpu/drm/msm/msm_ringbuffer.h b/drivers/gpu/drm/msm/msm_ringbuffer.h
index 40791b2ade46..7dde6a312511 100644
--- a/drivers/gpu/drm/msm/msm_ringbuffer.h
+++ b/drivers/gpu/drm/msm/msm_ringbuffer.h
@@ -36,6 +36,7 @@ struct msm_rbmemptrs {
volatile struct msm_gpu_submit_stats stats[MSM_GPU_SUBMIT_STATS_COUNT];
volatile u64 ttbr0;
+ volatile u32 context_idr;
};
struct msm_cp_state {
@@ -100,6 +101,12 @@ struct msm_ringbuffer {
* preemption. Can be aquired from irq context.
*/
spinlock_t preempt_lock;
+
+ /*
+ * Whether we skipped writing wptr and it needs to be updated in the
+ * future when the ring becomes current.
+ */
+ bool skip_inline_wptr;
};
struct msm_ringbuffer *msm_ringbuffer_new(struct msm_gpu *gpu, int id,
--
2.46.0
^ permalink raw reply related [flat|nested] 32+ messages in thread* Re: [PATCH v3 04/10] drm/msm/A6xx: Implement preemption for A7XX targets
2024-09-05 14:51 ` [PATCH v3 04/10] drm/msm/A6xx: Implement preemption for A7XX targets Antonino Maniscalco
@ 2024-09-06 19:54 ` Akhil P Oommen
2024-09-09 12:22 ` Connor Abbott
2024-09-09 13:15 ` Antonino Maniscalco
0 siblings, 2 replies; 32+ messages in thread
From: Akhil P Oommen @ 2024-09-06 19:54 UTC (permalink / raw)
To: Antonino Maniscalco
Cc: Rob Clark, Sean Paul, Konrad Dybcio, Abhinav Kumar,
Dmitry Baryshkov, Marijn Suijten, David Airlie, Daniel Vetter,
Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann,
Jonathan Corbet, linux-arm-msm, dri-devel, freedreno,
linux-kernel, linux-doc, Sharat Masetty, Neil Armstrong
On Thu, Sep 05, 2024 at 04:51:22PM +0200, Antonino Maniscalco wrote:
> This patch implements preemption feature for A6xx targets, this allows
> the GPU to switch to a higher priority ringbuffer if one is ready. A6XX
> hardware as such supports multiple levels of preemption granularities,
> ranging from coarse grained(ringbuffer level) to a more fine grained
> such as draw-call level or a bin boundary level preemption. This patch
> enables the basic preemption level, with more fine grained preemption
> support to follow.
>
> Signed-off-by: Sharat Masetty <smasetty@codeaurora.org>
> Signed-off-by: Antonino Maniscalco <antomani103@gmail.com>
> Tested-by: Neil Armstrong <neil.armstrong@linaro.org> # on SM8650-QRD
> ---
> drivers/gpu/drm/msm/Makefile | 1 +
> drivers/gpu/drm/msm/adreno/a6xx_gpu.c | 293 +++++++++++++++++++++-
> drivers/gpu/drm/msm/adreno/a6xx_gpu.h | 161 ++++++++++++
> drivers/gpu/drm/msm/adreno/a6xx_preempt.c | 391 ++++++++++++++++++++++++++++++
> drivers/gpu/drm/msm/msm_ringbuffer.h | 7 +
> 5 files changed, 844 insertions(+), 9 deletions(-)
>
> diff --git a/drivers/gpu/drm/msm/Makefile b/drivers/gpu/drm/msm/Makefile
> index f5e2838c6a76..32e915109a59 100644
> --- a/drivers/gpu/drm/msm/Makefile
> +++ b/drivers/gpu/drm/msm/Makefile
> @@ -23,6 +23,7 @@ adreno-y := \
> adreno/a6xx_gpu.o \
> adreno/a6xx_gmu.o \
> adreno/a6xx_hfi.o \
> + adreno/a6xx_preempt.o \
>
> adreno-$(CONFIG_DEBUG_FS) += adreno/a5xx_debugfs.o \
>
> diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
> index 32a4faa93d7f..ed0b138a2d66 100644
> --- a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
> +++ b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
> @@ -16,6 +16,83 @@
>
> #define GPU_PAS_ID 13
>
> +/* IFPC & Preemption static powerup restore list */
> +static const uint32_t a7xx_pwrup_reglist[] = {
> + REG_A6XX_UCHE_TRAP_BASE,
> + REG_A6XX_UCHE_TRAP_BASE + 1,
> + REG_A6XX_UCHE_WRITE_THRU_BASE,
> + REG_A6XX_UCHE_WRITE_THRU_BASE + 1,
> + REG_A6XX_UCHE_GMEM_RANGE_MIN,
> + REG_A6XX_UCHE_GMEM_RANGE_MIN + 1,
> + REG_A6XX_UCHE_GMEM_RANGE_MAX,
> + REG_A6XX_UCHE_GMEM_RANGE_MAX + 1,
> + REG_A6XX_UCHE_CACHE_WAYS,
> + REG_A6XX_UCHE_MODE_CNTL,
> + REG_A6XX_RB_NC_MODE_CNTL,
> + REG_A6XX_RB_CMP_DBG_ECO_CNTL,
> + REG_A7XX_GRAS_NC_MODE_CNTL,
> + REG_A6XX_RB_CONTEXT_SWITCH_GMEM_SAVE_RESTORE,
> + REG_A6XX_UCHE_GBIF_GX_CONFIG,
> + REG_A6XX_UCHE_CLIENT_PF,
REG_A6XX_TPL1_DBG_ECO_CNTL1 here. A friendly warning, missing a register
in this list (and the below list) will lead to a very frustrating debug.
> +};
> +
> +static const uint32_t a7xx_ifpc_pwrup_reglist[] = {
> + REG_A6XX_TPL1_NC_MODE_CNTL,
> + REG_A6XX_SP_NC_MODE_CNTL,
> + REG_A6XX_CP_DBG_ECO_CNTL,
> + REG_A6XX_CP_PROTECT_CNTL,
> + REG_A6XX_CP_PROTECT(0),
> + REG_A6XX_CP_PROTECT(1),
> + REG_A6XX_CP_PROTECT(2),
> + REG_A6XX_CP_PROTECT(3),
> + REG_A6XX_CP_PROTECT(4),
> + REG_A6XX_CP_PROTECT(5),
> + REG_A6XX_CP_PROTECT(6),
> + REG_A6XX_CP_PROTECT(7),
> + REG_A6XX_CP_PROTECT(8),
> + REG_A6XX_CP_PROTECT(9),
> + REG_A6XX_CP_PROTECT(10),
> + REG_A6XX_CP_PROTECT(11),
> + REG_A6XX_CP_PROTECT(12),
> + REG_A6XX_CP_PROTECT(13),
> + REG_A6XX_CP_PROTECT(14),
> + REG_A6XX_CP_PROTECT(15),
> + REG_A6XX_CP_PROTECT(16),
> + REG_A6XX_CP_PROTECT(17),
> + REG_A6XX_CP_PROTECT(18),
> + REG_A6XX_CP_PROTECT(19),
> + REG_A6XX_CP_PROTECT(20),
> + REG_A6XX_CP_PROTECT(21),
> + REG_A6XX_CP_PROTECT(22),
> + REG_A6XX_CP_PROTECT(23),
> + REG_A6XX_CP_PROTECT(24),
> + REG_A6XX_CP_PROTECT(25),
> + REG_A6XX_CP_PROTECT(26),
> + REG_A6XX_CP_PROTECT(27),
> + REG_A6XX_CP_PROTECT(28),
> + REG_A6XX_CP_PROTECT(29),
> + REG_A6XX_CP_PROTECT(30),
> + REG_A6XX_CP_PROTECT(31),
> + REG_A6XX_CP_PROTECT(32),
> + REG_A6XX_CP_PROTECT(33),
> + REG_A6XX_CP_PROTECT(34),
> + REG_A6XX_CP_PROTECT(35),
> + REG_A6XX_CP_PROTECT(36),
> + REG_A6XX_CP_PROTECT(37),
> + REG_A6XX_CP_PROTECT(38),
> + REG_A6XX_CP_PROTECT(39),
> + REG_A6XX_CP_PROTECT(40),
> + REG_A6XX_CP_PROTECT(41),
> + REG_A6XX_CP_PROTECT(42),
> + REG_A6XX_CP_PROTECT(43),
> + REG_A6XX_CP_PROTECT(44),
> + REG_A6XX_CP_PROTECT(45),
> + REG_A6XX_CP_PROTECT(46),
> + REG_A6XX_CP_PROTECT(47),
> + REG_A6XX_CP_AHB_CNTL,
> +};
> +
> +
> static inline bool _a6xx_check_idle(struct msm_gpu *gpu)
> {
> struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu);
> @@ -68,6 +145,8 @@ static void update_shadow_rptr(struct msm_gpu *gpu, struct msm_ringbuffer *ring)
>
> static void a6xx_flush(struct msm_gpu *gpu, struct msm_ringbuffer *ring)
> {
> + struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu);
> + struct a6xx_gpu *a6xx_gpu = to_a6xx_gpu(adreno_gpu);
> uint32_t wptr;
> unsigned long flags;
>
> @@ -81,12 +160,26 @@ static void a6xx_flush(struct msm_gpu *gpu, struct msm_ringbuffer *ring)
> /* Make sure to wrap wptr if we need to */
> wptr = get_wptr(ring);
>
> - spin_unlock_irqrestore(&ring->preempt_lock, flags);
> -
> /* Make sure everything is posted before making a decision */
> mb();
This looks unnecessary.
>
> - gpu_write(gpu, REG_A6XX_CP_RB_WPTR, wptr);
> + /* Update HW if this is the current ring and we are not in preempt*/
> + if (!a6xx_in_preempt(a6xx_gpu)) {
> + /*
> + * Order the reads of the preempt state and cur_ring. This
> + * matches the barrier after writing cur_ring.
> + */
> + rmb();
we can use the lighter smp variant here.
> +
> + if (a6xx_gpu->cur_ring == ring)
> + gpu_write(gpu, REG_A6XX_CP_RB_WPTR, wptr);
> + else
> + ring->skip_inline_wptr = true;
> + } else {
> + ring->skip_inline_wptr = true;
> + }
> +
> + spin_unlock_irqrestore(&ring->preempt_lock, flags);
> }
>
> static void get_stats_counter(struct msm_ringbuffer *ring, u32 counter,
> @@ -138,12 +231,14 @@ static void a6xx_set_pagetable(struct a6xx_gpu *a6xx_gpu,
set_pagetable checks "cur_ctx_seqno" to see if pt switch is needed or
not. This is currently not tracked separately for each ring. Can you
please check that?
I wonder why that didn't cause any gpu errors in testing. Not sure if I
am missing something.
>
> /*
> * Write the new TTBR0 to the memstore. This is good for debugging.
> + * Needed for preemption
> */
> - OUT_PKT7(ring, CP_MEM_WRITE, 4);
> + OUT_PKT7(ring, CP_MEM_WRITE, 5);
> OUT_RING(ring, CP_MEM_WRITE_0_ADDR_LO(lower_32_bits(memptr)));
> OUT_RING(ring, CP_MEM_WRITE_1_ADDR_HI(upper_32_bits(memptr)));
> OUT_RING(ring, lower_32_bits(ttbr));
> - OUT_RING(ring, (asid << 16) | upper_32_bits(ttbr));
> + OUT_RING(ring, upper_32_bits(ttbr));
> + OUT_RING(ring, ctx->seqno);
>
> /*
> * Sync both threads after switching pagetables and enable BR only
> @@ -268,6 +363,43 @@ static void a6xx_submit(struct msm_gpu *gpu, struct msm_gem_submit *submit)
> a6xx_flush(gpu, ring);
> }
>
> +static void a6xx_emit_set_pseudo_reg(struct msm_ringbuffer *ring,
> + struct a6xx_gpu *a6xx_gpu, struct msm_gpu_submitqueue *queue)
> +{
> + u64 preempt_offset_priv_secure;
> +
> + OUT_PKT7(ring, CP_SET_PSEUDO_REG, 15);
> +
> + OUT_RING(ring, SMMU_INFO);
> + /* don't save SMMU, we write the record from the kernel instead */
> + OUT_RING(ring, 0);
> + OUT_RING(ring, 0);
> +
> + /* privileged and non secure buffer save */
> + OUT_RING(ring, NON_SECURE_SAVE_ADDR);
> + OUT_RING(ring, lower_32_bits(
> + a6xx_gpu->preempt_iova[ring->id] + PREEMPT_OFFSET_PRIV_NON_SECURE));
> + OUT_RING(ring, upper_32_bits(
> + a6xx_gpu->preempt_iova[ring->id] + PREEMPT_OFFSET_PRIV_NON_SECURE));
> + OUT_RING(ring, SECURE_SAVE_ADDR);
> + preempt_offset_priv_secure =
> + PREEMPT_OFFSET_PRIV_SECURE(a6xx_gpu->base.info->preempt_record_size);
> + OUT_RING(ring, lower_32_bits(
> + a6xx_gpu->preempt_iova[ring->id] + preempt_offset_priv_secure));
> + OUT_RING(ring, upper_32_bits(
> + a6xx_gpu->preempt_iova[ring->id] + preempt_offset_priv_secure));
> +
> + /* user context buffer save, seems to be unnused by fw */
> + OUT_RING(ring, NON_PRIV_SAVE_ADDR);
> + OUT_RING(ring, 0);
> + OUT_RING(ring, 0);
> +
> + OUT_RING(ring, COUNTER);
> + /* seems OK to set to 0 to disable it */
> + OUT_RING(ring, 0);
> + OUT_RING(ring, 0);
> +}
> +
> static void a7xx_submit(struct msm_gpu *gpu, struct msm_gem_submit *submit)
> {
> unsigned int index = submit->seqno % MSM_GPU_SUBMIT_STATS_COUNT;
> @@ -283,6 +415,13 @@ static void a7xx_submit(struct msm_gpu *gpu, struct msm_gem_submit *submit)
> OUT_PKT7(ring, CP_THREAD_CONTROL, 1);
> OUT_RING(ring, CP_THREAD_CONTROL_0_SYNC_THREADS | CP_SET_THREAD_BR);
>
> + /*
> + * If preemption is enabled, then set the pseudo register for the save
> + * sequence
> + */
> + if (gpu->nr_rings > 1)
> + a6xx_emit_set_pseudo_reg(ring, a6xx_gpu, submit->queue);
Can we move this after set_pagetable()?
> +
> a6xx_set_pagetable(a6xx_gpu, ring, submit->queue->ctx);
>
> get_stats_counter(ring, REG_A7XX_RBBM_PERFCTR_CP(0),
> @@ -376,6 +515,8 @@ static void a7xx_submit(struct msm_gpu *gpu, struct msm_gem_submit *submit)
> OUT_RING(ring, upper_32_bits(rbmemptr(ring, bv_fence)));
> OUT_RING(ring, submit->seqno);
>
> + a6xx_gpu->last_seqno[ring->id] = submit->seqno;
> +
> /* write the ringbuffer timestamp */
> OUT_PKT7(ring, CP_EVENT_WRITE, 4);
> OUT_RING(ring, CACHE_CLEAN | CP_EVENT_WRITE_0_IRQ | BIT(27));
> @@ -389,10 +530,32 @@ static void a7xx_submit(struct msm_gpu *gpu, struct msm_gem_submit *submit)
> OUT_PKT7(ring, CP_SET_MARKER, 1);
> OUT_RING(ring, 0x100); /* IFPC enable */
>
> + /* If preemption is enabled */
> + if (gpu->nr_rings > 1) {
> + /* Yield the floor on command completion */
> + OUT_PKT7(ring, CP_CONTEXT_SWITCH_YIELD, 4);
> +
> + /*
> + * If dword[2:1] are non zero, they specify an address for
> + * the CP to write the value of dword[3] to on preemption
> + * complete. Write 0 to skip the write
> + */
> + OUT_RING(ring, 0x00);
> + OUT_RING(ring, 0x00);
> + /* Data value - not used if the address above is 0 */
> + OUT_RING(ring, 0x01);
> + /* generate interrupt on preemption completion */
> + OUT_RING(ring, 0x00);
> + }
> +
> +
> trace_msm_gpu_submit_flush(submit,
> gpu_read64(gpu, REG_A6XX_CP_ALWAYS_ON_COUNTER));
>
> a6xx_flush(gpu, ring);
> +
> + /* Check to see if we need to start preemption */
> + a6xx_preempt_trigger(gpu);
> }
>
> static void a6xx_set_hwcg(struct msm_gpu *gpu, bool state)
> @@ -588,6 +751,89 @@ static void a6xx_set_ubwc_config(struct msm_gpu *gpu)
> adreno_gpu->ubwc_config.min_acc_len << 23 | hbb_lo << 21);
> }
>
> +static void a7xx_patch_pwrup_reglist(struct msm_gpu *gpu)
> +{
> + struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu);
> + struct a6xx_gpu *a6xx_gpu = to_a6xx_gpu(adreno_gpu);
> + struct adreno_reglist_list reglist[2];
> + void *ptr = a6xx_gpu->pwrup_reglist_ptr;
> + struct cpu_gpu_lock *lock = ptr;
> + u32 *dest = (u32 *)&lock->regs[0];
> + int i, j;
> +
This sequence is required only once. We can use a flag to check and bail out
next time.
> + lock->gpu_req = lock->cpu_req = lock->turn = 0;
> + lock->ifpc_list_len = ARRAY_SIZE(a7xx_ifpc_pwrup_reglist);
> + lock->preemption_list_len = ARRAY_SIZE(a7xx_pwrup_reglist);
> +
> + /* Static IFPC-only registers */
> + reglist[0].regs = a7xx_ifpc_pwrup_reglist;
> + reglist[0].count = ARRAY_SIZE(a7xx_ifpc_pwrup_reglist);
> + lock->ifpc_list_len = reglist[0].count;
> +
> + /* Static IFPC + preemption registers */
> + reglist[1].regs = a7xx_pwrup_reglist;
> + reglist[1].count = ARRAY_SIZE(a7xx_pwrup_reglist);
> + lock->preemption_list_len = reglist[1].count;
> +
> + /*
> + * For each entry in each of the lists, write the offset and the current
> + * register value into the GPU buffer
> + */
> + for (i = 0; i < 2; i++) {
> + const u32 *r = reglist[i].regs;
> +
> + for (j = 0; j < reglist[i].count; j++) {
> + *dest++ = r[j];
> + *dest++ = gpu_read(gpu, r[j]);
> + }
> + }
> +
> + /*
> + * The overall register list is composed of
> + * 1. Static IFPC-only registers
> + * 2. Static IFPC + preemption registers
> + * 3. Dynamic IFPC + preemption registers (ex: perfcounter selects)
> + *
> + * The first two lists are static. Size of these lists are stored as
> + * number of pairs in ifpc_list_len and preemption_list_len
> + * respectively. With concurrent binning, Some of the perfcounter
> + * registers being virtualized, CP needs to know the pipe id to program
> + * the aperture inorder to restore the same. Thus, third list is a
> + * dynamic list with triplets as
> + * (<aperture, shifted 12 bits> <address> <data>), and the length is
> + * stored as number for triplets in dynamic_list_len.
> + */
> + lock->dynamic_list_len = 0;
> +}
> +
> +static int a7xx_preempt_start(struct msm_gpu *gpu)
> +{
> + struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu);
> + struct a6xx_gpu *a6xx_gpu = to_a6xx_gpu(adreno_gpu);
> + struct msm_ringbuffer *ring = gpu->rb[0];
> +
> + if (gpu->nr_rings <= 1)
> + return 0;
> +
> + /* Turn CP protection off */
> + OUT_PKT7(ring, CP_SET_PROTECTED_MODE, 1);
> + OUT_RING(ring, 0);
> +
> + a6xx_emit_set_pseudo_reg(ring, a6xx_gpu, NULL);
> +
> + /* Yield the floor on command completion */
> + OUT_PKT7(ring, CP_CONTEXT_SWITCH_YIELD, 4);
> + OUT_RING(ring, 0x00);
> + OUT_RING(ring, 0x00);
> + OUT_RING(ring, 0x01);
Looks like kgsl use 0x00 here. Not sure if that matters!
> + /* Generate interrupt on preemption completion */
> + OUT_RING(ring, 0x00);
> +
> + a6xx_flush(gpu, ring);
> +
> + return a6xx_idle(gpu, ring) ? 0 : -EINVAL;
> +}
> +
> static int a6xx_cp_init(struct msm_gpu *gpu)
> {
> struct msm_ringbuffer *ring = gpu->rb[0];
> @@ -619,6 +865,8 @@ static int a6xx_cp_init(struct msm_gpu *gpu)
>
> static int a7xx_cp_init(struct msm_gpu *gpu)
> {
> + struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu);
> + struct a6xx_gpu *a6xx_gpu = to_a6xx_gpu(adreno_gpu);
> struct msm_ringbuffer *ring = gpu->rb[0];
> u32 mask;
>
> @@ -626,6 +874,8 @@ static int a7xx_cp_init(struct msm_gpu *gpu)
> OUT_PKT7(ring, CP_THREAD_CONTROL, 1);
> OUT_RING(ring, BIT(27));
>
> + a7xx_patch_pwrup_reglist(gpu);
> +
Looks out of place. I guess you kept it here to avoid an extra a7x
check. At least we should move this before the above pm4 packets.
> OUT_PKT7(ring, CP_ME_INIT, 7);
>
> /* Use multiple HW contexts */
> @@ -656,11 +906,11 @@ static int a7xx_cp_init(struct msm_gpu *gpu)
>
> /* *Don't* send a power up reg list for concurrent binning (TODO) */
> /* Lo address */
> - OUT_RING(ring, 0x00000000);
> + OUT_RING(ring, lower_32_bits(a6xx_gpu->pwrup_reglist_iova));
> /* Hi address */
> - OUT_RING(ring, 0x00000000);
> + OUT_RING(ring, upper_32_bits(a6xx_gpu->pwrup_reglist_iova));
> /* BIT(31) set => read the regs from the list */
> - OUT_RING(ring, 0x00000000);
> + OUT_RING(ring, BIT(31));
>
> a6xx_flush(gpu, ring);
> return a6xx_idle(gpu, ring) ? 0 : -EINVAL;
> @@ -784,6 +1034,16 @@ static int a6xx_ucode_load(struct msm_gpu *gpu)
> msm_gem_object_set_name(a6xx_gpu->shadow_bo, "shadow");
> }
>
> + a6xx_gpu->pwrup_reglist_ptr = msm_gem_kernel_new(gpu->dev, PAGE_SIZE,
> + MSM_BO_WC | MSM_BO_MAP_PRIV,
> + gpu->aspace, &a6xx_gpu->pwrup_reglist_bo,
> + &a6xx_gpu->pwrup_reglist_iova);
> +
> + if (IS_ERR(a6xx_gpu->pwrup_reglist_ptr))
> + return PTR_ERR(a6xx_gpu->pwrup_reglist_ptr);
> +
> + msm_gem_object_set_name(a6xx_gpu->pwrup_reglist_bo, "pwrup_reglist");
> +
> return 0;
> }
>
> @@ -1127,6 +1387,8 @@ static int hw_init(struct msm_gpu *gpu)
> if (a6xx_gpu->shadow_bo) {
> gpu_write64(gpu, REG_A6XX_CP_RB_RPTR_ADDR,
> shadowptr(a6xx_gpu, gpu->rb[0]));
> + for (unsigned int i = 0; i < gpu->nr_rings; i++)
> + a6xx_gpu->shadow[i] = 0;
> }
>
> /* ..which means "always" on A7xx, also for BV shadow */
> @@ -1135,6 +1397,8 @@ static int hw_init(struct msm_gpu *gpu)
> rbmemptr(gpu->rb[0], bv_rptr));
> }
>
> + a6xx_preempt_hw_init(gpu);
> +
> /* Always come up on rb 0 */
> a6xx_gpu->cur_ring = gpu->rb[0];
>
> @@ -1180,6 +1444,10 @@ static int hw_init(struct msm_gpu *gpu)
> out:
> if (adreno_has_gmu_wrapper(adreno_gpu))
> return ret;
> +
> + /* Last step - yield the ringbuffer */
> + a7xx_preempt_start(gpu);
> +
> /*
> * Tell the GMU that we are done touching the GPU and it can start power
> * management
> @@ -1557,8 +1825,13 @@ static irqreturn_t a6xx_irq(struct msm_gpu *gpu)
> if (status & A6XX_RBBM_INT_0_MASK_SWFUSEVIOLATION)
> a7xx_sw_fuse_violation_irq(gpu);
>
> - if (status & A6XX_RBBM_INT_0_MASK_CP_CACHE_FLUSH_TS)
> + if (status & A6XX_RBBM_INT_0_MASK_CP_CACHE_FLUSH_TS) {
> msm_gpu_retire(gpu);
> + a6xx_preempt_trigger(gpu);
> + }
> +
> + if (status & A6XX_RBBM_INT_0_MASK_CP_SW)
> + a6xx_preempt_irq(gpu);
>
> return IRQ_HANDLED;
> }
> @@ -2331,6 +2604,8 @@ struct msm_gpu *a6xx_gpu_init(struct drm_device *dev)
> a6xx_fault_handler);
>
> a6xx_calc_ubwc_config(adreno_gpu);
> + /* Set up the preemption specific bits and pieces for each ringbuffer */
> + a6xx_preempt_init(gpu);
>
> return gpu;
> }
> diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gpu.h b/drivers/gpu/drm/msm/adreno/a6xx_gpu.h
> index e3e5c53ae8af..da10060e38dc 100644
> --- a/drivers/gpu/drm/msm/adreno/a6xx_gpu.h
> +++ b/drivers/gpu/drm/msm/adreno/a6xx_gpu.h
> @@ -12,6 +12,31 @@
>
> extern bool hang_debug;
>
> +struct cpu_gpu_lock {
> + uint32_t gpu_req;
> + uint32_t cpu_req;
> + uint32_t turn;
> + union {
> + struct {
> + uint16_t list_length;
> + uint16_t list_offset;
> + };
> + struct {
> + uint8_t ifpc_list_len;
> + uint8_t preemption_list_len;
> + uint16_t dynamic_list_len;
> + };
> + };
> + uint64_t regs[62];
> +};
> +
> +struct adreno_reglist_list {
> + /** @reg: List of register **/
> + const u32 *regs;
> + /** @count: Number of registers in the list **/
> + u32 count;
> +};
> +
> /**
> * struct a6xx_info - a6xx specific information from device table
> *
> @@ -31,6 +56,20 @@ struct a6xx_gpu {
> uint64_t sqe_iova;
>
> struct msm_ringbuffer *cur_ring;
> + struct msm_ringbuffer *next_ring;
> +
> + struct drm_gem_object *preempt_bo[MSM_GPU_MAX_RINGS];
> + void *preempt[MSM_GPU_MAX_RINGS];
> + uint64_t preempt_iova[MSM_GPU_MAX_RINGS];
> + uint32_t last_seqno[MSM_GPU_MAX_RINGS];
> +
> + atomic_t preempt_state;
> + spinlock_t eval_lock;
> + struct timer_list preempt_timer;
> +
> + unsigned int preempt_level;
> + bool uses_gmem;
> + bool skip_save_restore;
>
> struct a6xx_gmu gmu;
>
> @@ -38,6 +77,10 @@ struct a6xx_gpu {
> uint64_t shadow_iova;
> uint32_t *shadow;
>
> + struct drm_gem_object *pwrup_reglist_bo;
> + void *pwrup_reglist_ptr;
> + uint64_t pwrup_reglist_iova;
> +
> bool has_whereami;
>
> void __iomem *llc_mmio;
> @@ -49,6 +92,105 @@ struct a6xx_gpu {
>
> #define to_a6xx_gpu(x) container_of(x, struct a6xx_gpu, base)
>
> +/*
> + * In order to do lockless preemption we use a simple state machine to progress
> + * through the process.
> + *
> + * PREEMPT_NONE - no preemption in progress. Next state START.
> + * PREEMPT_START - The trigger is evaluating if preemption is possible. Next
> + * states: TRIGGERED, NONE
> + * PREEMPT_FINISH - An intermediate state before moving back to NONE. Next
> + * state: NONE.
> + * PREEMPT_TRIGGERED: A preemption has been executed on the hardware. Next
> + * states: FAULTED, PENDING
> + * PREEMPT_FAULTED: A preemption timed out (never completed). This will trigger
> + * recovery. Next state: N/A
> + * PREEMPT_PENDING: Preemption complete interrupt fired - the callback is
> + * checking the success of the operation. Next state: FAULTED, NONE.
> + */
> +
> +enum a6xx_preempt_state {
> + PREEMPT_NONE = 0,
> + PREEMPT_START,
> + PREEMPT_FINISH,
> + PREEMPT_TRIGGERED,
> + PREEMPT_FAULTED,
> + PREEMPT_PENDING,
> +};
> +
> +/*
> + * struct a6xx_preempt_record is a shared buffer between the microcode and the
> + * CPU to store the state for preemption. The record itself is much larger
> + * (2112k) but most of that is used by the CP for storage.
> + *
> + * There is a preemption record assigned per ringbuffer. When the CPU triggers a
> + * preemption, it fills out the record with the useful information (wptr, ring
> + * base, etc) and the microcode uses that information to set up the CP following
> + * the preemption. When a ring is switched out, the CP will save the ringbuffer
> + * state back to the record. In this way, once the records are properly set up
> + * the CPU can quickly switch back and forth between ringbuffers by only
> + * updating a few registers (often only the wptr).
> + *
> + * These are the CPU aware registers in the record:
> + * @magic: Must always be 0xAE399D6EUL
> + * @info: Type of the record - written 0 by the CPU, updated by the CP
> + * @errno: preemption error record
> + * @data: Data field in YIELD and SET_MARKER packets, Written and used by CP
> + * @cntl: Value of RB_CNTL written by CPU, save/restored by CP
> + * @rptr: Value of RB_RPTR written by CPU, save/restored by CP
> + * @wptr: Value of RB_WPTR written by CPU, save/restored by CP
> + * @_pad: Reserved/padding
> + * @rptr_addr: Value of RB_RPTR_ADDR_LO|HI written by CPU, save/restored by CP
> + * @rbase: Value of RB_BASE written by CPU, save/restored by CP
> + * @counter: GPU address of the storage area for the preemption counters
doc missing for bv_rptr_addr.
> + */
> +struct a6xx_preempt_record {
> + u32 magic;
> + u32 info;
> + u32 errno;
> + u32 data;
> + u32 cntl;
> + u32 rptr;
> + u32 wptr;
> + u32 _pad;
> + u64 rptr_addr;
> + u64 rbase;
> + u64 counter;
> + u64 bv_rptr_addr;
> +};
> +
> +#define A6XX_PREEMPT_RECORD_MAGIC 0xAE399D6EUL
> +
> +#define PREEMPT_RECORD_SIZE_FALLBACK(size) \
> + ((size) == 0 ? 4192 * SZ_1K : (size))
> +
> +#define PREEMPT_OFFSET_SMMU_INFO 0
> +#define PREEMPT_OFFSET_PRIV_NON_SECURE (PREEMPT_OFFSET_SMMU_INFO + 4096)
> +#define PREEMPT_OFFSET_PRIV_SECURE(size) \
> + (PREEMPT_OFFSET_PRIV_NON_SECURE + PREEMPT_RECORD_SIZE_FALLBACK(size))
> +#define PREEMPT_SIZE(size) \
> + (PREEMPT_OFFSET_PRIV_SECURE(size) + PREEMPT_RECORD_SIZE_FALLBACK(size))
> +
> +/*
> + * The preemption counter block is a storage area for the value of the
> + * preemption counters that are saved immediately before context switch. We
> + * append it on to the end of the allocation for the preemption record.
> + */
> +#define A6XX_PREEMPT_COUNTER_SIZE (16 * 4)
> +
> +#define A6XX_PREEMPT_USER_RECORD_SIZE (192 * 1024)
Unused.
> +
> +struct a7xx_cp_smmu_info {
> + u32 magic;
> + u32 _pad4;
> + u64 ttbr0;
> + u32 asid;
> + u32 context_idr;
> + u32 context_bank;
> +};
> +
> +#define GEN7_CP_SMMU_INFO_MAGIC 0x241350d5UL
> +
> /*
> * Given a register and a count, return a value to program into
> * REG_CP_PROTECT_REG(n) - this will block both reads and writes for
> @@ -106,6 +248,25 @@ int a6xx_gmu_init(struct a6xx_gpu *a6xx_gpu, struct device_node *node);
> int a6xx_gmu_wrapper_init(struct a6xx_gpu *a6xx_gpu, struct device_node *node);
> void a6xx_gmu_remove(struct a6xx_gpu *a6xx_gpu);
>
> +void a6xx_preempt_init(struct msm_gpu *gpu);
> +void a6xx_preempt_hw_init(struct msm_gpu *gpu);
> +void a6xx_preempt_trigger(struct msm_gpu *gpu);
> +void a6xx_preempt_irq(struct msm_gpu *gpu);
> +void a6xx_preempt_fini(struct msm_gpu *gpu);
> +int a6xx_preempt_submitqueue_setup(struct msm_gpu *gpu,
> + struct msm_gpu_submitqueue *queue);
> +void a6xx_preempt_submitqueue_close(struct msm_gpu *gpu,
> + struct msm_gpu_submitqueue *queue);
> +
> +/* Return true if we are in a preempt state */
> +static inline bool a6xx_in_preempt(struct a6xx_gpu *a6xx_gpu)
> +{
> + int preempt_state = atomic_read(&a6xx_gpu->preempt_state);
I think we should keep a matching barrier before the 'read' similar to the one used in the
set_preempt_state helper.
> +
> + return !(preempt_state == PREEMPT_NONE ||
> + preempt_state == PREEMPT_FINISH);
> +}
> +
> void a6xx_gmu_set_freq(struct msm_gpu *gpu, struct dev_pm_opp *opp,
> bool suspended);
> unsigned long a6xx_gmu_get_freq(struct msm_gpu *gpu);
> diff --git a/drivers/gpu/drm/msm/adreno/a6xx_preempt.c b/drivers/gpu/drm/msm/adreno/a6xx_preempt.c
> new file mode 100644
> index 000000000000..1caff76aca6e
> --- /dev/null
> +++ b/drivers/gpu/drm/msm/adreno/a6xx_preempt.c
> @@ -0,0 +1,391 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/* Copyright (c) 2018, The Linux Foundation. All rights reserved. */
> +/* Copyright (c) 2023 Collabora, Ltd. */
> +/* Copyright (c) 2024 Valve Corporation */
> +
> +#include "msm_gem.h"
> +#include "a6xx_gpu.h"
> +#include "a6xx_gmu.xml.h"
> +#include "msm_mmu.h"
> +
> +/*
> + * Try to transition the preemption state from old to new. Return
> + * true on success or false if the original state wasn't 'old'
> + */
> +static inline bool try_preempt_state(struct a6xx_gpu *a6xx_gpu,
> + enum a6xx_preempt_state old, enum a6xx_preempt_state new)
> +{
> + enum a6xx_preempt_state cur = atomic_cmpxchg(&a6xx_gpu->preempt_state,
> + old, new);
> +
> + return (cur == old);
> +}
> +
> +/*
> + * Force the preemption state to the specified state. This is used in cases
> + * where the current state is known and won't change
> + */
> +static inline void set_preempt_state(struct a6xx_gpu *gpu,
> + enum a6xx_preempt_state new)
> +{
> + /*
> + * preempt_state may be read by other cores trying to trigger a
> + * preemption or in the interrupt handler so barriers are needed
> + * before...
> + */
> + smp_mb__before_atomic();
> + atomic_set(&gpu->preempt_state, new);
> + /* ... and after*/
> + smp_mb__after_atomic();
> +}
> +
> +/* Write the most recent wptr for the given ring into the hardware */
> +static inline void update_wptr(struct msm_gpu *gpu, struct msm_ringbuffer *ring)
> +{
> + unsigned long flags;
> + uint32_t wptr;
> +
> + if (!ring)
Is this ever true?
> + return;
> +
> + spin_lock_irqsave(&ring->preempt_lock, flags);
> +
> + if (ring->skip_inline_wptr) {
> + wptr = get_wptr(ring);
> +
> + gpu_write(gpu, REG_A6XX_CP_RB_WPTR, wptr);
> +
> + ring->skip_inline_wptr = false;
> + }
> +
> + spin_unlock_irqrestore(&ring->preempt_lock, flags);
> +}
> +
> +/* Return the highest priority ringbuffer with something in it */
> +static struct msm_ringbuffer *get_next_ring(struct msm_gpu *gpu)
> +{
> + struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu);
> + struct a6xx_gpu *a6xx_gpu = to_a6xx_gpu(adreno_gpu);
> +
> + unsigned long flags;
> + int i;
> +
> + for (i = 0; i < gpu->nr_rings; i++) {
> + bool empty;
> + struct msm_ringbuffer *ring = gpu->rb[i];
> +
> + spin_lock_irqsave(&ring->preempt_lock, flags);
> + empty = (get_wptr(ring) == gpu->funcs->get_rptr(gpu, ring));
> + if (!empty && ring == a6xx_gpu->cur_ring)
> + empty = ring->memptrs->fence == a6xx_gpu->last_seqno[i];
> + spin_unlock_irqrestore(&ring->preempt_lock, flags);
> +
> + if (!empty)
> + return ring;
> + }
> +
> + return NULL;
> +}
> +
> +static void a6xx_preempt_timer(struct timer_list *t)
> +{
> + struct a6xx_gpu *a6xx_gpu = from_timer(a6xx_gpu, t, preempt_timer);
> + struct msm_gpu *gpu = &a6xx_gpu->base.base;
> + struct drm_device *dev = gpu->dev;
> +
> + if (!try_preempt_state(a6xx_gpu, PREEMPT_TRIGGERED, PREEMPT_FAULTED))
> + return;
> +
> + dev_err(dev->dev, "%s: preemption timed out\n", gpu->name);
> + kthread_queue_work(gpu->worker, &gpu->recover_work);
> +}
> +
> +void a6xx_preempt_irq(struct msm_gpu *gpu)
> +{
> + uint32_t status;
> + struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu);
> + struct a6xx_gpu *a6xx_gpu = to_a6xx_gpu(adreno_gpu);
> + struct drm_device *dev = gpu->dev;
> +
> + if (!try_preempt_state(a6xx_gpu, PREEMPT_TRIGGERED, PREEMPT_PENDING))
> + return;
> +
> + /* Delete the preemption watchdog timer */
> + del_timer(&a6xx_gpu->preempt_timer);
> +
> + /*
> + * The hardware should be setting the stop bit of CP_CONTEXT_SWITCH_CNTL
> + * to zero before firing the interrupt, but there is a non zero chance
> + * of a hardware condition or a software race that could set it again
> + * before we have a chance to finish. If that happens, log and go for
> + * recovery
> + */
> + status = gpu_read(gpu, REG_A6XX_CP_CONTEXT_SWITCH_CNTL);
> + if (unlikely(status & A6XX_CP_CONTEXT_SWITCH_CNTL_STOP)) {
> + DRM_DEV_ERROR(&gpu->pdev->dev,
> + "!!!!!!!!!!!!!!!! preemption faulted !!!!!!!!!!!!!! irq\n");
> + set_preempt_state(a6xx_gpu, PREEMPT_FAULTED);
> + dev_err(dev->dev, "%s: Preemption failed to complete\n",
> + gpu->name);
> + kthread_queue_work(gpu->worker, &gpu->recover_work);
> + return;
> + }
> +
> + a6xx_gpu->cur_ring = a6xx_gpu->next_ring;
> + a6xx_gpu->next_ring = NULL;
> +
> + /* Make sure the write to cur_ring is posted before the change in state */
> + wmb();
Not needed. set_preempt_state has the necessary barrier.
> +
> + set_preempt_state(a6xx_gpu, PREEMPT_FINISH);
> +
> + update_wptr(gpu, a6xx_gpu->cur_ring);
> +
> + set_preempt_state(a6xx_gpu, PREEMPT_NONE);
> +
> + /*
> + * Retrigger preemption to avoid a deadlock that might occur when preemption
> + * is skipped due to it being already in flight when requested.
> + */
> + a6xx_preempt_trigger(gpu);
> +}
> +
> +void a6xx_preempt_hw_init(struct msm_gpu *gpu)
> +{
> + struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu);
> + struct a6xx_gpu *a6xx_gpu = to_a6xx_gpu(adreno_gpu);
> + int i;
> +
> + /* No preemption if we only have one ring */
> + if (gpu->nr_rings == 1)
> + return;
> +
> + for (i = 0; i < gpu->nr_rings; i++) {
> + struct a6xx_preempt_record *record_ptr =
> + a6xx_gpu->preempt[i] + PREEMPT_OFFSET_PRIV_NON_SECURE;
> + record_ptr->wptr = 0;
> + record_ptr->rptr = 0;
> + record_ptr->rptr_addr = shadowptr(a6xx_gpu, gpu->rb[i]);
> + record_ptr->info = 0;
> + record_ptr->data = 0;
> + record_ptr->rbase = gpu->rb[i]->iova;
> + }
> +
> + /* Write a 0 to signal that we aren't switching pagetables */
> + gpu_write64(gpu, REG_A6XX_CP_CONTEXT_SWITCH_SMMU_INFO, 0);
> +
> + /* Enable the GMEM save/restore feature for preemption */
> + gpu_write(gpu, REG_A6XX_RB_CONTEXT_SWITCH_GMEM_SAVE_RESTORE, 0x1);
> +
> + /* Reset the preemption state */
> + set_preempt_state(a6xx_gpu, PREEMPT_NONE);
> +
> + spin_lock_init(&a6xx_gpu->eval_lock);
> +
> + /* Always come up on rb 0 */
> + a6xx_gpu->cur_ring = gpu->rb[0];
> +}
> +
> +void a6xx_preempt_trigger(struct msm_gpu *gpu)
> +{
> + struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu);
> + struct a6xx_gpu *a6xx_gpu = to_a6xx_gpu(adreno_gpu);
> + u64 preempt_offset_priv_secure;
> + unsigned long flags;
> + struct msm_ringbuffer *ring;
> + unsigned int cntl;
> +
> + if (gpu->nr_rings == 1)
> + return;
> +
> + /*
> + * Lock to make sure another thread attempting preemption doesn't skip it
> + * while we are still evaluating the next ring. This makes sure the other
> + * thread does start preemption if we abort it and avoids a soft lock.
> + */
> + spin_lock_irqsave(&a6xx_gpu->eval_lock, flags);
> +
> + /*
> + * Try to start preemption by moving from NONE to START. If
> + * unsuccessful, a preemption is already in flight
> + */
> + if (!try_preempt_state(a6xx_gpu, PREEMPT_NONE, PREEMPT_START)) {
> + spin_unlock_irqrestore(&a6xx_gpu->eval_lock, flags);
> + return;
> + }
> +
> + cntl = A6XX_CP_CONTEXT_SWITCH_CNTL_LEVEL(a6xx_gpu->preempt_level);
> +
> + if (a6xx_gpu->skip_save_restore)
> + cntl |= A6XX_CP_CONTEXT_SWITCH_CNTL_SKIP_SAVE_RESTORE;
> +
> + if (a6xx_gpu->uses_gmem)
> + cntl |= A6XX_CP_CONTEXT_SWITCH_CNTL_USES_GMEM;
> +
> + cntl |= A6XX_CP_CONTEXT_SWITCH_CNTL_STOP;
> +
> + /* Get the next ring to preempt to */
> + ring = get_next_ring(gpu);
> +
> + /*
> + * If no ring is populated or the highest priority ring is the current
> + * one do nothing except to update the wptr to the latest and greatest
> + */
> + if (!ring || (a6xx_gpu->cur_ring == ring)) {
> + set_preempt_state(a6xx_gpu, PREEMPT_FINISH);
> + update_wptr(gpu, a6xx_gpu->cur_ring);
> + set_preempt_state(a6xx_gpu, PREEMPT_NONE);
> + spin_unlock_irqrestore(&a6xx_gpu->eval_lock, flags);
> + return;
> + }
> +
> + spin_unlock_irqrestore(&a6xx_gpu->eval_lock, flags);
> +
> + spin_lock_irqsave(&ring->preempt_lock, flags);
> +
> + struct a7xx_cp_smmu_info *smmu_info_ptr =
> + a6xx_gpu->preempt[ring->id] + PREEMPT_OFFSET_SMMU_INFO;
> + struct a6xx_preempt_record *record_ptr =
> + a6xx_gpu->preempt[ring->id] + PREEMPT_OFFSET_PRIV_NON_SECURE;
> + u64 ttbr0 = ring->memptrs->ttbr0;
> + u32 context_idr = ring->memptrs->context_idr;
> +
> + smmu_info_ptr->ttbr0 = ttbr0;
> + smmu_info_ptr->context_idr = context_idr;
> + record_ptr->wptr = get_wptr(ring);
> +
> + /*
> + * The GPU will write the wptr we set above when we preempt. Reset
> + * skip_inline_wptr to make sure that we don't write WPTR to the same
> + * thing twice. It's still possible subsequent submissions will update
> + * wptr again, in which case they will set the flag to true. This has
> + * to be protected by the lock for setting the flag and updating wptr
> + * to be atomic.
> + */
> + ring->skip_inline_wptr = false;
> +
> + spin_unlock_irqrestore(&ring->preempt_lock, flags);
> +
> + gpu_write64(gpu,
> + REG_A6XX_CP_CONTEXT_SWITCH_SMMU_INFO,
> + a6xx_gpu->preempt_iova[ring->id] + PREEMPT_OFFSET_SMMU_INFO);
> +
> + gpu_write64(gpu,
> + REG_A6XX_CP_CONTEXT_SWITCH_PRIV_NON_SECURE_RESTORE_ADDR,
> + a6xx_gpu->preempt_iova[ring->id] + PREEMPT_OFFSET_PRIV_NON_SECURE);
> +
> + preempt_offset_priv_secure =
> + PREEMPT_OFFSET_PRIV_SECURE(adreno_gpu->info->preempt_record_size);
> + gpu_write64(gpu,
> + REG_A6XX_CP_CONTEXT_SWITCH_PRIV_SECURE_RESTORE_ADDR,
> + a6xx_gpu->preempt_iova[ring->id] + preempt_offset_priv_secure);
Secure buffers are not supported currently, so we can skip this and the
context record allocation. Anyway this has to be a separate buffer
mapped in secure pagetable which don't currently have. We can skip the
same in pseudo register packet too.
> +
> + a6xx_gpu->next_ring = ring;
> +
> + /* Start a timer to catch a stuck preemption */
> + mod_timer(&a6xx_gpu->preempt_timer, jiffies + msecs_to_jiffies(10000));
> +
> + /* Set the preemption state to triggered */
> + set_preempt_state(a6xx_gpu, PREEMPT_TRIGGERED);
> +
> + /* Make sure any previous writes to WPTR are posted */
> + gpu_read(gpu, REG_A6XX_CP_RB_WPTR);
> +
> + /* Make sure everything is written before hitting the button */
> + wmb();
This and the above read back looks unnecessary. All writes to gpu are
ordered anyway.
> +
> + /* Trigger the preemption */
> + gpu_write(gpu, REG_A6XX_CP_CONTEXT_SWITCH_CNTL, cntl);
> +}
> +
> +static int preempt_init_ring(struct a6xx_gpu *a6xx_gpu,
> + struct msm_ringbuffer *ring)
> +{
> + struct adreno_gpu *adreno_gpu = &a6xx_gpu->base;
> + struct msm_gpu *gpu = &adreno_gpu->base;
> + struct drm_gem_object *bo = NULL;
> + phys_addr_t ttbr;
> + u64 iova = 0;
> + void *ptr;
> + int asid;
> +
> + ptr = msm_gem_kernel_new(gpu->dev,
> + PREEMPT_SIZE(adreno_gpu->info->preempt_record_size),
> + MSM_BO_WC | MSM_BO_MAP_PRIV, gpu->aspace, &bo, &iova);
set a name?
> +
> + memset(ptr, 0, PREEMPT_SIZE(adreno_gpu->info->preempt_record_size));
> +
> + if (IS_ERR(ptr))
> + return PTR_ERR(ptr);
> +
> + a6xx_gpu->preempt_bo[ring->id] = bo;
> + a6xx_gpu->preempt_iova[ring->id] = iova;
> + a6xx_gpu->preempt[ring->id] = ptr;
> +
> + struct a7xx_cp_smmu_info *smmu_info_ptr = ptr + PREEMPT_OFFSET_SMMU_INFO;
> + struct a6xx_preempt_record *record_ptr = ptr + PREEMPT_OFFSET_PRIV_NON_SECURE;
> +
> + msm_iommu_pagetable_params(gpu->aspace->mmu, &ttbr, &asid);
> +
> + smmu_info_ptr->magic = GEN7_CP_SMMU_INFO_MAGIC;
> + smmu_info_ptr->ttbr0 = ttbr;
> + smmu_info_ptr->asid = 0xdecafbad;
> + smmu_info_ptr->context_idr = 0;
> +
> + /* Set up the defaults on the preemption record */
> + record_ptr->magic = A6XX_PREEMPT_RECORD_MAGIC;
> + record_ptr->info = 0;
> + record_ptr->data = 0;
> + record_ptr->rptr = 0;
> + record_ptr->wptr = 0;
> + record_ptr->cntl = MSM_GPU_RB_CNTL_DEFAULT;
> + record_ptr->rbase = ring->iova;
> + record_ptr->counter = 0;
> + record_ptr->bv_rptr_addr = rbmemptr(ring, bv_rptr);
> +
> + return 0;
> +}
> +
> +void a6xx_preempt_fini(struct msm_gpu *gpu)
> +{
> + struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu);
> + struct a6xx_gpu *a6xx_gpu = to_a6xx_gpu(adreno_gpu);
> + int i;
> +
> + for (i = 0; i < gpu->nr_rings; i++)
> + msm_gem_kernel_put(a6xx_gpu->preempt_bo[i], gpu->aspace);
> +}
> +
> +void a6xx_preempt_init(struct msm_gpu *gpu)
> +{
> + struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu);
> + struct a6xx_gpu *a6xx_gpu = to_a6xx_gpu(adreno_gpu);
> + int i;
> +
> + /* No preemption if we only have one ring */
> + if (gpu->nr_rings <= 1)
> + return;
> +
> + for (i = 0; i < gpu->nr_rings; i++) {
> + if (preempt_init_ring(a6xx_gpu, gpu->rb[i]))
> + goto fail;
> + }
> +
> + /* TODO: make this configurable? */
> + a6xx_gpu->preempt_level = 1;
> + a6xx_gpu->uses_gmem = 1;
> + a6xx_gpu->skip_save_restore = 1;
> +
> + timer_setup(&a6xx_gpu->preempt_timer, a6xx_preempt_timer, 0);
> +
> + return;
> +fail:
Log an error so that preemption is not disabled silently?
> + /*
> + * On any failure our adventure is over. Clean up and
> + * set nr_rings to 1 to force preemption off
> + */
> + a6xx_preempt_fini(gpu);
> + gpu->nr_rings = 1;
> +
> + return;
> +}
> diff --git a/drivers/gpu/drm/msm/msm_ringbuffer.h b/drivers/gpu/drm/msm/msm_ringbuffer.h
> index 40791b2ade46..7dde6a312511 100644
> --- a/drivers/gpu/drm/msm/msm_ringbuffer.h
> +++ b/drivers/gpu/drm/msm/msm_ringbuffer.h
> @@ -36,6 +36,7 @@ struct msm_rbmemptrs {
>
> volatile struct msm_gpu_submit_stats stats[MSM_GPU_SUBMIT_STATS_COUNT];
> volatile u64 ttbr0;
> + volatile u32 context_idr;
> };
>
> struct msm_cp_state {
> @@ -100,6 +101,12 @@ struct msm_ringbuffer {
> * preemption. Can be aquired from irq context.
> */
> spinlock_t preempt_lock;
> +
> + /*
> + * Whether we skipped writing wptr and it needs to be updated in the
> + * future when the ring becomes current.
> + */
> + bool skip_inline_wptr;
nit: does 'restore_wptr' makes more sense? Or something better? Basically, name it based
on the future action?
-Akhil
> };
>
> struct msm_ringbuffer *msm_ringbuffer_new(struct msm_gpu *gpu, int id,
>
> --
> 2.46.0
>
^ permalink raw reply [flat|nested] 32+ messages in thread* Re: [PATCH v3 04/10] drm/msm/A6xx: Implement preemption for A7XX targets
2024-09-06 19:54 ` Akhil P Oommen
@ 2024-09-09 12:22 ` Connor Abbott
2024-09-10 16:43 ` Akhil P Oommen
2024-09-09 13:15 ` Antonino Maniscalco
1 sibling, 1 reply; 32+ messages in thread
From: Connor Abbott @ 2024-09-09 12:22 UTC (permalink / raw)
To: Akhil P Oommen
Cc: Antonino Maniscalco, Rob Clark, Sean Paul, Konrad Dybcio,
Abhinav Kumar, Dmitry Baryshkov, Marijn Suijten, David Airlie,
Daniel Vetter, Maarten Lankhorst, Maxime Ripard,
Thomas Zimmermann, Jonathan Corbet, linux-arm-msm, dri-devel,
freedreno, linux-kernel, linux-doc, Sharat Masetty,
Neil Armstrong
On Fri, Sep 6, 2024 at 9:03 PM Akhil P Oommen <quic_akhilpo@quicinc.com> wrote:
>
> On Thu, Sep 05, 2024 at 04:51:22PM +0200, Antonino Maniscalco wrote:
> > This patch implements preemption feature for A6xx targets, this allows
> > the GPU to switch to a higher priority ringbuffer if one is ready. A6XX
> > hardware as such supports multiple levels of preemption granularities,
> > ranging from coarse grained(ringbuffer level) to a more fine grained
> > such as draw-call level or a bin boundary level preemption. This patch
> > enables the basic preemption level, with more fine grained preemption
> > support to follow.
> >
> > Signed-off-by: Sharat Masetty <smasetty@codeaurora.org>
> > Signed-off-by: Antonino Maniscalco <antomani103@gmail.com>
> > Tested-by: Neil Armstrong <neil.armstrong@linaro.org> # on SM8650-QRD
> > ---
> > drivers/gpu/drm/msm/Makefile | 1 +
> > drivers/gpu/drm/msm/adreno/a6xx_gpu.c | 293 +++++++++++++++++++++-
> > drivers/gpu/drm/msm/adreno/a6xx_gpu.h | 161 ++++++++++++
> > drivers/gpu/drm/msm/adreno/a6xx_preempt.c | 391 ++++++++++++++++++++++++++++++
> > drivers/gpu/drm/msm/msm_ringbuffer.h | 7 +
> > 5 files changed, 844 insertions(+), 9 deletions(-)
> >
> > diff --git a/drivers/gpu/drm/msm/Makefile b/drivers/gpu/drm/msm/Makefile
> > index f5e2838c6a76..32e915109a59 100644
> > --- a/drivers/gpu/drm/msm/Makefile
> > +++ b/drivers/gpu/drm/msm/Makefile
> > @@ -23,6 +23,7 @@ adreno-y := \
> > adreno/a6xx_gpu.o \
> > adreno/a6xx_gmu.o \
> > adreno/a6xx_hfi.o \
> > + adreno/a6xx_preempt.o \
> >
> > adreno-$(CONFIG_DEBUG_FS) += adreno/a5xx_debugfs.o \
> >
> > diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
> > index 32a4faa93d7f..ed0b138a2d66 100644
> > --- a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
> > +++ b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
> > @@ -16,6 +16,83 @@
> >
> > #define GPU_PAS_ID 13
> >
> > +/* IFPC & Preemption static powerup restore list */
> > +static const uint32_t a7xx_pwrup_reglist[] = {
> > + REG_A6XX_UCHE_TRAP_BASE,
> > + REG_A6XX_UCHE_TRAP_BASE + 1,
> > + REG_A6XX_UCHE_WRITE_THRU_BASE,
> > + REG_A6XX_UCHE_WRITE_THRU_BASE + 1,
> > + REG_A6XX_UCHE_GMEM_RANGE_MIN,
> > + REG_A6XX_UCHE_GMEM_RANGE_MIN + 1,
> > + REG_A6XX_UCHE_GMEM_RANGE_MAX,
> > + REG_A6XX_UCHE_GMEM_RANGE_MAX + 1,
> > + REG_A6XX_UCHE_CACHE_WAYS,
> > + REG_A6XX_UCHE_MODE_CNTL,
> > + REG_A6XX_RB_NC_MODE_CNTL,
> > + REG_A6XX_RB_CMP_DBG_ECO_CNTL,
> > + REG_A7XX_GRAS_NC_MODE_CNTL,
> > + REG_A6XX_RB_CONTEXT_SWITCH_GMEM_SAVE_RESTORE,
> > + REG_A6XX_UCHE_GBIF_GX_CONFIG,
> > + REG_A6XX_UCHE_CLIENT_PF,
>
> REG_A6XX_TPL1_DBG_ECO_CNTL1 here. A friendly warning, missing a register
> in this list (and the below list) will lead to a very frustrating debug.
>
> > +};
> > +
> > +static const uint32_t a7xx_ifpc_pwrup_reglist[] = {
> > + REG_A6XX_TPL1_NC_MODE_CNTL,
> > + REG_A6XX_SP_NC_MODE_CNTL,
> > + REG_A6XX_CP_DBG_ECO_CNTL,
> > + REG_A6XX_CP_PROTECT_CNTL,
> > + REG_A6XX_CP_PROTECT(0),
> > + REG_A6XX_CP_PROTECT(1),
> > + REG_A6XX_CP_PROTECT(2),
> > + REG_A6XX_CP_PROTECT(3),
> > + REG_A6XX_CP_PROTECT(4),
> > + REG_A6XX_CP_PROTECT(5),
> > + REG_A6XX_CP_PROTECT(6),
> > + REG_A6XX_CP_PROTECT(7),
> > + REG_A6XX_CP_PROTECT(8),
> > + REG_A6XX_CP_PROTECT(9),
> > + REG_A6XX_CP_PROTECT(10),
> > + REG_A6XX_CP_PROTECT(11),
> > + REG_A6XX_CP_PROTECT(12),
> > + REG_A6XX_CP_PROTECT(13),
> > + REG_A6XX_CP_PROTECT(14),
> > + REG_A6XX_CP_PROTECT(15),
> > + REG_A6XX_CP_PROTECT(16),
> > + REG_A6XX_CP_PROTECT(17),
> > + REG_A6XX_CP_PROTECT(18),
> > + REG_A6XX_CP_PROTECT(19),
> > + REG_A6XX_CP_PROTECT(20),
> > + REG_A6XX_CP_PROTECT(21),
> > + REG_A6XX_CP_PROTECT(22),
> > + REG_A6XX_CP_PROTECT(23),
> > + REG_A6XX_CP_PROTECT(24),
> > + REG_A6XX_CP_PROTECT(25),
> > + REG_A6XX_CP_PROTECT(26),
> > + REG_A6XX_CP_PROTECT(27),
> > + REG_A6XX_CP_PROTECT(28),
> > + REG_A6XX_CP_PROTECT(29),
> > + REG_A6XX_CP_PROTECT(30),
> > + REG_A6XX_CP_PROTECT(31),
> > + REG_A6XX_CP_PROTECT(32),
> > + REG_A6XX_CP_PROTECT(33),
> > + REG_A6XX_CP_PROTECT(34),
> > + REG_A6XX_CP_PROTECT(35),
> > + REG_A6XX_CP_PROTECT(36),
> > + REG_A6XX_CP_PROTECT(37),
> > + REG_A6XX_CP_PROTECT(38),
> > + REG_A6XX_CP_PROTECT(39),
> > + REG_A6XX_CP_PROTECT(40),
> > + REG_A6XX_CP_PROTECT(41),
> > + REG_A6XX_CP_PROTECT(42),
> > + REG_A6XX_CP_PROTECT(43),
> > + REG_A6XX_CP_PROTECT(44),
> > + REG_A6XX_CP_PROTECT(45),
> > + REG_A6XX_CP_PROTECT(46),
> > + REG_A6XX_CP_PROTECT(47),
> > + REG_A6XX_CP_AHB_CNTL,
> > +};
> > +
> > +
> > static inline bool _a6xx_check_idle(struct msm_gpu *gpu)
> > {
> > struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu);
> > @@ -68,6 +145,8 @@ static void update_shadow_rptr(struct msm_gpu *gpu, struct msm_ringbuffer *ring)
> >
> > static void a6xx_flush(struct msm_gpu *gpu, struct msm_ringbuffer *ring)
> > {
> > + struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu);
> > + struct a6xx_gpu *a6xx_gpu = to_a6xx_gpu(adreno_gpu);
> > uint32_t wptr;
> > unsigned long flags;
> >
> > @@ -81,12 +160,26 @@ static void a6xx_flush(struct msm_gpu *gpu, struct msm_ringbuffer *ring)
> > /* Make sure to wrap wptr if we need to */
> > wptr = get_wptr(ring);
> >
> > - spin_unlock_irqrestore(&ring->preempt_lock, flags);
> > -
> > /* Make sure everything is posted before making a decision */
> > mb();
>
> This looks unnecessary.
>
> >
> > - gpu_write(gpu, REG_A6XX_CP_RB_WPTR, wptr);
> > + /* Update HW if this is the current ring and we are not in preempt*/
> > + if (!a6xx_in_preempt(a6xx_gpu)) {
> > + /*
> > + * Order the reads of the preempt state and cur_ring. This
> > + * matches the barrier after writing cur_ring.
> > + */
> > + rmb();
>
> we can use the lighter smp variant here.
>
> > +
> > + if (a6xx_gpu->cur_ring == ring)
> > + gpu_write(gpu, REG_A6XX_CP_RB_WPTR, wptr);
> > + else
> > + ring->skip_inline_wptr = true;
> > + } else {
> > + ring->skip_inline_wptr = true;
> > + }
> > +
> > + spin_unlock_irqrestore(&ring->preempt_lock, flags);
> > }
> >
> > static void get_stats_counter(struct msm_ringbuffer *ring, u32 counter,
> > @@ -138,12 +231,14 @@ static void a6xx_set_pagetable(struct a6xx_gpu *a6xx_gpu,
>
> set_pagetable checks "cur_ctx_seqno" to see if pt switch is needed or
> not. This is currently not tracked separately for each ring. Can you
> please check that?
>
> I wonder why that didn't cause any gpu errors in testing. Not sure if I
> am missing something.
>
> >
> > /*
> > * Write the new TTBR0 to the memstore. This is good for debugging.
> > + * Needed for preemption
> > */
> > - OUT_PKT7(ring, CP_MEM_WRITE, 4);
> > + OUT_PKT7(ring, CP_MEM_WRITE, 5);
> > OUT_RING(ring, CP_MEM_WRITE_0_ADDR_LO(lower_32_bits(memptr)));
> > OUT_RING(ring, CP_MEM_WRITE_1_ADDR_HI(upper_32_bits(memptr)));
> > OUT_RING(ring, lower_32_bits(ttbr));
> > - OUT_RING(ring, (asid << 16) | upper_32_bits(ttbr));
> > + OUT_RING(ring, upper_32_bits(ttbr));
> > + OUT_RING(ring, ctx->seqno);
> >
> > /*
> > * Sync both threads after switching pagetables and enable BR only
> > @@ -268,6 +363,43 @@ static void a6xx_submit(struct msm_gpu *gpu, struct msm_gem_submit *submit)
> > a6xx_flush(gpu, ring);
> > }
> >
> > +static void a6xx_emit_set_pseudo_reg(struct msm_ringbuffer *ring,
> > + struct a6xx_gpu *a6xx_gpu, struct msm_gpu_submitqueue *queue)
> > +{
> > + u64 preempt_offset_priv_secure;
> > +
> > + OUT_PKT7(ring, CP_SET_PSEUDO_REG, 15);
> > +
> > + OUT_RING(ring, SMMU_INFO);
> > + /* don't save SMMU, we write the record from the kernel instead */
> > + OUT_RING(ring, 0);
> > + OUT_RING(ring, 0);
> > +
> > + /* privileged and non secure buffer save */
> > + OUT_RING(ring, NON_SECURE_SAVE_ADDR);
> > + OUT_RING(ring, lower_32_bits(
> > + a6xx_gpu->preempt_iova[ring->id] + PREEMPT_OFFSET_PRIV_NON_SECURE));
> > + OUT_RING(ring, upper_32_bits(
> > + a6xx_gpu->preempt_iova[ring->id] + PREEMPT_OFFSET_PRIV_NON_SECURE));
> > + OUT_RING(ring, SECURE_SAVE_ADDR);
> > + preempt_offset_priv_secure =
> > + PREEMPT_OFFSET_PRIV_SECURE(a6xx_gpu->base.info->preempt_record_size);
> > + OUT_RING(ring, lower_32_bits(
> > + a6xx_gpu->preempt_iova[ring->id] + preempt_offset_priv_secure));
> > + OUT_RING(ring, upper_32_bits(
> > + a6xx_gpu->preempt_iova[ring->id] + preempt_offset_priv_secure));
> > +
> > + /* user context buffer save, seems to be unnused by fw */
> > + OUT_RING(ring, NON_PRIV_SAVE_ADDR);
> > + OUT_RING(ring, 0);
> > + OUT_RING(ring, 0);
> > +
> > + OUT_RING(ring, COUNTER);
> > + /* seems OK to set to 0 to disable it */
> > + OUT_RING(ring, 0);
> > + OUT_RING(ring, 0);
> > +}
> > +
> > static void a7xx_submit(struct msm_gpu *gpu, struct msm_gem_submit *submit)
> > {
> > unsigned int index = submit->seqno % MSM_GPU_SUBMIT_STATS_COUNT;
> > @@ -283,6 +415,13 @@ static void a7xx_submit(struct msm_gpu *gpu, struct msm_gem_submit *submit)
> > OUT_PKT7(ring, CP_THREAD_CONTROL, 1);
> > OUT_RING(ring, CP_THREAD_CONTROL_0_SYNC_THREADS | CP_SET_THREAD_BR);
> >
> > + /*
> > + * If preemption is enabled, then set the pseudo register for the save
> > + * sequence
> > + */
> > + if (gpu->nr_rings > 1)
> > + a6xx_emit_set_pseudo_reg(ring, a6xx_gpu, submit->queue);
>
> Can we move this after set_pagetable()?
>
> > +
> > a6xx_set_pagetable(a6xx_gpu, ring, submit->queue->ctx);
> >
> > get_stats_counter(ring, REG_A7XX_RBBM_PERFCTR_CP(0),
> > @@ -376,6 +515,8 @@ static void a7xx_submit(struct msm_gpu *gpu, struct msm_gem_submit *submit)
> > OUT_RING(ring, upper_32_bits(rbmemptr(ring, bv_fence)));
> > OUT_RING(ring, submit->seqno);
> >
> > + a6xx_gpu->last_seqno[ring->id] = submit->seqno;
> > +
> > /* write the ringbuffer timestamp */
> > OUT_PKT7(ring, CP_EVENT_WRITE, 4);
> > OUT_RING(ring, CACHE_CLEAN | CP_EVENT_WRITE_0_IRQ | BIT(27));
> > @@ -389,10 +530,32 @@ static void a7xx_submit(struct msm_gpu *gpu, struct msm_gem_submit *submit)
> > OUT_PKT7(ring, CP_SET_MARKER, 1);
> > OUT_RING(ring, 0x100); /* IFPC enable */
> >
> > + /* If preemption is enabled */
> > + if (gpu->nr_rings > 1) {
> > + /* Yield the floor on command completion */
> > + OUT_PKT7(ring, CP_CONTEXT_SWITCH_YIELD, 4);
> > +
> > + /*
> > + * If dword[2:1] are non zero, they specify an address for
> > + * the CP to write the value of dword[3] to on preemption
> > + * complete. Write 0 to skip the write
> > + */
> > + OUT_RING(ring, 0x00);
> > + OUT_RING(ring, 0x00);
> > + /* Data value - not used if the address above is 0 */
> > + OUT_RING(ring, 0x01);
> > + /* generate interrupt on preemption completion */
> > + OUT_RING(ring, 0x00);
> > + }
> > +
> > +
> > trace_msm_gpu_submit_flush(submit,
> > gpu_read64(gpu, REG_A6XX_CP_ALWAYS_ON_COUNTER));
> >
> > a6xx_flush(gpu, ring);
> > +
> > + /* Check to see if we need to start preemption */
> > + a6xx_preempt_trigger(gpu);
> > }
> >
> > static void a6xx_set_hwcg(struct msm_gpu *gpu, bool state)
> > @@ -588,6 +751,89 @@ static void a6xx_set_ubwc_config(struct msm_gpu *gpu)
> > adreno_gpu->ubwc_config.min_acc_len << 23 | hbb_lo << 21);
> > }
> >
> > +static void a7xx_patch_pwrup_reglist(struct msm_gpu *gpu)
> > +{
> > + struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu);
> > + struct a6xx_gpu *a6xx_gpu = to_a6xx_gpu(adreno_gpu);
> > + struct adreno_reglist_list reglist[2];
> > + void *ptr = a6xx_gpu->pwrup_reglist_ptr;
> > + struct cpu_gpu_lock *lock = ptr;
> > + u32 *dest = (u32 *)&lock->regs[0];
> > + int i, j;
> > +
> This sequence is required only once. We can use a flag to check and bail out
> next time.
>
> > + lock->gpu_req = lock->cpu_req = lock->turn = 0;
> > + lock->ifpc_list_len = ARRAY_SIZE(a7xx_ifpc_pwrup_reglist);
> > + lock->preemption_list_len = ARRAY_SIZE(a7xx_pwrup_reglist);
> > +
> > + /* Static IFPC-only registers */
> > + reglist[0].regs = a7xx_ifpc_pwrup_reglist;
> > + reglist[0].count = ARRAY_SIZE(a7xx_ifpc_pwrup_reglist);
> > + lock->ifpc_list_len = reglist[0].count;
> > +
> > + /* Static IFPC + preemption registers */
> > + reglist[1].regs = a7xx_pwrup_reglist;
> > + reglist[1].count = ARRAY_SIZE(a7xx_pwrup_reglist);
> > + lock->preemption_list_len = reglist[1].count;
> > +
> > + /*
> > + * For each entry in each of the lists, write the offset and the current
> > + * register value into the GPU buffer
> > + */
> > + for (i = 0; i < 2; i++) {
> > + const u32 *r = reglist[i].regs;
> > +
> > + for (j = 0; j < reglist[i].count; j++) {
> > + *dest++ = r[j];
> > + *dest++ = gpu_read(gpu, r[j]);
> > + }
> > + }
> > +
> > + /*
> > + * The overall register list is composed of
> > + * 1. Static IFPC-only registers
> > + * 2. Static IFPC + preemption registers
> > + * 3. Dynamic IFPC + preemption registers (ex: perfcounter selects)
> > + *
> > + * The first two lists are static. Size of these lists are stored as
> > + * number of pairs in ifpc_list_len and preemption_list_len
> > + * respectively. With concurrent binning, Some of the perfcounter
> > + * registers being virtualized, CP needs to know the pipe id to program
> > + * the aperture inorder to restore the same. Thus, third list is a
> > + * dynamic list with triplets as
> > + * (<aperture, shifted 12 bits> <address> <data>), and the length is
> > + * stored as number for triplets in dynamic_list_len.
> > + */
> > + lock->dynamic_list_len = 0;
> > +}
> > +
> > +static int a7xx_preempt_start(struct msm_gpu *gpu)
> > +{
> > + struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu);
> > + struct a6xx_gpu *a6xx_gpu = to_a6xx_gpu(adreno_gpu);
> > + struct msm_ringbuffer *ring = gpu->rb[0];
> > +
> > + if (gpu->nr_rings <= 1)
> > + return 0;
> > +
> > + /* Turn CP protection off */
> > + OUT_PKT7(ring, CP_SET_PROTECTED_MODE, 1);
> > + OUT_RING(ring, 0);
> > +
> > + a6xx_emit_set_pseudo_reg(ring, a6xx_gpu, NULL);
> > +
> > + /* Yield the floor on command completion */
> > + OUT_PKT7(ring, CP_CONTEXT_SWITCH_YIELD, 4);
> > + OUT_RING(ring, 0x00);
> > + OUT_RING(ring, 0x00);
> > + OUT_RING(ring, 0x01);
>
> Looks like kgsl use 0x00 here. Not sure if that matters!
>
> > + /* Generate interrupt on preemption completion */
> > + OUT_RING(ring, 0x00);
> > +
> > + a6xx_flush(gpu, ring);
> > +
> > + return a6xx_idle(gpu, ring) ? 0 : -EINVAL;
> > +}
> > +
> > static int a6xx_cp_init(struct msm_gpu *gpu)
> > {
> > struct msm_ringbuffer *ring = gpu->rb[0];
> > @@ -619,6 +865,8 @@ static int a6xx_cp_init(struct msm_gpu *gpu)
> >
> > static int a7xx_cp_init(struct msm_gpu *gpu)
> > {
> > + struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu);
> > + struct a6xx_gpu *a6xx_gpu = to_a6xx_gpu(adreno_gpu);
> > struct msm_ringbuffer *ring = gpu->rb[0];
> > u32 mask;
> >
> > @@ -626,6 +874,8 @@ static int a7xx_cp_init(struct msm_gpu *gpu)
> > OUT_PKT7(ring, CP_THREAD_CONTROL, 1);
> > OUT_RING(ring, BIT(27));
> >
> > + a7xx_patch_pwrup_reglist(gpu);
> > +
>
> Looks out of place. I guess you kept it here to avoid an extra a7x
> check. At least we should move this before the above pm4 packets.
>
> > OUT_PKT7(ring, CP_ME_INIT, 7);
> >
> > /* Use multiple HW contexts */
> > @@ -656,11 +906,11 @@ static int a7xx_cp_init(struct msm_gpu *gpu)
> >
> > /* *Don't* send a power up reg list for concurrent binning (TODO) */
> > /* Lo address */
> > - OUT_RING(ring, 0x00000000);
> > + OUT_RING(ring, lower_32_bits(a6xx_gpu->pwrup_reglist_iova));
> > /* Hi address */
> > - OUT_RING(ring, 0x00000000);
> > + OUT_RING(ring, upper_32_bits(a6xx_gpu->pwrup_reglist_iova));
> > /* BIT(31) set => read the regs from the list */
> > - OUT_RING(ring, 0x00000000);
> > + OUT_RING(ring, BIT(31));
> >
> > a6xx_flush(gpu, ring);
> > return a6xx_idle(gpu, ring) ? 0 : -EINVAL;
> > @@ -784,6 +1034,16 @@ static int a6xx_ucode_load(struct msm_gpu *gpu)
> > msm_gem_object_set_name(a6xx_gpu->shadow_bo, "shadow");
> > }
> >
> > + a6xx_gpu->pwrup_reglist_ptr = msm_gem_kernel_new(gpu->dev, PAGE_SIZE,
> > + MSM_BO_WC | MSM_BO_MAP_PRIV,
> > + gpu->aspace, &a6xx_gpu->pwrup_reglist_bo,
> > + &a6xx_gpu->pwrup_reglist_iova);
> > +
> > + if (IS_ERR(a6xx_gpu->pwrup_reglist_ptr))
> > + return PTR_ERR(a6xx_gpu->pwrup_reglist_ptr);
> > +
> > + msm_gem_object_set_name(a6xx_gpu->pwrup_reglist_bo, "pwrup_reglist");
> > +
> > return 0;
> > }
> >
> > @@ -1127,6 +1387,8 @@ static int hw_init(struct msm_gpu *gpu)
> > if (a6xx_gpu->shadow_bo) {
> > gpu_write64(gpu, REG_A6XX_CP_RB_RPTR_ADDR,
> > shadowptr(a6xx_gpu, gpu->rb[0]));
> > + for (unsigned int i = 0; i < gpu->nr_rings; i++)
> > + a6xx_gpu->shadow[i] = 0;
> > }
> >
> > /* ..which means "always" on A7xx, also for BV shadow */
> > @@ -1135,6 +1397,8 @@ static int hw_init(struct msm_gpu *gpu)
> > rbmemptr(gpu->rb[0], bv_rptr));
> > }
> >
> > + a6xx_preempt_hw_init(gpu);
> > +
> > /* Always come up on rb 0 */
> > a6xx_gpu->cur_ring = gpu->rb[0];
> >
> > @@ -1180,6 +1444,10 @@ static int hw_init(struct msm_gpu *gpu)
> > out:
> > if (adreno_has_gmu_wrapper(adreno_gpu))
> > return ret;
> > +
> > + /* Last step - yield the ringbuffer */
> > + a7xx_preempt_start(gpu);
> > +
> > /*
> > * Tell the GMU that we are done touching the GPU and it can start power
> > * management
> > @@ -1557,8 +1825,13 @@ static irqreturn_t a6xx_irq(struct msm_gpu *gpu)
> > if (status & A6XX_RBBM_INT_0_MASK_SWFUSEVIOLATION)
> > a7xx_sw_fuse_violation_irq(gpu);
> >
> > - if (status & A6XX_RBBM_INT_0_MASK_CP_CACHE_FLUSH_TS)
> > + if (status & A6XX_RBBM_INT_0_MASK_CP_CACHE_FLUSH_TS) {
> > msm_gpu_retire(gpu);
> > + a6xx_preempt_trigger(gpu);
> > + }
> > +
> > + if (status & A6XX_RBBM_INT_0_MASK_CP_SW)
> > + a6xx_preempt_irq(gpu);
> >
> > return IRQ_HANDLED;
> > }
> > @@ -2331,6 +2604,8 @@ struct msm_gpu *a6xx_gpu_init(struct drm_device *dev)
> > a6xx_fault_handler);
> >
> > a6xx_calc_ubwc_config(adreno_gpu);
> > + /* Set up the preemption specific bits and pieces for each ringbuffer */
> > + a6xx_preempt_init(gpu);
> >
> > return gpu;
> > }
> > diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gpu.h b/drivers/gpu/drm/msm/adreno/a6xx_gpu.h
> > index e3e5c53ae8af..da10060e38dc 100644
> > --- a/drivers/gpu/drm/msm/adreno/a6xx_gpu.h
> > +++ b/drivers/gpu/drm/msm/adreno/a6xx_gpu.h
> > @@ -12,6 +12,31 @@
> >
> > extern bool hang_debug;
> >
> > +struct cpu_gpu_lock {
> > + uint32_t gpu_req;
> > + uint32_t cpu_req;
> > + uint32_t turn;
> > + union {
> > + struct {
> > + uint16_t list_length;
> > + uint16_t list_offset;
> > + };
> > + struct {
> > + uint8_t ifpc_list_len;
> > + uint8_t preemption_list_len;
> > + uint16_t dynamic_list_len;
> > + };
> > + };
> > + uint64_t regs[62];
> > +};
> > +
> > +struct adreno_reglist_list {
> > + /** @reg: List of register **/
> > + const u32 *regs;
> > + /** @count: Number of registers in the list **/
> > + u32 count;
> > +};
> > +
> > /**
> > * struct a6xx_info - a6xx specific information from device table
> > *
> > @@ -31,6 +56,20 @@ struct a6xx_gpu {
> > uint64_t sqe_iova;
> >
> > struct msm_ringbuffer *cur_ring;
> > + struct msm_ringbuffer *next_ring;
> > +
> > + struct drm_gem_object *preempt_bo[MSM_GPU_MAX_RINGS];
> > + void *preempt[MSM_GPU_MAX_RINGS];
> > + uint64_t preempt_iova[MSM_GPU_MAX_RINGS];
> > + uint32_t last_seqno[MSM_GPU_MAX_RINGS];
> > +
> > + atomic_t preempt_state;
> > + spinlock_t eval_lock;
> > + struct timer_list preempt_timer;
> > +
> > + unsigned int preempt_level;
> > + bool uses_gmem;
> > + bool skip_save_restore;
> >
> > struct a6xx_gmu gmu;
> >
> > @@ -38,6 +77,10 @@ struct a6xx_gpu {
> > uint64_t shadow_iova;
> > uint32_t *shadow;
> >
> > + struct drm_gem_object *pwrup_reglist_bo;
> > + void *pwrup_reglist_ptr;
> > + uint64_t pwrup_reglist_iova;
> > +
> > bool has_whereami;
> >
> > void __iomem *llc_mmio;
> > @@ -49,6 +92,105 @@ struct a6xx_gpu {
> >
> > #define to_a6xx_gpu(x) container_of(x, struct a6xx_gpu, base)
> >
> > +/*
> > + * In order to do lockless preemption we use a simple state machine to progress
> > + * through the process.
> > + *
> > + * PREEMPT_NONE - no preemption in progress. Next state START.
> > + * PREEMPT_START - The trigger is evaluating if preemption is possible. Next
> > + * states: TRIGGERED, NONE
> > + * PREEMPT_FINISH - An intermediate state before moving back to NONE. Next
> > + * state: NONE.
> > + * PREEMPT_TRIGGERED: A preemption has been executed on the hardware. Next
> > + * states: FAULTED, PENDING
> > + * PREEMPT_FAULTED: A preemption timed out (never completed). This will trigger
> > + * recovery. Next state: N/A
> > + * PREEMPT_PENDING: Preemption complete interrupt fired - the callback is
> > + * checking the success of the operation. Next state: FAULTED, NONE.
> > + */
> > +
> > +enum a6xx_preempt_state {
> > + PREEMPT_NONE = 0,
> > + PREEMPT_START,
> > + PREEMPT_FINISH,
> > + PREEMPT_TRIGGERED,
> > + PREEMPT_FAULTED,
> > + PREEMPT_PENDING,
> > +};
> > +
> > +/*
> > + * struct a6xx_preempt_record is a shared buffer between the microcode and the
> > + * CPU to store the state for preemption. The record itself is much larger
> > + * (2112k) but most of that is used by the CP for storage.
> > + *
> > + * There is a preemption record assigned per ringbuffer. When the CPU triggers a
> > + * preemption, it fills out the record with the useful information (wptr, ring
> > + * base, etc) and the microcode uses that information to set up the CP following
> > + * the preemption. When a ring is switched out, the CP will save the ringbuffer
> > + * state back to the record. In this way, once the records are properly set up
> > + * the CPU can quickly switch back and forth between ringbuffers by only
> > + * updating a few registers (often only the wptr).
> > + *
> > + * These are the CPU aware registers in the record:
> > + * @magic: Must always be 0xAE399D6EUL
> > + * @info: Type of the record - written 0 by the CPU, updated by the CP
> > + * @errno: preemption error record
> > + * @data: Data field in YIELD and SET_MARKER packets, Written and used by CP
> > + * @cntl: Value of RB_CNTL written by CPU, save/restored by CP
> > + * @rptr: Value of RB_RPTR written by CPU, save/restored by CP
> > + * @wptr: Value of RB_WPTR written by CPU, save/restored by CP
> > + * @_pad: Reserved/padding
> > + * @rptr_addr: Value of RB_RPTR_ADDR_LO|HI written by CPU, save/restored by CP
> > + * @rbase: Value of RB_BASE written by CPU, save/restored by CP
> > + * @counter: GPU address of the storage area for the preemption counters
>
> doc missing for bv_rptr_addr.
>
> > + */
> > +struct a6xx_preempt_record {
> > + u32 magic;
> > + u32 info;
> > + u32 errno;
> > + u32 data;
> > + u32 cntl;
> > + u32 rptr;
> > + u32 wptr;
> > + u32 _pad;
> > + u64 rptr_addr;
> > + u64 rbase;
> > + u64 counter;
> > + u64 bv_rptr_addr;
> > +};
> > +
> > +#define A6XX_PREEMPT_RECORD_MAGIC 0xAE399D6EUL
> > +
> > +#define PREEMPT_RECORD_SIZE_FALLBACK(size) \
> > + ((size) == 0 ? 4192 * SZ_1K : (size))
> > +
> > +#define PREEMPT_OFFSET_SMMU_INFO 0
> > +#define PREEMPT_OFFSET_PRIV_NON_SECURE (PREEMPT_OFFSET_SMMU_INFO + 4096)
> > +#define PREEMPT_OFFSET_PRIV_SECURE(size) \
> > + (PREEMPT_OFFSET_PRIV_NON_SECURE + PREEMPT_RECORD_SIZE_FALLBACK(size))
> > +#define PREEMPT_SIZE(size) \
> > + (PREEMPT_OFFSET_PRIV_SECURE(size) + PREEMPT_RECORD_SIZE_FALLBACK(size))
> > +
> > +/*
> > + * The preemption counter block is a storage area for the value of the
> > + * preemption counters that are saved immediately before context switch. We
> > + * append it on to the end of the allocation for the preemption record.
> > + */
> > +#define A6XX_PREEMPT_COUNTER_SIZE (16 * 4)
> > +
> > +#define A6XX_PREEMPT_USER_RECORD_SIZE (192 * 1024)
>
> Unused.
>
> > +
> > +struct a7xx_cp_smmu_info {
> > + u32 magic;
> > + u32 _pad4;
> > + u64 ttbr0;
> > + u32 asid;
> > + u32 context_idr;
> > + u32 context_bank;
> > +};
> > +
> > +#define GEN7_CP_SMMU_INFO_MAGIC 0x241350d5UL
> > +
> > /*
> > * Given a register and a count, return a value to program into
> > * REG_CP_PROTECT_REG(n) - this will block both reads and writes for
> > @@ -106,6 +248,25 @@ int a6xx_gmu_init(struct a6xx_gpu *a6xx_gpu, struct device_node *node);
> > int a6xx_gmu_wrapper_init(struct a6xx_gpu *a6xx_gpu, struct device_node *node);
> > void a6xx_gmu_remove(struct a6xx_gpu *a6xx_gpu);
> >
> > +void a6xx_preempt_init(struct msm_gpu *gpu);
> > +void a6xx_preempt_hw_init(struct msm_gpu *gpu);
> > +void a6xx_preempt_trigger(struct msm_gpu *gpu);
> > +void a6xx_preempt_irq(struct msm_gpu *gpu);
> > +void a6xx_preempt_fini(struct msm_gpu *gpu);
> > +int a6xx_preempt_submitqueue_setup(struct msm_gpu *gpu,
> > + struct msm_gpu_submitqueue *queue);
> > +void a6xx_preempt_submitqueue_close(struct msm_gpu *gpu,
> > + struct msm_gpu_submitqueue *queue);
> > +
> > +/* Return true if we are in a preempt state */
> > +static inline bool a6xx_in_preempt(struct a6xx_gpu *a6xx_gpu)
> > +{
> > + int preempt_state = atomic_read(&a6xx_gpu->preempt_state);
>
> I think we should keep a matching barrier before the 'read' similar to the one used in the
> set_preempt_state helper.
Good idea, but for the one case we found where it matters (the
a6xx_flush() vs. updating the ring in a6xx_preempt_irq() race) the
barrier needs to be after the read. The sequence is something like:
Thread A:
a6xx_gpu->cur_ring = a6xx_gpu->next_ring;
a6xx_gpu->preempt_state = PREEMPT_FINISH;
Thread B:
read a6xx_gpu->preempt_state;
read a6xx_gpu->cur_ring;
And if the read to preempt_state returns PREEMPT_FINISH, then we need
cur_ring to reflect the ring we switched to. (I discovered this the
hard way from debugging deadlocks...)
So, maybe add a smp_rmb() before and after, then drop the explicit
barrier in a6xx_flush()?
>
> > +
> > + return !(preempt_state == PREEMPT_NONE ||
> > + preempt_state == PREEMPT_FINISH);
> > +}
> > +
> > void a6xx_gmu_set_freq(struct msm_gpu *gpu, struct dev_pm_opp *opp,
> > bool suspended);
> > unsigned long a6xx_gmu_get_freq(struct msm_gpu *gpu);
> > diff --git a/drivers/gpu/drm/msm/adreno/a6xx_preempt.c b/drivers/gpu/drm/msm/adreno/a6xx_preempt.c
> > new file mode 100644
> > index 000000000000..1caff76aca6e
> > --- /dev/null
> > +++ b/drivers/gpu/drm/msm/adreno/a6xx_preempt.c
> > @@ -0,0 +1,391 @@
> > +// SPDX-License-Identifier: GPL-2.0
> > +/* Copyright (c) 2018, The Linux Foundation. All rights reserved. */
> > +/* Copyright (c) 2023 Collabora, Ltd. */
> > +/* Copyright (c) 2024 Valve Corporation */
> > +
> > +#include "msm_gem.h"
> > +#include "a6xx_gpu.h"
> > +#include "a6xx_gmu.xml.h"
> > +#include "msm_mmu.h"
> > +
> > +/*
> > + * Try to transition the preemption state from old to new. Return
> > + * true on success or false if the original state wasn't 'old'
> > + */
> > +static inline bool try_preempt_state(struct a6xx_gpu *a6xx_gpu,
> > + enum a6xx_preempt_state old, enum a6xx_preempt_state new)
> > +{
> > + enum a6xx_preempt_state cur = atomic_cmpxchg(&a6xx_gpu->preempt_state,
> > + old, new);
> > +
> > + return (cur == old);
> > +}
> > +
> > +/*
> > + * Force the preemption state to the specified state. This is used in cases
> > + * where the current state is known and won't change
> > + */
> > +static inline void set_preempt_state(struct a6xx_gpu *gpu,
> > + enum a6xx_preempt_state new)
> > +{
> > + /*
> > + * preempt_state may be read by other cores trying to trigger a
> > + * preemption or in the interrupt handler so barriers are needed
> > + * before...
> > + */
> > + smp_mb__before_atomic();
> > + atomic_set(&gpu->preempt_state, new);
> > + /* ... and after*/
> > + smp_mb__after_atomic();
> > +}
> > +
> > +/* Write the most recent wptr for the given ring into the hardware */
> > +static inline void update_wptr(struct msm_gpu *gpu, struct msm_ringbuffer *ring)
> > +{
> > + unsigned long flags;
> > + uint32_t wptr;
> > +
> > + if (!ring)
>
> Is this ever true?
>
> > + return;
> > +
> > + spin_lock_irqsave(&ring->preempt_lock, flags);
> > +
> > + if (ring->skip_inline_wptr) {
> > + wptr = get_wptr(ring);
> > +
> > + gpu_write(gpu, REG_A6XX_CP_RB_WPTR, wptr);
> > +
> > + ring->skip_inline_wptr = false;
> > + }
> > +
> > + spin_unlock_irqrestore(&ring->preempt_lock, flags);
> > +}
> > +
> > +/* Return the highest priority ringbuffer with something in it */
> > +static struct msm_ringbuffer *get_next_ring(struct msm_gpu *gpu)
> > +{
> > + struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu);
> > + struct a6xx_gpu *a6xx_gpu = to_a6xx_gpu(adreno_gpu);
> > +
> > + unsigned long flags;
> > + int i;
> > +
> > + for (i = 0; i < gpu->nr_rings; i++) {
> > + bool empty;
> > + struct msm_ringbuffer *ring = gpu->rb[i];
> > +
> > + spin_lock_irqsave(&ring->preempt_lock, flags);
> > + empty = (get_wptr(ring) == gpu->funcs->get_rptr(gpu, ring));
> > + if (!empty && ring == a6xx_gpu->cur_ring)
> > + empty = ring->memptrs->fence == a6xx_gpu->last_seqno[i];
> > + spin_unlock_irqrestore(&ring->preempt_lock, flags);
> > +
> > + if (!empty)
> > + return ring;
> > + }
> > +
> > + return NULL;
> > +}
> > +
> > +static void a6xx_preempt_timer(struct timer_list *t)
> > +{
> > + struct a6xx_gpu *a6xx_gpu = from_timer(a6xx_gpu, t, preempt_timer);
> > + struct msm_gpu *gpu = &a6xx_gpu->base.base;
> > + struct drm_device *dev = gpu->dev;
> > +
> > + if (!try_preempt_state(a6xx_gpu, PREEMPT_TRIGGERED, PREEMPT_FAULTED))
> > + return;
> > +
> > + dev_err(dev->dev, "%s: preemption timed out\n", gpu->name);
> > + kthread_queue_work(gpu->worker, &gpu->recover_work);
> > +}
> > +
> > +void a6xx_preempt_irq(struct msm_gpu *gpu)
> > +{
> > + uint32_t status;
> > + struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu);
> > + struct a6xx_gpu *a6xx_gpu = to_a6xx_gpu(adreno_gpu);
> > + struct drm_device *dev = gpu->dev;
> > +
> > + if (!try_preempt_state(a6xx_gpu, PREEMPT_TRIGGERED, PREEMPT_PENDING))
> > + return;
> > +
> > + /* Delete the preemption watchdog timer */
> > + del_timer(&a6xx_gpu->preempt_timer);
> > +
> > + /*
> > + * The hardware should be setting the stop bit of CP_CONTEXT_SWITCH_CNTL
> > + * to zero before firing the interrupt, but there is a non zero chance
> > + * of a hardware condition or a software race that could set it again
> > + * before we have a chance to finish. If that happens, log and go for
> > + * recovery
> > + */
> > + status = gpu_read(gpu, REG_A6XX_CP_CONTEXT_SWITCH_CNTL);
> > + if (unlikely(status & A6XX_CP_CONTEXT_SWITCH_CNTL_STOP)) {
> > + DRM_DEV_ERROR(&gpu->pdev->dev,
> > + "!!!!!!!!!!!!!!!! preemption faulted !!!!!!!!!!!!!! irq\n");
> > + set_preempt_state(a6xx_gpu, PREEMPT_FAULTED);
> > + dev_err(dev->dev, "%s: Preemption failed to complete\n",
> > + gpu->name);
> > + kthread_queue_work(gpu->worker, &gpu->recover_work);
> > + return;
> > + }
> > +
> > + a6xx_gpu->cur_ring = a6xx_gpu->next_ring;
> > + a6xx_gpu->next_ring = NULL;
> > +
> > + /* Make sure the write to cur_ring is posted before the change in state */
> > + wmb();
>
> Not needed. set_preempt_state has the necessary barrier.
>
> > +
> > + set_preempt_state(a6xx_gpu, PREEMPT_FINISH);
> > +
> > + update_wptr(gpu, a6xx_gpu->cur_ring);
> > +
> > + set_preempt_state(a6xx_gpu, PREEMPT_NONE);
> > +
> > + /*
> > + * Retrigger preemption to avoid a deadlock that might occur when preemption
> > + * is skipped due to it being already in flight when requested.
> > + */
> > + a6xx_preempt_trigger(gpu);
> > +}
> > +
> > +void a6xx_preempt_hw_init(struct msm_gpu *gpu)
> > +{
> > + struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu);
> > + struct a6xx_gpu *a6xx_gpu = to_a6xx_gpu(adreno_gpu);
> > + int i;
> > +
> > + /* No preemption if we only have one ring */
> > + if (gpu->nr_rings == 1)
> > + return;
> > +
> > + for (i = 0; i < gpu->nr_rings; i++) {
> > + struct a6xx_preempt_record *record_ptr =
> > + a6xx_gpu->preempt[i] + PREEMPT_OFFSET_PRIV_NON_SECURE;
> > + record_ptr->wptr = 0;
> > + record_ptr->rptr = 0;
> > + record_ptr->rptr_addr = shadowptr(a6xx_gpu, gpu->rb[i]);
> > + record_ptr->info = 0;
> > + record_ptr->data = 0;
> > + record_ptr->rbase = gpu->rb[i]->iova;
> > + }
> > +
> > + /* Write a 0 to signal that we aren't switching pagetables */
> > + gpu_write64(gpu, REG_A6XX_CP_CONTEXT_SWITCH_SMMU_INFO, 0);
> > +
> > + /* Enable the GMEM save/restore feature for preemption */
> > + gpu_write(gpu, REG_A6XX_RB_CONTEXT_SWITCH_GMEM_SAVE_RESTORE, 0x1);
> > +
> > + /* Reset the preemption state */
> > + set_preempt_state(a6xx_gpu, PREEMPT_NONE);
> > +
> > + spin_lock_init(&a6xx_gpu->eval_lock);
> > +
> > + /* Always come up on rb 0 */
> > + a6xx_gpu->cur_ring = gpu->rb[0];
> > +}
> > +
> > +void a6xx_preempt_trigger(struct msm_gpu *gpu)
> > +{
> > + struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu);
> > + struct a6xx_gpu *a6xx_gpu = to_a6xx_gpu(adreno_gpu);
> > + u64 preempt_offset_priv_secure;
> > + unsigned long flags;
> > + struct msm_ringbuffer *ring;
> > + unsigned int cntl;
> > +
> > + if (gpu->nr_rings == 1)
> > + return;
> > +
> > + /*
> > + * Lock to make sure another thread attempting preemption doesn't skip it
> > + * while we are still evaluating the next ring. This makes sure the other
> > + * thread does start preemption if we abort it and avoids a soft lock.
> > + */
> > + spin_lock_irqsave(&a6xx_gpu->eval_lock, flags);
> > +
> > + /*
> > + * Try to start preemption by moving from NONE to START. If
> > + * unsuccessful, a preemption is already in flight
> > + */
> > + if (!try_preempt_state(a6xx_gpu, PREEMPT_NONE, PREEMPT_START)) {
> > + spin_unlock_irqrestore(&a6xx_gpu->eval_lock, flags);
> > + return;
> > + }
> > +
> > + cntl = A6XX_CP_CONTEXT_SWITCH_CNTL_LEVEL(a6xx_gpu->preempt_level);
> > +
> > + if (a6xx_gpu->skip_save_restore)
> > + cntl |= A6XX_CP_CONTEXT_SWITCH_CNTL_SKIP_SAVE_RESTORE;
> > +
> > + if (a6xx_gpu->uses_gmem)
> > + cntl |= A6XX_CP_CONTEXT_SWITCH_CNTL_USES_GMEM;
> > +
> > + cntl |= A6XX_CP_CONTEXT_SWITCH_CNTL_STOP;
> > +
> > + /* Get the next ring to preempt to */
> > + ring = get_next_ring(gpu);
> > +
> > + /*
> > + * If no ring is populated or the highest priority ring is the current
> > + * one do nothing except to update the wptr to the latest and greatest
> > + */
> > + if (!ring || (a6xx_gpu->cur_ring == ring)) {
> > + set_preempt_state(a6xx_gpu, PREEMPT_FINISH);
> > + update_wptr(gpu, a6xx_gpu->cur_ring);
> > + set_preempt_state(a6xx_gpu, PREEMPT_NONE);
> > + spin_unlock_irqrestore(&a6xx_gpu->eval_lock, flags);
> > + return;
> > + }
> > +
> > + spin_unlock_irqrestore(&a6xx_gpu->eval_lock, flags);
> > +
> > + spin_lock_irqsave(&ring->preempt_lock, flags);
> > +
> > + struct a7xx_cp_smmu_info *smmu_info_ptr =
> > + a6xx_gpu->preempt[ring->id] + PREEMPT_OFFSET_SMMU_INFO;
> > + struct a6xx_preempt_record *record_ptr =
> > + a6xx_gpu->preempt[ring->id] + PREEMPT_OFFSET_PRIV_NON_SECURE;
> > + u64 ttbr0 = ring->memptrs->ttbr0;
> > + u32 context_idr = ring->memptrs->context_idr;
> > +
> > + smmu_info_ptr->ttbr0 = ttbr0;
> > + smmu_info_ptr->context_idr = context_idr;
> > + record_ptr->wptr = get_wptr(ring);
> > +
> > + /*
> > + * The GPU will write the wptr we set above when we preempt. Reset
> > + * skip_inline_wptr to make sure that we don't write WPTR to the same
> > + * thing twice. It's still possible subsequent submissions will update
> > + * wptr again, in which case they will set the flag to true. This has
> > + * to be protected by the lock for setting the flag and updating wptr
> > + * to be atomic.
> > + */
> > + ring->skip_inline_wptr = false;
> > +
> > + spin_unlock_irqrestore(&ring->preempt_lock, flags);
> > +
> > + gpu_write64(gpu,
> > + REG_A6XX_CP_CONTEXT_SWITCH_SMMU_INFO,
> > + a6xx_gpu->preempt_iova[ring->id] + PREEMPT_OFFSET_SMMU_INFO);
> > +
> > + gpu_write64(gpu,
> > + REG_A6XX_CP_CONTEXT_SWITCH_PRIV_NON_SECURE_RESTORE_ADDR,
> > + a6xx_gpu->preempt_iova[ring->id] + PREEMPT_OFFSET_PRIV_NON_SECURE);
> > +
> > + preempt_offset_priv_secure =
> > + PREEMPT_OFFSET_PRIV_SECURE(adreno_gpu->info->preempt_record_size);
> > + gpu_write64(gpu,
> > + REG_A6XX_CP_CONTEXT_SWITCH_PRIV_SECURE_RESTORE_ADDR,
> > + a6xx_gpu->preempt_iova[ring->id] + preempt_offset_priv_secure);
>
> Secure buffers are not supported currently, so we can skip this and the
> context record allocation. Anyway this has to be a separate buffer
> mapped in secure pagetable which don't currently have. We can skip the
> same in pseudo register packet too.
>
> > +
> > + a6xx_gpu->next_ring = ring;
> > +
> > + /* Start a timer to catch a stuck preemption */
> > + mod_timer(&a6xx_gpu->preempt_timer, jiffies + msecs_to_jiffies(10000));
> > +
> > + /* Set the preemption state to triggered */
> > + set_preempt_state(a6xx_gpu, PREEMPT_TRIGGERED);
> > +
> > + /* Make sure any previous writes to WPTR are posted */
> > + gpu_read(gpu, REG_A6XX_CP_RB_WPTR);
> > +
> > + /* Make sure everything is written before hitting the button */
> > + wmb();
>
> This and the above read back looks unnecessary. All writes to gpu are
> ordered anyway.
I thought the whole reason for
https://lore.kernel.org/linux-kernel/20240508-topic-adreno-v1-1-1babd05c119d@linaro.org/
is that memory-mapped writes to different GPU registers are *not*
necessarily ordered from the GPU's perspective (even if they are from
the CPU). That's why I suggested the readback. Or am I missing
something?
>
> > +
> > + /* Trigger the preemption */
> > + gpu_write(gpu, REG_A6XX_CP_CONTEXT_SWITCH_CNTL, cntl);
> > +}
> > +
> > +static int preempt_init_ring(struct a6xx_gpu *a6xx_gpu,
> > + struct msm_ringbuffer *ring)
> > +{
> > + struct adreno_gpu *adreno_gpu = &a6xx_gpu->base;
> > + struct msm_gpu *gpu = &adreno_gpu->base;
> > + struct drm_gem_object *bo = NULL;
> > + phys_addr_t ttbr;
> > + u64 iova = 0;
> > + void *ptr;
> > + int asid;
> > +
> > + ptr = msm_gem_kernel_new(gpu->dev,
> > + PREEMPT_SIZE(adreno_gpu->info->preempt_record_size),
> > + MSM_BO_WC | MSM_BO_MAP_PRIV, gpu->aspace, &bo, &iova);
>
> set a name?
>
> > +
> > + memset(ptr, 0, PREEMPT_SIZE(adreno_gpu->info->preempt_record_size));
> > +
> > + if (IS_ERR(ptr))
> > + return PTR_ERR(ptr);
> > +
> > + a6xx_gpu->preempt_bo[ring->id] = bo;
> > + a6xx_gpu->preempt_iova[ring->id] = iova;
> > + a6xx_gpu->preempt[ring->id] = ptr;
> > +
> > + struct a7xx_cp_smmu_info *smmu_info_ptr = ptr + PREEMPT_OFFSET_SMMU_INFO;
> > + struct a6xx_preempt_record *record_ptr = ptr + PREEMPT_OFFSET_PRIV_NON_SECURE;
> > +
> > + msm_iommu_pagetable_params(gpu->aspace->mmu, &ttbr, &asid);
> > +
> > + smmu_info_ptr->magic = GEN7_CP_SMMU_INFO_MAGIC;
> > + smmu_info_ptr->ttbr0 = ttbr;
> > + smmu_info_ptr->asid = 0xdecafbad;
> > + smmu_info_ptr->context_idr = 0;
> > +
> > + /* Set up the defaults on the preemption record */
> > + record_ptr->magic = A6XX_PREEMPT_RECORD_MAGIC;
> > + record_ptr->info = 0;
> > + record_ptr->data = 0;
> > + record_ptr->rptr = 0;
> > + record_ptr->wptr = 0;
> > + record_ptr->cntl = MSM_GPU_RB_CNTL_DEFAULT;
> > + record_ptr->rbase = ring->iova;
> > + record_ptr->counter = 0;
> > + record_ptr->bv_rptr_addr = rbmemptr(ring, bv_rptr);
> > +
> > + return 0;
> > +}
> > +
> > +void a6xx_preempt_fini(struct msm_gpu *gpu)
> > +{
> > + struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu);
> > + struct a6xx_gpu *a6xx_gpu = to_a6xx_gpu(adreno_gpu);
> > + int i;
> > +
> > + for (i = 0; i < gpu->nr_rings; i++)
> > + msm_gem_kernel_put(a6xx_gpu->preempt_bo[i], gpu->aspace);
> > +}
> > +
> > +void a6xx_preempt_init(struct msm_gpu *gpu)
> > +{
> > + struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu);
> > + struct a6xx_gpu *a6xx_gpu = to_a6xx_gpu(adreno_gpu);
> > + int i;
> > +
> > + /* No preemption if we only have one ring */
> > + if (gpu->nr_rings <= 1)
> > + return;
> > +
> > + for (i = 0; i < gpu->nr_rings; i++) {
> > + if (preempt_init_ring(a6xx_gpu, gpu->rb[i]))
> > + goto fail;
> > + }
> > +
> > + /* TODO: make this configurable? */
> > + a6xx_gpu->preempt_level = 1;
> > + a6xx_gpu->uses_gmem = 1;
> > + a6xx_gpu->skip_save_restore = 1;
> > +
> > + timer_setup(&a6xx_gpu->preempt_timer, a6xx_preempt_timer, 0);
> > +
> > + return;
> > +fail:
>
> Log an error so that preemption is not disabled silently?
>
> > + /*
> > + * On any failure our adventure is over. Clean up and
> > + * set nr_rings to 1 to force preemption off
> > + */
> > + a6xx_preempt_fini(gpu);
> > + gpu->nr_rings = 1;
> > +
> > + return;
> > +}
> > diff --git a/drivers/gpu/drm/msm/msm_ringbuffer.h b/drivers/gpu/drm/msm/msm_ringbuffer.h
> > index 40791b2ade46..7dde6a312511 100644
> > --- a/drivers/gpu/drm/msm/msm_ringbuffer.h
> > +++ b/drivers/gpu/drm/msm/msm_ringbuffer.h
> > @@ -36,6 +36,7 @@ struct msm_rbmemptrs {
> >
> > volatile struct msm_gpu_submit_stats stats[MSM_GPU_SUBMIT_STATS_COUNT];
> > volatile u64 ttbr0;
> > + volatile u32 context_idr;
> > };
> >
> > struct msm_cp_state {
> > @@ -100,6 +101,12 @@ struct msm_ringbuffer {
> > * preemption. Can be aquired from irq context.
> > */
> > spinlock_t preempt_lock;
> > +
> > + /*
> > + * Whether we skipped writing wptr and it needs to be updated in the
> > + * future when the ring becomes current.
> > + */
> > + bool skip_inline_wptr;
>
> nit: does 'restore_wptr' makes more sense? Or something better? Basically, name it based
> on the future action?
>
> -Akhil
>
> > };
> >
> > struct msm_ringbuffer *msm_ringbuffer_new(struct msm_gpu *gpu, int id,
> >
> > --
> > 2.46.0
> >
^ permalink raw reply [flat|nested] 32+ messages in thread* Re: [PATCH v3 04/10] drm/msm/A6xx: Implement preemption for A7XX targets
2024-09-09 12:22 ` Connor Abbott
@ 2024-09-10 16:43 ` Akhil P Oommen
2024-09-12 15:48 ` Antonino Maniscalco
0 siblings, 1 reply; 32+ messages in thread
From: Akhil P Oommen @ 2024-09-10 16:43 UTC (permalink / raw)
To: Connor Abbott
Cc: Antonino Maniscalco, Rob Clark, Sean Paul, Konrad Dybcio,
Abhinav Kumar, Dmitry Baryshkov, Marijn Suijten, David Airlie,
Daniel Vetter, Maarten Lankhorst, Maxime Ripard,
Thomas Zimmermann, Jonathan Corbet, linux-arm-msm, dri-devel,
freedreno, linux-kernel, linux-doc, Sharat Masetty,
Neil Armstrong
On Mon, Sep 09, 2024 at 01:22:22PM +0100, Connor Abbott wrote:
> On Fri, Sep 6, 2024 at 9:03 PM Akhil P Oommen <quic_akhilpo@quicinc.com> wrote:
> >
> > On Thu, Sep 05, 2024 at 04:51:22PM +0200, Antonino Maniscalco wrote:
> > > This patch implements preemption feature for A6xx targets, this allows
> > > the GPU to switch to a higher priority ringbuffer if one is ready. A6XX
> > > hardware as such supports multiple levels of preemption granularities,
> > > ranging from coarse grained(ringbuffer level) to a more fine grained
> > > such as draw-call level or a bin boundary level preemption. This patch
> > > enables the basic preemption level, with more fine grained preemption
> > > support to follow.
> > >
> > > Signed-off-by: Sharat Masetty <smasetty@codeaurora.org>
> > > Signed-off-by: Antonino Maniscalco <antomani103@gmail.com>
> > > Tested-by: Neil Armstrong <neil.armstrong@linaro.org> # on SM8650-QRD
> > > ---
> > > drivers/gpu/drm/msm/Makefile | 1 +
> > > drivers/gpu/drm/msm/adreno/a6xx_gpu.c | 293 +++++++++++++++++++++-
> > > drivers/gpu/drm/msm/adreno/a6xx_gpu.h | 161 ++++++++++++
> > > drivers/gpu/drm/msm/adreno/a6xx_preempt.c | 391 ++++++++++++++++++++++++++++++
> > > drivers/gpu/drm/msm/msm_ringbuffer.h | 7 +
> > > 5 files changed, 844 insertions(+), 9 deletions(-)
> > >
> > > diff --git a/drivers/gpu/drm/msm/Makefile b/drivers/gpu/drm/msm/Makefile
> > > index f5e2838c6a76..32e915109a59 100644
> > > --- a/drivers/gpu/drm/msm/Makefile
> > > +++ b/drivers/gpu/drm/msm/Makefile
> > > @@ -23,6 +23,7 @@ adreno-y := \
> > > adreno/a6xx_gpu.o \
> > > adreno/a6xx_gmu.o \
> > > adreno/a6xx_hfi.o \
> > > + adreno/a6xx_preempt.o \
> > >
> > > adreno-$(CONFIG_DEBUG_FS) += adreno/a5xx_debugfs.o \
> > >
> > > diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
> > > index 32a4faa93d7f..ed0b138a2d66 100644
> > > --- a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
> > > +++ b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
> > > @@ -16,6 +16,83 @@
> > >
> > > #define GPU_PAS_ID 13
> > >
> > > +/* IFPC & Preemption static powerup restore list */
> > > +static const uint32_t a7xx_pwrup_reglist[] = {
> > > + REG_A6XX_UCHE_TRAP_BASE,
> > > + REG_A6XX_UCHE_TRAP_BASE + 1,
> > > + REG_A6XX_UCHE_WRITE_THRU_BASE,
> > > + REG_A6XX_UCHE_WRITE_THRU_BASE + 1,
> > > + REG_A6XX_UCHE_GMEM_RANGE_MIN,
> > > + REG_A6XX_UCHE_GMEM_RANGE_MIN + 1,
> > > + REG_A6XX_UCHE_GMEM_RANGE_MAX,
> > > + REG_A6XX_UCHE_GMEM_RANGE_MAX + 1,
> > > + REG_A6XX_UCHE_CACHE_WAYS,
> > > + REG_A6XX_UCHE_MODE_CNTL,
> > > + REG_A6XX_RB_NC_MODE_CNTL,
> > > + REG_A6XX_RB_CMP_DBG_ECO_CNTL,
> > > + REG_A7XX_GRAS_NC_MODE_CNTL,
> > > + REG_A6XX_RB_CONTEXT_SWITCH_GMEM_SAVE_RESTORE,
> > > + REG_A6XX_UCHE_GBIF_GX_CONFIG,
> > > + REG_A6XX_UCHE_CLIENT_PF,
> >
> > REG_A6XX_TPL1_DBG_ECO_CNTL1 here. A friendly warning, missing a register
> > in this list (and the below list) will lead to a very frustrating debug.
> >
> > > +};
> > > +
> > > +static const uint32_t a7xx_ifpc_pwrup_reglist[] = {
> > > + REG_A6XX_TPL1_NC_MODE_CNTL,
> > > + REG_A6XX_SP_NC_MODE_CNTL,
> > > + REG_A6XX_CP_DBG_ECO_CNTL,
> > > + REG_A6XX_CP_PROTECT_CNTL,
> > > + REG_A6XX_CP_PROTECT(0),
> > > + REG_A6XX_CP_PROTECT(1),
> > > + REG_A6XX_CP_PROTECT(2),
> > > + REG_A6XX_CP_PROTECT(3),
> > > + REG_A6XX_CP_PROTECT(4),
> > > + REG_A6XX_CP_PROTECT(5),
> > > + REG_A6XX_CP_PROTECT(6),
> > > + REG_A6XX_CP_PROTECT(7),
> > > + REG_A6XX_CP_PROTECT(8),
> > > + REG_A6XX_CP_PROTECT(9),
> > > + REG_A6XX_CP_PROTECT(10),
> > > + REG_A6XX_CP_PROTECT(11),
> > > + REG_A6XX_CP_PROTECT(12),
> > > + REG_A6XX_CP_PROTECT(13),
> > > + REG_A6XX_CP_PROTECT(14),
> > > + REG_A6XX_CP_PROTECT(15),
> > > + REG_A6XX_CP_PROTECT(16),
> > > + REG_A6XX_CP_PROTECT(17),
> > > + REG_A6XX_CP_PROTECT(18),
> > > + REG_A6XX_CP_PROTECT(19),
> > > + REG_A6XX_CP_PROTECT(20),
> > > + REG_A6XX_CP_PROTECT(21),
> > > + REG_A6XX_CP_PROTECT(22),
> > > + REG_A6XX_CP_PROTECT(23),
> > > + REG_A6XX_CP_PROTECT(24),
> > > + REG_A6XX_CP_PROTECT(25),
> > > + REG_A6XX_CP_PROTECT(26),
> > > + REG_A6XX_CP_PROTECT(27),
> > > + REG_A6XX_CP_PROTECT(28),
> > > + REG_A6XX_CP_PROTECT(29),
> > > + REG_A6XX_CP_PROTECT(30),
> > > + REG_A6XX_CP_PROTECT(31),
> > > + REG_A6XX_CP_PROTECT(32),
> > > + REG_A6XX_CP_PROTECT(33),
> > > + REG_A6XX_CP_PROTECT(34),
> > > + REG_A6XX_CP_PROTECT(35),
> > > + REG_A6XX_CP_PROTECT(36),
> > > + REG_A6XX_CP_PROTECT(37),
> > > + REG_A6XX_CP_PROTECT(38),
> > > + REG_A6XX_CP_PROTECT(39),
> > > + REG_A6XX_CP_PROTECT(40),
> > > + REG_A6XX_CP_PROTECT(41),
> > > + REG_A6XX_CP_PROTECT(42),
> > > + REG_A6XX_CP_PROTECT(43),
> > > + REG_A6XX_CP_PROTECT(44),
> > > + REG_A6XX_CP_PROTECT(45),
> > > + REG_A6XX_CP_PROTECT(46),
> > > + REG_A6XX_CP_PROTECT(47),
> > > + REG_A6XX_CP_AHB_CNTL,
> > > +};
> > > +
> > > +
> > > static inline bool _a6xx_check_idle(struct msm_gpu *gpu)
> > > {
> > > struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu);
> > > @@ -68,6 +145,8 @@ static void update_shadow_rptr(struct msm_gpu *gpu, struct msm_ringbuffer *ring)
> > >
> > > static void a6xx_flush(struct msm_gpu *gpu, struct msm_ringbuffer *ring)
> > > {
> > > + struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu);
> > > + struct a6xx_gpu *a6xx_gpu = to_a6xx_gpu(adreno_gpu);
> > > uint32_t wptr;
> > > unsigned long flags;
> > >
> > > @@ -81,12 +160,26 @@ static void a6xx_flush(struct msm_gpu *gpu, struct msm_ringbuffer *ring)
> > > /* Make sure to wrap wptr if we need to */
> > > wptr = get_wptr(ring);
> > >
> > > - spin_unlock_irqrestore(&ring->preempt_lock, flags);
> > > -
> > > /* Make sure everything is posted before making a decision */
> > > mb();
> >
> > This looks unnecessary.
> >
> > >
> > > - gpu_write(gpu, REG_A6XX_CP_RB_WPTR, wptr);
> > > + /* Update HW if this is the current ring and we are not in preempt*/
> > > + if (!a6xx_in_preempt(a6xx_gpu)) {
> > > + /*
> > > + * Order the reads of the preempt state and cur_ring. This
> > > + * matches the barrier after writing cur_ring.
> > > + */
> > > + rmb();
> >
> > we can use the lighter smp variant here.
> >
> > > +
> > > + if (a6xx_gpu->cur_ring == ring)
> > > + gpu_write(gpu, REG_A6XX_CP_RB_WPTR, wptr);
> > > + else
> > > + ring->skip_inline_wptr = true;
> > > + } else {
> > > + ring->skip_inline_wptr = true;
> > > + }
> > > +
> > > + spin_unlock_irqrestore(&ring->preempt_lock, flags);
> > > }
> > >
> > > static void get_stats_counter(struct msm_ringbuffer *ring, u32 counter,
> > > @@ -138,12 +231,14 @@ static void a6xx_set_pagetable(struct a6xx_gpu *a6xx_gpu,
> >
> > set_pagetable checks "cur_ctx_seqno" to see if pt switch is needed or
> > not. This is currently not tracked separately for each ring. Can you
> > please check that?
> >
> > I wonder why that didn't cause any gpu errors in testing. Not sure if I
> > am missing something.
> >
> > >
> > > /*
> > > * Write the new TTBR0 to the memstore. This is good for debugging.
> > > + * Needed for preemption
> > > */
> > > - OUT_PKT7(ring, CP_MEM_WRITE, 4);
> > > + OUT_PKT7(ring, CP_MEM_WRITE, 5);
> > > OUT_RING(ring, CP_MEM_WRITE_0_ADDR_LO(lower_32_bits(memptr)));
> > > OUT_RING(ring, CP_MEM_WRITE_1_ADDR_HI(upper_32_bits(memptr)));
> > > OUT_RING(ring, lower_32_bits(ttbr));
> > > - OUT_RING(ring, (asid << 16) | upper_32_bits(ttbr));
> > > + OUT_RING(ring, upper_32_bits(ttbr));
> > > + OUT_RING(ring, ctx->seqno);
> > >
> > > /*
> > > * Sync both threads after switching pagetables and enable BR only
> > > @@ -268,6 +363,43 @@ static void a6xx_submit(struct msm_gpu *gpu, struct msm_gem_submit *submit)
> > > a6xx_flush(gpu, ring);
> > > }
> > >
> > > +static void a6xx_emit_set_pseudo_reg(struct msm_ringbuffer *ring,
> > > + struct a6xx_gpu *a6xx_gpu, struct msm_gpu_submitqueue *queue)
> > > +{
> > > + u64 preempt_offset_priv_secure;
> > > +
> > > + OUT_PKT7(ring, CP_SET_PSEUDO_REG, 15);
> > > +
> > > + OUT_RING(ring, SMMU_INFO);
> > > + /* don't save SMMU, we write the record from the kernel instead */
> > > + OUT_RING(ring, 0);
> > > + OUT_RING(ring, 0);
> > > +
> > > + /* privileged and non secure buffer save */
> > > + OUT_RING(ring, NON_SECURE_SAVE_ADDR);
> > > + OUT_RING(ring, lower_32_bits(
> > > + a6xx_gpu->preempt_iova[ring->id] + PREEMPT_OFFSET_PRIV_NON_SECURE));
> > > + OUT_RING(ring, upper_32_bits(
> > > + a6xx_gpu->preempt_iova[ring->id] + PREEMPT_OFFSET_PRIV_NON_SECURE));
> > > + OUT_RING(ring, SECURE_SAVE_ADDR);
> > > + preempt_offset_priv_secure =
> > > + PREEMPT_OFFSET_PRIV_SECURE(a6xx_gpu->base.info->preempt_record_size);
> > > + OUT_RING(ring, lower_32_bits(
> > > + a6xx_gpu->preempt_iova[ring->id] + preempt_offset_priv_secure));
> > > + OUT_RING(ring, upper_32_bits(
> > > + a6xx_gpu->preempt_iova[ring->id] + preempt_offset_priv_secure));
> > > +
> > > + /* user context buffer save, seems to be unnused by fw */
> > > + OUT_RING(ring, NON_PRIV_SAVE_ADDR);
> > > + OUT_RING(ring, 0);
> > > + OUT_RING(ring, 0);
> > > +
> > > + OUT_RING(ring, COUNTER);
> > > + /* seems OK to set to 0 to disable it */
> > > + OUT_RING(ring, 0);
> > > + OUT_RING(ring, 0);
> > > +}
> > > +
> > > static void a7xx_submit(struct msm_gpu *gpu, struct msm_gem_submit *submit)
> > > {
> > > unsigned int index = submit->seqno % MSM_GPU_SUBMIT_STATS_COUNT;
> > > @@ -283,6 +415,13 @@ static void a7xx_submit(struct msm_gpu *gpu, struct msm_gem_submit *submit)
> > > OUT_PKT7(ring, CP_THREAD_CONTROL, 1);
> > > OUT_RING(ring, CP_THREAD_CONTROL_0_SYNC_THREADS | CP_SET_THREAD_BR);
> > >
> > > + /*
> > > + * If preemption is enabled, then set the pseudo register for the save
> > > + * sequence
> > > + */
> > > + if (gpu->nr_rings > 1)
> > > + a6xx_emit_set_pseudo_reg(ring, a6xx_gpu, submit->queue);
> >
> > Can we move this after set_pagetable()?
> >
> > > +
> > > a6xx_set_pagetable(a6xx_gpu, ring, submit->queue->ctx);
> > >
> > > get_stats_counter(ring, REG_A7XX_RBBM_PERFCTR_CP(0),
> > > @@ -376,6 +515,8 @@ static void a7xx_submit(struct msm_gpu *gpu, struct msm_gem_submit *submit)
> > > OUT_RING(ring, upper_32_bits(rbmemptr(ring, bv_fence)));
> > > OUT_RING(ring, submit->seqno);
> > >
> > > + a6xx_gpu->last_seqno[ring->id] = submit->seqno;
> > > +
> > > /* write the ringbuffer timestamp */
> > > OUT_PKT7(ring, CP_EVENT_WRITE, 4);
> > > OUT_RING(ring, CACHE_CLEAN | CP_EVENT_WRITE_0_IRQ | BIT(27));
> > > @@ -389,10 +530,32 @@ static void a7xx_submit(struct msm_gpu *gpu, struct msm_gem_submit *submit)
> > > OUT_PKT7(ring, CP_SET_MARKER, 1);
> > > OUT_RING(ring, 0x100); /* IFPC enable */
> > >
> > > + /* If preemption is enabled */
> > > + if (gpu->nr_rings > 1) {
> > > + /* Yield the floor on command completion */
> > > + OUT_PKT7(ring, CP_CONTEXT_SWITCH_YIELD, 4);
> > > +
> > > + /*
> > > + * If dword[2:1] are non zero, they specify an address for
> > > + * the CP to write the value of dword[3] to on preemption
> > > + * complete. Write 0 to skip the write
> > > + */
> > > + OUT_RING(ring, 0x00);
> > > + OUT_RING(ring, 0x00);
> > > + /* Data value - not used if the address above is 0 */
> > > + OUT_RING(ring, 0x01);
> > > + /* generate interrupt on preemption completion */
> > > + OUT_RING(ring, 0x00);
> > > + }
> > > +
> > > +
> > > trace_msm_gpu_submit_flush(submit,
> > > gpu_read64(gpu, REG_A6XX_CP_ALWAYS_ON_COUNTER));
> > >
> > > a6xx_flush(gpu, ring);
> > > +
> > > + /* Check to see if we need to start preemption */
> > > + a6xx_preempt_trigger(gpu);
> > > }
> > >
> > > static void a6xx_set_hwcg(struct msm_gpu *gpu, bool state)
> > > @@ -588,6 +751,89 @@ static void a6xx_set_ubwc_config(struct msm_gpu *gpu)
> > > adreno_gpu->ubwc_config.min_acc_len << 23 | hbb_lo << 21);
> > > }
> > >
> > > +static void a7xx_patch_pwrup_reglist(struct msm_gpu *gpu)
> > > +{
> > > + struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu);
> > > + struct a6xx_gpu *a6xx_gpu = to_a6xx_gpu(adreno_gpu);
> > > + struct adreno_reglist_list reglist[2];
> > > + void *ptr = a6xx_gpu->pwrup_reglist_ptr;
> > > + struct cpu_gpu_lock *lock = ptr;
> > > + u32 *dest = (u32 *)&lock->regs[0];
> > > + int i, j;
> > > +
> > This sequence is required only once. We can use a flag to check and bail out
> > next time.
> >
> > > + lock->gpu_req = lock->cpu_req = lock->turn = 0;
> > > + lock->ifpc_list_len = ARRAY_SIZE(a7xx_ifpc_pwrup_reglist);
> > > + lock->preemption_list_len = ARRAY_SIZE(a7xx_pwrup_reglist);
> > > +
> > > + /* Static IFPC-only registers */
> > > + reglist[0].regs = a7xx_ifpc_pwrup_reglist;
> > > + reglist[0].count = ARRAY_SIZE(a7xx_ifpc_pwrup_reglist);
> > > + lock->ifpc_list_len = reglist[0].count;
> > > +
> > > + /* Static IFPC + preemption registers */
> > > + reglist[1].regs = a7xx_pwrup_reglist;
> > > + reglist[1].count = ARRAY_SIZE(a7xx_pwrup_reglist);
> > > + lock->preemption_list_len = reglist[1].count;
> > > +
> > > + /*
> > > + * For each entry in each of the lists, write the offset and the current
> > > + * register value into the GPU buffer
> > > + */
> > > + for (i = 0; i < 2; i++) {
> > > + const u32 *r = reglist[i].regs;
> > > +
> > > + for (j = 0; j < reglist[i].count; j++) {
> > > + *dest++ = r[j];
> > > + *dest++ = gpu_read(gpu, r[j]);
> > > + }
> > > + }
> > > +
> > > + /*
> > > + * The overall register list is composed of
> > > + * 1. Static IFPC-only registers
> > > + * 2. Static IFPC + preemption registers
> > > + * 3. Dynamic IFPC + preemption registers (ex: perfcounter selects)
> > > + *
> > > + * The first two lists are static. Size of these lists are stored as
> > > + * number of pairs in ifpc_list_len and preemption_list_len
> > > + * respectively. With concurrent binning, Some of the perfcounter
> > > + * registers being virtualized, CP needs to know the pipe id to program
> > > + * the aperture inorder to restore the same. Thus, third list is a
> > > + * dynamic list with triplets as
> > > + * (<aperture, shifted 12 bits> <address> <data>), and the length is
> > > + * stored as number for triplets in dynamic_list_len.
> > > + */
> > > + lock->dynamic_list_len = 0;
> > > +}
> > > +
> > > +static int a7xx_preempt_start(struct msm_gpu *gpu)
> > > +{
> > > + struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu);
> > > + struct a6xx_gpu *a6xx_gpu = to_a6xx_gpu(adreno_gpu);
> > > + struct msm_ringbuffer *ring = gpu->rb[0];
> > > +
> > > + if (gpu->nr_rings <= 1)
> > > + return 0;
> > > +
> > > + /* Turn CP protection off */
> > > + OUT_PKT7(ring, CP_SET_PROTECTED_MODE, 1);
> > > + OUT_RING(ring, 0);
> > > +
> > > + a6xx_emit_set_pseudo_reg(ring, a6xx_gpu, NULL);
> > > +
> > > + /* Yield the floor on command completion */
> > > + OUT_PKT7(ring, CP_CONTEXT_SWITCH_YIELD, 4);
> > > + OUT_RING(ring, 0x00);
> > > + OUT_RING(ring, 0x00);
> > > + OUT_RING(ring, 0x01);
> >
> > Looks like kgsl use 0x00 here. Not sure if that matters!
> >
> > > + /* Generate interrupt on preemption completion */
> > > + OUT_RING(ring, 0x00);
> > > +
> > > + a6xx_flush(gpu, ring);
> > > +
> > > + return a6xx_idle(gpu, ring) ? 0 : -EINVAL;
> > > +}
> > > +
> > > static int a6xx_cp_init(struct msm_gpu *gpu)
> > > {
> > > struct msm_ringbuffer *ring = gpu->rb[0];
> > > @@ -619,6 +865,8 @@ static int a6xx_cp_init(struct msm_gpu *gpu)
> > >
> > > static int a7xx_cp_init(struct msm_gpu *gpu)
> > > {
> > > + struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu);
> > > + struct a6xx_gpu *a6xx_gpu = to_a6xx_gpu(adreno_gpu);
> > > struct msm_ringbuffer *ring = gpu->rb[0];
> > > u32 mask;
> > >
> > > @@ -626,6 +874,8 @@ static int a7xx_cp_init(struct msm_gpu *gpu)
> > > OUT_PKT7(ring, CP_THREAD_CONTROL, 1);
> > > OUT_RING(ring, BIT(27));
> > >
> > > + a7xx_patch_pwrup_reglist(gpu);
> > > +
> >
> > Looks out of place. I guess you kept it here to avoid an extra a7x
> > check. At least we should move this before the above pm4 packets.
> >
> > > OUT_PKT7(ring, CP_ME_INIT, 7);
> > >
> > > /* Use multiple HW contexts */
> > > @@ -656,11 +906,11 @@ static int a7xx_cp_init(struct msm_gpu *gpu)
> > >
> > > /* *Don't* send a power up reg list for concurrent binning (TODO) */
> > > /* Lo address */
> > > - OUT_RING(ring, 0x00000000);
> > > + OUT_RING(ring, lower_32_bits(a6xx_gpu->pwrup_reglist_iova));
> > > /* Hi address */
> > > - OUT_RING(ring, 0x00000000);
> > > + OUT_RING(ring, upper_32_bits(a6xx_gpu->pwrup_reglist_iova));
> > > /* BIT(31) set => read the regs from the list */
> > > - OUT_RING(ring, 0x00000000);
> > > + OUT_RING(ring, BIT(31));
> > >
> > > a6xx_flush(gpu, ring);
> > > return a6xx_idle(gpu, ring) ? 0 : -EINVAL;
> > > @@ -784,6 +1034,16 @@ static int a6xx_ucode_load(struct msm_gpu *gpu)
> > > msm_gem_object_set_name(a6xx_gpu->shadow_bo, "shadow");
> > > }
> > >
> > > + a6xx_gpu->pwrup_reglist_ptr = msm_gem_kernel_new(gpu->dev, PAGE_SIZE,
> > > + MSM_BO_WC | MSM_BO_MAP_PRIV,
> > > + gpu->aspace, &a6xx_gpu->pwrup_reglist_bo,
> > > + &a6xx_gpu->pwrup_reglist_iova);
> > > +
> > > + if (IS_ERR(a6xx_gpu->pwrup_reglist_ptr))
> > > + return PTR_ERR(a6xx_gpu->pwrup_reglist_ptr);
> > > +
> > > + msm_gem_object_set_name(a6xx_gpu->pwrup_reglist_bo, "pwrup_reglist");
> > > +
> > > return 0;
> > > }
> > >
> > > @@ -1127,6 +1387,8 @@ static int hw_init(struct msm_gpu *gpu)
> > > if (a6xx_gpu->shadow_bo) {
> > > gpu_write64(gpu, REG_A6XX_CP_RB_RPTR_ADDR,
> > > shadowptr(a6xx_gpu, gpu->rb[0]));
> > > + for (unsigned int i = 0; i < gpu->nr_rings; i++)
> > > + a6xx_gpu->shadow[i] = 0;
> > > }
> > >
> > > /* ..which means "always" on A7xx, also for BV shadow */
> > > @@ -1135,6 +1397,8 @@ static int hw_init(struct msm_gpu *gpu)
> > > rbmemptr(gpu->rb[0], bv_rptr));
> > > }
> > >
> > > + a6xx_preempt_hw_init(gpu);
> > > +
> > > /* Always come up on rb 0 */
> > > a6xx_gpu->cur_ring = gpu->rb[0];
> > >
> > > @@ -1180,6 +1444,10 @@ static int hw_init(struct msm_gpu *gpu)
> > > out:
> > > if (adreno_has_gmu_wrapper(adreno_gpu))
> > > return ret;
> > > +
> > > + /* Last step - yield the ringbuffer */
> > > + a7xx_preempt_start(gpu);
> > > +
> > > /*
> > > * Tell the GMU that we are done touching the GPU and it can start power
> > > * management
> > > @@ -1557,8 +1825,13 @@ static irqreturn_t a6xx_irq(struct msm_gpu *gpu)
> > > if (status & A6XX_RBBM_INT_0_MASK_SWFUSEVIOLATION)
> > > a7xx_sw_fuse_violation_irq(gpu);
> > >
> > > - if (status & A6XX_RBBM_INT_0_MASK_CP_CACHE_FLUSH_TS)
> > > + if (status & A6XX_RBBM_INT_0_MASK_CP_CACHE_FLUSH_TS) {
> > > msm_gpu_retire(gpu);
> > > + a6xx_preempt_trigger(gpu);
> > > + }
> > > +
> > > + if (status & A6XX_RBBM_INT_0_MASK_CP_SW)
> > > + a6xx_preempt_irq(gpu);
> > >
> > > return IRQ_HANDLED;
> > > }
> > > @@ -2331,6 +2604,8 @@ struct msm_gpu *a6xx_gpu_init(struct drm_device *dev)
> > > a6xx_fault_handler);
> > >
> > > a6xx_calc_ubwc_config(adreno_gpu);
> > > + /* Set up the preemption specific bits and pieces for each ringbuffer */
> > > + a6xx_preempt_init(gpu);
> > >
> > > return gpu;
> > > }
> > > diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gpu.h b/drivers/gpu/drm/msm/adreno/a6xx_gpu.h
> > > index e3e5c53ae8af..da10060e38dc 100644
> > > --- a/drivers/gpu/drm/msm/adreno/a6xx_gpu.h
> > > +++ b/drivers/gpu/drm/msm/adreno/a6xx_gpu.h
> > > @@ -12,6 +12,31 @@
> > >
> > > extern bool hang_debug;
> > >
> > > +struct cpu_gpu_lock {
> > > + uint32_t gpu_req;
> > > + uint32_t cpu_req;
> > > + uint32_t turn;
> > > + union {
> > > + struct {
> > > + uint16_t list_length;
> > > + uint16_t list_offset;
> > > + };
> > > + struct {
> > > + uint8_t ifpc_list_len;
> > > + uint8_t preemption_list_len;
> > > + uint16_t dynamic_list_len;
> > > + };
> > > + };
> > > + uint64_t regs[62];
> > > +};
> > > +
> > > +struct adreno_reglist_list {
> > > + /** @reg: List of register **/
> > > + const u32 *regs;
> > > + /** @count: Number of registers in the list **/
> > > + u32 count;
> > > +};
> > > +
> > > /**
> > > * struct a6xx_info - a6xx specific information from device table
> > > *
> > > @@ -31,6 +56,20 @@ struct a6xx_gpu {
> > > uint64_t sqe_iova;
> > >
> > > struct msm_ringbuffer *cur_ring;
> > > + struct msm_ringbuffer *next_ring;
> > > +
> > > + struct drm_gem_object *preempt_bo[MSM_GPU_MAX_RINGS];
> > > + void *preempt[MSM_GPU_MAX_RINGS];
> > > + uint64_t preempt_iova[MSM_GPU_MAX_RINGS];
> > > + uint32_t last_seqno[MSM_GPU_MAX_RINGS];
> > > +
> > > + atomic_t preempt_state;
> > > + spinlock_t eval_lock;
> > > + struct timer_list preempt_timer;
> > > +
> > > + unsigned int preempt_level;
> > > + bool uses_gmem;
> > > + bool skip_save_restore;
> > >
> > > struct a6xx_gmu gmu;
> > >
> > > @@ -38,6 +77,10 @@ struct a6xx_gpu {
> > > uint64_t shadow_iova;
> > > uint32_t *shadow;
> > >
> > > + struct drm_gem_object *pwrup_reglist_bo;
> > > + void *pwrup_reglist_ptr;
> > > + uint64_t pwrup_reglist_iova;
> > > +
> > > bool has_whereami;
> > >
> > > void __iomem *llc_mmio;
> > > @@ -49,6 +92,105 @@ struct a6xx_gpu {
> > >
> > > #define to_a6xx_gpu(x) container_of(x, struct a6xx_gpu, base)
> > >
> > > +/*
> > > + * In order to do lockless preemption we use a simple state machine to progress
> > > + * through the process.
> > > + *
> > > + * PREEMPT_NONE - no preemption in progress. Next state START.
> > > + * PREEMPT_START - The trigger is evaluating if preemption is possible. Next
> > > + * states: TRIGGERED, NONE
> > > + * PREEMPT_FINISH - An intermediate state before moving back to NONE. Next
> > > + * state: NONE.
> > > + * PREEMPT_TRIGGERED: A preemption has been executed on the hardware. Next
> > > + * states: FAULTED, PENDING
> > > + * PREEMPT_FAULTED: A preemption timed out (never completed). This will trigger
> > > + * recovery. Next state: N/A
> > > + * PREEMPT_PENDING: Preemption complete interrupt fired - the callback is
> > > + * checking the success of the operation. Next state: FAULTED, NONE.
> > > + */
> > > +
> > > +enum a6xx_preempt_state {
> > > + PREEMPT_NONE = 0,
> > > + PREEMPT_START,
> > > + PREEMPT_FINISH,
> > > + PREEMPT_TRIGGERED,
> > > + PREEMPT_FAULTED,
> > > + PREEMPT_PENDING,
> > > +};
> > > +
> > > +/*
> > > + * struct a6xx_preempt_record is a shared buffer between the microcode and the
> > > + * CPU to store the state for preemption. The record itself is much larger
> > > + * (2112k) but most of that is used by the CP for storage.
> > > + *
> > > + * There is a preemption record assigned per ringbuffer. When the CPU triggers a
> > > + * preemption, it fills out the record with the useful information (wptr, ring
> > > + * base, etc) and the microcode uses that information to set up the CP following
> > > + * the preemption. When a ring is switched out, the CP will save the ringbuffer
> > > + * state back to the record. In this way, once the records are properly set up
> > > + * the CPU can quickly switch back and forth between ringbuffers by only
> > > + * updating a few registers (often only the wptr).
> > > + *
> > > + * These are the CPU aware registers in the record:
> > > + * @magic: Must always be 0xAE399D6EUL
> > > + * @info: Type of the record - written 0 by the CPU, updated by the CP
> > > + * @errno: preemption error record
> > > + * @data: Data field in YIELD and SET_MARKER packets, Written and used by CP
> > > + * @cntl: Value of RB_CNTL written by CPU, save/restored by CP
> > > + * @rptr: Value of RB_RPTR written by CPU, save/restored by CP
> > > + * @wptr: Value of RB_WPTR written by CPU, save/restored by CP
> > > + * @_pad: Reserved/padding
> > > + * @rptr_addr: Value of RB_RPTR_ADDR_LO|HI written by CPU, save/restored by CP
> > > + * @rbase: Value of RB_BASE written by CPU, save/restored by CP
> > > + * @counter: GPU address of the storage area for the preemption counters
> >
> > doc missing for bv_rptr_addr.
> >
> > > + */
> > > +struct a6xx_preempt_record {
> > > + u32 magic;
> > > + u32 info;
> > > + u32 errno;
> > > + u32 data;
> > > + u32 cntl;
> > > + u32 rptr;
> > > + u32 wptr;
> > > + u32 _pad;
> > > + u64 rptr_addr;
> > > + u64 rbase;
> > > + u64 counter;
> > > + u64 bv_rptr_addr;
> > > +};
> > > +
> > > +#define A6XX_PREEMPT_RECORD_MAGIC 0xAE399D6EUL
> > > +
> > > +#define PREEMPT_RECORD_SIZE_FALLBACK(size) \
> > > + ((size) == 0 ? 4192 * SZ_1K : (size))
> > > +
> > > +#define PREEMPT_OFFSET_SMMU_INFO 0
> > > +#define PREEMPT_OFFSET_PRIV_NON_SECURE (PREEMPT_OFFSET_SMMU_INFO + 4096)
> > > +#define PREEMPT_OFFSET_PRIV_SECURE(size) \
> > > + (PREEMPT_OFFSET_PRIV_NON_SECURE + PREEMPT_RECORD_SIZE_FALLBACK(size))
> > > +#define PREEMPT_SIZE(size) \
> > > + (PREEMPT_OFFSET_PRIV_SECURE(size) + PREEMPT_RECORD_SIZE_FALLBACK(size))
> > > +
> > > +/*
> > > + * The preemption counter block is a storage area for the value of the
> > > + * preemption counters that are saved immediately before context switch. We
> > > + * append it on to the end of the allocation for the preemption record.
> > > + */
> > > +#define A6XX_PREEMPT_COUNTER_SIZE (16 * 4)
> > > +
> > > +#define A6XX_PREEMPT_USER_RECORD_SIZE (192 * 1024)
> >
> > Unused.
> >
> > > +
> > > +struct a7xx_cp_smmu_info {
> > > + u32 magic;
> > > + u32 _pad4;
> > > + u64 ttbr0;
> > > + u32 asid;
> > > + u32 context_idr;
> > > + u32 context_bank;
> > > +};
> > > +
> > > +#define GEN7_CP_SMMU_INFO_MAGIC 0x241350d5UL
> > > +
> > > /*
> > > * Given a register and a count, return a value to program into
> > > * REG_CP_PROTECT_REG(n) - this will block both reads and writes for
> > > @@ -106,6 +248,25 @@ int a6xx_gmu_init(struct a6xx_gpu *a6xx_gpu, struct device_node *node);
> > > int a6xx_gmu_wrapper_init(struct a6xx_gpu *a6xx_gpu, struct device_node *node);
> > > void a6xx_gmu_remove(struct a6xx_gpu *a6xx_gpu);
> > >
> > > +void a6xx_preempt_init(struct msm_gpu *gpu);
> > > +void a6xx_preempt_hw_init(struct msm_gpu *gpu);
> > > +void a6xx_preempt_trigger(struct msm_gpu *gpu);
> > > +void a6xx_preempt_irq(struct msm_gpu *gpu);
> > > +void a6xx_preempt_fini(struct msm_gpu *gpu);
> > > +int a6xx_preempt_submitqueue_setup(struct msm_gpu *gpu,
> > > + struct msm_gpu_submitqueue *queue);
> > > +void a6xx_preempt_submitqueue_close(struct msm_gpu *gpu,
> > > + struct msm_gpu_submitqueue *queue);
> > > +
> > > +/* Return true if we are in a preempt state */
> > > +static inline bool a6xx_in_preempt(struct a6xx_gpu *a6xx_gpu)
> > > +{
> > > + int preempt_state = atomic_read(&a6xx_gpu->preempt_state);
> >
> > I think we should keep a matching barrier before the 'read' similar to the one used in the
> > set_preempt_state helper.
>
> Good idea, but for the one case we found where it matters (the
> a6xx_flush() vs. updating the ring in a6xx_preempt_irq() race) the
> barrier needs to be after the read. The sequence is something like:
>
> Thread A:
>
> a6xx_gpu->cur_ring = a6xx_gpu->next_ring;
> a6xx_gpu->preempt_state = PREEMPT_FINISH;
>
> Thread B:
>
> read a6xx_gpu->preempt_state;
> read a6xx_gpu->cur_ring;
>
> And if the read to preempt_state returns PREEMPT_FINISH, then we need
> cur_ring to reflect the ring we switched to. (I discovered this the
> hard way from debugging deadlocks...)
>
> So, maybe add a smp_rmb() before and after, then drop the explicit
> barrier in a6xx_flush()?
Ack. I think it is better to use a helper similar to set_preempt_state()
and consistently use that everywhere.
>
> >
> > > +
> > > + return !(preempt_state == PREEMPT_NONE ||
> > > + preempt_state == PREEMPT_FINISH);
> > > +}
> > > +
> > > void a6xx_gmu_set_freq(struct msm_gpu *gpu, struct dev_pm_opp *opp,
> > > bool suspended);
> > > unsigned long a6xx_gmu_get_freq(struct msm_gpu *gpu);
> > > diff --git a/drivers/gpu/drm/msm/adreno/a6xx_preempt.c b/drivers/gpu/drm/msm/adreno/a6xx_preempt.c
> > > new file mode 100644
> > > index 000000000000..1caff76aca6e
> > > --- /dev/null
> > > +++ b/drivers/gpu/drm/msm/adreno/a6xx_preempt.c
> > > @@ -0,0 +1,391 @@
> > > +// SPDX-License-Identifier: GPL-2.0
> > > +/* Copyright (c) 2018, The Linux Foundation. All rights reserved. */
> > > +/* Copyright (c) 2023 Collabora, Ltd. */
> > > +/* Copyright (c) 2024 Valve Corporation */
> > > +
> > > +#include "msm_gem.h"
> > > +#include "a6xx_gpu.h"
> > > +#include "a6xx_gmu.xml.h"
> > > +#include "msm_mmu.h"
> > > +
> > > +/*
> > > + * Try to transition the preemption state from old to new. Return
> > > + * true on success or false if the original state wasn't 'old'
> > > + */
> > > +static inline bool try_preempt_state(struct a6xx_gpu *a6xx_gpu,
> > > + enum a6xx_preempt_state old, enum a6xx_preempt_state new)
> > > +{
> > > + enum a6xx_preempt_state cur = atomic_cmpxchg(&a6xx_gpu->preempt_state,
> > > + old, new);
> > > +
> > > + return (cur == old);
> > > +}
> > > +
> > > +/*
> > > + * Force the preemption state to the specified state. This is used in cases
> > > + * where the current state is known and won't change
> > > + */
> > > +static inline void set_preempt_state(struct a6xx_gpu *gpu,
> > > + enum a6xx_preempt_state new)
> > > +{
> > > + /*
> > > + * preempt_state may be read by other cores trying to trigger a
> > > + * preemption or in the interrupt handler so barriers are needed
> > > + * before...
> > > + */
> > > + smp_mb__before_atomic();
> > > + atomic_set(&gpu->preempt_state, new);
> > > + /* ... and after*/
> > > + smp_mb__after_atomic();
> > > +}
> > > +
> > > +/* Write the most recent wptr for the given ring into the hardware */
> > > +static inline void update_wptr(struct msm_gpu *gpu, struct msm_ringbuffer *ring)
> > > +{
> > > + unsigned long flags;
> > > + uint32_t wptr;
> > > +
> > > + if (!ring)
> >
> > Is this ever true?
> >
> > > + return;
> > > +
> > > + spin_lock_irqsave(&ring->preempt_lock, flags);
> > > +
> > > + if (ring->skip_inline_wptr) {
> > > + wptr = get_wptr(ring);
> > > +
> > > + gpu_write(gpu, REG_A6XX_CP_RB_WPTR, wptr);
> > > +
> > > + ring->skip_inline_wptr = false;
> > > + }
> > > +
> > > + spin_unlock_irqrestore(&ring->preempt_lock, flags);
> > > +}
> > > +
> > > +/* Return the highest priority ringbuffer with something in it */
> > > +static struct msm_ringbuffer *get_next_ring(struct msm_gpu *gpu)
> > > +{
> > > + struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu);
> > > + struct a6xx_gpu *a6xx_gpu = to_a6xx_gpu(adreno_gpu);
> > > +
> > > + unsigned long flags;
> > > + int i;
> > > +
> > > + for (i = 0; i < gpu->nr_rings; i++) {
> > > + bool empty;
> > > + struct msm_ringbuffer *ring = gpu->rb[i];
> > > +
> > > + spin_lock_irqsave(&ring->preempt_lock, flags);
> > > + empty = (get_wptr(ring) == gpu->funcs->get_rptr(gpu, ring));
> > > + if (!empty && ring == a6xx_gpu->cur_ring)
> > > + empty = ring->memptrs->fence == a6xx_gpu->last_seqno[i];
> > > + spin_unlock_irqrestore(&ring->preempt_lock, flags);
> > > +
> > > + if (!empty)
> > > + return ring;
> > > + }
> > > +
> > > + return NULL;
> > > +}
> > > +
> > > +static void a6xx_preempt_timer(struct timer_list *t)
> > > +{
> > > + struct a6xx_gpu *a6xx_gpu = from_timer(a6xx_gpu, t, preempt_timer);
> > > + struct msm_gpu *gpu = &a6xx_gpu->base.base;
> > > + struct drm_device *dev = gpu->dev;
> > > +
> > > + if (!try_preempt_state(a6xx_gpu, PREEMPT_TRIGGERED, PREEMPT_FAULTED))
> > > + return;
> > > +
> > > + dev_err(dev->dev, "%s: preemption timed out\n", gpu->name);
> > > + kthread_queue_work(gpu->worker, &gpu->recover_work);
> > > +}
> > > +
> > > +void a6xx_preempt_irq(struct msm_gpu *gpu)
> > > +{
> > > + uint32_t status;
> > > + struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu);
> > > + struct a6xx_gpu *a6xx_gpu = to_a6xx_gpu(adreno_gpu);
> > > + struct drm_device *dev = gpu->dev;
> > > +
> > > + if (!try_preempt_state(a6xx_gpu, PREEMPT_TRIGGERED, PREEMPT_PENDING))
> > > + return;
> > > +
> > > + /* Delete the preemption watchdog timer */
> > > + del_timer(&a6xx_gpu->preempt_timer);
> > > +
> > > + /*
> > > + * The hardware should be setting the stop bit of CP_CONTEXT_SWITCH_CNTL
> > > + * to zero before firing the interrupt, but there is a non zero chance
> > > + * of a hardware condition or a software race that could set it again
> > > + * before we have a chance to finish. If that happens, log and go for
> > > + * recovery
> > > + */
> > > + status = gpu_read(gpu, REG_A6XX_CP_CONTEXT_SWITCH_CNTL);
> > > + if (unlikely(status & A6XX_CP_CONTEXT_SWITCH_CNTL_STOP)) {
> > > + DRM_DEV_ERROR(&gpu->pdev->dev,
> > > + "!!!!!!!!!!!!!!!! preemption faulted !!!!!!!!!!!!!! irq\n");
> > > + set_preempt_state(a6xx_gpu, PREEMPT_FAULTED);
> > > + dev_err(dev->dev, "%s: Preemption failed to complete\n",
> > > + gpu->name);
> > > + kthread_queue_work(gpu->worker, &gpu->recover_work);
> > > + return;
> > > + }
> > > +
> > > + a6xx_gpu->cur_ring = a6xx_gpu->next_ring;
> > > + a6xx_gpu->next_ring = NULL;
> > > +
> > > + /* Make sure the write to cur_ring is posted before the change in state */
> > > + wmb();
> >
> > Not needed. set_preempt_state has the necessary barrier.
> >
> > > +
> > > + set_preempt_state(a6xx_gpu, PREEMPT_FINISH);
> > > +
> > > + update_wptr(gpu, a6xx_gpu->cur_ring);
> > > +
> > > + set_preempt_state(a6xx_gpu, PREEMPT_NONE);
> > > +
> > > + /*
> > > + * Retrigger preemption to avoid a deadlock that might occur when preemption
> > > + * is skipped due to it being already in flight when requested.
> > > + */
> > > + a6xx_preempt_trigger(gpu);
> > > +}
> > > +
> > > +void a6xx_preempt_hw_init(struct msm_gpu *gpu)
> > > +{
> > > + struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu);
> > > + struct a6xx_gpu *a6xx_gpu = to_a6xx_gpu(adreno_gpu);
> > > + int i;
> > > +
> > > + /* No preemption if we only have one ring */
> > > + if (gpu->nr_rings == 1)
> > > + return;
> > > +
> > > + for (i = 0; i < gpu->nr_rings; i++) {
> > > + struct a6xx_preempt_record *record_ptr =
> > > + a6xx_gpu->preempt[i] + PREEMPT_OFFSET_PRIV_NON_SECURE;
> > > + record_ptr->wptr = 0;
> > > + record_ptr->rptr = 0;
> > > + record_ptr->rptr_addr = shadowptr(a6xx_gpu, gpu->rb[i]);
> > > + record_ptr->info = 0;
> > > + record_ptr->data = 0;
> > > + record_ptr->rbase = gpu->rb[i]->iova;
> > > + }
> > > +
> > > + /* Write a 0 to signal that we aren't switching pagetables */
> > > + gpu_write64(gpu, REG_A6XX_CP_CONTEXT_SWITCH_SMMU_INFO, 0);
> > > +
> > > + /* Enable the GMEM save/restore feature for preemption */
> > > + gpu_write(gpu, REG_A6XX_RB_CONTEXT_SWITCH_GMEM_SAVE_RESTORE, 0x1);
> > > +
> > > + /* Reset the preemption state */
> > > + set_preempt_state(a6xx_gpu, PREEMPT_NONE);
> > > +
> > > + spin_lock_init(&a6xx_gpu->eval_lock);
> > > +
> > > + /* Always come up on rb 0 */
> > > + a6xx_gpu->cur_ring = gpu->rb[0];
> > > +}
> > > +
> > > +void a6xx_preempt_trigger(struct msm_gpu *gpu)
> > > +{
> > > + struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu);
> > > + struct a6xx_gpu *a6xx_gpu = to_a6xx_gpu(adreno_gpu);
> > > + u64 preempt_offset_priv_secure;
> > > + unsigned long flags;
> > > + struct msm_ringbuffer *ring;
> > > + unsigned int cntl;
> > > +
> > > + if (gpu->nr_rings == 1)
> > > + return;
> > > +
> > > + /*
> > > + * Lock to make sure another thread attempting preemption doesn't skip it
> > > + * while we are still evaluating the next ring. This makes sure the other
> > > + * thread does start preemption if we abort it and avoids a soft lock.
> > > + */
> > > + spin_lock_irqsave(&a6xx_gpu->eval_lock, flags);
> > > +
> > > + /*
> > > + * Try to start preemption by moving from NONE to START. If
> > > + * unsuccessful, a preemption is already in flight
> > > + */
> > > + if (!try_preempt_state(a6xx_gpu, PREEMPT_NONE, PREEMPT_START)) {
> > > + spin_unlock_irqrestore(&a6xx_gpu->eval_lock, flags);
> > > + return;
> > > + }
> > > +
> > > + cntl = A6XX_CP_CONTEXT_SWITCH_CNTL_LEVEL(a6xx_gpu->preempt_level);
> > > +
> > > + if (a6xx_gpu->skip_save_restore)
> > > + cntl |= A6XX_CP_CONTEXT_SWITCH_CNTL_SKIP_SAVE_RESTORE;
> > > +
> > > + if (a6xx_gpu->uses_gmem)
> > > + cntl |= A6XX_CP_CONTEXT_SWITCH_CNTL_USES_GMEM;
> > > +
> > > + cntl |= A6XX_CP_CONTEXT_SWITCH_CNTL_STOP;
> > > +
> > > + /* Get the next ring to preempt to */
> > > + ring = get_next_ring(gpu);
> > > +
> > > + /*
> > > + * If no ring is populated or the highest priority ring is the current
> > > + * one do nothing except to update the wptr to the latest and greatest
> > > + */
> > > + if (!ring || (a6xx_gpu->cur_ring == ring)) {
> > > + set_preempt_state(a6xx_gpu, PREEMPT_FINISH);
> > > + update_wptr(gpu, a6xx_gpu->cur_ring);
> > > + set_preempt_state(a6xx_gpu, PREEMPT_NONE);
> > > + spin_unlock_irqrestore(&a6xx_gpu->eval_lock, flags);
> > > + return;
> > > + }
> > > +
> > > + spin_unlock_irqrestore(&a6xx_gpu->eval_lock, flags);
> > > +
> > > + spin_lock_irqsave(&ring->preempt_lock, flags);
> > > +
> > > + struct a7xx_cp_smmu_info *smmu_info_ptr =
> > > + a6xx_gpu->preempt[ring->id] + PREEMPT_OFFSET_SMMU_INFO;
> > > + struct a6xx_preempt_record *record_ptr =
> > > + a6xx_gpu->preempt[ring->id] + PREEMPT_OFFSET_PRIV_NON_SECURE;
> > > + u64 ttbr0 = ring->memptrs->ttbr0;
> > > + u32 context_idr = ring->memptrs->context_idr;
> > > +
> > > + smmu_info_ptr->ttbr0 = ttbr0;
> > > + smmu_info_ptr->context_idr = context_idr;
> > > + record_ptr->wptr = get_wptr(ring);
> > > +
> > > + /*
> > > + * The GPU will write the wptr we set above when we preempt. Reset
> > > + * skip_inline_wptr to make sure that we don't write WPTR to the same
> > > + * thing twice. It's still possible subsequent submissions will update
> > > + * wptr again, in which case they will set the flag to true. This has
> > > + * to be protected by the lock for setting the flag and updating wptr
> > > + * to be atomic.
> > > + */
> > > + ring->skip_inline_wptr = false;
> > > +
> > > + spin_unlock_irqrestore(&ring->preempt_lock, flags);
> > > +
> > > + gpu_write64(gpu,
> > > + REG_A6XX_CP_CONTEXT_SWITCH_SMMU_INFO,
> > > + a6xx_gpu->preempt_iova[ring->id] + PREEMPT_OFFSET_SMMU_INFO);
> > > +
> > > + gpu_write64(gpu,
> > > + REG_A6XX_CP_CONTEXT_SWITCH_PRIV_NON_SECURE_RESTORE_ADDR,
> > > + a6xx_gpu->preempt_iova[ring->id] + PREEMPT_OFFSET_PRIV_NON_SECURE);
> > > +
> > > + preempt_offset_priv_secure =
> > > + PREEMPT_OFFSET_PRIV_SECURE(adreno_gpu->info->preempt_record_size);
> > > + gpu_write64(gpu,
> > > + REG_A6XX_CP_CONTEXT_SWITCH_PRIV_SECURE_RESTORE_ADDR,
> > > + a6xx_gpu->preempt_iova[ring->id] + preempt_offset_priv_secure);
> >
> > Secure buffers are not supported currently, so we can skip this and the
> > context record allocation. Anyway this has to be a separate buffer
> > mapped in secure pagetable which don't currently have. We can skip the
> > same in pseudo register packet too.
> >
> > > +
> > > + a6xx_gpu->next_ring = ring;
> > > +
> > > + /* Start a timer to catch a stuck preemption */
> > > + mod_timer(&a6xx_gpu->preempt_timer, jiffies + msecs_to_jiffies(10000));
> > > +
> > > + /* Set the preemption state to triggered */
> > > + set_preempt_state(a6xx_gpu, PREEMPT_TRIGGERED);
> > > +
> > > + /* Make sure any previous writes to WPTR are posted */
> > > + gpu_read(gpu, REG_A6XX_CP_RB_WPTR);
> > > +
> > > + /* Make sure everything is written before hitting the button */
> > > + wmb();
> >
> > This and the above read back looks unnecessary. All writes to gpu are
> > ordered anyway.
>
> I thought the whole reason for
> https://lore.kernel.org/linux-kernel/20240508-topic-adreno-v1-1-1babd05c119d@linaro.org/
> is that memory-mapped writes to different GPU registers are *not*
> necessarily ordered from the GPU's perspective (even if they are from
> the CPU). That's why I suggested the readback. Or am I missing
> something?
Lets consider that GBIF unhalt sequence as an exception. Generally, we
can consider writes to gpu registers to be ordered.
-Akhil.
>
> >
> > > +
> > > + /* Trigger the preemption */
> > > + gpu_write(gpu, REG_A6XX_CP_CONTEXT_SWITCH_CNTL, cntl);
> > > +}
> > > +
> > > +static int preempt_init_ring(struct a6xx_gpu *a6xx_gpu,
> > > + struct msm_ringbuffer *ring)
> > > +{
> > > + struct adreno_gpu *adreno_gpu = &a6xx_gpu->base;
> > > + struct msm_gpu *gpu = &adreno_gpu->base;
> > > + struct drm_gem_object *bo = NULL;
> > > + phys_addr_t ttbr;
> > > + u64 iova = 0;
> > > + void *ptr;
> > > + int asid;
> > > +
> > > + ptr = msm_gem_kernel_new(gpu->dev,
> > > + PREEMPT_SIZE(adreno_gpu->info->preempt_record_size),
> > > + MSM_BO_WC | MSM_BO_MAP_PRIV, gpu->aspace, &bo, &iova);
> >
> > set a name?
> >
> > > +
> > > + memset(ptr, 0, PREEMPT_SIZE(adreno_gpu->info->preempt_record_size));
> > > +
> > > + if (IS_ERR(ptr))
> > > + return PTR_ERR(ptr);
> > > +
> > > + a6xx_gpu->preempt_bo[ring->id] = bo;
> > > + a6xx_gpu->preempt_iova[ring->id] = iova;
> > > + a6xx_gpu->preempt[ring->id] = ptr;
> > > +
> > > + struct a7xx_cp_smmu_info *smmu_info_ptr = ptr + PREEMPT_OFFSET_SMMU_INFO;
> > > + struct a6xx_preempt_record *record_ptr = ptr + PREEMPT_OFFSET_PRIV_NON_SECURE;
> > > +
> > > + msm_iommu_pagetable_params(gpu->aspace->mmu, &ttbr, &asid);
> > > +
> > > + smmu_info_ptr->magic = GEN7_CP_SMMU_INFO_MAGIC;
> > > + smmu_info_ptr->ttbr0 = ttbr;
> > > + smmu_info_ptr->asid = 0xdecafbad;
> > > + smmu_info_ptr->context_idr = 0;
> > > +
> > > + /* Set up the defaults on the preemption record */
> > > + record_ptr->magic = A6XX_PREEMPT_RECORD_MAGIC;
> > > + record_ptr->info = 0;
> > > + record_ptr->data = 0;
> > > + record_ptr->rptr = 0;
> > > + record_ptr->wptr = 0;
> > > + record_ptr->cntl = MSM_GPU_RB_CNTL_DEFAULT;
> > > + record_ptr->rbase = ring->iova;
> > > + record_ptr->counter = 0;
> > > + record_ptr->bv_rptr_addr = rbmemptr(ring, bv_rptr);
> > > +
> > > + return 0;
> > > +}
> > > +
> > > +void a6xx_preempt_fini(struct msm_gpu *gpu)
> > > +{
> > > + struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu);
> > > + struct a6xx_gpu *a6xx_gpu = to_a6xx_gpu(adreno_gpu);
> > > + int i;
> > > +
> > > + for (i = 0; i < gpu->nr_rings; i++)
> > > + msm_gem_kernel_put(a6xx_gpu->preempt_bo[i], gpu->aspace);
> > > +}
> > > +
> > > +void a6xx_preempt_init(struct msm_gpu *gpu)
> > > +{
> > > + struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu);
> > > + struct a6xx_gpu *a6xx_gpu = to_a6xx_gpu(adreno_gpu);
> > > + int i;
> > > +
> > > + /* No preemption if we only have one ring */
> > > + if (gpu->nr_rings <= 1)
> > > + return;
> > > +
> > > + for (i = 0; i < gpu->nr_rings; i++) {
> > > + if (preempt_init_ring(a6xx_gpu, gpu->rb[i]))
> > > + goto fail;
> > > + }
> > > +
> > > + /* TODO: make this configurable? */
> > > + a6xx_gpu->preempt_level = 1;
> > > + a6xx_gpu->uses_gmem = 1;
> > > + a6xx_gpu->skip_save_restore = 1;
> > > +
> > > + timer_setup(&a6xx_gpu->preempt_timer, a6xx_preempt_timer, 0);
> > > +
> > > + return;
> > > +fail:
> >
> > Log an error so that preemption is not disabled silently?
> >
> > > + /*
> > > + * On any failure our adventure is over. Clean up and
> > > + * set nr_rings to 1 to force preemption off
> > > + */
> > > + a6xx_preempt_fini(gpu);
> > > + gpu->nr_rings = 1;
> > > +
> > > + return;
> > > +}
> > > diff --git a/drivers/gpu/drm/msm/msm_ringbuffer.h b/drivers/gpu/drm/msm/msm_ringbuffer.h
> > > index 40791b2ade46..7dde6a312511 100644
> > > --- a/drivers/gpu/drm/msm/msm_ringbuffer.h
> > > +++ b/drivers/gpu/drm/msm/msm_ringbuffer.h
> > > @@ -36,6 +36,7 @@ struct msm_rbmemptrs {
> > >
> > > volatile struct msm_gpu_submit_stats stats[MSM_GPU_SUBMIT_STATS_COUNT];
> > > volatile u64 ttbr0;
> > > + volatile u32 context_idr;
> > > };
> > >
> > > struct msm_cp_state {
> > > @@ -100,6 +101,12 @@ struct msm_ringbuffer {
> > > * preemption. Can be aquired from irq context.
> > > */
> > > spinlock_t preempt_lock;
> > > +
> > > + /*
> > > + * Whether we skipped writing wptr and it needs to be updated in the
> > > + * future when the ring becomes current.
> > > + */
> > > + bool skip_inline_wptr;
> >
> > nit: does 'restore_wptr' makes more sense? Or something better? Basically, name it based
> > on the future action?
> >
> > -Akhil
> >
> > > };
> > >
> > > struct msm_ringbuffer *msm_ringbuffer_new(struct msm_gpu *gpu, int id,
> > >
> > > --
> > > 2.46.0
> > >
^ permalink raw reply [flat|nested] 32+ messages in thread* Re: [PATCH v3 04/10] drm/msm/A6xx: Implement preemption for A7XX targets
2024-09-10 16:43 ` Akhil P Oommen
@ 2024-09-12 15:48 ` Antonino Maniscalco
2024-09-16 17:40 ` Akhil P Oommen
0 siblings, 1 reply; 32+ messages in thread
From: Antonino Maniscalco @ 2024-09-12 15:48 UTC (permalink / raw)
To: Akhil P Oommen, Connor Abbott
Cc: Rob Clark, Sean Paul, Konrad Dybcio, Abhinav Kumar,
Dmitry Baryshkov, Marijn Suijten, David Airlie, Daniel Vetter,
Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann,
Jonathan Corbet, linux-arm-msm, dri-devel, freedreno,
linux-kernel, linux-doc, Sharat Masetty, Neil Armstrong
On 9/10/24 6:43 PM, Akhil P Oommen wrote:
> On Mon, Sep 09, 2024 at 01:22:22PM +0100, Connor Abbott wrote:
>> On Fri, Sep 6, 2024 at 9:03 PM Akhil P Oommen <quic_akhilpo@quicinc.com> wrote:
>>>
>>> On Thu, Sep 05, 2024 at 04:51:22PM +0200, Antonino Maniscalco wrote:
>>>> This patch implements preemption feature for A6xx targets, this allows
>>>> the GPU to switch to a higher priority ringbuffer if one is ready. A6XX
>>>> hardware as such supports multiple levels of preemption granularities,
>>>> ranging from coarse grained(ringbuffer level) to a more fine grained
>>>> such as draw-call level or a bin boundary level preemption. This patch
>>>> enables the basic preemption level, with more fine grained preemption
>>>> support to follow.
>>>>
>>>> Signed-off-by: Sharat Masetty <smasetty@codeaurora.org>
>>>> Signed-off-by: Antonino Maniscalco <antomani103@gmail.com>
>>>> Tested-by: Neil Armstrong <neil.armstrong@linaro.org> # on SM8650-QRD
>>>> ---
>>>> drivers/gpu/drm/msm/Makefile | 1 +
>>>> drivers/gpu/drm/msm/adreno/a6xx_gpu.c | 293 +++++++++++++++++++++-
>>>> drivers/gpu/drm/msm/adreno/a6xx_gpu.h | 161 ++++++++++++
>>>> drivers/gpu/drm/msm/adreno/a6xx_preempt.c | 391 ++++++++++++++++++++++++++++++
>>>> drivers/gpu/drm/msm/msm_ringbuffer.h | 7 +
>>>> 5 files changed, 844 insertions(+), 9 deletions(-)
>>>>
>>>> diff --git a/drivers/gpu/drm/msm/Makefile b/drivers/gpu/drm/msm/Makefile
>>>> index f5e2838c6a76..32e915109a59 100644
>>>> --- a/drivers/gpu/drm/msm/Makefile
>>>> +++ b/drivers/gpu/drm/msm/Makefile
>>>> @@ -23,6 +23,7 @@ adreno-y := \
>>>> adreno/a6xx_gpu.o \
>>>> adreno/a6xx_gmu.o \
>>>> adreno/a6xx_hfi.o \
>>>> + adreno/a6xx_preempt.o \
>>>>
>>>> adreno-$(CONFIG_DEBUG_FS) += adreno/a5xx_debugfs.o \
>>>>
>>>> diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
>>>> index 32a4faa93d7f..ed0b138a2d66 100644
>>>> --- a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
>>>> +++ b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
>>>> @@ -16,6 +16,83 @@
>>>>
>>>> #define GPU_PAS_ID 13
>>>>
>>>> +/* IFPC & Preemption static powerup restore list */
>>>> +static const uint32_t a7xx_pwrup_reglist[] = {
>>>> + REG_A6XX_UCHE_TRAP_BASE,
>>>> + REG_A6XX_UCHE_TRAP_BASE + 1,
>>>> + REG_A6XX_UCHE_WRITE_THRU_BASE,
>>>> + REG_A6XX_UCHE_WRITE_THRU_BASE + 1,
>>>> + REG_A6XX_UCHE_GMEM_RANGE_MIN,
>>>> + REG_A6XX_UCHE_GMEM_RANGE_MIN + 1,
>>>> + REG_A6XX_UCHE_GMEM_RANGE_MAX,
>>>> + REG_A6XX_UCHE_GMEM_RANGE_MAX + 1,
>>>> + REG_A6XX_UCHE_CACHE_WAYS,
>>>> + REG_A6XX_UCHE_MODE_CNTL,
>>>> + REG_A6XX_RB_NC_MODE_CNTL,
>>>> + REG_A6XX_RB_CMP_DBG_ECO_CNTL,
>>>> + REG_A7XX_GRAS_NC_MODE_CNTL,
>>>> + REG_A6XX_RB_CONTEXT_SWITCH_GMEM_SAVE_RESTORE,
>>>> + REG_A6XX_UCHE_GBIF_GX_CONFIG,
>>>> + REG_A6XX_UCHE_CLIENT_PF,
>>>
>>> REG_A6XX_TPL1_DBG_ECO_CNTL1 here. A friendly warning, missing a register
>>> in this list (and the below list) will lead to a very frustrating debug.
>>>
>>>> +};
>>>> +
>>>> +static const uint32_t a7xx_ifpc_pwrup_reglist[] = {
>>>> + REG_A6XX_TPL1_NC_MODE_CNTL,
>>>> + REG_A6XX_SP_NC_MODE_CNTL,
>>>> + REG_A6XX_CP_DBG_ECO_CNTL,
>>>> + REG_A6XX_CP_PROTECT_CNTL,
>>>> + REG_A6XX_CP_PROTECT(0),
>>>> + REG_A6XX_CP_PROTECT(1),
>>>> + REG_A6XX_CP_PROTECT(2),
>>>> + REG_A6XX_CP_PROTECT(3),
>>>> + REG_A6XX_CP_PROTECT(4),
>>>> + REG_A6XX_CP_PROTECT(5),
>>>> + REG_A6XX_CP_PROTECT(6),
>>>> + REG_A6XX_CP_PROTECT(7),
>>>> + REG_A6XX_CP_PROTECT(8),
>>>> + REG_A6XX_CP_PROTECT(9),
>>>> + REG_A6XX_CP_PROTECT(10),
>>>> + REG_A6XX_CP_PROTECT(11),
>>>> + REG_A6XX_CP_PROTECT(12),
>>>> + REG_A6XX_CP_PROTECT(13),
>>>> + REG_A6XX_CP_PROTECT(14),
>>>> + REG_A6XX_CP_PROTECT(15),
>>>> + REG_A6XX_CP_PROTECT(16),
>>>> + REG_A6XX_CP_PROTECT(17),
>>>> + REG_A6XX_CP_PROTECT(18),
>>>> + REG_A6XX_CP_PROTECT(19),
>>>> + REG_A6XX_CP_PROTECT(20),
>>>> + REG_A6XX_CP_PROTECT(21),
>>>> + REG_A6XX_CP_PROTECT(22),
>>>> + REG_A6XX_CP_PROTECT(23),
>>>> + REG_A6XX_CP_PROTECT(24),
>>>> + REG_A6XX_CP_PROTECT(25),
>>>> + REG_A6XX_CP_PROTECT(26),
>>>> + REG_A6XX_CP_PROTECT(27),
>>>> + REG_A6XX_CP_PROTECT(28),
>>>> + REG_A6XX_CP_PROTECT(29),
>>>> + REG_A6XX_CP_PROTECT(30),
>>>> + REG_A6XX_CP_PROTECT(31),
>>>> + REG_A6XX_CP_PROTECT(32),
>>>> + REG_A6XX_CP_PROTECT(33),
>>>> + REG_A6XX_CP_PROTECT(34),
>>>> + REG_A6XX_CP_PROTECT(35),
>>>> + REG_A6XX_CP_PROTECT(36),
>>>> + REG_A6XX_CP_PROTECT(37),
>>>> + REG_A6XX_CP_PROTECT(38),
>>>> + REG_A6XX_CP_PROTECT(39),
>>>> + REG_A6XX_CP_PROTECT(40),
>>>> + REG_A6XX_CP_PROTECT(41),
>>>> + REG_A6XX_CP_PROTECT(42),
>>>> + REG_A6XX_CP_PROTECT(43),
>>>> + REG_A6XX_CP_PROTECT(44),
>>>> + REG_A6XX_CP_PROTECT(45),
>>>> + REG_A6XX_CP_PROTECT(46),
>>>> + REG_A6XX_CP_PROTECT(47),
>>>> + REG_A6XX_CP_AHB_CNTL,
>>>> +};
>>>> +
>>>> +
>>>> static inline bool _a6xx_check_idle(struct msm_gpu *gpu)
>>>> {
>>>> struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu);
>>>> @@ -68,6 +145,8 @@ static void update_shadow_rptr(struct msm_gpu *gpu, struct msm_ringbuffer *ring)
>>>>
>>>> static void a6xx_flush(struct msm_gpu *gpu, struct msm_ringbuffer *ring)
>>>> {
>>>> + struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu);
>>>> + struct a6xx_gpu *a6xx_gpu = to_a6xx_gpu(adreno_gpu);
>>>> uint32_t wptr;
>>>> unsigned long flags;
>>>>
>>>> @@ -81,12 +160,26 @@ static void a6xx_flush(struct msm_gpu *gpu, struct msm_ringbuffer *ring)
>>>> /* Make sure to wrap wptr if we need to */
>>>> wptr = get_wptr(ring);
>>>>
>>>> - spin_unlock_irqrestore(&ring->preempt_lock, flags);
>>>> -
>>>> /* Make sure everything is posted before making a decision */
>>>> mb();
>>>
>>> This looks unnecessary.
>>>
>>>>
>>>> - gpu_write(gpu, REG_A6XX_CP_RB_WPTR, wptr);
>>>> + /* Update HW if this is the current ring and we are not in preempt*/
>>>> + if (!a6xx_in_preempt(a6xx_gpu)) {
>>>> + /*
>>>> + * Order the reads of the preempt state and cur_ring. This
>>>> + * matches the barrier after writing cur_ring.
>>>> + */
>>>> + rmb();
>>>
>>> we can use the lighter smp variant here.
>>>
>>>> +
>>>> + if (a6xx_gpu->cur_ring == ring)
>>>> + gpu_write(gpu, REG_A6XX_CP_RB_WPTR, wptr);
>>>> + else
>>>> + ring->skip_inline_wptr = true;
>>>> + } else {
>>>> + ring->skip_inline_wptr = true;
>>>> + }
>>>> +
>>>> + spin_unlock_irqrestore(&ring->preempt_lock, flags);
>>>> }
>>>>
>>>> static void get_stats_counter(struct msm_ringbuffer *ring, u32 counter,
>>>> @@ -138,12 +231,14 @@ static void a6xx_set_pagetable(struct a6xx_gpu *a6xx_gpu,
>>>
>>> set_pagetable checks "cur_ctx_seqno" to see if pt switch is needed or
>>> not. This is currently not tracked separately for each ring. Can you
>>> please check that?
>>>
>>> I wonder why that didn't cause any gpu errors in testing. Not sure if I
>>> am missing something.
>>>
>>>>
>>>> /*
>>>> * Write the new TTBR0 to the memstore. This is good for debugging.
>>>> + * Needed for preemption
>>>> */
>>>> - OUT_PKT7(ring, CP_MEM_WRITE, 4);
>>>> + OUT_PKT7(ring, CP_MEM_WRITE, 5);
>>>> OUT_RING(ring, CP_MEM_WRITE_0_ADDR_LO(lower_32_bits(memptr)));
>>>> OUT_RING(ring, CP_MEM_WRITE_1_ADDR_HI(upper_32_bits(memptr)));
>>>> OUT_RING(ring, lower_32_bits(ttbr));
>>>> - OUT_RING(ring, (asid << 16) | upper_32_bits(ttbr));
>>>> + OUT_RING(ring, upper_32_bits(ttbr));
>>>> + OUT_RING(ring, ctx->seqno);
>>>>
>>>> /*
>>>> * Sync both threads after switching pagetables and enable BR only
>>>> @@ -268,6 +363,43 @@ static void a6xx_submit(struct msm_gpu *gpu, struct msm_gem_submit *submit)
>>>> a6xx_flush(gpu, ring);
>>>> }
>>>>
>>>> +static void a6xx_emit_set_pseudo_reg(struct msm_ringbuffer *ring,
>>>> + struct a6xx_gpu *a6xx_gpu, struct msm_gpu_submitqueue *queue)
>>>> +{
>>>> + u64 preempt_offset_priv_secure;
>>>> +
>>>> + OUT_PKT7(ring, CP_SET_PSEUDO_REG, 15);
>>>> +
>>>> + OUT_RING(ring, SMMU_INFO);
>>>> + /* don't save SMMU, we write the record from the kernel instead */
>>>> + OUT_RING(ring, 0);
>>>> + OUT_RING(ring, 0);
>>>> +
>>>> + /* privileged and non secure buffer save */
>>>> + OUT_RING(ring, NON_SECURE_SAVE_ADDR);
>>>> + OUT_RING(ring, lower_32_bits(
>>>> + a6xx_gpu->preempt_iova[ring->id] + PREEMPT_OFFSET_PRIV_NON_SECURE));
>>>> + OUT_RING(ring, upper_32_bits(
>>>> + a6xx_gpu->preempt_iova[ring->id] + PREEMPT_OFFSET_PRIV_NON_SECURE));
>>>> + OUT_RING(ring, SECURE_SAVE_ADDR);
>>>> + preempt_offset_priv_secure =
>>>> + PREEMPT_OFFSET_PRIV_SECURE(a6xx_gpu->base.info->preempt_record_size);
>>>> + OUT_RING(ring, lower_32_bits(
>>>> + a6xx_gpu->preempt_iova[ring->id] + preempt_offset_priv_secure));
>>>> + OUT_RING(ring, upper_32_bits(
>>>> + a6xx_gpu->preempt_iova[ring->id] + preempt_offset_priv_secure));
>>>> +
>>>> + /* user context buffer save, seems to be unnused by fw */
>>>> + OUT_RING(ring, NON_PRIV_SAVE_ADDR);
>>>> + OUT_RING(ring, 0);
>>>> + OUT_RING(ring, 0);
>>>> +
>>>> + OUT_RING(ring, COUNTER);
>>>> + /* seems OK to set to 0 to disable it */
>>>> + OUT_RING(ring, 0);
>>>> + OUT_RING(ring, 0);
>>>> +}
>>>> +
>>>> static void a7xx_submit(struct msm_gpu *gpu, struct msm_gem_submit *submit)
>>>> {
>>>> unsigned int index = submit->seqno % MSM_GPU_SUBMIT_STATS_COUNT;
>>>> @@ -283,6 +415,13 @@ static void a7xx_submit(struct msm_gpu *gpu, struct msm_gem_submit *submit)
>>>> OUT_PKT7(ring, CP_THREAD_CONTROL, 1);
>>>> OUT_RING(ring, CP_THREAD_CONTROL_0_SYNC_THREADS | CP_SET_THREAD_BR);
>>>>
>>>> + /*
>>>> + * If preemption is enabled, then set the pseudo register for the save
>>>> + * sequence
>>>> + */
>>>> + if (gpu->nr_rings > 1)
>>>> + a6xx_emit_set_pseudo_reg(ring, a6xx_gpu, submit->queue);
>>>
>>> Can we move this after set_pagetable()?
>>>
>>>> +
>>>> a6xx_set_pagetable(a6xx_gpu, ring, submit->queue->ctx);
>>>>
>>>> get_stats_counter(ring, REG_A7XX_RBBM_PERFCTR_CP(0),
>>>> @@ -376,6 +515,8 @@ static void a7xx_submit(struct msm_gpu *gpu, struct msm_gem_submit *submit)
>>>> OUT_RING(ring, upper_32_bits(rbmemptr(ring, bv_fence)));
>>>> OUT_RING(ring, submit->seqno);
>>>>
>>>> + a6xx_gpu->last_seqno[ring->id] = submit->seqno;
>>>> +
>>>> /* write the ringbuffer timestamp */
>>>> OUT_PKT7(ring, CP_EVENT_WRITE, 4);
>>>> OUT_RING(ring, CACHE_CLEAN | CP_EVENT_WRITE_0_IRQ | BIT(27));
>>>> @@ -389,10 +530,32 @@ static void a7xx_submit(struct msm_gpu *gpu, struct msm_gem_submit *submit)
>>>> OUT_PKT7(ring, CP_SET_MARKER, 1);
>>>> OUT_RING(ring, 0x100); /* IFPC enable */
>>>>
>>>> + /* If preemption is enabled */
>>>> + if (gpu->nr_rings > 1) {
>>>> + /* Yield the floor on command completion */
>>>> + OUT_PKT7(ring, CP_CONTEXT_SWITCH_YIELD, 4);
>>>> +
>>>> + /*
>>>> + * If dword[2:1] are non zero, they specify an address for
>>>> + * the CP to write the value of dword[3] to on preemption
>>>> + * complete. Write 0 to skip the write
>>>> + */
>>>> + OUT_RING(ring, 0x00);
>>>> + OUT_RING(ring, 0x00);
>>>> + /* Data value - not used if the address above is 0 */
>>>> + OUT_RING(ring, 0x01);
>>>> + /* generate interrupt on preemption completion */
>>>> + OUT_RING(ring, 0x00);
>>>> + }
>>>> +
>>>> +
>>>> trace_msm_gpu_submit_flush(submit,
>>>> gpu_read64(gpu, REG_A6XX_CP_ALWAYS_ON_COUNTER));
>>>>
>>>> a6xx_flush(gpu, ring);
>>>> +
>>>> + /* Check to see if we need to start preemption */
>>>> + a6xx_preempt_trigger(gpu);
>>>> }
>>>>
>>>> static void a6xx_set_hwcg(struct msm_gpu *gpu, bool state)
>>>> @@ -588,6 +751,89 @@ static void a6xx_set_ubwc_config(struct msm_gpu *gpu)
>>>> adreno_gpu->ubwc_config.min_acc_len << 23 | hbb_lo << 21);
>>>> }
>>>>
>>>> +static void a7xx_patch_pwrup_reglist(struct msm_gpu *gpu)
>>>> +{
>>>> + struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu);
>>>> + struct a6xx_gpu *a6xx_gpu = to_a6xx_gpu(adreno_gpu);
>>>> + struct adreno_reglist_list reglist[2];
>>>> + void *ptr = a6xx_gpu->pwrup_reglist_ptr;
>>>> + struct cpu_gpu_lock *lock = ptr;
>>>> + u32 *dest = (u32 *)&lock->regs[0];
>>>> + int i, j;
>>>> +
>>> This sequence is required only once. We can use a flag to check and bail out
>>> next time.
>>>
>>>> + lock->gpu_req = lock->cpu_req = lock->turn = 0;
>>>> + lock->ifpc_list_len = ARRAY_SIZE(a7xx_ifpc_pwrup_reglist);
>>>> + lock->preemption_list_len = ARRAY_SIZE(a7xx_pwrup_reglist);
>>>> +
>>>> + /* Static IFPC-only registers */
>>>> + reglist[0].regs = a7xx_ifpc_pwrup_reglist;
>>>> + reglist[0].count = ARRAY_SIZE(a7xx_ifpc_pwrup_reglist);
>>>> + lock->ifpc_list_len = reglist[0].count;
>>>> +
>>>> + /* Static IFPC + preemption registers */
>>>> + reglist[1].regs = a7xx_pwrup_reglist;
>>>> + reglist[1].count = ARRAY_SIZE(a7xx_pwrup_reglist);
>>>> + lock->preemption_list_len = reglist[1].count;
>>>> +
>>>> + /*
>>>> + * For each entry in each of the lists, write the offset and the current
>>>> + * register value into the GPU buffer
>>>> + */
>>>> + for (i = 0; i < 2; i++) {
>>>> + const u32 *r = reglist[i].regs;
>>>> +
>>>> + for (j = 0; j < reglist[i].count; j++) {
>>>> + *dest++ = r[j];
>>>> + *dest++ = gpu_read(gpu, r[j]);
>>>> + }
>>>> + }
>>>> +
>>>> + /*
>>>> + * The overall register list is composed of
>>>> + * 1. Static IFPC-only registers
>>>> + * 2. Static IFPC + preemption registers
>>>> + * 3. Dynamic IFPC + preemption registers (ex: perfcounter selects)
>>>> + *
>>>> + * The first two lists are static. Size of these lists are stored as
>>>> + * number of pairs in ifpc_list_len and preemption_list_len
>>>> + * respectively. With concurrent binning, Some of the perfcounter
>>>> + * registers being virtualized, CP needs to know the pipe id to program
>>>> + * the aperture inorder to restore the same. Thus, third list is a
>>>> + * dynamic list with triplets as
>>>> + * (<aperture, shifted 12 bits> <address> <data>), and the length is
>>>> + * stored as number for triplets in dynamic_list_len.
>>>> + */
>>>> + lock->dynamic_list_len = 0;
>>>> +}
>>>> +
>>>> +static int a7xx_preempt_start(struct msm_gpu *gpu)
>>>> +{
>>>> + struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu);
>>>> + struct a6xx_gpu *a6xx_gpu = to_a6xx_gpu(adreno_gpu);
>>>> + struct msm_ringbuffer *ring = gpu->rb[0];
>>>> +
>>>> + if (gpu->nr_rings <= 1)
>>>> + return 0;
>>>> +
>>>> + /* Turn CP protection off */
>>>> + OUT_PKT7(ring, CP_SET_PROTECTED_MODE, 1);
>>>> + OUT_RING(ring, 0);
>>>> +
>>>> + a6xx_emit_set_pseudo_reg(ring, a6xx_gpu, NULL);
>>>> +
>>>> + /* Yield the floor on command completion */
>>>> + OUT_PKT7(ring, CP_CONTEXT_SWITCH_YIELD, 4);
>>>> + OUT_RING(ring, 0x00);
>>>> + OUT_RING(ring, 0x00);
>>>> + OUT_RING(ring, 0x01);
>>>
>>> Looks like kgsl use 0x00 here. Not sure if that matters!
>>>
>>>> + /* Generate interrupt on preemption completion */
>>>> + OUT_RING(ring, 0x00);
>>>> +
>>>> + a6xx_flush(gpu, ring);
>>>> +
>>>> + return a6xx_idle(gpu, ring) ? 0 : -EINVAL;
>>>> +}
>>>> +
>>>> static int a6xx_cp_init(struct msm_gpu *gpu)
>>>> {
>>>> struct msm_ringbuffer *ring = gpu->rb[0];
>>>> @@ -619,6 +865,8 @@ static int a6xx_cp_init(struct msm_gpu *gpu)
>>>>
>>>> static int a7xx_cp_init(struct msm_gpu *gpu)
>>>> {
>>>> + struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu);
>>>> + struct a6xx_gpu *a6xx_gpu = to_a6xx_gpu(adreno_gpu);
>>>> struct msm_ringbuffer *ring = gpu->rb[0];
>>>> u32 mask;
>>>>
>>>> @@ -626,6 +874,8 @@ static int a7xx_cp_init(struct msm_gpu *gpu)
>>>> OUT_PKT7(ring, CP_THREAD_CONTROL, 1);
>>>> OUT_RING(ring, BIT(27));
>>>>
>>>> + a7xx_patch_pwrup_reglist(gpu);
>>>> +
>>>
>>> Looks out of place. I guess you kept it here to avoid an extra a7x
>>> check. At least we should move this before the above pm4 packets.
>>>
>>>> OUT_PKT7(ring, CP_ME_INIT, 7);
>>>>
>>>> /* Use multiple HW contexts */
>>>> @@ -656,11 +906,11 @@ static int a7xx_cp_init(struct msm_gpu *gpu)
>>>>
>>>> /* *Don't* send a power up reg list for concurrent binning (TODO) */
>>>> /* Lo address */
>>>> - OUT_RING(ring, 0x00000000);
>>>> + OUT_RING(ring, lower_32_bits(a6xx_gpu->pwrup_reglist_iova));
>>>> /* Hi address */
>>>> - OUT_RING(ring, 0x00000000);
>>>> + OUT_RING(ring, upper_32_bits(a6xx_gpu->pwrup_reglist_iova));
>>>> /* BIT(31) set => read the regs from the list */
>>>> - OUT_RING(ring, 0x00000000);
>>>> + OUT_RING(ring, BIT(31));
>>>>
>>>> a6xx_flush(gpu, ring);
>>>> return a6xx_idle(gpu, ring) ? 0 : -EINVAL;
>>>> @@ -784,6 +1034,16 @@ static int a6xx_ucode_load(struct msm_gpu *gpu)
>>>> msm_gem_object_set_name(a6xx_gpu->shadow_bo, "shadow");
>>>> }
>>>>
>>>> + a6xx_gpu->pwrup_reglist_ptr = msm_gem_kernel_new(gpu->dev, PAGE_SIZE,
>>>> + MSM_BO_WC | MSM_BO_MAP_PRIV,
>>>> + gpu->aspace, &a6xx_gpu->pwrup_reglist_bo,
>>>> + &a6xx_gpu->pwrup_reglist_iova);
>>>> +
>>>> + if (IS_ERR(a6xx_gpu->pwrup_reglist_ptr))
>>>> + return PTR_ERR(a6xx_gpu->pwrup_reglist_ptr);
>>>> +
>>>> + msm_gem_object_set_name(a6xx_gpu->pwrup_reglist_bo, "pwrup_reglist");
>>>> +
>>>> return 0;
>>>> }
>>>>
>>>> @@ -1127,6 +1387,8 @@ static int hw_init(struct msm_gpu *gpu)
>>>> if (a6xx_gpu->shadow_bo) {
>>>> gpu_write64(gpu, REG_A6XX_CP_RB_RPTR_ADDR,
>>>> shadowptr(a6xx_gpu, gpu->rb[0]));
>>>> + for (unsigned int i = 0; i < gpu->nr_rings; i++)
>>>> + a6xx_gpu->shadow[i] = 0;
>>>> }
>>>>
>>>> /* ..which means "always" on A7xx, also for BV shadow */
>>>> @@ -1135,6 +1397,8 @@ static int hw_init(struct msm_gpu *gpu)
>>>> rbmemptr(gpu->rb[0], bv_rptr));
>>>> }
>>>>
>>>> + a6xx_preempt_hw_init(gpu);
>>>> +
>>>> /* Always come up on rb 0 */
>>>> a6xx_gpu->cur_ring = gpu->rb[0];
>>>>
>>>> @@ -1180,6 +1444,10 @@ static int hw_init(struct msm_gpu *gpu)
>>>> out:
>>>> if (adreno_has_gmu_wrapper(adreno_gpu))
>>>> return ret;
>>>> +
>>>> + /* Last step - yield the ringbuffer */
>>>> + a7xx_preempt_start(gpu);
>>>> +
>>>> /*
>>>> * Tell the GMU that we are done touching the GPU and it can start power
>>>> * management
>>>> @@ -1557,8 +1825,13 @@ static irqreturn_t a6xx_irq(struct msm_gpu *gpu)
>>>> if (status & A6XX_RBBM_INT_0_MASK_SWFUSEVIOLATION)
>>>> a7xx_sw_fuse_violation_irq(gpu);
>>>>
>>>> - if (status & A6XX_RBBM_INT_0_MASK_CP_CACHE_FLUSH_TS)
>>>> + if (status & A6XX_RBBM_INT_0_MASK_CP_CACHE_FLUSH_TS) {
>>>> msm_gpu_retire(gpu);
>>>> + a6xx_preempt_trigger(gpu);
>>>> + }
>>>> +
>>>> + if (status & A6XX_RBBM_INT_0_MASK_CP_SW)
>>>> + a6xx_preempt_irq(gpu);
>>>>
>>>> return IRQ_HANDLED;
>>>> }
>>>> @@ -2331,6 +2604,8 @@ struct msm_gpu *a6xx_gpu_init(struct drm_device *dev)
>>>> a6xx_fault_handler);
>>>>
>>>> a6xx_calc_ubwc_config(adreno_gpu);
>>>> + /* Set up the preemption specific bits and pieces for each ringbuffer */
>>>> + a6xx_preempt_init(gpu);
>>>>
>>>> return gpu;
>>>> }
>>>> diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gpu.h b/drivers/gpu/drm/msm/adreno/a6xx_gpu.h
>>>> index e3e5c53ae8af..da10060e38dc 100644
>>>> --- a/drivers/gpu/drm/msm/adreno/a6xx_gpu.h
>>>> +++ b/drivers/gpu/drm/msm/adreno/a6xx_gpu.h
>>>> @@ -12,6 +12,31 @@
>>>>
>>>> extern bool hang_debug;
>>>>
>>>> +struct cpu_gpu_lock {
>>>> + uint32_t gpu_req;
>>>> + uint32_t cpu_req;
>>>> + uint32_t turn;
>>>> + union {
>>>> + struct {
>>>> + uint16_t list_length;
>>>> + uint16_t list_offset;
>>>> + };
>>>> + struct {
>>>> + uint8_t ifpc_list_len;
>>>> + uint8_t preemption_list_len;
>>>> + uint16_t dynamic_list_len;
>>>> + };
>>>> + };
>>>> + uint64_t regs[62];
>>>> +};
>>>> +
>>>> +struct adreno_reglist_list {
>>>> + /** @reg: List of register **/
>>>> + const u32 *regs;
>>>> + /** @count: Number of registers in the list **/
>>>> + u32 count;
>>>> +};
>>>> +
>>>> /**
>>>> * struct a6xx_info - a6xx specific information from device table
>>>> *
>>>> @@ -31,6 +56,20 @@ struct a6xx_gpu {
>>>> uint64_t sqe_iova;
>>>>
>>>> struct msm_ringbuffer *cur_ring;
>>>> + struct msm_ringbuffer *next_ring;
>>>> +
>>>> + struct drm_gem_object *preempt_bo[MSM_GPU_MAX_RINGS];
>>>> + void *preempt[MSM_GPU_MAX_RINGS];
>>>> + uint64_t preempt_iova[MSM_GPU_MAX_RINGS];
>>>> + uint32_t last_seqno[MSM_GPU_MAX_RINGS];
>>>> +
>>>> + atomic_t preempt_state;
>>>> + spinlock_t eval_lock;
>>>> + struct timer_list preempt_timer;
>>>> +
>>>> + unsigned int preempt_level;
>>>> + bool uses_gmem;
>>>> + bool skip_save_restore;
>>>>
>>>> struct a6xx_gmu gmu;
>>>>
>>>> @@ -38,6 +77,10 @@ struct a6xx_gpu {
>>>> uint64_t shadow_iova;
>>>> uint32_t *shadow;
>>>>
>>>> + struct drm_gem_object *pwrup_reglist_bo;
>>>> + void *pwrup_reglist_ptr;
>>>> + uint64_t pwrup_reglist_iova;
>>>> +
>>>> bool has_whereami;
>>>>
>>>> void __iomem *llc_mmio;
>>>> @@ -49,6 +92,105 @@ struct a6xx_gpu {
>>>>
>>>> #define to_a6xx_gpu(x) container_of(x, struct a6xx_gpu, base)
>>>>
>>>> +/*
>>>> + * In order to do lockless preemption we use a simple state machine to progress
>>>> + * through the process.
>>>> + *
>>>> + * PREEMPT_NONE - no preemption in progress. Next state START.
>>>> + * PREEMPT_START - The trigger is evaluating if preemption is possible. Next
>>>> + * states: TRIGGERED, NONE
>>>> + * PREEMPT_FINISH - An intermediate state before moving back to NONE. Next
>>>> + * state: NONE.
>>>> + * PREEMPT_TRIGGERED: A preemption has been executed on the hardware. Next
>>>> + * states: FAULTED, PENDING
>>>> + * PREEMPT_FAULTED: A preemption timed out (never completed). This will trigger
>>>> + * recovery. Next state: N/A
>>>> + * PREEMPT_PENDING: Preemption complete interrupt fired - the callback is
>>>> + * checking the success of the operation. Next state: FAULTED, NONE.
>>>> + */
>>>> +
>>>> +enum a6xx_preempt_state {
>>>> + PREEMPT_NONE = 0,
>>>> + PREEMPT_START,
>>>> + PREEMPT_FINISH,
>>>> + PREEMPT_TRIGGERED,
>>>> + PREEMPT_FAULTED,
>>>> + PREEMPT_PENDING,
>>>> +};
>>>> +
>>>> +/*
>>>> + * struct a6xx_preempt_record is a shared buffer between the microcode and the
>>>> + * CPU to store the state for preemption. The record itself is much larger
>>>> + * (2112k) but most of that is used by the CP for storage.
>>>> + *
>>>> + * There is a preemption record assigned per ringbuffer. When the CPU triggers a
>>>> + * preemption, it fills out the record with the useful information (wptr, ring
>>>> + * base, etc) and the microcode uses that information to set up the CP following
>>>> + * the preemption. When a ring is switched out, the CP will save the ringbuffer
>>>> + * state back to the record. In this way, once the records are properly set up
>>>> + * the CPU can quickly switch back and forth between ringbuffers by only
>>>> + * updating a few registers (often only the wptr).
>>>> + *
>>>> + * These are the CPU aware registers in the record:
>>>> + * @magic: Must always be 0xAE399D6EUL
>>>> + * @info: Type of the record - written 0 by the CPU, updated by the CP
>>>> + * @errno: preemption error record
>>>> + * @data: Data field in YIELD and SET_MARKER packets, Written and used by CP
>>>> + * @cntl: Value of RB_CNTL written by CPU, save/restored by CP
>>>> + * @rptr: Value of RB_RPTR written by CPU, save/restored by CP
>>>> + * @wptr: Value of RB_WPTR written by CPU, save/restored by CP
>>>> + * @_pad: Reserved/padding
>>>> + * @rptr_addr: Value of RB_RPTR_ADDR_LO|HI written by CPU, save/restored by CP
>>>> + * @rbase: Value of RB_BASE written by CPU, save/restored by CP
>>>> + * @counter: GPU address of the storage area for the preemption counters
>>>
>>> doc missing for bv_rptr_addr.
>>>
>>>> + */
>>>> +struct a6xx_preempt_record {
>>>> + u32 magic;
>>>> + u32 info;
>>>> + u32 errno;
>>>> + u32 data;
>>>> + u32 cntl;
>>>> + u32 rptr;
>>>> + u32 wptr;
>>>> + u32 _pad;
>>>> + u64 rptr_addr;
>>>> + u64 rbase;
>>>> + u64 counter;
>>>> + u64 bv_rptr_addr;
>>>> +};
>>>> +
>>>> +#define A6XX_PREEMPT_RECORD_MAGIC 0xAE399D6EUL
>>>> +
>>>> +#define PREEMPT_RECORD_SIZE_FALLBACK(size) \
>>>> + ((size) == 0 ? 4192 * SZ_1K : (size))
>>>> +
>>>> +#define PREEMPT_OFFSET_SMMU_INFO 0
>>>> +#define PREEMPT_OFFSET_PRIV_NON_SECURE (PREEMPT_OFFSET_SMMU_INFO + 4096)
>>>> +#define PREEMPT_OFFSET_PRIV_SECURE(size) \
>>>> + (PREEMPT_OFFSET_PRIV_NON_SECURE + PREEMPT_RECORD_SIZE_FALLBACK(size))
>>>> +#define PREEMPT_SIZE(size) \
>>>> + (PREEMPT_OFFSET_PRIV_SECURE(size) + PREEMPT_RECORD_SIZE_FALLBACK(size))
>>>> +
>>>> +/*
>>>> + * The preemption counter block is a storage area for the value of the
>>>> + * preemption counters that are saved immediately before context switch. We
>>>> + * append it on to the end of the allocation for the preemption record.
>>>> + */
>>>> +#define A6XX_PREEMPT_COUNTER_SIZE (16 * 4)
>>>> +
>>>> +#define A6XX_PREEMPT_USER_RECORD_SIZE (192 * 1024)
>>>
>>> Unused.
>>>
>>>> +
>>>> +struct a7xx_cp_smmu_info {
>>>> + u32 magic;
>>>> + u32 _pad4;
>>>> + u64 ttbr0;
>>>> + u32 asid;
>>>> + u32 context_idr;
>>>> + u32 context_bank;
>>>> +};
>>>> +
>>>> +#define GEN7_CP_SMMU_INFO_MAGIC 0x241350d5UL
>>>> +
>>>> /*
>>>> * Given a register and a count, return a value to program into
>>>> * REG_CP_PROTECT_REG(n) - this will block both reads and writes for
>>>> @@ -106,6 +248,25 @@ int a6xx_gmu_init(struct a6xx_gpu *a6xx_gpu, struct device_node *node);
>>>> int a6xx_gmu_wrapper_init(struct a6xx_gpu *a6xx_gpu, struct device_node *node);
>>>> void a6xx_gmu_remove(struct a6xx_gpu *a6xx_gpu);
>>>>
>>>> +void a6xx_preempt_init(struct msm_gpu *gpu);
>>>> +void a6xx_preempt_hw_init(struct msm_gpu *gpu);
>>>> +void a6xx_preempt_trigger(struct msm_gpu *gpu);
>>>> +void a6xx_preempt_irq(struct msm_gpu *gpu);
>>>> +void a6xx_preempt_fini(struct msm_gpu *gpu);
>>>> +int a6xx_preempt_submitqueue_setup(struct msm_gpu *gpu,
>>>> + struct msm_gpu_submitqueue *queue);
>>>> +void a6xx_preempt_submitqueue_close(struct msm_gpu *gpu,
>>>> + struct msm_gpu_submitqueue *queue);
>>>> +
>>>> +/* Return true if we are in a preempt state */
>>>> +static inline bool a6xx_in_preempt(struct a6xx_gpu *a6xx_gpu)
>>>> +{
>>>> + int preempt_state = atomic_read(&a6xx_gpu->preempt_state);
>>>
>>> I think we should keep a matching barrier before the 'read' similar to the one used in the
>>> set_preempt_state helper.
>>
>> Good idea, but for the one case we found where it matters (the
>> a6xx_flush() vs. updating the ring in a6xx_preempt_irq() race) the
>> barrier needs to be after the read. The sequence is something like:
>>
>> Thread A:
>>
>> a6xx_gpu->cur_ring = a6xx_gpu->next_ring;
>> a6xx_gpu->preempt_state = PREEMPT_FINISH;
>>
>> Thread B:
>>
>> read a6xx_gpu->preempt_state;
>> read a6xx_gpu->cur_ring;
>>
>> And if the read to preempt_state returns PREEMPT_FINISH, then we need
>> cur_ring to reflect the ring we switched to. (I discovered this the
>> hard way from debugging deadlocks...)
>>
>> So, maybe add a smp_rmb() before and after, then drop the explicit
>> barrier in a6xx_flush()?
>
> Ack. I think it is better to use a helper similar to set_preempt_state()
> and consistently use that everywhere.
Do you mean something for setting cur_ring? There is only one place
where that would be used (besides two other places where the initial
value is set).
>
>>
>>>
>>>> +
>>>> + return !(preempt_state == PREEMPT_NONE ||
>>>> + preempt_state == PREEMPT_FINISH);
>>>> +}
>>>> +
>>>> void a6xx_gmu_set_freq(struct msm_gpu *gpu, struct dev_pm_opp *opp,
>>>> bool suspended);
>>>> unsigned long a6xx_gmu_get_freq(struct msm_gpu *gpu);
>>>> diff --git a/drivers/gpu/drm/msm/adreno/a6xx_preempt.c b/drivers/gpu/drm/msm/adreno/a6xx_preempt.c
>>>> new file mode 100644
>>>> index 000000000000..1caff76aca6e
>>>> --- /dev/null
>>>> +++ b/drivers/gpu/drm/msm/adreno/a6xx_preempt.c
>>>> @@ -0,0 +1,391 @@
>>>> +// SPDX-License-Identifier: GPL-2.0
>>>> +/* Copyright (c) 2018, The Linux Foundation. All rights reserved. */
>>>> +/* Copyright (c) 2023 Collabora, Ltd. */
>>>> +/* Copyright (c) 2024 Valve Corporation */
>>>> +
>>>> +#include "msm_gem.h"
>>>> +#include "a6xx_gpu.h"
>>>> +#include "a6xx_gmu.xml.h"
>>>> +#include "msm_mmu.h"
>>>> +
>>>> +/*
>>>> + * Try to transition the preemption state from old to new. Return
>>>> + * true on success or false if the original state wasn't 'old'
>>>> + */
>>>> +static inline bool try_preempt_state(struct a6xx_gpu *a6xx_gpu,
>>>> + enum a6xx_preempt_state old, enum a6xx_preempt_state new)
>>>> +{
>>>> + enum a6xx_preempt_state cur = atomic_cmpxchg(&a6xx_gpu->preempt_state,
>>>> + old, new);
>>>> +
>>>> + return (cur == old);
>>>> +}
>>>> +
>>>> +/*
>>>> + * Force the preemption state to the specified state. This is used in cases
>>>> + * where the current state is known and won't change
>>>> + */
>>>> +static inline void set_preempt_state(struct a6xx_gpu *gpu,
>>>> + enum a6xx_preempt_state new)
>>>> +{
>>>> + /*
>>>> + * preempt_state may be read by other cores trying to trigger a
>>>> + * preemption or in the interrupt handler so barriers are needed
>>>> + * before...
>>>> + */
>>>> + smp_mb__before_atomic();
>>>> + atomic_set(&gpu->preempt_state, new);
>>>> + /* ... and after*/
>>>> + smp_mb__after_atomic();
>>>> +}
>>>> +
>>>> +/* Write the most recent wptr for the given ring into the hardware */
>>>> +static inline void update_wptr(struct msm_gpu *gpu, struct msm_ringbuffer *ring)
>>>> +{
>>>> + unsigned long flags;
>>>> + uint32_t wptr;
>>>> +
>>>> + if (!ring)
>>>
>>> Is this ever true?
>>>
>>>> + return;
>>>> +
>>>> + spin_lock_irqsave(&ring->preempt_lock, flags);
>>>> +
>>>> + if (ring->skip_inline_wptr) {
>>>> + wptr = get_wptr(ring);
>>>> +
>>>> + gpu_write(gpu, REG_A6XX_CP_RB_WPTR, wptr);
>>>> +
>>>> + ring->skip_inline_wptr = false;
>>>> + }
>>>> +
>>>> + spin_unlock_irqrestore(&ring->preempt_lock, flags);
>>>> +}
>>>> +
>>>> +/* Return the highest priority ringbuffer with something in it */
>>>> +static struct msm_ringbuffer *get_next_ring(struct msm_gpu *gpu)
>>>> +{
>>>> + struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu);
>>>> + struct a6xx_gpu *a6xx_gpu = to_a6xx_gpu(adreno_gpu);
>>>> +
>>>> + unsigned long flags;
>>>> + int i;
>>>> +
>>>> + for (i = 0; i < gpu->nr_rings; i++) {
>>>> + bool empty;
>>>> + struct msm_ringbuffer *ring = gpu->rb[i];
>>>> +
>>>> + spin_lock_irqsave(&ring->preempt_lock, flags);
>>>> + empty = (get_wptr(ring) == gpu->funcs->get_rptr(gpu, ring));
>>>> + if (!empty && ring == a6xx_gpu->cur_ring)
>>>> + empty = ring->memptrs->fence == a6xx_gpu->last_seqno[i];
>>>> + spin_unlock_irqrestore(&ring->preempt_lock, flags);
>>>> +
>>>> + if (!empty)
>>>> + return ring;
>>>> + }
>>>> +
>>>> + return NULL;
>>>> +}
>>>> +
>>>> +static void a6xx_preempt_timer(struct timer_list *t)
>>>> +{
>>>> + struct a6xx_gpu *a6xx_gpu = from_timer(a6xx_gpu, t, preempt_timer);
>>>> + struct msm_gpu *gpu = &a6xx_gpu->base.base;
>>>> + struct drm_device *dev = gpu->dev;
>>>> +
>>>> + if (!try_preempt_state(a6xx_gpu, PREEMPT_TRIGGERED, PREEMPT_FAULTED))
>>>> + return;
>>>> +
>>>> + dev_err(dev->dev, "%s: preemption timed out\n", gpu->name);
>>>> + kthread_queue_work(gpu->worker, &gpu->recover_work);
>>>> +}
>>>> +
>>>> +void a6xx_preempt_irq(struct msm_gpu *gpu)
>>>> +{
>>>> + uint32_t status;
>>>> + struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu);
>>>> + struct a6xx_gpu *a6xx_gpu = to_a6xx_gpu(adreno_gpu);
>>>> + struct drm_device *dev = gpu->dev;
>>>> +
>>>> + if (!try_preempt_state(a6xx_gpu, PREEMPT_TRIGGERED, PREEMPT_PENDING))
>>>> + return;
>>>> +
>>>> + /* Delete the preemption watchdog timer */
>>>> + del_timer(&a6xx_gpu->preempt_timer);
>>>> +
>>>> + /*
>>>> + * The hardware should be setting the stop bit of CP_CONTEXT_SWITCH_CNTL
>>>> + * to zero before firing the interrupt, but there is a non zero chance
>>>> + * of a hardware condition or a software race that could set it again
>>>> + * before we have a chance to finish. If that happens, log and go for
>>>> + * recovery
>>>> + */
>>>> + status = gpu_read(gpu, REG_A6XX_CP_CONTEXT_SWITCH_CNTL);
>>>> + if (unlikely(status & A6XX_CP_CONTEXT_SWITCH_CNTL_STOP)) {
>>>> + DRM_DEV_ERROR(&gpu->pdev->dev,
>>>> + "!!!!!!!!!!!!!!!! preemption faulted !!!!!!!!!!!!!! irq\n");
>>>> + set_preempt_state(a6xx_gpu, PREEMPT_FAULTED);
>>>> + dev_err(dev->dev, "%s: Preemption failed to complete\n",
>>>> + gpu->name);
>>>> + kthread_queue_work(gpu->worker, &gpu->recover_work);
>>>> + return;
>>>> + }
>>>> +
>>>> + a6xx_gpu->cur_ring = a6xx_gpu->next_ring;
>>>> + a6xx_gpu->next_ring = NULL;
>>>> +
>>>> + /* Make sure the write to cur_ring is posted before the change in state */
>>>> + wmb();
>>>
>>> Not needed. set_preempt_state has the necessary barrier.
>>>
>>>> +
>>>> + set_preempt_state(a6xx_gpu, PREEMPT_FINISH);
>>>> +
>>>> + update_wptr(gpu, a6xx_gpu->cur_ring);
>>>> +
>>>> + set_preempt_state(a6xx_gpu, PREEMPT_NONE);
>>>> +
>>>> + /*
>>>> + * Retrigger preemption to avoid a deadlock that might occur when preemption
>>>> + * is skipped due to it being already in flight when requested.
>>>> + */
>>>> + a6xx_preempt_trigger(gpu);
>>>> +}
>>>> +
>>>> +void a6xx_preempt_hw_init(struct msm_gpu *gpu)
>>>> +{
>>>> + struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu);
>>>> + struct a6xx_gpu *a6xx_gpu = to_a6xx_gpu(adreno_gpu);
>>>> + int i;
>>>> +
>>>> + /* No preemption if we only have one ring */
>>>> + if (gpu->nr_rings == 1)
>>>> + return;
>>>> +
>>>> + for (i = 0; i < gpu->nr_rings; i++) {
>>>> + struct a6xx_preempt_record *record_ptr =
>>>> + a6xx_gpu->preempt[i] + PREEMPT_OFFSET_PRIV_NON_SECURE;
>>>> + record_ptr->wptr = 0;
>>>> + record_ptr->rptr = 0;
>>>> + record_ptr->rptr_addr = shadowptr(a6xx_gpu, gpu->rb[i]);
>>>> + record_ptr->info = 0;
>>>> + record_ptr->data = 0;
>>>> + record_ptr->rbase = gpu->rb[i]->iova;
>>>> + }
>>>> +
>>>> + /* Write a 0 to signal that we aren't switching pagetables */
>>>> + gpu_write64(gpu, REG_A6XX_CP_CONTEXT_SWITCH_SMMU_INFO, 0);
>>>> +
>>>> + /* Enable the GMEM save/restore feature for preemption */
>>>> + gpu_write(gpu, REG_A6XX_RB_CONTEXT_SWITCH_GMEM_SAVE_RESTORE, 0x1);
>>>> +
>>>> + /* Reset the preemption state */
>>>> + set_preempt_state(a6xx_gpu, PREEMPT_NONE);
>>>> +
>>>> + spin_lock_init(&a6xx_gpu->eval_lock);
>>>> +
>>>> + /* Always come up on rb 0 */
>>>> + a6xx_gpu->cur_ring = gpu->rb[0];
>>>> +}
>>>> +
>>>> +void a6xx_preempt_trigger(struct msm_gpu *gpu)
>>>> +{
>>>> + struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu);
>>>> + struct a6xx_gpu *a6xx_gpu = to_a6xx_gpu(adreno_gpu);
>>>> + u64 preempt_offset_priv_secure;
>>>> + unsigned long flags;
>>>> + struct msm_ringbuffer *ring;
>>>> + unsigned int cntl;
>>>> +
>>>> + if (gpu->nr_rings == 1)
>>>> + return;
>>>> +
>>>> + /*
>>>> + * Lock to make sure another thread attempting preemption doesn't skip it
>>>> + * while we are still evaluating the next ring. This makes sure the other
>>>> + * thread does start preemption if we abort it and avoids a soft lock.
>>>> + */
>>>> + spin_lock_irqsave(&a6xx_gpu->eval_lock, flags);
>>>> +
>>>> + /*
>>>> + * Try to start preemption by moving from NONE to START. If
>>>> + * unsuccessful, a preemption is already in flight
>>>> + */
>>>> + if (!try_preempt_state(a6xx_gpu, PREEMPT_NONE, PREEMPT_START)) {
>>>> + spin_unlock_irqrestore(&a6xx_gpu->eval_lock, flags);
>>>> + return;
>>>> + }
>>>> +
>>>> + cntl = A6XX_CP_CONTEXT_SWITCH_CNTL_LEVEL(a6xx_gpu->preempt_level);
>>>> +
>>>> + if (a6xx_gpu->skip_save_restore)
>>>> + cntl |= A6XX_CP_CONTEXT_SWITCH_CNTL_SKIP_SAVE_RESTORE;
>>>> +
>>>> + if (a6xx_gpu->uses_gmem)
>>>> + cntl |= A6XX_CP_CONTEXT_SWITCH_CNTL_USES_GMEM;
>>>> +
>>>> + cntl |= A6XX_CP_CONTEXT_SWITCH_CNTL_STOP;
>>>> +
>>>> + /* Get the next ring to preempt to */
>>>> + ring = get_next_ring(gpu);
>>>> +
>>>> + /*
>>>> + * If no ring is populated or the highest priority ring is the current
>>>> + * one do nothing except to update the wptr to the latest and greatest
>>>> + */
>>>> + if (!ring || (a6xx_gpu->cur_ring == ring)) {
>>>> + set_preempt_state(a6xx_gpu, PREEMPT_FINISH);
>>>> + update_wptr(gpu, a6xx_gpu->cur_ring);
>>>> + set_preempt_state(a6xx_gpu, PREEMPT_NONE);
>>>> + spin_unlock_irqrestore(&a6xx_gpu->eval_lock, flags);
>>>> + return;
>>>> + }
>>>> +
>>>> + spin_unlock_irqrestore(&a6xx_gpu->eval_lock, flags);
>>>> +
>>>> + spin_lock_irqsave(&ring->preempt_lock, flags);
>>>> +
>>>> + struct a7xx_cp_smmu_info *smmu_info_ptr =
>>>> + a6xx_gpu->preempt[ring->id] + PREEMPT_OFFSET_SMMU_INFO;
>>>> + struct a6xx_preempt_record *record_ptr =
>>>> + a6xx_gpu->preempt[ring->id] + PREEMPT_OFFSET_PRIV_NON_SECURE;
>>>> + u64 ttbr0 = ring->memptrs->ttbr0;
>>>> + u32 context_idr = ring->memptrs->context_idr;
>>>> +
>>>> + smmu_info_ptr->ttbr0 = ttbr0;
>>>> + smmu_info_ptr->context_idr = context_idr;
>>>> + record_ptr->wptr = get_wptr(ring);
>>>> +
>>>> + /*
>>>> + * The GPU will write the wptr we set above when we preempt. Reset
>>>> + * skip_inline_wptr to make sure that we don't write WPTR to the same
>>>> + * thing twice. It's still possible subsequent submissions will update
>>>> + * wptr again, in which case they will set the flag to true. This has
>>>> + * to be protected by the lock for setting the flag and updating wptr
>>>> + * to be atomic.
>>>> + */
>>>> + ring->skip_inline_wptr = false;
>>>> +
>>>> + spin_unlock_irqrestore(&ring->preempt_lock, flags);
>>>> +
>>>> + gpu_write64(gpu,
>>>> + REG_A6XX_CP_CONTEXT_SWITCH_SMMU_INFO,
>>>> + a6xx_gpu->preempt_iova[ring->id] + PREEMPT_OFFSET_SMMU_INFO);
>>>> +
>>>> + gpu_write64(gpu,
>>>> + REG_A6XX_CP_CONTEXT_SWITCH_PRIV_NON_SECURE_RESTORE_ADDR,
>>>> + a6xx_gpu->preempt_iova[ring->id] + PREEMPT_OFFSET_PRIV_NON_SECURE);
>>>> +
>>>> + preempt_offset_priv_secure =
>>>> + PREEMPT_OFFSET_PRIV_SECURE(adreno_gpu->info->preempt_record_size);
>>>> + gpu_write64(gpu,
>>>> + REG_A6XX_CP_CONTEXT_SWITCH_PRIV_SECURE_RESTORE_ADDR,
>>>> + a6xx_gpu->preempt_iova[ring->id] + preempt_offset_priv_secure);
>>>
>>> Secure buffers are not supported currently, so we can skip this and the
>>> context record allocation. Anyway this has to be a separate buffer
>>> mapped in secure pagetable which don't currently have. We can skip the
>>> same in pseudo register packet too.
>>>
>>>> +
>>>> + a6xx_gpu->next_ring = ring;
>>>> +
>>>> + /* Start a timer to catch a stuck preemption */
>>>> + mod_timer(&a6xx_gpu->preempt_timer, jiffies + msecs_to_jiffies(10000));
>>>> +
>>>> + /* Set the preemption state to triggered */
>>>> + set_preempt_state(a6xx_gpu, PREEMPT_TRIGGERED);
>>>> +
>>>> + /* Make sure any previous writes to WPTR are posted */
>>>> + gpu_read(gpu, REG_A6XX_CP_RB_WPTR);
>>>> +
>>>> + /* Make sure everything is written before hitting the button */
>>>> + wmb();
>>>
>>> This and the above read back looks unnecessary. All writes to gpu are
>>> ordered anyway.
>>
>> I thought the whole reason for
>> https://lore.kernel.org/linux-kernel/20240508-topic-adreno-v1-1-1babd05c119d@linaro.org/
>> is that memory-mapped writes to different GPU registers are *not*
>> necessarily ordered from the GPU's perspective (even if they are from
>> the CPU). That's why I suggested the readback. Or am I missing
>> something?
>
> Lets consider that GBIF unhalt sequence as an exception. Generally, we
> can consider writes to gpu registers to be ordered.
>
> -Akhil.
>
>>
>>>
>>>> +
>>>> + /* Trigger the preemption */
>>>> + gpu_write(gpu, REG_A6XX_CP_CONTEXT_SWITCH_CNTL, cntl);
>>>> +}
>>>> +
>>>> +static int preempt_init_ring(struct a6xx_gpu *a6xx_gpu,
>>>> + struct msm_ringbuffer *ring)
>>>> +{
>>>> + struct adreno_gpu *adreno_gpu = &a6xx_gpu->base;
>>>> + struct msm_gpu *gpu = &adreno_gpu->base;
>>>> + struct drm_gem_object *bo = NULL;
>>>> + phys_addr_t ttbr;
>>>> + u64 iova = 0;
>>>> + void *ptr;
>>>> + int asid;
>>>> +
>>>> + ptr = msm_gem_kernel_new(gpu->dev,
>>>> + PREEMPT_SIZE(adreno_gpu->info->preempt_record_size),
>>>> + MSM_BO_WC | MSM_BO_MAP_PRIV, gpu->aspace, &bo, &iova);
>>>
>>> set a name?
>>>
>>>> +
>>>> + memset(ptr, 0, PREEMPT_SIZE(adreno_gpu->info->preempt_record_size));
>>>> +
>>>> + if (IS_ERR(ptr))
>>>> + return PTR_ERR(ptr);
>>>> +
>>>> + a6xx_gpu->preempt_bo[ring->id] = bo;
>>>> + a6xx_gpu->preempt_iova[ring->id] = iova;
>>>> + a6xx_gpu->preempt[ring->id] = ptr;
>>>> +
>>>> + struct a7xx_cp_smmu_info *smmu_info_ptr = ptr + PREEMPT_OFFSET_SMMU_INFO;
>>>> + struct a6xx_preempt_record *record_ptr = ptr + PREEMPT_OFFSET_PRIV_NON_SECURE;
>>>> +
>>>> + msm_iommu_pagetable_params(gpu->aspace->mmu, &ttbr, &asid);
>>>> +
>>>> + smmu_info_ptr->magic = GEN7_CP_SMMU_INFO_MAGIC;
>>>> + smmu_info_ptr->ttbr0 = ttbr;
>>>> + smmu_info_ptr->asid = 0xdecafbad;
>>>> + smmu_info_ptr->context_idr = 0;
>>>> +
>>>> + /* Set up the defaults on the preemption record */
>>>> + record_ptr->magic = A6XX_PREEMPT_RECORD_MAGIC;
>>>> + record_ptr->info = 0;
>>>> + record_ptr->data = 0;
>>>> + record_ptr->rptr = 0;
>>>> + record_ptr->wptr = 0;
>>>> + record_ptr->cntl = MSM_GPU_RB_CNTL_DEFAULT;
>>>> + record_ptr->rbase = ring->iova;
>>>> + record_ptr->counter = 0;
>>>> + record_ptr->bv_rptr_addr = rbmemptr(ring, bv_rptr);
>>>> +
>>>> + return 0;
>>>> +}
>>>> +
>>>> +void a6xx_preempt_fini(struct msm_gpu *gpu)
>>>> +{
>>>> + struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu);
>>>> + struct a6xx_gpu *a6xx_gpu = to_a6xx_gpu(adreno_gpu);
>>>> + int i;
>>>> +
>>>> + for (i = 0; i < gpu->nr_rings; i++)
>>>> + msm_gem_kernel_put(a6xx_gpu->preempt_bo[i], gpu->aspace);
>>>> +}
>>>> +
>>>> +void a6xx_preempt_init(struct msm_gpu *gpu)
>>>> +{
>>>> + struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu);
>>>> + struct a6xx_gpu *a6xx_gpu = to_a6xx_gpu(adreno_gpu);
>>>> + int i;
>>>> +
>>>> + /* No preemption if we only have one ring */
>>>> + if (gpu->nr_rings <= 1)
>>>> + return;
>>>> +
>>>> + for (i = 0; i < gpu->nr_rings; i++) {
>>>> + if (preempt_init_ring(a6xx_gpu, gpu->rb[i]))
>>>> + goto fail;
>>>> + }
>>>> +
>>>> + /* TODO: make this configurable? */
>>>> + a6xx_gpu->preempt_level = 1;
>>>> + a6xx_gpu->uses_gmem = 1;
>>>> + a6xx_gpu->skip_save_restore = 1;
>>>> +
>>>> + timer_setup(&a6xx_gpu->preempt_timer, a6xx_preempt_timer, 0);
>>>> +
>>>> + return;
>>>> +fail:
>>>
>>> Log an error so that preemption is not disabled silently?
>>>
>>>> + /*
>>>> + * On any failure our adventure is over. Clean up and
>>>> + * set nr_rings to 1 to force preemption off
>>>> + */
>>>> + a6xx_preempt_fini(gpu);
>>>> + gpu->nr_rings = 1;
>>>> +
>>>> + return;
>>>> +}
>>>> diff --git a/drivers/gpu/drm/msm/msm_ringbuffer.h b/drivers/gpu/drm/msm/msm_ringbuffer.h
>>>> index 40791b2ade46..7dde6a312511 100644
>>>> --- a/drivers/gpu/drm/msm/msm_ringbuffer.h
>>>> +++ b/drivers/gpu/drm/msm/msm_ringbuffer.h
>>>> @@ -36,6 +36,7 @@ struct msm_rbmemptrs {
>>>>
>>>> volatile struct msm_gpu_submit_stats stats[MSM_GPU_SUBMIT_STATS_COUNT];
>>>> volatile u64 ttbr0;
>>>> + volatile u32 context_idr;
>>>> };
>>>>
>>>> struct msm_cp_state {
>>>> @@ -100,6 +101,12 @@ struct msm_ringbuffer {
>>>> * preemption. Can be aquired from irq context.
>>>> */
>>>> spinlock_t preempt_lock;
>>>> +
>>>> + /*
>>>> + * Whether we skipped writing wptr and it needs to be updated in the
>>>> + * future when the ring becomes current.
>>>> + */
>>>> + bool skip_inline_wptr;
>>>
>>> nit: does 'restore_wptr' makes more sense? Or something better? Basically, name it based
>>> on the future action?
>>>
>>> -Akhil
>>>
>>>> };
>>>>
>>>> struct msm_ringbuffer *msm_ringbuffer_new(struct msm_gpu *gpu, int id,
>>>>
>>>> --
>>>> 2.46.0
>>>>
Best regards,
--
Antonino Maniscalco <antomani103@gmail.com>
^ permalink raw reply [flat|nested] 32+ messages in thread* Re: [PATCH v3 04/10] drm/msm/A6xx: Implement preemption for A7XX targets
2024-09-12 15:48 ` Antonino Maniscalco
@ 2024-09-16 17:40 ` Akhil P Oommen
0 siblings, 0 replies; 32+ messages in thread
From: Akhil P Oommen @ 2024-09-16 17:40 UTC (permalink / raw)
To: Antonino Maniscalco
Cc: Connor Abbott, Rob Clark, Sean Paul, Konrad Dybcio, Abhinav Kumar,
Dmitry Baryshkov, Marijn Suijten, David Airlie, Daniel Vetter,
Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann,
Jonathan Corbet, linux-arm-msm, dri-devel, freedreno,
linux-kernel, linux-doc, Sharat Masetty, Neil Armstrong
On Thu, Sep 12, 2024 at 05:48:45PM +0200, Antonino Maniscalco wrote:
> On 9/10/24 6:43 PM, Akhil P Oommen wrote:
> > On Mon, Sep 09, 2024 at 01:22:22PM +0100, Connor Abbott wrote:
> > > On Fri, Sep 6, 2024 at 9:03 PM Akhil P Oommen <quic_akhilpo@quicinc.com> wrote:
> > > >
> > > > On Thu, Sep 05, 2024 at 04:51:22PM +0200, Antonino Maniscalco wrote:
> > > > > This patch implements preemption feature for A6xx targets, this allows
> > > > > the GPU to switch to a higher priority ringbuffer if one is ready. A6XX
> > > > > hardware as such supports multiple levels of preemption granularities,
> > > > > ranging from coarse grained(ringbuffer level) to a more fine grained
> > > > > such as draw-call level or a bin boundary level preemption. This patch
> > > > > enables the basic preemption level, with more fine grained preemption
> > > > > support to follow.
> > > > >
> > > > > Signed-off-by: Sharat Masetty <smasetty@codeaurora.org>
> > > > > Signed-off-by: Antonino Maniscalco <antomani103@gmail.com>
> > > > > Tested-by: Neil Armstrong <neil.armstrong@linaro.org> # on SM8650-QRD
> > > > > ---
> > > > > drivers/gpu/drm/msm/Makefile | 1 +
> > > > > drivers/gpu/drm/msm/adreno/a6xx_gpu.c | 293 +++++++++++++++++++++-
> > > > > drivers/gpu/drm/msm/adreno/a6xx_gpu.h | 161 ++++++++++++
> > > > > drivers/gpu/drm/msm/adreno/a6xx_preempt.c | 391 ++++++++++++++++++++++++++++++
> > > > > drivers/gpu/drm/msm/msm_ringbuffer.h | 7 +
> > > > > 5 files changed, 844 insertions(+), 9 deletions(-)
> > > > >
> > > > > diff --git a/drivers/gpu/drm/msm/Makefile b/drivers/gpu/drm/msm/Makefile
> > > > > index f5e2838c6a76..32e915109a59 100644
> > > > > --- a/drivers/gpu/drm/msm/Makefile
> > > > > +++ b/drivers/gpu/drm/msm/Makefile
> > > > > @@ -23,6 +23,7 @@ adreno-y := \
> > > > > adreno/a6xx_gpu.o \
> > > > > adreno/a6xx_gmu.o \
> > > > > adreno/a6xx_hfi.o \
> > > > > + adreno/a6xx_preempt.o \
> > > > >
> > > > > adreno-$(CONFIG_DEBUG_FS) += adreno/a5xx_debugfs.o \
> > > > >
> > > > > diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
> > > > > index 32a4faa93d7f..ed0b138a2d66 100644
> > > > > --- a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
> > > > > +++ b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
> > > > > @@ -16,6 +16,83 @@
> > > > >
> > > > > #define GPU_PAS_ID 13
> > > > >
> > > > > +/* IFPC & Preemption static powerup restore list */
> > > > > +static const uint32_t a7xx_pwrup_reglist[] = {
> > > > > + REG_A6XX_UCHE_TRAP_BASE,
> > > > > + REG_A6XX_UCHE_TRAP_BASE + 1,
> > > > > + REG_A6XX_UCHE_WRITE_THRU_BASE,
> > > > > + REG_A6XX_UCHE_WRITE_THRU_BASE + 1,
> > > > > + REG_A6XX_UCHE_GMEM_RANGE_MIN,
> > > > > + REG_A6XX_UCHE_GMEM_RANGE_MIN + 1,
> > > > > + REG_A6XX_UCHE_GMEM_RANGE_MAX,
> > > > > + REG_A6XX_UCHE_GMEM_RANGE_MAX + 1,
> > > > > + REG_A6XX_UCHE_CACHE_WAYS,
> > > > > + REG_A6XX_UCHE_MODE_CNTL,
> > > > > + REG_A6XX_RB_NC_MODE_CNTL,
> > > > > + REG_A6XX_RB_CMP_DBG_ECO_CNTL,
> > > > > + REG_A7XX_GRAS_NC_MODE_CNTL,
> > > > > + REG_A6XX_RB_CONTEXT_SWITCH_GMEM_SAVE_RESTORE,
> > > > > + REG_A6XX_UCHE_GBIF_GX_CONFIG,
> > > > > + REG_A6XX_UCHE_CLIENT_PF,
> > > >
> > > > REG_A6XX_TPL1_DBG_ECO_CNTL1 here. A friendly warning, missing a register
> > > > in this list (and the below list) will lead to a very frustrating debug.
> > > >
> > > > > +};
> > > > > +
> > > > > +static const uint32_t a7xx_ifpc_pwrup_reglist[] = {
> > > > > + REG_A6XX_TPL1_NC_MODE_CNTL,
> > > > > + REG_A6XX_SP_NC_MODE_CNTL,
> > > > > + REG_A6XX_CP_DBG_ECO_CNTL,
> > > > > + REG_A6XX_CP_PROTECT_CNTL,
> > > > > + REG_A6XX_CP_PROTECT(0),
> > > > > + REG_A6XX_CP_PROTECT(1),
> > > > > + REG_A6XX_CP_PROTECT(2),
> > > > > + REG_A6XX_CP_PROTECT(3),
> > > > > + REG_A6XX_CP_PROTECT(4),
> > > > > + REG_A6XX_CP_PROTECT(5),
> > > > > + REG_A6XX_CP_PROTECT(6),
> > > > > + REG_A6XX_CP_PROTECT(7),
> > > > > + REG_A6XX_CP_PROTECT(8),
> > > > > + REG_A6XX_CP_PROTECT(9),
> > > > > + REG_A6XX_CP_PROTECT(10),
> > > > > + REG_A6XX_CP_PROTECT(11),
> > > > > + REG_A6XX_CP_PROTECT(12),
> > > > > + REG_A6XX_CP_PROTECT(13),
> > > > > + REG_A6XX_CP_PROTECT(14),
> > > > > + REG_A6XX_CP_PROTECT(15),
> > > > > + REG_A6XX_CP_PROTECT(16),
> > > > > + REG_A6XX_CP_PROTECT(17),
> > > > > + REG_A6XX_CP_PROTECT(18),
> > > > > + REG_A6XX_CP_PROTECT(19),
> > > > > + REG_A6XX_CP_PROTECT(20),
> > > > > + REG_A6XX_CP_PROTECT(21),
> > > > > + REG_A6XX_CP_PROTECT(22),
> > > > > + REG_A6XX_CP_PROTECT(23),
> > > > > + REG_A6XX_CP_PROTECT(24),
> > > > > + REG_A6XX_CP_PROTECT(25),
> > > > > + REG_A6XX_CP_PROTECT(26),
> > > > > + REG_A6XX_CP_PROTECT(27),
> > > > > + REG_A6XX_CP_PROTECT(28),
> > > > > + REG_A6XX_CP_PROTECT(29),
> > > > > + REG_A6XX_CP_PROTECT(30),
> > > > > + REG_A6XX_CP_PROTECT(31),
> > > > > + REG_A6XX_CP_PROTECT(32),
> > > > > + REG_A6XX_CP_PROTECT(33),
> > > > > + REG_A6XX_CP_PROTECT(34),
> > > > > + REG_A6XX_CP_PROTECT(35),
> > > > > + REG_A6XX_CP_PROTECT(36),
> > > > > + REG_A6XX_CP_PROTECT(37),
> > > > > + REG_A6XX_CP_PROTECT(38),
> > > > > + REG_A6XX_CP_PROTECT(39),
> > > > > + REG_A6XX_CP_PROTECT(40),
> > > > > + REG_A6XX_CP_PROTECT(41),
> > > > > + REG_A6XX_CP_PROTECT(42),
> > > > > + REG_A6XX_CP_PROTECT(43),
> > > > > + REG_A6XX_CP_PROTECT(44),
> > > > > + REG_A6XX_CP_PROTECT(45),
> > > > > + REG_A6XX_CP_PROTECT(46),
> > > > > + REG_A6XX_CP_PROTECT(47),
> > > > > + REG_A6XX_CP_AHB_CNTL,
> > > > > +};
> > > > > +
> > > > > +
> > > > > static inline bool _a6xx_check_idle(struct msm_gpu *gpu)
> > > > > {
> > > > > struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu);
> > > > > @@ -68,6 +145,8 @@ static void update_shadow_rptr(struct msm_gpu *gpu, struct msm_ringbuffer *ring)
> > > > >
> > > > > static void a6xx_flush(struct msm_gpu *gpu, struct msm_ringbuffer *ring)
> > > > > {
> > > > > + struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu);
> > > > > + struct a6xx_gpu *a6xx_gpu = to_a6xx_gpu(adreno_gpu);
> > > > > uint32_t wptr;
> > > > > unsigned long flags;
> > > > >
> > > > > @@ -81,12 +160,26 @@ static void a6xx_flush(struct msm_gpu *gpu, struct msm_ringbuffer *ring)
> > > > > /* Make sure to wrap wptr if we need to */
> > > > > wptr = get_wptr(ring);
> > > > >
> > > > > - spin_unlock_irqrestore(&ring->preempt_lock, flags);
> > > > > -
> > > > > /* Make sure everything is posted before making a decision */
> > > > > mb();
> > > >
> > > > This looks unnecessary.
> > > >
> > > > >
> > > > > - gpu_write(gpu, REG_A6XX_CP_RB_WPTR, wptr);
> > > > > + /* Update HW if this is the current ring and we are not in preempt*/
> > > > > + if (!a6xx_in_preempt(a6xx_gpu)) {
> > > > > + /*
> > > > > + * Order the reads of the preempt state and cur_ring. This
> > > > > + * matches the barrier after writing cur_ring.
> > > > > + */
> > > > > + rmb();
> > > >
> > > > we can use the lighter smp variant here.
> > > >
> > > > > +
> > > > > + if (a6xx_gpu->cur_ring == ring)
> > > > > + gpu_write(gpu, REG_A6XX_CP_RB_WPTR, wptr);
> > > > > + else
> > > > > + ring->skip_inline_wptr = true;
> > > > > + } else {
> > > > > + ring->skip_inline_wptr = true;
> > > > > + }
> > > > > +
> > > > > + spin_unlock_irqrestore(&ring->preempt_lock, flags);
> > > > > }
> > > > >
> > > > > static void get_stats_counter(struct msm_ringbuffer *ring, u32 counter,
> > > > > @@ -138,12 +231,14 @@ static void a6xx_set_pagetable(struct a6xx_gpu *a6xx_gpu,
> > > >
> > > > set_pagetable checks "cur_ctx_seqno" to see if pt switch is needed or
> > > > not. This is currently not tracked separately for each ring. Can you
> > > > please check that?
> > > >
> > > > I wonder why that didn't cause any gpu errors in testing. Not sure if I
> > > > am missing something.
> > > >
> > > > >
> > > > > /*
> > > > > * Write the new TTBR0 to the memstore. This is good for debugging.
> > > > > + * Needed for preemption
> > > > > */
> > > > > - OUT_PKT7(ring, CP_MEM_WRITE, 4);
> > > > > + OUT_PKT7(ring, CP_MEM_WRITE, 5);
> > > > > OUT_RING(ring, CP_MEM_WRITE_0_ADDR_LO(lower_32_bits(memptr)));
> > > > > OUT_RING(ring, CP_MEM_WRITE_1_ADDR_HI(upper_32_bits(memptr)));
> > > > > OUT_RING(ring, lower_32_bits(ttbr));
> > > > > - OUT_RING(ring, (asid << 16) | upper_32_bits(ttbr));
> > > > > + OUT_RING(ring, upper_32_bits(ttbr));
> > > > > + OUT_RING(ring, ctx->seqno);
> > > > >
> > > > > /*
> > > > > * Sync both threads after switching pagetables and enable BR only
> > > > > @@ -268,6 +363,43 @@ static void a6xx_submit(struct msm_gpu *gpu, struct msm_gem_submit *submit)
> > > > > a6xx_flush(gpu, ring);
> > > > > }
> > > > >
> > > > > +static void a6xx_emit_set_pseudo_reg(struct msm_ringbuffer *ring,
> > > > > + struct a6xx_gpu *a6xx_gpu, struct msm_gpu_submitqueue *queue)
> > > > > +{
> > > > > + u64 preempt_offset_priv_secure;
> > > > > +
> > > > > + OUT_PKT7(ring, CP_SET_PSEUDO_REG, 15);
> > > > > +
> > > > > + OUT_RING(ring, SMMU_INFO);
> > > > > + /* don't save SMMU, we write the record from the kernel instead */
> > > > > + OUT_RING(ring, 0);
> > > > > + OUT_RING(ring, 0);
> > > > > +
> > > > > + /* privileged and non secure buffer save */
> > > > > + OUT_RING(ring, NON_SECURE_SAVE_ADDR);
> > > > > + OUT_RING(ring, lower_32_bits(
> > > > > + a6xx_gpu->preempt_iova[ring->id] + PREEMPT_OFFSET_PRIV_NON_SECURE));
> > > > > + OUT_RING(ring, upper_32_bits(
> > > > > + a6xx_gpu->preempt_iova[ring->id] + PREEMPT_OFFSET_PRIV_NON_SECURE));
> > > > > + OUT_RING(ring, SECURE_SAVE_ADDR);
> > > > > + preempt_offset_priv_secure =
> > > > > + PREEMPT_OFFSET_PRIV_SECURE(a6xx_gpu->base.info->preempt_record_size);
> > > > > + OUT_RING(ring, lower_32_bits(
> > > > > + a6xx_gpu->preempt_iova[ring->id] + preempt_offset_priv_secure));
> > > > > + OUT_RING(ring, upper_32_bits(
> > > > > + a6xx_gpu->preempt_iova[ring->id] + preempt_offset_priv_secure));
> > > > > +
> > > > > + /* user context buffer save, seems to be unnused by fw */
> > > > > + OUT_RING(ring, NON_PRIV_SAVE_ADDR);
> > > > > + OUT_RING(ring, 0);
> > > > > + OUT_RING(ring, 0);
> > > > > +
> > > > > + OUT_RING(ring, COUNTER);
> > > > > + /* seems OK to set to 0 to disable it */
> > > > > + OUT_RING(ring, 0);
> > > > > + OUT_RING(ring, 0);
> > > > > +}
> > > > > +
> > > > > static void a7xx_submit(struct msm_gpu *gpu, struct msm_gem_submit *submit)
> > > > > {
> > > > > unsigned int index = submit->seqno % MSM_GPU_SUBMIT_STATS_COUNT;
> > > > > @@ -283,6 +415,13 @@ static void a7xx_submit(struct msm_gpu *gpu, struct msm_gem_submit *submit)
> > > > > OUT_PKT7(ring, CP_THREAD_CONTROL, 1);
> > > > > OUT_RING(ring, CP_THREAD_CONTROL_0_SYNC_THREADS | CP_SET_THREAD_BR);
> > > > >
> > > > > + /*
> > > > > + * If preemption is enabled, then set the pseudo register for the save
> > > > > + * sequence
> > > > > + */
> > > > > + if (gpu->nr_rings > 1)
> > > > > + a6xx_emit_set_pseudo_reg(ring, a6xx_gpu, submit->queue);
> > > >
> > > > Can we move this after set_pagetable()?
> > > >
> > > > > +
> > > > > a6xx_set_pagetable(a6xx_gpu, ring, submit->queue->ctx);
> > > > >
> > > > > get_stats_counter(ring, REG_A7XX_RBBM_PERFCTR_CP(0),
> > > > > @@ -376,6 +515,8 @@ static void a7xx_submit(struct msm_gpu *gpu, struct msm_gem_submit *submit)
> > > > > OUT_RING(ring, upper_32_bits(rbmemptr(ring, bv_fence)));
> > > > > OUT_RING(ring, submit->seqno);
> > > > >
> > > > > + a6xx_gpu->last_seqno[ring->id] = submit->seqno;
> > > > > +
> > > > > /* write the ringbuffer timestamp */
> > > > > OUT_PKT7(ring, CP_EVENT_WRITE, 4);
> > > > > OUT_RING(ring, CACHE_CLEAN | CP_EVENT_WRITE_0_IRQ | BIT(27));
> > > > > @@ -389,10 +530,32 @@ static void a7xx_submit(struct msm_gpu *gpu, struct msm_gem_submit *submit)
> > > > > OUT_PKT7(ring, CP_SET_MARKER, 1);
> > > > > OUT_RING(ring, 0x100); /* IFPC enable */
> > > > >
> > > > > + /* If preemption is enabled */
> > > > > + if (gpu->nr_rings > 1) {
> > > > > + /* Yield the floor on command completion */
> > > > > + OUT_PKT7(ring, CP_CONTEXT_SWITCH_YIELD, 4);
> > > > > +
> > > > > + /*
> > > > > + * If dword[2:1] are non zero, they specify an address for
> > > > > + * the CP to write the value of dword[3] to on preemption
> > > > > + * complete. Write 0 to skip the write
> > > > > + */
> > > > > + OUT_RING(ring, 0x00);
> > > > > + OUT_RING(ring, 0x00);
> > > > > + /* Data value - not used if the address above is 0 */
> > > > > + OUT_RING(ring, 0x01);
> > > > > + /* generate interrupt on preemption completion */
> > > > > + OUT_RING(ring, 0x00);
> > > > > + }
> > > > > +
> > > > > +
> > > > > trace_msm_gpu_submit_flush(submit,
> > > > > gpu_read64(gpu, REG_A6XX_CP_ALWAYS_ON_COUNTER));
> > > > >
> > > > > a6xx_flush(gpu, ring);
> > > > > +
> > > > > + /* Check to see if we need to start preemption */
> > > > > + a6xx_preempt_trigger(gpu);
> > > > > }
> > > > >
> > > > > static void a6xx_set_hwcg(struct msm_gpu *gpu, bool state)
> > > > > @@ -588,6 +751,89 @@ static void a6xx_set_ubwc_config(struct msm_gpu *gpu)
> > > > > adreno_gpu->ubwc_config.min_acc_len << 23 | hbb_lo << 21);
> > > > > }
> > > > >
> > > > > +static void a7xx_patch_pwrup_reglist(struct msm_gpu *gpu)
> > > > > +{
> > > > > + struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu);
> > > > > + struct a6xx_gpu *a6xx_gpu = to_a6xx_gpu(adreno_gpu);
> > > > > + struct adreno_reglist_list reglist[2];
> > > > > + void *ptr = a6xx_gpu->pwrup_reglist_ptr;
> > > > > + struct cpu_gpu_lock *lock = ptr;
> > > > > + u32 *dest = (u32 *)&lock->regs[0];
> > > > > + int i, j;
> > > > > +
> > > > This sequence is required only once. We can use a flag to check and bail out
> > > > next time.
> > > >
> > > > > + lock->gpu_req = lock->cpu_req = lock->turn = 0;
> > > > > + lock->ifpc_list_len = ARRAY_SIZE(a7xx_ifpc_pwrup_reglist);
> > > > > + lock->preemption_list_len = ARRAY_SIZE(a7xx_pwrup_reglist);
> > > > > +
> > > > > + /* Static IFPC-only registers */
> > > > > + reglist[0].regs = a7xx_ifpc_pwrup_reglist;
> > > > > + reglist[0].count = ARRAY_SIZE(a7xx_ifpc_pwrup_reglist);
> > > > > + lock->ifpc_list_len = reglist[0].count;
> > > > > +
> > > > > + /* Static IFPC + preemption registers */
> > > > > + reglist[1].regs = a7xx_pwrup_reglist;
> > > > > + reglist[1].count = ARRAY_SIZE(a7xx_pwrup_reglist);
> > > > > + lock->preemption_list_len = reglist[1].count;
> > > > > +
> > > > > + /*
> > > > > + * For each entry in each of the lists, write the offset and the current
> > > > > + * register value into the GPU buffer
> > > > > + */
> > > > > + for (i = 0; i < 2; i++) {
> > > > > + const u32 *r = reglist[i].regs;
> > > > > +
> > > > > + for (j = 0; j < reglist[i].count; j++) {
> > > > > + *dest++ = r[j];
> > > > > + *dest++ = gpu_read(gpu, r[j]);
> > > > > + }
> > > > > + }
> > > > > +
> > > > > + /*
> > > > > + * The overall register list is composed of
> > > > > + * 1. Static IFPC-only registers
> > > > > + * 2. Static IFPC + preemption registers
> > > > > + * 3. Dynamic IFPC + preemption registers (ex: perfcounter selects)
> > > > > + *
> > > > > + * The first two lists are static. Size of these lists are stored as
> > > > > + * number of pairs in ifpc_list_len and preemption_list_len
> > > > > + * respectively. With concurrent binning, Some of the perfcounter
> > > > > + * registers being virtualized, CP needs to know the pipe id to program
> > > > > + * the aperture inorder to restore the same. Thus, third list is a
> > > > > + * dynamic list with triplets as
> > > > > + * (<aperture, shifted 12 bits> <address> <data>), and the length is
> > > > > + * stored as number for triplets in dynamic_list_len.
> > > > > + */
> > > > > + lock->dynamic_list_len = 0;
> > > > > +}
> > > > > +
> > > > > +static int a7xx_preempt_start(struct msm_gpu *gpu)
> > > > > +{
> > > > > + struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu);
> > > > > + struct a6xx_gpu *a6xx_gpu = to_a6xx_gpu(adreno_gpu);
> > > > > + struct msm_ringbuffer *ring = gpu->rb[0];
> > > > > +
> > > > > + if (gpu->nr_rings <= 1)
> > > > > + return 0;
> > > > > +
> > > > > + /* Turn CP protection off */
> > > > > + OUT_PKT7(ring, CP_SET_PROTECTED_MODE, 1);
> > > > > + OUT_RING(ring, 0);
> > > > > +
> > > > > + a6xx_emit_set_pseudo_reg(ring, a6xx_gpu, NULL);
> > > > > +
> > > > > + /* Yield the floor on command completion */
> > > > > + OUT_PKT7(ring, CP_CONTEXT_SWITCH_YIELD, 4);
> > > > > + OUT_RING(ring, 0x00);
> > > > > + OUT_RING(ring, 0x00);
> > > > > + OUT_RING(ring, 0x01);
> > > >
> > > > Looks like kgsl use 0x00 here. Not sure if that matters!
> > > >
> > > > > + /* Generate interrupt on preemption completion */
> > > > > + OUT_RING(ring, 0x00);
> > > > > +
> > > > > + a6xx_flush(gpu, ring);
> > > > > +
> > > > > + return a6xx_idle(gpu, ring) ? 0 : -EINVAL;
> > > > > +}
> > > > > +
> > > > > static int a6xx_cp_init(struct msm_gpu *gpu)
> > > > > {
> > > > > struct msm_ringbuffer *ring = gpu->rb[0];
> > > > > @@ -619,6 +865,8 @@ static int a6xx_cp_init(struct msm_gpu *gpu)
> > > > >
> > > > > static int a7xx_cp_init(struct msm_gpu *gpu)
> > > > > {
> > > > > + struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu);
> > > > > + struct a6xx_gpu *a6xx_gpu = to_a6xx_gpu(adreno_gpu);
> > > > > struct msm_ringbuffer *ring = gpu->rb[0];
> > > > > u32 mask;
> > > > >
> > > > > @@ -626,6 +874,8 @@ static int a7xx_cp_init(struct msm_gpu *gpu)
> > > > > OUT_PKT7(ring, CP_THREAD_CONTROL, 1);
> > > > > OUT_RING(ring, BIT(27));
> > > > >
> > > > > + a7xx_patch_pwrup_reglist(gpu);
> > > > > +
> > > >
> > > > Looks out of place. I guess you kept it here to avoid an extra a7x
> > > > check. At least we should move this before the above pm4 packets.
> > > >
> > > > > OUT_PKT7(ring, CP_ME_INIT, 7);
> > > > >
> > > > > /* Use multiple HW contexts */
> > > > > @@ -656,11 +906,11 @@ static int a7xx_cp_init(struct msm_gpu *gpu)
> > > > >
> > > > > /* *Don't* send a power up reg list for concurrent binning (TODO) */
> > > > > /* Lo address */
> > > > > - OUT_RING(ring, 0x00000000);
> > > > > + OUT_RING(ring, lower_32_bits(a6xx_gpu->pwrup_reglist_iova));
> > > > > /* Hi address */
> > > > > - OUT_RING(ring, 0x00000000);
> > > > > + OUT_RING(ring, upper_32_bits(a6xx_gpu->pwrup_reglist_iova));
> > > > > /* BIT(31) set => read the regs from the list */
> > > > > - OUT_RING(ring, 0x00000000);
> > > > > + OUT_RING(ring, BIT(31));
> > > > >
> > > > > a6xx_flush(gpu, ring);
> > > > > return a6xx_idle(gpu, ring) ? 0 : -EINVAL;
> > > > > @@ -784,6 +1034,16 @@ static int a6xx_ucode_load(struct msm_gpu *gpu)
> > > > > msm_gem_object_set_name(a6xx_gpu->shadow_bo, "shadow");
> > > > > }
> > > > >
> > > > > + a6xx_gpu->pwrup_reglist_ptr = msm_gem_kernel_new(gpu->dev, PAGE_SIZE,
> > > > > + MSM_BO_WC | MSM_BO_MAP_PRIV,
> > > > > + gpu->aspace, &a6xx_gpu->pwrup_reglist_bo,
> > > > > + &a6xx_gpu->pwrup_reglist_iova);
> > > > > +
> > > > > + if (IS_ERR(a6xx_gpu->pwrup_reglist_ptr))
> > > > > + return PTR_ERR(a6xx_gpu->pwrup_reglist_ptr);
> > > > > +
> > > > > + msm_gem_object_set_name(a6xx_gpu->pwrup_reglist_bo, "pwrup_reglist");
> > > > > +
> > > > > return 0;
> > > > > }
> > > > >
> > > > > @@ -1127,6 +1387,8 @@ static int hw_init(struct msm_gpu *gpu)
> > > > > if (a6xx_gpu->shadow_bo) {
> > > > > gpu_write64(gpu, REG_A6XX_CP_RB_RPTR_ADDR,
> > > > > shadowptr(a6xx_gpu, gpu->rb[0]));
> > > > > + for (unsigned int i = 0; i < gpu->nr_rings; i++)
> > > > > + a6xx_gpu->shadow[i] = 0;
> > > > > }
> > > > >
> > > > > /* ..which means "always" on A7xx, also for BV shadow */
> > > > > @@ -1135,6 +1397,8 @@ static int hw_init(struct msm_gpu *gpu)
> > > > > rbmemptr(gpu->rb[0], bv_rptr));
> > > > > }
> > > > >
> > > > > + a6xx_preempt_hw_init(gpu);
> > > > > +
> > > > > /* Always come up on rb 0 */
> > > > > a6xx_gpu->cur_ring = gpu->rb[0];
> > > > >
> > > > > @@ -1180,6 +1444,10 @@ static int hw_init(struct msm_gpu *gpu)
> > > > > out:
> > > > > if (adreno_has_gmu_wrapper(adreno_gpu))
> > > > > return ret;
> > > > > +
> > > > > + /* Last step - yield the ringbuffer */
> > > > > + a7xx_preempt_start(gpu);
> > > > > +
> > > > > /*
> > > > > * Tell the GMU that we are done touching the GPU and it can start power
> > > > > * management
> > > > > @@ -1557,8 +1825,13 @@ static irqreturn_t a6xx_irq(struct msm_gpu *gpu)
> > > > > if (status & A6XX_RBBM_INT_0_MASK_SWFUSEVIOLATION)
> > > > > a7xx_sw_fuse_violation_irq(gpu);
> > > > >
> > > > > - if (status & A6XX_RBBM_INT_0_MASK_CP_CACHE_FLUSH_TS)
> > > > > + if (status & A6XX_RBBM_INT_0_MASK_CP_CACHE_FLUSH_TS) {
> > > > > msm_gpu_retire(gpu);
> > > > > + a6xx_preempt_trigger(gpu);
> > > > > + }
> > > > > +
> > > > > + if (status & A6XX_RBBM_INT_0_MASK_CP_SW)
> > > > > + a6xx_preempt_irq(gpu);
> > > > >
> > > > > return IRQ_HANDLED;
> > > > > }
> > > > > @@ -2331,6 +2604,8 @@ struct msm_gpu *a6xx_gpu_init(struct drm_device *dev)
> > > > > a6xx_fault_handler);
> > > > >
> > > > > a6xx_calc_ubwc_config(adreno_gpu);
> > > > > + /* Set up the preemption specific bits and pieces for each ringbuffer */
> > > > > + a6xx_preempt_init(gpu);
> > > > >
> > > > > return gpu;
> > > > > }
> > > > > diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gpu.h b/drivers/gpu/drm/msm/adreno/a6xx_gpu.h
> > > > > index e3e5c53ae8af..da10060e38dc 100644
> > > > > --- a/drivers/gpu/drm/msm/adreno/a6xx_gpu.h
> > > > > +++ b/drivers/gpu/drm/msm/adreno/a6xx_gpu.h
> > > > > @@ -12,6 +12,31 @@
> > > > >
> > > > > extern bool hang_debug;
> > > > >
> > > > > +struct cpu_gpu_lock {
> > > > > + uint32_t gpu_req;
> > > > > + uint32_t cpu_req;
> > > > > + uint32_t turn;
> > > > > + union {
> > > > > + struct {
> > > > > + uint16_t list_length;
> > > > > + uint16_t list_offset;
> > > > > + };
> > > > > + struct {
> > > > > + uint8_t ifpc_list_len;
> > > > > + uint8_t preemption_list_len;
> > > > > + uint16_t dynamic_list_len;
> > > > > + };
> > > > > + };
> > > > > + uint64_t regs[62];
> > > > > +};
> > > > > +
> > > > > +struct adreno_reglist_list {
> > > > > + /** @reg: List of register **/
> > > > > + const u32 *regs;
> > > > > + /** @count: Number of registers in the list **/
> > > > > + u32 count;
> > > > > +};
> > > > > +
> > > > > /**
> > > > > * struct a6xx_info - a6xx specific information from device table
> > > > > *
> > > > > @@ -31,6 +56,20 @@ struct a6xx_gpu {
> > > > > uint64_t sqe_iova;
> > > > >
> > > > > struct msm_ringbuffer *cur_ring;
> > > > > + struct msm_ringbuffer *next_ring;
> > > > > +
> > > > > + struct drm_gem_object *preempt_bo[MSM_GPU_MAX_RINGS];
> > > > > + void *preempt[MSM_GPU_MAX_RINGS];
> > > > > + uint64_t preempt_iova[MSM_GPU_MAX_RINGS];
> > > > > + uint32_t last_seqno[MSM_GPU_MAX_RINGS];
> > > > > +
> > > > > + atomic_t preempt_state;
> > > > > + spinlock_t eval_lock;
> > > > > + struct timer_list preempt_timer;
> > > > > +
> > > > > + unsigned int preempt_level;
> > > > > + bool uses_gmem;
> > > > > + bool skip_save_restore;
> > > > >
> > > > > struct a6xx_gmu gmu;
> > > > >
> > > > > @@ -38,6 +77,10 @@ struct a6xx_gpu {
> > > > > uint64_t shadow_iova;
> > > > > uint32_t *shadow;
> > > > >
> > > > > + struct drm_gem_object *pwrup_reglist_bo;
> > > > > + void *pwrup_reglist_ptr;
> > > > > + uint64_t pwrup_reglist_iova;
> > > > > +
> > > > > bool has_whereami;
> > > > >
> > > > > void __iomem *llc_mmio;
> > > > > @@ -49,6 +92,105 @@ struct a6xx_gpu {
> > > > >
> > > > > #define to_a6xx_gpu(x) container_of(x, struct a6xx_gpu, base)
> > > > >
> > > > > +/*
> > > > > + * In order to do lockless preemption we use a simple state machine to progress
> > > > > + * through the process.
> > > > > + *
> > > > > + * PREEMPT_NONE - no preemption in progress. Next state START.
> > > > > + * PREEMPT_START - The trigger is evaluating if preemption is possible. Next
> > > > > + * states: TRIGGERED, NONE
> > > > > + * PREEMPT_FINISH - An intermediate state before moving back to NONE. Next
> > > > > + * state: NONE.
> > > > > + * PREEMPT_TRIGGERED: A preemption has been executed on the hardware. Next
> > > > > + * states: FAULTED, PENDING
> > > > > + * PREEMPT_FAULTED: A preemption timed out (never completed). This will trigger
> > > > > + * recovery. Next state: N/A
> > > > > + * PREEMPT_PENDING: Preemption complete interrupt fired - the callback is
> > > > > + * checking the success of the operation. Next state: FAULTED, NONE.
> > > > > + */
> > > > > +
> > > > > +enum a6xx_preempt_state {
> > > > > + PREEMPT_NONE = 0,
> > > > > + PREEMPT_START,
> > > > > + PREEMPT_FINISH,
> > > > > + PREEMPT_TRIGGERED,
> > > > > + PREEMPT_FAULTED,
> > > > > + PREEMPT_PENDING,
> > > > > +};
> > > > > +
> > > > > +/*
> > > > > + * struct a6xx_preempt_record is a shared buffer between the microcode and the
> > > > > + * CPU to store the state for preemption. The record itself is much larger
> > > > > + * (2112k) but most of that is used by the CP for storage.
> > > > > + *
> > > > > + * There is a preemption record assigned per ringbuffer. When the CPU triggers a
> > > > > + * preemption, it fills out the record with the useful information (wptr, ring
> > > > > + * base, etc) and the microcode uses that information to set up the CP following
> > > > > + * the preemption. When a ring is switched out, the CP will save the ringbuffer
> > > > > + * state back to the record. In this way, once the records are properly set up
> > > > > + * the CPU can quickly switch back and forth between ringbuffers by only
> > > > > + * updating a few registers (often only the wptr).
> > > > > + *
> > > > > + * These are the CPU aware registers in the record:
> > > > > + * @magic: Must always be 0xAE399D6EUL
> > > > > + * @info: Type of the record - written 0 by the CPU, updated by the CP
> > > > > + * @errno: preemption error record
> > > > > + * @data: Data field in YIELD and SET_MARKER packets, Written and used by CP
> > > > > + * @cntl: Value of RB_CNTL written by CPU, save/restored by CP
> > > > > + * @rptr: Value of RB_RPTR written by CPU, save/restored by CP
> > > > > + * @wptr: Value of RB_WPTR written by CPU, save/restored by CP
> > > > > + * @_pad: Reserved/padding
> > > > > + * @rptr_addr: Value of RB_RPTR_ADDR_LO|HI written by CPU, save/restored by CP
> > > > > + * @rbase: Value of RB_BASE written by CPU, save/restored by CP
> > > > > + * @counter: GPU address of the storage area for the preemption counters
> > > >
> > > > doc missing for bv_rptr_addr.
> > > >
> > > > > + */
> > > > > +struct a6xx_preempt_record {
> > > > > + u32 magic;
> > > > > + u32 info;
> > > > > + u32 errno;
> > > > > + u32 data;
> > > > > + u32 cntl;
> > > > > + u32 rptr;
> > > > > + u32 wptr;
> > > > > + u32 _pad;
> > > > > + u64 rptr_addr;
> > > > > + u64 rbase;
> > > > > + u64 counter;
> > > > > + u64 bv_rptr_addr;
> > > > > +};
> > > > > +
> > > > > +#define A6XX_PREEMPT_RECORD_MAGIC 0xAE399D6EUL
> > > > > +
> > > > > +#define PREEMPT_RECORD_SIZE_FALLBACK(size) \
> > > > > + ((size) == 0 ? 4192 * SZ_1K : (size))
> > > > > +
> > > > > +#define PREEMPT_OFFSET_SMMU_INFO 0
> > > > > +#define PREEMPT_OFFSET_PRIV_NON_SECURE (PREEMPT_OFFSET_SMMU_INFO + 4096)
> > > > > +#define PREEMPT_OFFSET_PRIV_SECURE(size) \
> > > > > + (PREEMPT_OFFSET_PRIV_NON_SECURE + PREEMPT_RECORD_SIZE_FALLBACK(size))
> > > > > +#define PREEMPT_SIZE(size) \
> > > > > + (PREEMPT_OFFSET_PRIV_SECURE(size) + PREEMPT_RECORD_SIZE_FALLBACK(size))
> > > > > +
> > > > > +/*
> > > > > + * The preemption counter block is a storage area for the value of the
> > > > > + * preemption counters that are saved immediately before context switch. We
> > > > > + * append it on to the end of the allocation for the preemption record.
> > > > > + */
> > > > > +#define A6XX_PREEMPT_COUNTER_SIZE (16 * 4)
> > > > > +
> > > > > +#define A6XX_PREEMPT_USER_RECORD_SIZE (192 * 1024)
> > > >
> > > > Unused.
> > > >
> > > > > +
> > > > > +struct a7xx_cp_smmu_info {
> > > > > + u32 magic;
> > > > > + u32 _pad4;
> > > > > + u64 ttbr0;
> > > > > + u32 asid;
> > > > > + u32 context_idr;
> > > > > + u32 context_bank;
> > > > > +};
> > > > > +
> > > > > +#define GEN7_CP_SMMU_INFO_MAGIC 0x241350d5UL
> > > > > +
> > > > > /*
> > > > > * Given a register and a count, return a value to program into
> > > > > * REG_CP_PROTECT_REG(n) - this will block both reads and writes for
> > > > > @@ -106,6 +248,25 @@ int a6xx_gmu_init(struct a6xx_gpu *a6xx_gpu, struct device_node *node);
> > > > > int a6xx_gmu_wrapper_init(struct a6xx_gpu *a6xx_gpu, struct device_node *node);
> > > > > void a6xx_gmu_remove(struct a6xx_gpu *a6xx_gpu);
> > > > >
> > > > > +void a6xx_preempt_init(struct msm_gpu *gpu);
> > > > > +void a6xx_preempt_hw_init(struct msm_gpu *gpu);
> > > > > +void a6xx_preempt_trigger(struct msm_gpu *gpu);
> > > > > +void a6xx_preempt_irq(struct msm_gpu *gpu);
> > > > > +void a6xx_preempt_fini(struct msm_gpu *gpu);
> > > > > +int a6xx_preempt_submitqueue_setup(struct msm_gpu *gpu,
> > > > > + struct msm_gpu_submitqueue *queue);
> > > > > +void a6xx_preempt_submitqueue_close(struct msm_gpu *gpu,
> > > > > + struct msm_gpu_submitqueue *queue);
> > > > > +
> > > > > +/* Return true if we are in a preempt state */
> > > > > +static inline bool a6xx_in_preempt(struct a6xx_gpu *a6xx_gpu)
> > > > > +{
> > > > > + int preempt_state = atomic_read(&a6xx_gpu->preempt_state);
> > > >
> > > > I think we should keep a matching barrier before the 'read' similar to the one used in the
> > > > set_preempt_state helper.
> > >
> > > Good idea, but for the one case we found where it matters (the
> > > a6xx_flush() vs. updating the ring in a6xx_preempt_irq() race) the
> > > barrier needs to be after the read. The sequence is something like:
> > >
> > > Thread A:
> > >
> > > a6xx_gpu->cur_ring = a6xx_gpu->next_ring;
> > > a6xx_gpu->preempt_state = PREEMPT_FINISH;
> > >
> > > Thread B:
> > >
> > > read a6xx_gpu->preempt_state;
> > > read a6xx_gpu->cur_ring;
> > >
> > > And if the read to preempt_state returns PREEMPT_FINISH, then we need
> > > cur_ring to reflect the ring we switched to. (I discovered this the
> > > hard way from debugging deadlocks...)
> > >
> > > So, maybe add a smp_rmb() before and after, then drop the explicit
> > > barrier in a6xx_flush()?
> >
> > Ack. I think it is better to use a helper similar to set_preempt_state()
> > and consistently use that everywhere.
>
> Do you mean something for setting cur_ring? There is only one place where
> that would be used (besides two other places where the initial value is
> set).
No. I meant a helper for reading preempt_state with necesary barriers.
Okay, it looks like this is the only place where we do a
atomic_read(preempt_state). It is fine to just add the barriers with
some documentation here.
-Akhil.
>
> >
> > >
> > > >
> > > > > +
> > > > > + return !(preempt_state == PREEMPT_NONE ||
> > > > > + preempt_state == PREEMPT_FINISH);
> > > > > +}
> > > > > +
> > > > > void a6xx_gmu_set_freq(struct msm_gpu *gpu, struct dev_pm_opp *opp,
> > > > > bool suspended);
> > > > > unsigned long a6xx_gmu_get_freq(struct msm_gpu *gpu);
> > > > > diff --git a/drivers/gpu/drm/msm/adreno/a6xx_preempt.c b/drivers/gpu/drm/msm/adreno/a6xx_preempt.c
> > > > > new file mode 100644
> > > > > index 000000000000..1caff76aca6e
> > > > > --- /dev/null
> > > > > +++ b/drivers/gpu/drm/msm/adreno/a6xx_preempt.c
> > > > > @@ -0,0 +1,391 @@
> > > > > +// SPDX-License-Identifier: GPL-2.0
> > > > > +/* Copyright (c) 2018, The Linux Foundation. All rights reserved. */
> > > > > +/* Copyright (c) 2023 Collabora, Ltd. */
> > > > > +/* Copyright (c) 2024 Valve Corporation */
> > > > > +
> > > > > +#include "msm_gem.h"
> > > > > +#include "a6xx_gpu.h"
> > > > > +#include "a6xx_gmu.xml.h"
> > > > > +#include "msm_mmu.h"
> > > > > +
> > > > > +/*
> > > > > + * Try to transition the preemption state from old to new. Return
> > > > > + * true on success or false if the original state wasn't 'old'
> > > > > + */
> > > > > +static inline bool try_preempt_state(struct a6xx_gpu *a6xx_gpu,
> > > > > + enum a6xx_preempt_state old, enum a6xx_preempt_state new)
> > > > > +{
> > > > > + enum a6xx_preempt_state cur = atomic_cmpxchg(&a6xx_gpu->preempt_state,
> > > > > + old, new);
> > > > > +
> > > > > + return (cur == old);
> > > > > +}
> > > > > +
> > > > > +/*
> > > > > + * Force the preemption state to the specified state. This is used in cases
> > > > > + * where the current state is known and won't change
> > > > > + */
> > > > > +static inline void set_preempt_state(struct a6xx_gpu *gpu,
> > > > > + enum a6xx_preempt_state new)
> > > > > +{
> > > > > + /*
> > > > > + * preempt_state may be read by other cores trying to trigger a
> > > > > + * preemption or in the interrupt handler so barriers are needed
> > > > > + * before...
> > > > > + */
> > > > > + smp_mb__before_atomic();
> > > > > + atomic_set(&gpu->preempt_state, new);
> > > > > + /* ... and after*/
> > > > > + smp_mb__after_atomic();
> > > > > +}
> > > > > +
> > > > > +/* Write the most recent wptr for the given ring into the hardware */
> > > > > +static inline void update_wptr(struct msm_gpu *gpu, struct msm_ringbuffer *ring)
> > > > > +{
> > > > > + unsigned long flags;
> > > > > + uint32_t wptr;
> > > > > +
> > > > > + if (!ring)
> > > >
> > > > Is this ever true?
> > > >
> > > > > + return;
> > > > > +
> > > > > + spin_lock_irqsave(&ring->preempt_lock, flags);
> > > > > +
> > > > > + if (ring->skip_inline_wptr) {
> > > > > + wptr = get_wptr(ring);
> > > > > +
> > > > > + gpu_write(gpu, REG_A6XX_CP_RB_WPTR, wptr);
> > > > > +
> > > > > + ring->skip_inline_wptr = false;
> > > > > + }
> > > > > +
> > > > > + spin_unlock_irqrestore(&ring->preempt_lock, flags);
> > > > > +}
> > > > > +
> > > > > +/* Return the highest priority ringbuffer with something in it */
> > > > > +static struct msm_ringbuffer *get_next_ring(struct msm_gpu *gpu)
> > > > > +{
> > > > > + struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu);
> > > > > + struct a6xx_gpu *a6xx_gpu = to_a6xx_gpu(adreno_gpu);
> > > > > +
> > > > > + unsigned long flags;
> > > > > + int i;
> > > > > +
> > > > > + for (i = 0; i < gpu->nr_rings; i++) {
> > > > > + bool empty;
> > > > > + struct msm_ringbuffer *ring = gpu->rb[i];
> > > > > +
> > > > > + spin_lock_irqsave(&ring->preempt_lock, flags);
> > > > > + empty = (get_wptr(ring) == gpu->funcs->get_rptr(gpu, ring));
> > > > > + if (!empty && ring == a6xx_gpu->cur_ring)
> > > > > + empty = ring->memptrs->fence == a6xx_gpu->last_seqno[i];
> > > > > + spin_unlock_irqrestore(&ring->preempt_lock, flags);
> > > > > +
> > > > > + if (!empty)
> > > > > + return ring;
> > > > > + }
> > > > > +
> > > > > + return NULL;
> > > > > +}
> > > > > +
> > > > > +static void a6xx_preempt_timer(struct timer_list *t)
> > > > > +{
> > > > > + struct a6xx_gpu *a6xx_gpu = from_timer(a6xx_gpu, t, preempt_timer);
> > > > > + struct msm_gpu *gpu = &a6xx_gpu->base.base;
> > > > > + struct drm_device *dev = gpu->dev;
> > > > > +
> > > > > + if (!try_preempt_state(a6xx_gpu, PREEMPT_TRIGGERED, PREEMPT_FAULTED))
> > > > > + return;
> > > > > +
> > > > > + dev_err(dev->dev, "%s: preemption timed out\n", gpu->name);
> > > > > + kthread_queue_work(gpu->worker, &gpu->recover_work);
> > > > > +}
> > > > > +
> > > > > +void a6xx_preempt_irq(struct msm_gpu *gpu)
> > > > > +{
> > > > > + uint32_t status;
> > > > > + struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu);
> > > > > + struct a6xx_gpu *a6xx_gpu = to_a6xx_gpu(adreno_gpu);
> > > > > + struct drm_device *dev = gpu->dev;
> > > > > +
> > > > > + if (!try_preempt_state(a6xx_gpu, PREEMPT_TRIGGERED, PREEMPT_PENDING))
> > > > > + return;
> > > > > +
> > > > > + /* Delete the preemption watchdog timer */
> > > > > + del_timer(&a6xx_gpu->preempt_timer);
> > > > > +
> > > > > + /*
> > > > > + * The hardware should be setting the stop bit of CP_CONTEXT_SWITCH_CNTL
> > > > > + * to zero before firing the interrupt, but there is a non zero chance
> > > > > + * of a hardware condition or a software race that could set it again
> > > > > + * before we have a chance to finish. If that happens, log and go for
> > > > > + * recovery
> > > > > + */
> > > > > + status = gpu_read(gpu, REG_A6XX_CP_CONTEXT_SWITCH_CNTL);
> > > > > + if (unlikely(status & A6XX_CP_CONTEXT_SWITCH_CNTL_STOP)) {
> > > > > + DRM_DEV_ERROR(&gpu->pdev->dev,
> > > > > + "!!!!!!!!!!!!!!!! preemption faulted !!!!!!!!!!!!!! irq\n");
> > > > > + set_preempt_state(a6xx_gpu, PREEMPT_FAULTED);
> > > > > + dev_err(dev->dev, "%s: Preemption failed to complete\n",
> > > > > + gpu->name);
> > > > > + kthread_queue_work(gpu->worker, &gpu->recover_work);
> > > > > + return;
> > > > > + }
> > > > > +
> > > > > + a6xx_gpu->cur_ring = a6xx_gpu->next_ring;
> > > > > + a6xx_gpu->next_ring = NULL;
> > > > > +
> > > > > + /* Make sure the write to cur_ring is posted before the change in state */
> > > > > + wmb();
> > > >
> > > > Not needed. set_preempt_state has the necessary barrier.
> > > >
> > > > > +
> > > > > + set_preempt_state(a6xx_gpu, PREEMPT_FINISH);
> > > > > +
> > > > > + update_wptr(gpu, a6xx_gpu->cur_ring);
> > > > > +
> > > > > + set_preempt_state(a6xx_gpu, PREEMPT_NONE);
> > > > > +
> > > > > + /*
> > > > > + * Retrigger preemption to avoid a deadlock that might occur when preemption
> > > > > + * is skipped due to it being already in flight when requested.
> > > > > + */
> > > > > + a6xx_preempt_trigger(gpu);
> > > > > +}
> > > > > +
> > > > > +void a6xx_preempt_hw_init(struct msm_gpu *gpu)
> > > > > +{
> > > > > + struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu);
> > > > > + struct a6xx_gpu *a6xx_gpu = to_a6xx_gpu(adreno_gpu);
> > > > > + int i;
> > > > > +
> > > > > + /* No preemption if we only have one ring */
> > > > > + if (gpu->nr_rings == 1)
> > > > > + return;
> > > > > +
> > > > > + for (i = 0; i < gpu->nr_rings; i++) {
> > > > > + struct a6xx_preempt_record *record_ptr =
> > > > > + a6xx_gpu->preempt[i] + PREEMPT_OFFSET_PRIV_NON_SECURE;
> > > > > + record_ptr->wptr = 0;
> > > > > + record_ptr->rptr = 0;
> > > > > + record_ptr->rptr_addr = shadowptr(a6xx_gpu, gpu->rb[i]);
> > > > > + record_ptr->info = 0;
> > > > > + record_ptr->data = 0;
> > > > > + record_ptr->rbase = gpu->rb[i]->iova;
> > > > > + }
> > > > > +
> > > > > + /* Write a 0 to signal that we aren't switching pagetables */
> > > > > + gpu_write64(gpu, REG_A6XX_CP_CONTEXT_SWITCH_SMMU_INFO, 0);
> > > > > +
> > > > > + /* Enable the GMEM save/restore feature for preemption */
> > > > > + gpu_write(gpu, REG_A6XX_RB_CONTEXT_SWITCH_GMEM_SAVE_RESTORE, 0x1);
> > > > > +
> > > > > + /* Reset the preemption state */
> > > > > + set_preempt_state(a6xx_gpu, PREEMPT_NONE);
> > > > > +
> > > > > + spin_lock_init(&a6xx_gpu->eval_lock);
> > > > > +
> > > > > + /* Always come up on rb 0 */
> > > > > + a6xx_gpu->cur_ring = gpu->rb[0];
> > > > > +}
> > > > > +
> > > > > +void a6xx_preempt_trigger(struct msm_gpu *gpu)
> > > > > +{
> > > > > + struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu);
> > > > > + struct a6xx_gpu *a6xx_gpu = to_a6xx_gpu(adreno_gpu);
> > > > > + u64 preempt_offset_priv_secure;
> > > > > + unsigned long flags;
> > > > > + struct msm_ringbuffer *ring;
> > > > > + unsigned int cntl;
> > > > > +
> > > > > + if (gpu->nr_rings == 1)
> > > > > + return;
> > > > > +
> > > > > + /*
> > > > > + * Lock to make sure another thread attempting preemption doesn't skip it
> > > > > + * while we are still evaluating the next ring. This makes sure the other
> > > > > + * thread does start preemption if we abort it and avoids a soft lock.
> > > > > + */
> > > > > + spin_lock_irqsave(&a6xx_gpu->eval_lock, flags);
> > > > > +
> > > > > + /*
> > > > > + * Try to start preemption by moving from NONE to START. If
> > > > > + * unsuccessful, a preemption is already in flight
> > > > > + */
> > > > > + if (!try_preempt_state(a6xx_gpu, PREEMPT_NONE, PREEMPT_START)) {
> > > > > + spin_unlock_irqrestore(&a6xx_gpu->eval_lock, flags);
> > > > > + return;
> > > > > + }
> > > > > +
> > > > > + cntl = A6XX_CP_CONTEXT_SWITCH_CNTL_LEVEL(a6xx_gpu->preempt_level);
> > > > > +
> > > > > + if (a6xx_gpu->skip_save_restore)
> > > > > + cntl |= A6XX_CP_CONTEXT_SWITCH_CNTL_SKIP_SAVE_RESTORE;
> > > > > +
> > > > > + if (a6xx_gpu->uses_gmem)
> > > > > + cntl |= A6XX_CP_CONTEXT_SWITCH_CNTL_USES_GMEM;
> > > > > +
> > > > > + cntl |= A6XX_CP_CONTEXT_SWITCH_CNTL_STOP;
> > > > > +
> > > > > + /* Get the next ring to preempt to */
> > > > > + ring = get_next_ring(gpu);
> > > > > +
> > > > > + /*
> > > > > + * If no ring is populated or the highest priority ring is the current
> > > > > + * one do nothing except to update the wptr to the latest and greatest
> > > > > + */
> > > > > + if (!ring || (a6xx_gpu->cur_ring == ring)) {
> > > > > + set_preempt_state(a6xx_gpu, PREEMPT_FINISH);
> > > > > + update_wptr(gpu, a6xx_gpu->cur_ring);
> > > > > + set_preempt_state(a6xx_gpu, PREEMPT_NONE);
> > > > > + spin_unlock_irqrestore(&a6xx_gpu->eval_lock, flags);
> > > > > + return;
> > > > > + }
> > > > > +
> > > > > + spin_unlock_irqrestore(&a6xx_gpu->eval_lock, flags);
> > > > > +
> > > > > + spin_lock_irqsave(&ring->preempt_lock, flags);
> > > > > +
> > > > > + struct a7xx_cp_smmu_info *smmu_info_ptr =
> > > > > + a6xx_gpu->preempt[ring->id] + PREEMPT_OFFSET_SMMU_INFO;
> > > > > + struct a6xx_preempt_record *record_ptr =
> > > > > + a6xx_gpu->preempt[ring->id] + PREEMPT_OFFSET_PRIV_NON_SECURE;
> > > > > + u64 ttbr0 = ring->memptrs->ttbr0;
> > > > > + u32 context_idr = ring->memptrs->context_idr;
> > > > > +
> > > > > + smmu_info_ptr->ttbr0 = ttbr0;
> > > > > + smmu_info_ptr->context_idr = context_idr;
> > > > > + record_ptr->wptr = get_wptr(ring);
> > > > > +
> > > > > + /*
> > > > > + * The GPU will write the wptr we set above when we preempt. Reset
> > > > > + * skip_inline_wptr to make sure that we don't write WPTR to the same
> > > > > + * thing twice. It's still possible subsequent submissions will update
> > > > > + * wptr again, in which case they will set the flag to true. This has
> > > > > + * to be protected by the lock for setting the flag and updating wptr
> > > > > + * to be atomic.
> > > > > + */
> > > > > + ring->skip_inline_wptr = false;
> > > > > +
> > > > > + spin_unlock_irqrestore(&ring->preempt_lock, flags);
> > > > > +
> > > > > + gpu_write64(gpu,
> > > > > + REG_A6XX_CP_CONTEXT_SWITCH_SMMU_INFO,
> > > > > + a6xx_gpu->preempt_iova[ring->id] + PREEMPT_OFFSET_SMMU_INFO);
> > > > > +
> > > > > + gpu_write64(gpu,
> > > > > + REG_A6XX_CP_CONTEXT_SWITCH_PRIV_NON_SECURE_RESTORE_ADDR,
> > > > > + a6xx_gpu->preempt_iova[ring->id] + PREEMPT_OFFSET_PRIV_NON_SECURE);
> > > > > +
> > > > > + preempt_offset_priv_secure =
> > > > > + PREEMPT_OFFSET_PRIV_SECURE(adreno_gpu->info->preempt_record_size);
> > > > > + gpu_write64(gpu,
> > > > > + REG_A6XX_CP_CONTEXT_SWITCH_PRIV_SECURE_RESTORE_ADDR,
> > > > > + a6xx_gpu->preempt_iova[ring->id] + preempt_offset_priv_secure);
> > > >
> > > > Secure buffers are not supported currently, so we can skip this and the
> > > > context record allocation. Anyway this has to be a separate buffer
> > > > mapped in secure pagetable which don't currently have. We can skip the
> > > > same in pseudo register packet too.
> > > >
> > > > > +
> > > > > + a6xx_gpu->next_ring = ring;
> > > > > +
> > > > > + /* Start a timer to catch a stuck preemption */
> > > > > + mod_timer(&a6xx_gpu->preempt_timer, jiffies + msecs_to_jiffies(10000));
> > > > > +
> > > > > + /* Set the preemption state to triggered */
> > > > > + set_preempt_state(a6xx_gpu, PREEMPT_TRIGGERED);
> > > > > +
> > > > > + /* Make sure any previous writes to WPTR are posted */
> > > > > + gpu_read(gpu, REG_A6XX_CP_RB_WPTR);
> > > > > +
> > > > > + /* Make sure everything is written before hitting the button */
> > > > > + wmb();
> > > >
> > > > This and the above read back looks unnecessary. All writes to gpu are
> > > > ordered anyway.
> > >
> > > I thought the whole reason for
> > > https://lore.kernel.org/linux-kernel/20240508-topic-adreno-v1-1-1babd05c119d@linaro.org/
> > > is that memory-mapped writes to different GPU registers are *not*
> > > necessarily ordered from the GPU's perspective (even if they are from
> > > the CPU). That's why I suggested the readback. Or am I missing
> > > something?
> >
> > Lets consider that GBIF unhalt sequence as an exception. Generally, we
> > can consider writes to gpu registers to be ordered.
> >
> > -Akhil.
> >
> > >
> > > >
> > > > > +
> > > > > + /* Trigger the preemption */
> > > > > + gpu_write(gpu, REG_A6XX_CP_CONTEXT_SWITCH_CNTL, cntl);
> > > > > +}
> > > > > +
> > > > > +static int preempt_init_ring(struct a6xx_gpu *a6xx_gpu,
> > > > > + struct msm_ringbuffer *ring)
> > > > > +{
> > > > > + struct adreno_gpu *adreno_gpu = &a6xx_gpu->base;
> > > > > + struct msm_gpu *gpu = &adreno_gpu->base;
> > > > > + struct drm_gem_object *bo = NULL;
> > > > > + phys_addr_t ttbr;
> > > > > + u64 iova = 0;
> > > > > + void *ptr;
> > > > > + int asid;
> > > > > +
> > > > > + ptr = msm_gem_kernel_new(gpu->dev,
> > > > > + PREEMPT_SIZE(adreno_gpu->info->preempt_record_size),
> > > > > + MSM_BO_WC | MSM_BO_MAP_PRIV, gpu->aspace, &bo, &iova);
> > > >
> > > > set a name?
> > > >
> > > > > +
> > > > > + memset(ptr, 0, PREEMPT_SIZE(adreno_gpu->info->preempt_record_size));
> > > > > +
> > > > > + if (IS_ERR(ptr))
> > > > > + return PTR_ERR(ptr);
> > > > > +
> > > > > + a6xx_gpu->preempt_bo[ring->id] = bo;
> > > > > + a6xx_gpu->preempt_iova[ring->id] = iova;
> > > > > + a6xx_gpu->preempt[ring->id] = ptr;
> > > > > +
> > > > > + struct a7xx_cp_smmu_info *smmu_info_ptr = ptr + PREEMPT_OFFSET_SMMU_INFO;
> > > > > + struct a6xx_preempt_record *record_ptr = ptr + PREEMPT_OFFSET_PRIV_NON_SECURE;
> > > > > +
> > > > > + msm_iommu_pagetable_params(gpu->aspace->mmu, &ttbr, &asid);
> > > > > +
> > > > > + smmu_info_ptr->magic = GEN7_CP_SMMU_INFO_MAGIC;
> > > > > + smmu_info_ptr->ttbr0 = ttbr;
> > > > > + smmu_info_ptr->asid = 0xdecafbad;
> > > > > + smmu_info_ptr->context_idr = 0;
> > > > > +
> > > > > + /* Set up the defaults on the preemption record */
> > > > > + record_ptr->magic = A6XX_PREEMPT_RECORD_MAGIC;
> > > > > + record_ptr->info = 0;
> > > > > + record_ptr->data = 0;
> > > > > + record_ptr->rptr = 0;
> > > > > + record_ptr->wptr = 0;
> > > > > + record_ptr->cntl = MSM_GPU_RB_CNTL_DEFAULT;
> > > > > + record_ptr->rbase = ring->iova;
> > > > > + record_ptr->counter = 0;
> > > > > + record_ptr->bv_rptr_addr = rbmemptr(ring, bv_rptr);
> > > > > +
> > > > > + return 0;
> > > > > +}
> > > > > +
> > > > > +void a6xx_preempt_fini(struct msm_gpu *gpu)
> > > > > +{
> > > > > + struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu);
> > > > > + struct a6xx_gpu *a6xx_gpu = to_a6xx_gpu(adreno_gpu);
> > > > > + int i;
> > > > > +
> > > > > + for (i = 0; i < gpu->nr_rings; i++)
> > > > > + msm_gem_kernel_put(a6xx_gpu->preempt_bo[i], gpu->aspace);
> > > > > +}
> > > > > +
> > > > > +void a6xx_preempt_init(struct msm_gpu *gpu)
> > > > > +{
> > > > > + struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu);
> > > > > + struct a6xx_gpu *a6xx_gpu = to_a6xx_gpu(adreno_gpu);
> > > > > + int i;
> > > > > +
> > > > > + /* No preemption if we only have one ring */
> > > > > + if (gpu->nr_rings <= 1)
> > > > > + return;
> > > > > +
> > > > > + for (i = 0; i < gpu->nr_rings; i++) {
> > > > > + if (preempt_init_ring(a6xx_gpu, gpu->rb[i]))
> > > > > + goto fail;
> > > > > + }
> > > > > +
> > > > > + /* TODO: make this configurable? */
> > > > > + a6xx_gpu->preempt_level = 1;
> > > > > + a6xx_gpu->uses_gmem = 1;
> > > > > + a6xx_gpu->skip_save_restore = 1;
> > > > > +
> > > > > + timer_setup(&a6xx_gpu->preempt_timer, a6xx_preempt_timer, 0);
> > > > > +
> > > > > + return;
> > > > > +fail:
> > > >
> > > > Log an error so that preemption is not disabled silently?
> > > >
> > > > > + /*
> > > > > + * On any failure our adventure is over. Clean up and
> > > > > + * set nr_rings to 1 to force preemption off
> > > > > + */
> > > > > + a6xx_preempt_fini(gpu);
> > > > > + gpu->nr_rings = 1;
> > > > > +
> > > > > + return;
> > > > > +}
> > > > > diff --git a/drivers/gpu/drm/msm/msm_ringbuffer.h b/drivers/gpu/drm/msm/msm_ringbuffer.h
> > > > > index 40791b2ade46..7dde6a312511 100644
> > > > > --- a/drivers/gpu/drm/msm/msm_ringbuffer.h
> > > > > +++ b/drivers/gpu/drm/msm/msm_ringbuffer.h
> > > > > @@ -36,6 +36,7 @@ struct msm_rbmemptrs {
> > > > >
> > > > > volatile struct msm_gpu_submit_stats stats[MSM_GPU_SUBMIT_STATS_COUNT];
> > > > > volatile u64 ttbr0;
> > > > > + volatile u32 context_idr;
> > > > > };
> > > > >
> > > > > struct msm_cp_state {
> > > > > @@ -100,6 +101,12 @@ struct msm_ringbuffer {
> > > > > * preemption. Can be aquired from irq context.
> > > > > */
> > > > > spinlock_t preempt_lock;
> > > > > +
> > > > > + /*
> > > > > + * Whether we skipped writing wptr and it needs to be updated in the
> > > > > + * future when the ring becomes current.
> > > > > + */
> > > > > + bool skip_inline_wptr;
> > > >
> > > > nit: does 'restore_wptr' makes more sense? Or something better? Basically, name it based
> > > > on the future action?
> > > >
> > > > -Akhil
> > > >
> > > > > };
> > > > >
> > > > > struct msm_ringbuffer *msm_ringbuffer_new(struct msm_gpu *gpu, int id,
> > > > >
> > > > > --
> > > > > 2.46.0
> > > > >
>
> Best regards,
> --
> Antonino Maniscalco <antomani103@gmail.com>
>
^ permalink raw reply [flat|nested] 32+ messages in thread
* Re: [PATCH v3 04/10] drm/msm/A6xx: Implement preemption for A7XX targets
2024-09-06 19:54 ` Akhil P Oommen
2024-09-09 12:22 ` Connor Abbott
@ 2024-09-09 13:15 ` Antonino Maniscalco
2024-09-09 13:42 ` Connor Abbott
2024-09-09 17:24 ` Antonino Maniscalco
1 sibling, 2 replies; 32+ messages in thread
From: Antonino Maniscalco @ 2024-09-09 13:15 UTC (permalink / raw)
To: Akhil P Oommen
Cc: Rob Clark, Sean Paul, Konrad Dybcio, Abhinav Kumar,
Dmitry Baryshkov, Marijn Suijten, David Airlie, Daniel Vetter,
Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann,
Jonathan Corbet, linux-arm-msm, dri-devel, freedreno,
linux-kernel, linux-doc, Sharat Masetty, Neil Armstrong
On 9/6/24 9:54 PM, Akhil P Oommen wrote:
> On Thu, Sep 05, 2024 at 04:51:22PM +0200, Antonino Maniscalco wrote:
>> This patch implements preemption feature for A6xx targets, this allows
>> the GPU to switch to a higher priority ringbuffer if one is ready. A6XX
>> hardware as such supports multiple levels of preemption granularities,
>> ranging from coarse grained(ringbuffer level) to a more fine grained
>> such as draw-call level or a bin boundary level preemption. This patch
>> enables the basic preemption level, with more fine grained preemption
>> support to follow.
>>
>> Signed-off-by: Sharat Masetty <smasetty@codeaurora.org>
>> Signed-off-by: Antonino Maniscalco <antomani103@gmail.com>
>> Tested-by: Neil Armstrong <neil.armstrong@linaro.org> # on SM8650-QRD
>> ---
>> drivers/gpu/drm/msm/Makefile | 1 +
>> drivers/gpu/drm/msm/adreno/a6xx_gpu.c | 293 +++++++++++++++++++++-
>> drivers/gpu/drm/msm/adreno/a6xx_gpu.h | 161 ++++++++++++
...
>
> we can use the lighter smp variant here.
>
>> +
>> + if (a6xx_gpu->cur_ring == ring)
>> + gpu_write(gpu, REG_A6XX_CP_RB_WPTR, wptr);
>> + else
>> + ring->skip_inline_wptr = true;
>> + } else {
>> + ring->skip_inline_wptr = true;
>> + }
>> +
>> + spin_unlock_irqrestore(&ring->preempt_lock, flags);
>> }
>>
>> static void get_stats_counter(struct msm_ringbuffer *ring, u32 counter,
>> @@ -138,12 +231,14 @@ static void a6xx_set_pagetable(struct a6xx_gpu *a6xx_gpu,
>
> set_pagetable checks "cur_ctx_seqno" to see if pt switch is needed or
> not. This is currently not tracked separately for each ring. Can you
> please check that?
I totally missed that. Thanks for catching it!
>
> I wonder why that didn't cause any gpu errors in testing. Not sure if I
> am missing something.
>
I think this is because, so long as a single context doesn't submit to
two different rings with differenr priorities, we will only be incorrect
in the sense that we emit more page table switches than necessary and
never less. However untrusted userspace could create a context that
submits to two different rings and that would lead to execution in the
wrong context so we must fix this.
>>
>> /*
>> * Write the new TTBR0 to the memstore. This is good for debugging.
>> + * Needed for preemption
>> */
>> - OUT_PKT7(ring, CP_MEM_WRITE, 4);
>> + OUT_PKT7(ring, CP_MEM_WRITE, 5);
>> OUT_RING(ring, CP_MEM_WRITE_0_ADDR_LO(lower_32_bits(memptr)));
>> OUT_RING(ring, CP_MEM_WRITE_1_ADDR_HI(upper_32_bits(memptr)));
>> OUT_RING(ring, lower_32_bits(ttbr));
>> - OUT_RING(ring, (asid << 16) | upper_32_bits(ttbr));
>> + OUT_RING(ring, upper_32_bits(ttbr));
>> + OUT_RING(ring, ctx->seqno);
>>
>> /*
>> * Sync both threads after switching pagetables and enable BR only
>> @@ -268,6 +363,43 @@ static void a6xx_submit(struct msm_gpu *gpu, struct msm_gem_submit *submit)
>> a6xx_flush(gpu, ring);
>> }
...
>> + struct a6xx_preempt_record *record_ptr =
>> + a6xx_gpu->preempt[ring->id] + PREEMPT_OFFSET_PRIV_NON_SECURE;
>> + u64 ttbr0 = ring->memptrs->ttbr0;
>> + u32 context_idr = ring->memptrs->context_idr;
>> +
>> + smmu_info_ptr->ttbr0 = ttbr0;
>> + smmu_info_ptr->context_idr = context_idr;
>> + record_ptr->wptr = get_wptr(ring);
>> +
>> + /*
>> + * The GPU will write the wptr we set above when we preempt. Reset
>> + * skip_inline_wptr to make sure that we don't write WPTR to the same
>> + * thing twice. It's still possible subsequent submissions will update
>> + * wptr again, in which case they will set the flag to true. This has
>> + * to be protected by the lock for setting the flag and updating wptr
>> + * to be atomic.
>> + */
>> + ring->skip_inline_wptr = false;
>> +
>> + spin_unlock_irqrestore(&ring->preempt_lock, flags);
>> +
>> + gpu_write64(gpu,
>> + REG_A6XX_CP_CONTEXT_SWITCH_SMMU_INFO,
>> + a6xx_gpu->preempt_iova[ring->id] + PREEMPT_OFFSET_SMMU_INFO);
>> +
>> + gpu_write64(gpu,
>> + REG_A6XX_CP_CONTEXT_SWITCH_PRIV_NON_SECURE_RESTORE_ADDR,
>> + a6xx_gpu->preempt_iova[ring->id] + PREEMPT_OFFSET_PRIV_NON_SECURE);
>> +
>> + preempt_offset_priv_secure =
>> + PREEMPT_OFFSET_PRIV_SECURE(adreno_gpu->info->preempt_record_size);
>> + gpu_write64(gpu,
>> + REG_A6XX_CP_CONTEXT_SWITCH_PRIV_SECURE_RESTORE_ADDR,
>> + a6xx_gpu->preempt_iova[ring->id] + preempt_offset_priv_secure);
>
> Secure buffers are not supported currently, so we can skip this and the
> context record allocation. Anyway this has to be a separate buffer
> mapped in secure pagetable which don't currently have. We can skip the
> same in pseudo register packet too.
>
Mmm it would appear that not setting it causes an hang very early. I'll
see if I can find out more about what is going on.
>> +
>> + a6xx_gpu->next_ring = ring;
>> +
...
>>
>> struct msm_ringbuffer *msm_ringbuffer_new(struct msm_gpu *gpu, int id,
>>
>> --
>> 2.46.0
>>
Best regards,
--
Antonino Maniscalco <antomani103@gmail.com>
^ permalink raw reply [flat|nested] 32+ messages in thread* Re: [PATCH v3 04/10] drm/msm/A6xx: Implement preemption for A7XX targets
2024-09-09 13:15 ` Antonino Maniscalco
@ 2024-09-09 13:42 ` Connor Abbott
2024-09-09 14:40 ` Rob Clark
2024-09-09 17:24 ` Antonino Maniscalco
1 sibling, 1 reply; 32+ messages in thread
From: Connor Abbott @ 2024-09-09 13:42 UTC (permalink / raw)
To: Antonino Maniscalco
Cc: Akhil P Oommen, Rob Clark, Sean Paul, Konrad Dybcio,
Abhinav Kumar, Dmitry Baryshkov, Marijn Suijten, David Airlie,
Daniel Vetter, Maarten Lankhorst, Maxime Ripard,
Thomas Zimmermann, Jonathan Corbet, linux-arm-msm, dri-devel,
freedreno, linux-kernel, linux-doc, Sharat Masetty,
Neil Armstrong
On Mon, Sep 9, 2024 at 2:15 PM Antonino Maniscalco
<antomani103@gmail.com> wrote:
>
> On 9/6/24 9:54 PM, Akhil P Oommen wrote:
> > On Thu, Sep 05, 2024 at 04:51:22PM +0200, Antonino Maniscalco wrote:
> >> This patch implements preemption feature for A6xx targets, this allows
> >> the GPU to switch to a higher priority ringbuffer if one is ready. A6XX
> >> hardware as such supports multiple levels of preemption granularities,
> >> ranging from coarse grained(ringbuffer level) to a more fine grained
> >> such as draw-call level or a bin boundary level preemption. This patch
> >> enables the basic preemption level, with more fine grained preemption
> >> support to follow.
> >>
> >> Signed-off-by: Sharat Masetty <smasetty@codeaurora.org>
> >> Signed-off-by: Antonino Maniscalco <antomani103@gmail.com>
> >> Tested-by: Neil Armstrong <neil.armstrong@linaro.org> # on SM8650-QRD
> >> ---
> >> drivers/gpu/drm/msm/Makefile | 1 +
> >> drivers/gpu/drm/msm/adreno/a6xx_gpu.c | 293 +++++++++++++++++++++-
> >> drivers/gpu/drm/msm/adreno/a6xx_gpu.h | 161 ++++++++++++
> ...
> >
> > we can use the lighter smp variant here.
> >
> >> +
> >> + if (a6xx_gpu->cur_ring == ring)
> >> + gpu_write(gpu, REG_A6XX_CP_RB_WPTR, wptr);
> >> + else
> >> + ring->skip_inline_wptr = true;
> >> + } else {
> >> + ring->skip_inline_wptr = true;
> >> + }
> >> +
> >> + spin_unlock_irqrestore(&ring->preempt_lock, flags);
> >> }
> >>
> >> static void get_stats_counter(struct msm_ringbuffer *ring, u32 counter,
> >> @@ -138,12 +231,14 @@ static void a6xx_set_pagetable(struct a6xx_gpu *a6xx_gpu,
> >
> > set_pagetable checks "cur_ctx_seqno" to see if pt switch is needed or
> > not. This is currently not tracked separately for each ring. Can you
> > please check that?
>
> I totally missed that. Thanks for catching it!
>
> >
> > I wonder why that didn't cause any gpu errors in testing. Not sure if I
> > am missing something.
> >
>
> I think this is because, so long as a single context doesn't submit to
> two different rings with differenr priorities, we will only be incorrect
> in the sense that we emit more page table switches than necessary and
> never less. However untrusted userspace could create a context that
> submits to two different rings and that would lead to execution in the
> wrong context so we must fix this.
FWIW, in Mesa in the future we may want to expose multiple Vulkan
queues per device. Then this would definitely blow up.
Connor
>
> >>
> >> /*
> >> * Write the new TTBR0 to the memstore. This is good for debugging.
> >> + * Needed for preemption
> >> */
> >> - OUT_PKT7(ring, CP_MEM_WRITE, 4);
> >> + OUT_PKT7(ring, CP_MEM_WRITE, 5);
> >> OUT_RING(ring, CP_MEM_WRITE_0_ADDR_LO(lower_32_bits(memptr)));
> >> OUT_RING(ring, CP_MEM_WRITE_1_ADDR_HI(upper_32_bits(memptr)));
> >> OUT_RING(ring, lower_32_bits(ttbr));
> >> - OUT_RING(ring, (asid << 16) | upper_32_bits(ttbr));
> >> + OUT_RING(ring, upper_32_bits(ttbr));
> >> + OUT_RING(ring, ctx->seqno);
> >>
> >> /*
> >> * Sync both threads after switching pagetables and enable BR only
> >> @@ -268,6 +363,43 @@ static void a6xx_submit(struct msm_gpu *gpu, struct msm_gem_submit *submit)
> >> a6xx_flush(gpu, ring);
> >> }
> ...
> >> + struct a6xx_preempt_record *record_ptr =
> >> + a6xx_gpu->preempt[ring->id] + PREEMPT_OFFSET_PRIV_NON_SECURE;
> >> + u64 ttbr0 = ring->memptrs->ttbr0;
> >> + u32 context_idr = ring->memptrs->context_idr;
> >> +
> >> + smmu_info_ptr->ttbr0 = ttbr0;
> >> + smmu_info_ptr->context_idr = context_idr;
> >> + record_ptr->wptr = get_wptr(ring);
> >> +
> >> + /*
> >> + * The GPU will write the wptr we set above when we preempt. Reset
> >> + * skip_inline_wptr to make sure that we don't write WPTR to the same
> >> + * thing twice. It's still possible subsequent submissions will update
> >> + * wptr again, in which case they will set the flag to true. This has
> >> + * to be protected by the lock for setting the flag and updating wptr
> >> + * to be atomic.
> >> + */
> >> + ring->skip_inline_wptr = false;
> >> +
> >> + spin_unlock_irqrestore(&ring->preempt_lock, flags);
> >> +
> >> + gpu_write64(gpu,
> >> + REG_A6XX_CP_CONTEXT_SWITCH_SMMU_INFO,
> >> + a6xx_gpu->preempt_iova[ring->id] + PREEMPT_OFFSET_SMMU_INFO);
> >> +
> >> + gpu_write64(gpu,
> >> + REG_A6XX_CP_CONTEXT_SWITCH_PRIV_NON_SECURE_RESTORE_ADDR,
> >> + a6xx_gpu->preempt_iova[ring->id] + PREEMPT_OFFSET_PRIV_NON_SECURE);
> >> +
> >> + preempt_offset_priv_secure =
> >> + PREEMPT_OFFSET_PRIV_SECURE(adreno_gpu->info->preempt_record_size);
> >> + gpu_write64(gpu,
> >> + REG_A6XX_CP_CONTEXT_SWITCH_PRIV_SECURE_RESTORE_ADDR,
> >> + a6xx_gpu->preempt_iova[ring->id] + preempt_offset_priv_secure);
> >
> > Secure buffers are not supported currently, so we can skip this and the
> > context record allocation. Anyway this has to be a separate buffer
> > mapped in secure pagetable which don't currently have. We can skip the
> > same in pseudo register packet too.
> >
>
> Mmm it would appear that not setting it causes an hang very early. I'll
> see if I can find out more about what is going on.
>
> >> +
> >> + a6xx_gpu->next_ring = ring;
> >> +
> ...
> >>
> >> struct msm_ringbuffer *msm_ringbuffer_new(struct msm_gpu *gpu, int id,
> >>
> >> --
> >> 2.46.0
> >>
>
> Best regards,
> --
> Antonino Maniscalco <antomani103@gmail.com>
>
^ permalink raw reply [flat|nested] 32+ messages in thread* Re: [PATCH v3 04/10] drm/msm/A6xx: Implement preemption for A7XX targets
2024-09-09 13:42 ` Connor Abbott
@ 2024-09-09 14:40 ` Rob Clark
2024-09-10 16:49 ` Akhil P Oommen
0 siblings, 1 reply; 32+ messages in thread
From: Rob Clark @ 2024-09-09 14:40 UTC (permalink / raw)
To: Connor Abbott
Cc: Antonino Maniscalco, Akhil P Oommen, Sean Paul, Konrad Dybcio,
Abhinav Kumar, Dmitry Baryshkov, Marijn Suijten, David Airlie,
Daniel Vetter, Maarten Lankhorst, Maxime Ripard,
Thomas Zimmermann, Jonathan Corbet, linux-arm-msm, dri-devel,
freedreno, linux-kernel, linux-doc, Sharat Masetty,
Neil Armstrong
On Mon, Sep 9, 2024 at 6:43 AM Connor Abbott <cwabbott0@gmail.com> wrote:
>
> On Mon, Sep 9, 2024 at 2:15 PM Antonino Maniscalco
> <antomani103@gmail.com> wrote:
> >
> > On 9/6/24 9:54 PM, Akhil P Oommen wrote:
> > > On Thu, Sep 05, 2024 at 04:51:22PM +0200, Antonino Maniscalco wrote:
> > >> This patch implements preemption feature for A6xx targets, this allows
> > >> the GPU to switch to a higher priority ringbuffer if one is ready. A6XX
> > >> hardware as such supports multiple levels of preemption granularities,
> > >> ranging from coarse grained(ringbuffer level) to a more fine grained
> > >> such as draw-call level or a bin boundary level preemption. This patch
> > >> enables the basic preemption level, with more fine grained preemption
> > >> support to follow.
> > >>
> > >> Signed-off-by: Sharat Masetty <smasetty@codeaurora.org>
> > >> Signed-off-by: Antonino Maniscalco <antomani103@gmail.com>
> > >> Tested-by: Neil Armstrong <neil.armstrong@linaro.org> # on SM8650-QRD
> > >> ---
> > >> drivers/gpu/drm/msm/Makefile | 1 +
> > >> drivers/gpu/drm/msm/adreno/a6xx_gpu.c | 293 +++++++++++++++++++++-
> > >> drivers/gpu/drm/msm/adreno/a6xx_gpu.h | 161 ++++++++++++
> > ...
> > >
> > > we can use the lighter smp variant here.
> > >
> > >> +
> > >> + if (a6xx_gpu->cur_ring == ring)
> > >> + gpu_write(gpu, REG_A6XX_CP_RB_WPTR, wptr);
> > >> + else
> > >> + ring->skip_inline_wptr = true;
> > >> + } else {
> > >> + ring->skip_inline_wptr = true;
> > >> + }
> > >> +
> > >> + spin_unlock_irqrestore(&ring->preempt_lock, flags);
> > >> }
> > >>
> > >> static void get_stats_counter(struct msm_ringbuffer *ring, u32 counter,
> > >> @@ -138,12 +231,14 @@ static void a6xx_set_pagetable(struct a6xx_gpu *a6xx_gpu,
> > >
> > > set_pagetable checks "cur_ctx_seqno" to see if pt switch is needed or
> > > not. This is currently not tracked separately for each ring. Can you
> > > please check that?
> >
> > I totally missed that. Thanks for catching it!
> >
> > >
> > > I wonder why that didn't cause any gpu errors in testing. Not sure if I
> > > am missing something.
> > >
> >
> > I think this is because, so long as a single context doesn't submit to
> > two different rings with differenr priorities, we will only be incorrect
> > in the sense that we emit more page table switches than necessary and
> > never less. However untrusted userspace could create a context that
> > submits to two different rings and that would lead to execution in the
> > wrong context so we must fix this.
>
> FWIW, in Mesa in the future we may want to expose multiple Vulkan
> queues per device. Then this would definitely blow up.
This will actually be required by future android versions, with the
switch to vk hwui backend (because apparently locking is hard, the
solution was to use different queue's for different threads)
https://gitlab.freedesktop.org/mesa/mesa/-/issues/11326
BR,
-R
^ permalink raw reply [flat|nested] 32+ messages in thread* Re: [PATCH v3 04/10] drm/msm/A6xx: Implement preemption for A7XX targets
2024-09-09 14:40 ` Rob Clark
@ 2024-09-10 16:49 ` Akhil P Oommen
0 siblings, 0 replies; 32+ messages in thread
From: Akhil P Oommen @ 2024-09-10 16:49 UTC (permalink / raw)
To: Rob Clark
Cc: Connor Abbott, Antonino Maniscalco, Sean Paul, Konrad Dybcio,
Abhinav Kumar, Dmitry Baryshkov, Marijn Suijten, David Airlie,
Daniel Vetter, Maarten Lankhorst, Maxime Ripard,
Thomas Zimmermann, Jonathan Corbet, linux-arm-msm, dri-devel,
freedreno, linux-kernel, linux-doc, Sharat Masetty,
Neil Armstrong
On Mon, Sep 09, 2024 at 07:40:07AM -0700, Rob Clark wrote:
> On Mon, Sep 9, 2024 at 6:43 AM Connor Abbott <cwabbott0@gmail.com> wrote:
> >
> > On Mon, Sep 9, 2024 at 2:15 PM Antonino Maniscalco
> > <antomani103@gmail.com> wrote:
> > >
> > > On 9/6/24 9:54 PM, Akhil P Oommen wrote:
> > > > On Thu, Sep 05, 2024 at 04:51:22PM +0200, Antonino Maniscalco wrote:
> > > >> This patch implements preemption feature for A6xx targets, this allows
> > > >> the GPU to switch to a higher priority ringbuffer if one is ready. A6XX
> > > >> hardware as such supports multiple levels of preemption granularities,
> > > >> ranging from coarse grained(ringbuffer level) to a more fine grained
> > > >> such as draw-call level or a bin boundary level preemption. This patch
> > > >> enables the basic preemption level, with more fine grained preemption
> > > >> support to follow.
> > > >>
> > > >> Signed-off-by: Sharat Masetty <smasetty@codeaurora.org>
> > > >> Signed-off-by: Antonino Maniscalco <antomani103@gmail.com>
> > > >> Tested-by: Neil Armstrong <neil.armstrong@linaro.org> # on SM8650-QRD
> > > >> ---
> > > >> drivers/gpu/drm/msm/Makefile | 1 +
> > > >> drivers/gpu/drm/msm/adreno/a6xx_gpu.c | 293 +++++++++++++++++++++-
> > > >> drivers/gpu/drm/msm/adreno/a6xx_gpu.h | 161 ++++++++++++
> > > ...
> > > >
> > > > we can use the lighter smp variant here.
> > > >
> > > >> +
> > > >> + if (a6xx_gpu->cur_ring == ring)
> > > >> + gpu_write(gpu, REG_A6XX_CP_RB_WPTR, wptr);
> > > >> + else
> > > >> + ring->skip_inline_wptr = true;
> > > >> + } else {
> > > >> + ring->skip_inline_wptr = true;
> > > >> + }
> > > >> +
> > > >> + spin_unlock_irqrestore(&ring->preempt_lock, flags);
> > > >> }
> > > >>
> > > >> static void get_stats_counter(struct msm_ringbuffer *ring, u32 counter,
> > > >> @@ -138,12 +231,14 @@ static void a6xx_set_pagetable(struct a6xx_gpu *a6xx_gpu,
> > > >
> > > > set_pagetable checks "cur_ctx_seqno" to see if pt switch is needed or
> > > > not. This is currently not tracked separately for each ring. Can you
> > > > please check that?
> > >
> > > I totally missed that. Thanks for catching it!
> > >
> > > >
> > > > I wonder why that didn't cause any gpu errors in testing. Not sure if I
> > > > am missing something.
> > > >
> > >
> > > I think this is because, so long as a single context doesn't submit to
> > > two different rings with differenr priorities, we will only be incorrect
> > > in the sense that we emit more page table switches than necessary and
> > > never less. However untrusted userspace could create a context that
> > > submits to two different rings and that would lead to execution in the
> > > wrong context so we must fix this.
Yep, it would be a security bug!
-Akhil
> >
> > FWIW, in Mesa in the future we may want to expose multiple Vulkan
> > queues per device. Then this would definitely blow up.
>
> This will actually be required by future android versions, with the
> switch to vk hwui backend (because apparently locking is hard, the
> solution was to use different queue's for different threads)
>
> https://gitlab.freedesktop.org/mesa/mesa/-/issues/11326
>
> BR,
> -R
^ permalink raw reply [flat|nested] 32+ messages in thread
* Re: [PATCH v3 04/10] drm/msm/A6xx: Implement preemption for A7XX targets
2024-09-09 13:15 ` Antonino Maniscalco
2024-09-09 13:42 ` Connor Abbott
@ 2024-09-09 17:24 ` Antonino Maniscalco
1 sibling, 0 replies; 32+ messages in thread
From: Antonino Maniscalco @ 2024-09-09 17:24 UTC (permalink / raw)
To: Akhil P Oommen
Cc: Rob Clark, Sean Paul, Konrad Dybcio, Abhinav Kumar,
Dmitry Baryshkov, Marijn Suijten, David Airlie, Daniel Vetter,
Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann,
Jonathan Corbet, linux-arm-msm, dri-devel, freedreno,
linux-kernel, linux-doc, Sharat Masetty, Neil Armstrong
On 9/9/24 3:15 PM, Antonino Maniscalco wrote:
> On 9/6/24 9:54 PM, Akhil P Oommen wrote:
>> On Thu, Sep 05, 2024 at 04:51:22PM +0200, Antonino Maniscalco wrote:
>>> This patch implements preemption feature for A6xx targets, this allows
>>> the GPU to switch to a higher priority ringbuffer if one is ready. A6XX
>>> hardware as such supports multiple levels of preemption granularities,
>>> ranging from coarse grained(ringbuffer level) to a more fine grained
>>> such as draw-call level or a bin boundary level preemption. This patch
>>> enables the basic preemption level, with more fine grained preemption
>>> support to follow.
>>>
>>> Signed-off-by: Sharat Masetty <smasetty@codeaurora.org>
>>> Signed-off-by: Antonino Maniscalco <antomani103@gmail.com>
>>> Tested-by: Neil Armstrong <neil.armstrong@linaro.org> # on SM8650-QRD
>>> ---
>>> drivers/gpu/drm/msm/Makefile | 1 +
>>> drivers/gpu/drm/msm/adreno/a6xx_gpu.c | 293 +++++++++++++++++++++-
>>> drivers/gpu/drm/msm/adreno/a6xx_gpu.h | 161 ++++++++++++
> ...
>>
>> we can use the lighter smp variant here.
>>
>>> +
>>> + if (a6xx_gpu->cur_ring == ring)
>>> + gpu_write(gpu, REG_A6XX_CP_RB_WPTR, wptr);
>>> + else
>>> + ring->skip_inline_wptr = true;
>>> + } else {
>>> + ring->skip_inline_wptr = true;
>>> + }
>>> +
>>> + spin_unlock_irqrestore(&ring->preempt_lock, flags);
>>> }
>>> static void get_stats_counter(struct msm_ringbuffer *ring, u32
>>> counter,
>>> @@ -138,12 +231,14 @@ static void a6xx_set_pagetable(struct a6xx_gpu
>>> *a6xx_gpu,
>>
>> set_pagetable checks "cur_ctx_seqno" to see if pt switch is needed or
>> not. This is currently not tracked separately for each ring. Can you
>> please check that?
>
> I totally missed that. Thanks for catching it!
>
>>
>> I wonder why that didn't cause any gpu errors in testing. Not sure if I
>> am missing something.
>>
>
> I think this is because, so long as a single context doesn't submit to
> two different rings with differenr priorities, we will only be incorrect
> in the sense that we emit more page table switches than necessary and
> never less. However untrusted userspace could create a context that
> submits to two different rings and that would lead to execution in the
> wrong context so we must fix this.
>
>>> /*
>>> * Write the new TTBR0 to the memstore. This is good for
>>> debugging.
>>> + * Needed for preemption
>>> */
>>> - OUT_PKT7(ring, CP_MEM_WRITE, 4);
>>> + OUT_PKT7(ring, CP_MEM_WRITE, 5);
>>> OUT_RING(ring, CP_MEM_WRITE_0_ADDR_LO(lower_32_bits(memptr)));
>>> OUT_RING(ring, CP_MEM_WRITE_1_ADDR_HI(upper_32_bits(memptr)));
>>> OUT_RING(ring, lower_32_bits(ttbr));
>>> - OUT_RING(ring, (asid << 16) | upper_32_bits(ttbr));
>>> + OUT_RING(ring, upper_32_bits(ttbr));
>>> + OUT_RING(ring, ctx->seqno);
>>> /*
>>> * Sync both threads after switching pagetables and enable BR only
>>> @@ -268,6 +363,43 @@ static void a6xx_submit(struct msm_gpu *gpu,
>>> struct msm_gem_submit *submit)
>>> a6xx_flush(gpu, ring);
>>> }
> ...
>>> + struct a6xx_preempt_record *record_ptr =
>>> + a6xx_gpu->preempt[ring->id] + PREEMPT_OFFSET_PRIV_NON_SECURE;
>>> + u64 ttbr0 = ring->memptrs->ttbr0;
>>> + u32 context_idr = ring->memptrs->context_idr;
>>> +
>>> + smmu_info_ptr->ttbr0 = ttbr0;
>>> + smmu_info_ptr->context_idr = context_idr;
>>> + record_ptr->wptr = get_wptr(ring);
>>> +
>>> + /*
>>> + * The GPU will write the wptr we set above when we preempt. Reset
>>> + * skip_inline_wptr to make sure that we don't write WPTR to the
>>> same
>>> + * thing twice. It's still possible subsequent submissions will
>>> update
>>> + * wptr again, in which case they will set the flag to true.
>>> This has
>>> + * to be protected by the lock for setting the flag and updating
>>> wptr
>>> + * to be atomic.
>>> + */
>>> + ring->skip_inline_wptr = false;
>>> +
>>> + spin_unlock_irqrestore(&ring->preempt_lock, flags);
>>> +
>>> + gpu_write64(gpu,
>>> + REG_A6XX_CP_CONTEXT_SWITCH_SMMU_INFO,
>>> + a6xx_gpu->preempt_iova[ring->id] + PREEMPT_OFFSET_SMMU_INFO);
>>> +
>>> + gpu_write64(gpu,
>>> + REG_A6XX_CP_CONTEXT_SWITCH_PRIV_NON_SECURE_RESTORE_ADDR,
>>> + a6xx_gpu->preempt_iova[ring->id] +
>>> PREEMPT_OFFSET_PRIV_NON_SECURE);
>>> +
>>> + preempt_offset_priv_secure =
>>> +
>>> PREEMPT_OFFSET_PRIV_SECURE(adreno_gpu->info->preempt_record_size);
>>> + gpu_write64(gpu,
>>> + REG_A6XX_CP_CONTEXT_SWITCH_PRIV_SECURE_RESTORE_ADDR,
>>> + a6xx_gpu->preempt_iova[ring->id] + preempt_offset_priv_secure);
>>
>> Secure buffers are not supported currently, so we can skip this and the
>> context record allocation. Anyway this has to be a separate buffer
>> mapped in secure pagetable which don't currently have. We can skip the
>> same in pseudo register packet too.
>>
>
> Mmm it would appear that not setting it causes an hang very early. I'll
> see if I can find out more about what is going on.
Actually it was a mistake I had made when testing. The secure record
will be gone from the next revision.
>
>>> +
>>> + a6xx_gpu->next_ring = ring;
>>> +
> ...
>>> struct msm_ringbuffer *msm_ringbuffer_new(struct msm_gpu *gpu, int id,
>>>
>>> --
>>> 2.46.0
>>>
>
> Best regards,
Best regards,
--
Antonino Maniscalco <antomani103@gmail.com>
^ permalink raw reply [flat|nested] 32+ messages in thread
* [PATCH v3 05/10] drm/msm/A6xx: Sync relevant adreno_pm4.xml changes
2024-09-05 14:51 [PATCH v3 00/10] Preemption support for A7XX Antonino Maniscalco
` (3 preceding siblings ...)
2024-09-05 14:51 ` [PATCH v3 04/10] drm/msm/A6xx: Implement preemption for A7XX targets Antonino Maniscalco
@ 2024-09-05 14:51 ` Antonino Maniscalco
2024-09-05 14:51 ` [PATCH v3 06/10] drm/msm/A6xx: Use posamble to reset counters on preemption Antonino Maniscalco
` (5 subsequent siblings)
10 siblings, 0 replies; 32+ messages in thread
From: Antonino Maniscalco @ 2024-09-05 14:51 UTC (permalink / raw)
To: Rob Clark, Sean Paul, Konrad Dybcio, Abhinav Kumar,
Dmitry Baryshkov, Marijn Suijten, David Airlie, Daniel Vetter,
Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann,
Jonathan Corbet
Cc: linux-arm-msm, dri-devel, freedreno, linux-kernel, linux-doc,
Antonino Maniscalco
In mesa CP_SET_CTXSWITCH_IB is renamed to CP_SET_AMBLE and some other
names are changed to match KGSL. Import those changes.
The changes have not been merged yet in mesa but are necessary for this
series.
Signed-off-by: Antonino Maniscalco <antomani103@gmail.com>
---
.../gpu/drm/msm/registers/adreno/adreno_pm4.xml | 39 ++++++++++------------
1 file changed, 17 insertions(+), 22 deletions(-)
diff --git a/drivers/gpu/drm/msm/registers/adreno/adreno_pm4.xml b/drivers/gpu/drm/msm/registers/adreno/adreno_pm4.xml
index cab01af55d22..55a35182858c 100644
--- a/drivers/gpu/drm/msm/registers/adreno/adreno_pm4.xml
+++ b/drivers/gpu/drm/msm/registers/adreno/adreno_pm4.xml
@@ -581,8 +581,7 @@ xsi:schemaLocation="https://gitlab.freedesktop.org/freedreno/ rules-fd.xsd">
and forcibly switch to the indicated context.
</doc>
<value name="CP_CONTEXT_SWITCH" value="0x54" variants="A6XX"/>
- <!-- Note, kgsl calls this CP_SET_AMBLE: -->
- <value name="CP_SET_CTXSWITCH_IB" value="0x55" variants="A6XX-"/>
+ <value name="CP_SET_AMBLE" value="0x55" variants="A6XX-"/>
<!--
Seems to always have the payload:
@@ -2013,42 +2012,38 @@ opcode: CP_LOAD_STATE4 (30) (4 dwords)
</reg32>
</domain>
-<domain name="CP_SET_CTXSWITCH_IB" width="32">
+<domain name="CP_SET_AMBLE" width="32">
<doc>
- Used by the userspace driver to set various IB's which are
- executed during context save/restore for handling
- state that isn't restored by the
- context switch routine itself.
- </doc>
- <enum name="ctxswitch_ib">
- <value name="RESTORE_IB" value="0">
+ Used by the userspace and kernel drivers to set various IB's
+ which are executed during context save/restore for handling
+ state that isn't restored by the context switch routine itself.
+ </doc>
+ <enum name="amble_type">
+ <value name="PREAMBLE_AMBLE_TYPE" value="0">
<doc>Executed unconditionally when switching back to the context.</doc>
</value>
- <value name="YIELD_RESTORE_IB" value="1">
+ <value name="BIN_PREAMBLE_AMBLE_TYPE" value="1">
<doc>
Executed when switching back after switching
away during execution of
- a CP_SET_MARKER packet with RM6_YIELD as the
- payload *and* the normal save routine was
- bypassed for a shorter one. I think this is
- connected to the "skipsaverestore" bit set by
- the kernel when preempting.
+ a CP_SET_MARKER packet with RM6_BIN_RENDER_END as the
+ payload *and* skipsaverestore is set. This is
+ expected to restore static register values not
+ saved when skipsaverestore is set.
</doc>
</value>
- <value name="SAVE_IB" value="2">
+ <value name="POSTAMBLE_AMBLE_TYPE" value="2">
<doc>
Executed when switching away from the context,
except for context switches initiated via
CP_YIELD.
</doc>
</value>
- <value name="RB_SAVE_IB" value="3">
+ <value name="KMD_AMBLE_TYPE" value="3">
<doc>
This can only be set by the RB (i.e. the kernel)
and executes with protected mode off, but
- is otherwise similar to SAVE_IB.
-
- Note, kgsl calls this CP_KMD_AMBLE_TYPE
+ is otherwise similar to POSTAMBLE_AMBLE_TYPE.
</doc>
</value>
</enum>
@@ -2060,7 +2055,7 @@ opcode: CP_LOAD_STATE4 (30) (4 dwords)
</reg32>
<reg32 offset="2" name="2">
<bitfield name="DWORDS" low="0" high="19" type="uint"/>
- <bitfield name="TYPE" low="20" high="21" type="ctxswitch_ib"/>
+ <bitfield name="TYPE" low="20" high="21" type="amble_type"/>
</reg32>
</domain>
--
2.46.0
^ permalink raw reply related [flat|nested] 32+ messages in thread* [PATCH v3 06/10] drm/msm/A6xx: Use posamble to reset counters on preemption
2024-09-05 14:51 [PATCH v3 00/10] Preemption support for A7XX Antonino Maniscalco
` (4 preceding siblings ...)
2024-09-05 14:51 ` [PATCH v3 05/10] drm/msm/A6xx: Sync relevant adreno_pm4.xml changes Antonino Maniscalco
@ 2024-09-05 14:51 ` Antonino Maniscalco
2024-09-06 20:08 ` Akhil P Oommen
2024-09-05 14:51 ` [PATCH v3 07/10] drm/msm/A6xx: Add traces for preemption Antonino Maniscalco
` (4 subsequent siblings)
10 siblings, 1 reply; 32+ messages in thread
From: Antonino Maniscalco @ 2024-09-05 14:51 UTC (permalink / raw)
To: Rob Clark, Sean Paul, Konrad Dybcio, Abhinav Kumar,
Dmitry Baryshkov, Marijn Suijten, David Airlie, Daniel Vetter,
Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann,
Jonathan Corbet
Cc: linux-arm-msm, dri-devel, freedreno, linux-kernel, linux-doc,
Antonino Maniscalco
Use the postamble to reset perf counters when switching between rings,
except when sysprof is enabled, analogously to how they are reset
between submissions when switching pagetables.
Signed-off-by: Antonino Maniscalco <antomani103@gmail.com>
---
drivers/gpu/drm/msm/adreno/a6xx_gpu.c | 20 ++++++++++++++++++-
drivers/gpu/drm/msm/adreno/a6xx_gpu.h | 5 +++++
drivers/gpu/drm/msm/adreno/a6xx_preempt.c | 32 +++++++++++++++++++++++++++++++
drivers/gpu/drm/msm/adreno/adreno_gpu.h | 7 +++++--
4 files changed, 61 insertions(+), 3 deletions(-)
diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
index ed0b138a2d66..710ec3ce2923 100644
--- a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
+++ b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
@@ -366,7 +366,8 @@ static void a6xx_submit(struct msm_gpu *gpu, struct msm_gem_submit *submit)
static void a6xx_emit_set_pseudo_reg(struct msm_ringbuffer *ring,
struct a6xx_gpu *a6xx_gpu, struct msm_gpu_submitqueue *queue)
{
- u64 preempt_offset_priv_secure;
+ bool sysprof = refcount_read(&a6xx_gpu->base.base.sysprof_active) > 1;
+ u64 preempt_offset_priv_secure, preempt_postamble;
OUT_PKT7(ring, CP_SET_PSEUDO_REG, 15);
@@ -398,6 +399,23 @@ static void a6xx_emit_set_pseudo_reg(struct msm_ringbuffer *ring,
/* seems OK to set to 0 to disable it */
OUT_RING(ring, 0);
OUT_RING(ring, 0);
+
+ /* if not profiling set postamble to clear perfcounters, else clear it */
+ if (!sysprof && a6xx_gpu->preempt_postamble_len) {
+ preempt_postamble = a6xx_gpu->preempt_postamble_iova;
+
+ OUT_PKT7(ring, CP_SET_AMBLE, 3);
+ OUT_RING(ring, lower_32_bits(preempt_postamble));
+ OUT_RING(ring, upper_32_bits(preempt_postamble));
+ OUT_RING(ring, CP_SET_AMBLE_2_DWORDS(
+ a6xx_gpu->preempt_postamble_len) |
+ CP_SET_AMBLE_2_TYPE(KMD_AMBLE_TYPE));
+ } else {
+ OUT_PKT7(ring, CP_SET_AMBLE, 3);
+ OUT_RING(ring, 0);
+ OUT_RING(ring, 0);
+ OUT_RING(ring, CP_SET_AMBLE_2_TYPE(KMD_AMBLE_TYPE));
+ }
}
static void a7xx_submit(struct msm_gpu *gpu, struct msm_gem_submit *submit)
diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gpu.h b/drivers/gpu/drm/msm/adreno/a6xx_gpu.h
index da10060e38dc..b009732c08c5 100644
--- a/drivers/gpu/drm/msm/adreno/a6xx_gpu.h
+++ b/drivers/gpu/drm/msm/adreno/a6xx_gpu.h
@@ -71,6 +71,11 @@ struct a6xx_gpu {
bool uses_gmem;
bool skip_save_restore;
+ struct drm_gem_object *preempt_postamble_bo;
+ void *preempt_postamble_ptr;
+ uint64_t preempt_postamble_iova;
+ uint64_t preempt_postamble_len;
+
struct a6xx_gmu gmu;
struct drm_gem_object *shadow_bo;
diff --git a/drivers/gpu/drm/msm/adreno/a6xx_preempt.c b/drivers/gpu/drm/msm/adreno/a6xx_preempt.c
index 1caff76aca6e..ec44f44d925f 100644
--- a/drivers/gpu/drm/msm/adreno/a6xx_preempt.c
+++ b/drivers/gpu/drm/msm/adreno/a6xx_preempt.c
@@ -346,6 +346,28 @@ static int preempt_init_ring(struct a6xx_gpu *a6xx_gpu,
return 0;
}
+static void preempt_prepare_postamble(struct a6xx_gpu *a6xx_gpu)
+{
+ u32 *postamble = a6xx_gpu->preempt_postamble_ptr;
+ u32 count = 0;
+
+ postamble[count++] = PKT7(CP_REG_RMW, 3);
+ postamble[count++] = REG_A6XX_RBBM_PERFCTR_SRAM_INIT_CMD;
+ postamble[count++] = 0;
+ postamble[count++] = 1;
+
+ postamble[count++] = PKT7(CP_WAIT_REG_MEM, 6);
+ postamble[count++] = CP_WAIT_REG_MEM_0_FUNCTION(WRITE_EQ);
+ postamble[count++] = CP_WAIT_REG_MEM_1_POLL_ADDR_LO(
+ REG_A6XX_RBBM_PERFCTR_SRAM_INIT_STATUS);
+ postamble[count++] = CP_WAIT_REG_MEM_2_POLL_ADDR_HI(0);
+ postamble[count++] = CP_WAIT_REG_MEM_3_REF(0x1);
+ postamble[count++] = CP_WAIT_REG_MEM_4_MASK(0x1);
+ postamble[count++] = CP_WAIT_REG_MEM_5_DELAY_LOOP_CYCLES(0);
+
+ a6xx_gpu->preempt_postamble_len = count;
+}
+
void a6xx_preempt_fini(struct msm_gpu *gpu)
{
struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu);
@@ -376,6 +398,16 @@ void a6xx_preempt_init(struct msm_gpu *gpu)
a6xx_gpu->uses_gmem = 1;
a6xx_gpu->skip_save_restore = 1;
+ a6xx_gpu->preempt_postamble_ptr = msm_gem_kernel_new(gpu->dev,
+ PAGE_SIZE, MSM_BO_WC | MSM_BO_MAP_PRIV,
+ gpu->aspace, &a6xx_gpu->preempt_postamble_bo,
+ &a6xx_gpu->preempt_postamble_iova);
+
+ preempt_prepare_postamble(a6xx_gpu);
+
+ if (IS_ERR(a6xx_gpu->preempt_postamble_ptr))
+ goto fail;
+
timer_setup(&a6xx_gpu->preempt_timer, a6xx_preempt_timer, 0);
return;
diff --git a/drivers/gpu/drm/msm/adreno/adreno_gpu.h b/drivers/gpu/drm/msm/adreno/adreno_gpu.h
index 6b1888280a83..87098567483b 100644
--- a/drivers/gpu/drm/msm/adreno/adreno_gpu.h
+++ b/drivers/gpu/drm/msm/adreno/adreno_gpu.h
@@ -610,12 +610,15 @@ OUT_PKT4(struct msm_ringbuffer *ring, uint16_t regindx, uint16_t cnt)
OUT_RING(ring, PKT4(regindx, cnt));
}
+#define PKT7(opcode, cnt) \
+ (CP_TYPE7_PKT | (cnt << 0) | (PM4_PARITY(cnt) << 15) | \
+ ((opcode & 0x7F) << 16) | (PM4_PARITY(opcode) << 23))
+
static inline void
OUT_PKT7(struct msm_ringbuffer *ring, uint8_t opcode, uint16_t cnt)
{
adreno_wait_ring(ring, cnt + 1);
- OUT_RING(ring, CP_TYPE7_PKT | (cnt << 0) | (PM4_PARITY(cnt) << 15) |
- ((opcode & 0x7F) << 16) | (PM4_PARITY(opcode) << 23));
+ OUT_RING(ring, PKT7(opcode, cnt));
}
struct msm_gpu *a2xx_gpu_init(struct drm_device *dev);
--
2.46.0
^ permalink raw reply related [flat|nested] 32+ messages in thread* Re: [PATCH v3 06/10] drm/msm/A6xx: Use posamble to reset counters on preemption
2024-09-05 14:51 ` [PATCH v3 06/10] drm/msm/A6xx: Use posamble to reset counters on preemption Antonino Maniscalco
@ 2024-09-06 20:08 ` Akhil P Oommen
2024-09-09 15:07 ` Antonino Maniscalco
0 siblings, 1 reply; 32+ messages in thread
From: Akhil P Oommen @ 2024-09-06 20:08 UTC (permalink / raw)
To: Antonino Maniscalco
Cc: Rob Clark, Sean Paul, Konrad Dybcio, Abhinav Kumar,
Dmitry Baryshkov, Marijn Suijten, David Airlie, Daniel Vetter,
Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann,
Jonathan Corbet, linux-arm-msm, dri-devel, freedreno,
linux-kernel, linux-doc
On Thu, Sep 05, 2024 at 04:51:24PM +0200, Antonino Maniscalco wrote:
> Use the postamble to reset perf counters when switching between rings,
> except when sysprof is enabled, analogously to how they are reset
> between submissions when switching pagetables.
>
> Signed-off-by: Antonino Maniscalco <antomani103@gmail.com>
> ---
> drivers/gpu/drm/msm/adreno/a6xx_gpu.c | 20 ++++++++++++++++++-
> drivers/gpu/drm/msm/adreno/a6xx_gpu.h | 5 +++++
> drivers/gpu/drm/msm/adreno/a6xx_preempt.c | 32 +++++++++++++++++++++++++++++++
> drivers/gpu/drm/msm/adreno/adreno_gpu.h | 7 +++++--
> 4 files changed, 61 insertions(+), 3 deletions(-)
>
> diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
> index ed0b138a2d66..710ec3ce2923 100644
> --- a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
> +++ b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
> @@ -366,7 +366,8 @@ static void a6xx_submit(struct msm_gpu *gpu, struct msm_gem_submit *submit)
> static void a6xx_emit_set_pseudo_reg(struct msm_ringbuffer *ring,
> struct a6xx_gpu *a6xx_gpu, struct msm_gpu_submitqueue *queue)
> {
> - u64 preempt_offset_priv_secure;
> + bool sysprof = refcount_read(&a6xx_gpu->base.base.sysprof_active) > 1;
> + u64 preempt_offset_priv_secure, preempt_postamble;
>
> OUT_PKT7(ring, CP_SET_PSEUDO_REG, 15);
>
> @@ -398,6 +399,23 @@ static void a6xx_emit_set_pseudo_reg(struct msm_ringbuffer *ring,
> /* seems OK to set to 0 to disable it */
> OUT_RING(ring, 0);
> OUT_RING(ring, 0);
> +
> + /* if not profiling set postamble to clear perfcounters, else clear it */
> + if (!sysprof && a6xx_gpu->preempt_postamble_len) {
> + preempt_postamble = a6xx_gpu->preempt_postamble_iova;
> +
> + OUT_PKT7(ring, CP_SET_AMBLE, 3);
> + OUT_RING(ring, lower_32_bits(preempt_postamble));
> + OUT_RING(ring, upper_32_bits(preempt_postamble));
> + OUT_RING(ring, CP_SET_AMBLE_2_DWORDS(
> + a6xx_gpu->preempt_postamble_len) |
> + CP_SET_AMBLE_2_TYPE(KMD_AMBLE_TYPE));
> + } else {
Why do we need this else part?
> + OUT_PKT7(ring, CP_SET_AMBLE, 3);
> + OUT_RING(ring, 0);
> + OUT_RING(ring, 0);
> + OUT_RING(ring, CP_SET_AMBLE_2_TYPE(KMD_AMBLE_TYPE));
> + }
> }
>
> static void a7xx_submit(struct msm_gpu *gpu, struct msm_gem_submit *submit)
> diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gpu.h b/drivers/gpu/drm/msm/adreno/a6xx_gpu.h
> index da10060e38dc..b009732c08c5 100644
> --- a/drivers/gpu/drm/msm/adreno/a6xx_gpu.h
> +++ b/drivers/gpu/drm/msm/adreno/a6xx_gpu.h
> @@ -71,6 +71,11 @@ struct a6xx_gpu {
> bool uses_gmem;
> bool skip_save_restore;
>
> + struct drm_gem_object *preempt_postamble_bo;
> + void *preempt_postamble_ptr;
> + uint64_t preempt_postamble_iova;
> + uint64_t preempt_postamble_len;
> +
> struct a6xx_gmu gmu;
>
> struct drm_gem_object *shadow_bo;
> diff --git a/drivers/gpu/drm/msm/adreno/a6xx_preempt.c b/drivers/gpu/drm/msm/adreno/a6xx_preempt.c
> index 1caff76aca6e..ec44f44d925f 100644
> --- a/drivers/gpu/drm/msm/adreno/a6xx_preempt.c
> +++ b/drivers/gpu/drm/msm/adreno/a6xx_preempt.c
> @@ -346,6 +346,28 @@ static int preempt_init_ring(struct a6xx_gpu *a6xx_gpu,
> return 0;
> }
>
> +static void preempt_prepare_postamble(struct a6xx_gpu *a6xx_gpu)
> +{
> + u32 *postamble = a6xx_gpu->preempt_postamble_ptr;
> + u32 count = 0;
> +
> + postamble[count++] = PKT7(CP_REG_RMW, 3);
> + postamble[count++] = REG_A6XX_RBBM_PERFCTR_SRAM_INIT_CMD;
> + postamble[count++] = 0;
> + postamble[count++] = 1;
> +
> + postamble[count++] = PKT7(CP_WAIT_REG_MEM, 6);
> + postamble[count++] = CP_WAIT_REG_MEM_0_FUNCTION(WRITE_EQ);
> + postamble[count++] = CP_WAIT_REG_MEM_1_POLL_ADDR_LO(
> + REG_A6XX_RBBM_PERFCTR_SRAM_INIT_STATUS);
> + postamble[count++] = CP_WAIT_REG_MEM_2_POLL_ADDR_HI(0);
> + postamble[count++] = CP_WAIT_REG_MEM_3_REF(0x1);
> + postamble[count++] = CP_WAIT_REG_MEM_4_MASK(0x1);
> + postamble[count++] = CP_WAIT_REG_MEM_5_DELAY_LOOP_CYCLES(0);
Isn't it better to just replace this with NOP packets when sysprof is
enabled, just before triggering preemption? It will help to have an
immediate effect.
-Akhil
> +
> + a6xx_gpu->preempt_postamble_len = count;
> +}
> +
> void a6xx_preempt_fini(struct msm_gpu *gpu)
> {
> struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu);
> @@ -376,6 +398,16 @@ void a6xx_preempt_init(struct msm_gpu *gpu)
> a6xx_gpu->uses_gmem = 1;
> a6xx_gpu->skip_save_restore = 1;
>
> + a6xx_gpu->preempt_postamble_ptr = msm_gem_kernel_new(gpu->dev,
> + PAGE_SIZE, MSM_BO_WC | MSM_BO_MAP_PRIV,
> + gpu->aspace, &a6xx_gpu->preempt_postamble_bo,
> + &a6xx_gpu->preempt_postamble_iova);
> +
> + preempt_prepare_postamble(a6xx_gpu);
> +
> + if (IS_ERR(a6xx_gpu->preempt_postamble_ptr))
> + goto fail;
> +
> timer_setup(&a6xx_gpu->preempt_timer, a6xx_preempt_timer, 0);
>
> return;
> diff --git a/drivers/gpu/drm/msm/adreno/adreno_gpu.h b/drivers/gpu/drm/msm/adreno/adreno_gpu.h
> index 6b1888280a83..87098567483b 100644
> --- a/drivers/gpu/drm/msm/adreno/adreno_gpu.h
> +++ b/drivers/gpu/drm/msm/adreno/adreno_gpu.h
> @@ -610,12 +610,15 @@ OUT_PKT4(struct msm_ringbuffer *ring, uint16_t regindx, uint16_t cnt)
> OUT_RING(ring, PKT4(regindx, cnt));
> }
>
> +#define PKT7(opcode, cnt) \
> + (CP_TYPE7_PKT | (cnt << 0) | (PM4_PARITY(cnt) << 15) | \
> + ((opcode & 0x7F) << 16) | (PM4_PARITY(opcode) << 23))
> +
> static inline void
> OUT_PKT7(struct msm_ringbuffer *ring, uint8_t opcode, uint16_t cnt)
> {
> adreno_wait_ring(ring, cnt + 1);
> - OUT_RING(ring, CP_TYPE7_PKT | (cnt << 0) | (PM4_PARITY(cnt) << 15) |
> - ((opcode & 0x7F) << 16) | (PM4_PARITY(opcode) << 23));
> + OUT_RING(ring, PKT7(opcode, cnt));
> }
>
> struct msm_gpu *a2xx_gpu_init(struct drm_device *dev);
>
> --
> 2.46.0
>
^ permalink raw reply [flat|nested] 32+ messages in thread* Re: [PATCH v3 06/10] drm/msm/A6xx: Use posamble to reset counters on preemption
2024-09-06 20:08 ` Akhil P Oommen
@ 2024-09-09 15:07 ` Antonino Maniscalco
2024-09-10 21:34 ` Akhil P Oommen
0 siblings, 1 reply; 32+ messages in thread
From: Antonino Maniscalco @ 2024-09-09 15:07 UTC (permalink / raw)
To: Akhil P Oommen
Cc: Rob Clark, Sean Paul, Konrad Dybcio, Abhinav Kumar,
Dmitry Baryshkov, Marijn Suijten, David Airlie, Daniel Vetter,
Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann,
Jonathan Corbet, linux-arm-msm, dri-devel, freedreno,
linux-kernel, linux-doc
On 9/6/24 10:08 PM, Akhil P Oommen wrote:
> On Thu, Sep 05, 2024 at 04:51:24PM +0200, Antonino Maniscalco wrote:
>> Use the postamble to reset perf counters when switching between rings,
>> except when sysprof is enabled, analogously to how they are reset
>> between submissions when switching pagetables.
>>
>> Signed-off-by: Antonino Maniscalco <antomani103@gmail.com>
>> ---
>> drivers/gpu/drm/msm/adreno/a6xx_gpu.c | 20 ++++++++++++++++++-
>> drivers/gpu/drm/msm/adreno/a6xx_gpu.h | 5 +++++
>> drivers/gpu/drm/msm/adreno/a6xx_preempt.c | 32 +++++++++++++++++++++++++++++++
>> drivers/gpu/drm/msm/adreno/adreno_gpu.h | 7 +++++--
>> 4 files changed, 61 insertions(+), 3 deletions(-)
>>
>> diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
>> index ed0b138a2d66..710ec3ce2923 100644
>> --- a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
>> +++ b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
>> @@ -366,7 +366,8 @@ static void a6xx_submit(struct msm_gpu *gpu, struct msm_gem_submit *submit)
>> static void a6xx_emit_set_pseudo_reg(struct msm_ringbuffer *ring,
>> struct a6xx_gpu *a6xx_gpu, struct msm_gpu_submitqueue *queue)
>> {
>> - u64 preempt_offset_priv_secure;
>> + bool sysprof = refcount_read(&a6xx_gpu->base.base.sysprof_active) > 1;
>> + u64 preempt_offset_priv_secure, preempt_postamble;
>>
>> OUT_PKT7(ring, CP_SET_PSEUDO_REG, 15);
>>
>> @@ -398,6 +399,23 @@ static void a6xx_emit_set_pseudo_reg(struct msm_ringbuffer *ring,
>> /* seems OK to set to 0 to disable it */
>> OUT_RING(ring, 0);
>> OUT_RING(ring, 0);
>> +
>> + /* if not profiling set postamble to clear perfcounters, else clear it */
>> + if (!sysprof && a6xx_gpu->preempt_postamble_len) {
>> + preempt_postamble = a6xx_gpu->preempt_postamble_iova;
>> +
>> + OUT_PKT7(ring, CP_SET_AMBLE, 3);
>> + OUT_RING(ring, lower_32_bits(preempt_postamble));
>> + OUT_RING(ring, upper_32_bits(preempt_postamble));
>> + OUT_RING(ring, CP_SET_AMBLE_2_DWORDS(
>> + a6xx_gpu->preempt_postamble_len) |
>> + CP_SET_AMBLE_2_TYPE(KMD_AMBLE_TYPE));
>> + } else {
>
> Why do we need this else part?
Wouldn't the postmable remain set if we don't explicitly set it to 0?
>
>> + OUT_PKT7(ring, CP_SET_AMBLE, 3);
>> + OUT_RING(ring, 0);
>> + OUT_RING(ring, 0);
>> + OUT_RING(ring, CP_SET_AMBLE_2_TYPE(KMD_AMBLE_TYPE));
>> + }
>> }
>>
>> static void a7xx_submit(struct msm_gpu *gpu, struct msm_gem_submit *submit)
>> diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gpu.h b/drivers/gpu/drm/msm/adreno/a6xx_gpu.h
>> index da10060e38dc..b009732c08c5 100644
>> --- a/drivers/gpu/drm/msm/adreno/a6xx_gpu.h
>> +++ b/drivers/gpu/drm/msm/adreno/a6xx_gpu.h
>> @@ -71,6 +71,11 @@ struct a6xx_gpu {
>> bool uses_gmem;
>> bool skip_save_restore;
>>
>> + struct drm_gem_object *preempt_postamble_bo;
>> + void *preempt_postamble_ptr;
>> + uint64_t preempt_postamble_iova;
>> + uint64_t preempt_postamble_len;
>> +
>> struct a6xx_gmu gmu;
>>
>> struct drm_gem_object *shadow_bo;
>> diff --git a/drivers/gpu/drm/msm/adreno/a6xx_preempt.c b/drivers/gpu/drm/msm/adreno/a6xx_preempt.c
>> index 1caff76aca6e..ec44f44d925f 100644
>> --- a/drivers/gpu/drm/msm/adreno/a6xx_preempt.c
>> +++ b/drivers/gpu/drm/msm/adreno/a6xx_preempt.c
>> @@ -346,6 +346,28 @@ static int preempt_init_ring(struct a6xx_gpu *a6xx_gpu,
>> return 0;
>> }
>>
>> +static void preempt_prepare_postamble(struct a6xx_gpu *a6xx_gpu)
>> +{
>> + u32 *postamble = a6xx_gpu->preempt_postamble_ptr;
>> + u32 count = 0;
>> +
>> + postamble[count++] = PKT7(CP_REG_RMW, 3);
>> + postamble[count++] = REG_A6XX_RBBM_PERFCTR_SRAM_INIT_CMD;
>> + postamble[count++] = 0;
>> + postamble[count++] = 1;
>> +
>> + postamble[count++] = PKT7(CP_WAIT_REG_MEM, 6);
>> + postamble[count++] = CP_WAIT_REG_MEM_0_FUNCTION(WRITE_EQ);
>> + postamble[count++] = CP_WAIT_REG_MEM_1_POLL_ADDR_LO(
>> + REG_A6XX_RBBM_PERFCTR_SRAM_INIT_STATUS);
>> + postamble[count++] = CP_WAIT_REG_MEM_2_POLL_ADDR_HI(0);
>> + postamble[count++] = CP_WAIT_REG_MEM_3_REF(0x1);
>> + postamble[count++] = CP_WAIT_REG_MEM_4_MASK(0x1);
>> + postamble[count++] = CP_WAIT_REG_MEM_5_DELAY_LOOP_CYCLES(0);
>
> Isn't it better to just replace this with NOP packets when sysprof is
> enabled, just before triggering preemption? It will help to have an
> immediate effect.
>
> -Akhil
>
Mmm, this being a postamble I wonder whether we have the guarantee that
it finishes execution before the IRQ is called so updating it doesn't
race with the CP executing it.
>> +
>> + a6xx_gpu->preempt_postamble_len = count;
>> +}
>> +
>> void a6xx_preempt_fini(struct msm_gpu *gpu)
>> {
>> struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu);
>> @@ -376,6 +398,16 @@ void a6xx_preempt_init(struct msm_gpu *gpu)
>> a6xx_gpu->uses_gmem = 1;
>> a6xx_gpu->skip_save_restore = 1;
>>
>> + a6xx_gpu->preempt_postamble_ptr = msm_gem_kernel_new(gpu->dev,
>> + PAGE_SIZE, MSM_BO_WC | MSM_BO_MAP_PRIV,
>> + gpu->aspace, &a6xx_gpu->preempt_postamble_bo,
>> + &a6xx_gpu->preempt_postamble_iova);
>> +
>> + preempt_prepare_postamble(a6xx_gpu);
>> +
>> + if (IS_ERR(a6xx_gpu->preempt_postamble_ptr))
>> + goto fail;
>> +
>> timer_setup(&a6xx_gpu->preempt_timer, a6xx_preempt_timer, 0);
>>
>> return;
>> diff --git a/drivers/gpu/drm/msm/adreno/adreno_gpu.h b/drivers/gpu/drm/msm/adreno/adreno_gpu.h
>> index 6b1888280a83..87098567483b 100644
>> --- a/drivers/gpu/drm/msm/adreno/adreno_gpu.h
>> +++ b/drivers/gpu/drm/msm/adreno/adreno_gpu.h
>> @@ -610,12 +610,15 @@ OUT_PKT4(struct msm_ringbuffer *ring, uint16_t regindx, uint16_t cnt)
>> OUT_RING(ring, PKT4(regindx, cnt));
>> }
>>
>> +#define PKT7(opcode, cnt) \
>> + (CP_TYPE7_PKT | (cnt << 0) | (PM4_PARITY(cnt) << 15) | \
>> + ((opcode & 0x7F) << 16) | (PM4_PARITY(opcode) << 23))
>> +
>> static inline void
>> OUT_PKT7(struct msm_ringbuffer *ring, uint8_t opcode, uint16_t cnt)
>> {
>> adreno_wait_ring(ring, cnt + 1);
>> - OUT_RING(ring, CP_TYPE7_PKT | (cnt << 0) | (PM4_PARITY(cnt) << 15) |
>> - ((opcode & 0x7F) << 16) | (PM4_PARITY(opcode) << 23));
>> + OUT_RING(ring, PKT7(opcode, cnt));
>> }
>>
>> struct msm_gpu *a2xx_gpu_init(struct drm_device *dev);
>>
>> --
>> 2.46.0
>>
Best regards,
--
Antonino Maniscalco <antomani103@gmail.com>
^ permalink raw reply [flat|nested] 32+ messages in thread* Re: [PATCH v3 06/10] drm/msm/A6xx: Use posamble to reset counters on preemption
2024-09-09 15:07 ` Antonino Maniscalco
@ 2024-09-10 21:34 ` Akhil P Oommen
2024-09-10 22:35 ` Antonino Maniscalco
0 siblings, 1 reply; 32+ messages in thread
From: Akhil P Oommen @ 2024-09-10 21:34 UTC (permalink / raw)
To: Antonino Maniscalco
Cc: Rob Clark, Sean Paul, Konrad Dybcio, Abhinav Kumar,
Dmitry Baryshkov, Marijn Suijten, David Airlie, Daniel Vetter,
Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann,
Jonathan Corbet, linux-arm-msm, dri-devel, freedreno,
linux-kernel, linux-doc
On Mon, Sep 09, 2024 at 05:07:42PM +0200, Antonino Maniscalco wrote:
> On 9/6/24 10:08 PM, Akhil P Oommen wrote:
> > On Thu, Sep 05, 2024 at 04:51:24PM +0200, Antonino Maniscalco wrote:
> > > Use the postamble to reset perf counters when switching between rings,
> > > except when sysprof is enabled, analogously to how they are reset
> > > between submissions when switching pagetables.
> > >
> > > Signed-off-by: Antonino Maniscalco <antomani103@gmail.com>
> > > ---
> > > drivers/gpu/drm/msm/adreno/a6xx_gpu.c | 20 ++++++++++++++++++-
> > > drivers/gpu/drm/msm/adreno/a6xx_gpu.h | 5 +++++
> > > drivers/gpu/drm/msm/adreno/a6xx_preempt.c | 32 +++++++++++++++++++++++++++++++
> > > drivers/gpu/drm/msm/adreno/adreno_gpu.h | 7 +++++--
> > > 4 files changed, 61 insertions(+), 3 deletions(-)
> > >
> > > diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
> > > index ed0b138a2d66..710ec3ce2923 100644
> > > --- a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
> > > +++ b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
> > > @@ -366,7 +366,8 @@ static void a6xx_submit(struct msm_gpu *gpu, struct msm_gem_submit *submit)
> > > static void a6xx_emit_set_pseudo_reg(struct msm_ringbuffer *ring,
> > > struct a6xx_gpu *a6xx_gpu, struct msm_gpu_submitqueue *queue)
> > > {
> > > - u64 preempt_offset_priv_secure;
> > > + bool sysprof = refcount_read(&a6xx_gpu->base.base.sysprof_active) > 1;
> > > + u64 preempt_offset_priv_secure, preempt_postamble;
> > > OUT_PKT7(ring, CP_SET_PSEUDO_REG, 15);
> > > @@ -398,6 +399,23 @@ static void a6xx_emit_set_pseudo_reg(struct msm_ringbuffer *ring,
> > > /* seems OK to set to 0 to disable it */
> > > OUT_RING(ring, 0);
> > > OUT_RING(ring, 0);
> > > +
> > > + /* if not profiling set postamble to clear perfcounters, else clear it */
> > > + if (!sysprof && a6xx_gpu->preempt_postamble_len) {
Setting len = 0 is enough to skip processing postamble packets. So how
about a simpler:
len = a6xx_gpu->preempt_postamble_len;
if (sysprof)
len = 0;
OUT_PKT7(ring, CP_SET_AMBLE, 3);
OUT_RING(ring, lower_32_bits(preempt_postamble));
OUT_RING(ring, upper_32_bits(preempt_postamble));
OUT_RING(ring, CP_SET_AMBLE_2_DWORDS(len) |
CP_SET_AMBLE_2_TYPE(KMD_AMBLE_TYPE));
> > > + preempt_postamble = a6xx_gpu->preempt_postamble_iova;
> > > +
> > > + OUT_PKT7(ring, CP_SET_AMBLE, 3);
> > > + OUT_RING(ring, lower_32_bits(preempt_postamble));
> > > + OUT_RING(ring, upper_32_bits(preempt_postamble));
> > > + OUT_RING(ring, CP_SET_AMBLE_2_DWORDS(
> > > + a6xx_gpu->preempt_postamble_len) |
> > > + CP_SET_AMBLE_2_TYPE(KMD_AMBLE_TYPE));
> > > + } else {
> >
> > Why do we need this else part?
>
> Wouldn't the postmable remain set if we don't explicitly set it to 0?
Aah, that is a genuine concern. I am not sure! Lets keep it.
>
> >
> > > + OUT_PKT7(ring, CP_SET_AMBLE, 3);
> > > + OUT_RING(ring, 0);
> > > + OUT_RING(ring, 0);
> > > + OUT_RING(ring, CP_SET_AMBLE_2_TYPE(KMD_AMBLE_TYPE));
> > > + }
> > > }
> > > static void a7xx_submit(struct msm_gpu *gpu, struct msm_gem_submit *submit)
> > > diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gpu.h b/drivers/gpu/drm/msm/adreno/a6xx_gpu.h
> > > index da10060e38dc..b009732c08c5 100644
> > > --- a/drivers/gpu/drm/msm/adreno/a6xx_gpu.h
> > > +++ b/drivers/gpu/drm/msm/adreno/a6xx_gpu.h
> > > @@ -71,6 +71,11 @@ struct a6xx_gpu {
> > > bool uses_gmem;
> > > bool skip_save_restore;
> > > + struct drm_gem_object *preempt_postamble_bo;
> > > + void *preempt_postamble_ptr;
> > > + uint64_t preempt_postamble_iova;
> > > + uint64_t preempt_postamble_len;
> > > +
> > > struct a6xx_gmu gmu;
> > > struct drm_gem_object *shadow_bo;
> > > diff --git a/drivers/gpu/drm/msm/adreno/a6xx_preempt.c b/drivers/gpu/drm/msm/adreno/a6xx_preempt.c
> > > index 1caff76aca6e..ec44f44d925f 100644
> > > --- a/drivers/gpu/drm/msm/adreno/a6xx_preempt.c
> > > +++ b/drivers/gpu/drm/msm/adreno/a6xx_preempt.c
> > > @@ -346,6 +346,28 @@ static int preempt_init_ring(struct a6xx_gpu *a6xx_gpu,
> > > return 0;
> > > }
> > > +static void preempt_prepare_postamble(struct a6xx_gpu *a6xx_gpu)
> > > +{
> > > + u32 *postamble = a6xx_gpu->preempt_postamble_ptr;
> > > + u32 count = 0;
> > > +
> > > + postamble[count++] = PKT7(CP_REG_RMW, 3);
> > > + postamble[count++] = REG_A6XX_RBBM_PERFCTR_SRAM_INIT_CMD;
> > > + postamble[count++] = 0;
> > > + postamble[count++] = 1;
> > > +
> > > + postamble[count++] = PKT7(CP_WAIT_REG_MEM, 6);
> > > + postamble[count++] = CP_WAIT_REG_MEM_0_FUNCTION(WRITE_EQ);
> > > + postamble[count++] = CP_WAIT_REG_MEM_1_POLL_ADDR_LO(
> > > + REG_A6XX_RBBM_PERFCTR_SRAM_INIT_STATUS);
> > > + postamble[count++] = CP_WAIT_REG_MEM_2_POLL_ADDR_HI(0);
> > > + postamble[count++] = CP_WAIT_REG_MEM_3_REF(0x1);
> > > + postamble[count++] = CP_WAIT_REG_MEM_4_MASK(0x1);
> > > + postamble[count++] = CP_WAIT_REG_MEM_5_DELAY_LOOP_CYCLES(0);
> >
> > Isn't it better to just replace this with NOP packets when sysprof is
> > enabled, just before triggering preemption? It will help to have an
> > immediate effect.
> >
> > -Akhil
> >
>
> Mmm, this being a postamble I wonder whether we have the guarantee that it
> finishes execution before the IRQ is called so updating it doesn't race with
> the CP executing it.
Yes, it will be complete. But on a second thought now, this suggestion from me
looks like an overkill.
-Akhil.
>
> > > +
> > > + a6xx_gpu->preempt_postamble_len = count;
> > > +}
> > > +
> > > void a6xx_preempt_fini(struct msm_gpu *gpu)
> > > {
> > > struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu);
> > > @@ -376,6 +398,16 @@ void a6xx_preempt_init(struct msm_gpu *gpu)
> > > a6xx_gpu->uses_gmem = 1;
> > > a6xx_gpu->skip_save_restore = 1;
> > > + a6xx_gpu->preempt_postamble_ptr = msm_gem_kernel_new(gpu->dev,
> > > + PAGE_SIZE, MSM_BO_WC | MSM_BO_MAP_PRIV,
> > > + gpu->aspace, &a6xx_gpu->preempt_postamble_bo,
> > > + &a6xx_gpu->preempt_postamble_iova);
> > > +
> > > + preempt_prepare_postamble(a6xx_gpu);
> > > +
> > > + if (IS_ERR(a6xx_gpu->preempt_postamble_ptr))
> > > + goto fail;
> > > +
> > > timer_setup(&a6xx_gpu->preempt_timer, a6xx_preempt_timer, 0);
> > > return;
> > > diff --git a/drivers/gpu/drm/msm/adreno/adreno_gpu.h b/drivers/gpu/drm/msm/adreno/adreno_gpu.h
> > > index 6b1888280a83..87098567483b 100644
> > > --- a/drivers/gpu/drm/msm/adreno/adreno_gpu.h
> > > +++ b/drivers/gpu/drm/msm/adreno/adreno_gpu.h
> > > @@ -610,12 +610,15 @@ OUT_PKT4(struct msm_ringbuffer *ring, uint16_t regindx, uint16_t cnt)
> > > OUT_RING(ring, PKT4(regindx, cnt));
> > > }
> > > +#define PKT7(opcode, cnt) \
> > > + (CP_TYPE7_PKT | (cnt << 0) | (PM4_PARITY(cnt) << 15) | \
> > > + ((opcode & 0x7F) << 16) | (PM4_PARITY(opcode) << 23))
> > > +
> > > static inline void
> > > OUT_PKT7(struct msm_ringbuffer *ring, uint8_t opcode, uint16_t cnt)
> > > {
> > > adreno_wait_ring(ring, cnt + 1);
> > > - OUT_RING(ring, CP_TYPE7_PKT | (cnt << 0) | (PM4_PARITY(cnt) << 15) |
> > > - ((opcode & 0x7F) << 16) | (PM4_PARITY(opcode) << 23));
> > > + OUT_RING(ring, PKT7(opcode, cnt));
> > > }
> > > struct msm_gpu *a2xx_gpu_init(struct drm_device *dev);
> > >
> > > --
> > > 2.46.0
> > >
>
> Best regards,
> --
> Antonino Maniscalco <antomani103@gmail.com>
>
^ permalink raw reply [flat|nested] 32+ messages in thread* Re: [PATCH v3 06/10] drm/msm/A6xx: Use posamble to reset counters on preemption
2024-09-10 21:34 ` Akhil P Oommen
@ 2024-09-10 22:35 ` Antonino Maniscalco
2024-09-12 7:12 ` Akhil P Oommen
0 siblings, 1 reply; 32+ messages in thread
From: Antonino Maniscalco @ 2024-09-10 22:35 UTC (permalink / raw)
To: Akhil P Oommen
Cc: Rob Clark, Sean Paul, Konrad Dybcio, Abhinav Kumar,
Dmitry Baryshkov, Marijn Suijten, David Airlie, Daniel Vetter,
Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann,
Jonathan Corbet, linux-arm-msm, dri-devel, freedreno,
linux-kernel, linux-doc
On 9/10/24 11:34 PM, Akhil P Oommen wrote:
> On Mon, Sep 09, 2024 at 05:07:42PM +0200, Antonino Maniscalco wrote:
>> On 9/6/24 10:08 PM, Akhil P Oommen wrote:
>>> On Thu, Sep 05, 2024 at 04:51:24PM +0200, Antonino Maniscalco wrote:
>>>> Use the postamble to reset perf counters when switching between rings,
>>>> except when sysprof is enabled, analogously to how they are reset
>>>> between submissions when switching pagetables.
>>>>
>>>> Signed-off-by: Antonino Maniscalco <antomani103@gmail.com>
>>>> ---
>>>> drivers/gpu/drm/msm/adreno/a6xx_gpu.c | 20 ++++++++++++++++++-
>>>> drivers/gpu/drm/msm/adreno/a6xx_gpu.h | 5 +++++
>>>> drivers/gpu/drm/msm/adreno/a6xx_preempt.c | 32 +++++++++++++++++++++++++++++++
>>>> drivers/gpu/drm/msm/adreno/adreno_gpu.h | 7 +++++--
>>>> 4 files changed, 61 insertions(+), 3 deletions(-)
>>>>
>>>> diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
>>>> index ed0b138a2d66..710ec3ce2923 100644
>>>> --- a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
>>>> +++ b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
>>>> @@ -366,7 +366,8 @@ static void a6xx_submit(struct msm_gpu *gpu, struct msm_gem_submit *submit)
>>>> static void a6xx_emit_set_pseudo_reg(struct msm_ringbuffer *ring,
>>>> struct a6xx_gpu *a6xx_gpu, struct msm_gpu_submitqueue *queue)
>>>> {
>>>> - u64 preempt_offset_priv_secure;
>>>> + bool sysprof = refcount_read(&a6xx_gpu->base.base.sysprof_active) > 1;
>>>> + u64 preempt_offset_priv_secure, preempt_postamble;
>>>> OUT_PKT7(ring, CP_SET_PSEUDO_REG, 15);
>>>> @@ -398,6 +399,23 @@ static void a6xx_emit_set_pseudo_reg(struct msm_ringbuffer *ring,
>>>> /* seems OK to set to 0 to disable it */
>>>> OUT_RING(ring, 0);
>>>> OUT_RING(ring, 0);
>>>> +
>>>> + /* if not profiling set postamble to clear perfcounters, else clear it */
>>>> + if (!sysprof && a6xx_gpu->preempt_postamble_len) {
>
> Setting len = 0 is enough to skip processing postamble packets. So how
> about a simpler:
>
> len = a6xx_gpu->preempt_postamble_len;
> if (sysprof)
> len = 0;
>
> OUT_PKT7(ring, CP_SET_AMBLE, 3);
> OUT_RING(ring, lower_32_bits(preempt_postamble));
> OUT_RING(ring, upper_32_bits(preempt_postamble));
> OUT_RING(ring, CP_SET_AMBLE_2_DWORDS(len) |
> CP_SET_AMBLE_2_TYPE(KMD_AMBLE_TYPE));
>
>>>> + preempt_postamble = a6xx_gpu->preempt_postamble_iova;
>>>> +
>>>> + OUT_PKT7(ring, CP_SET_AMBLE, 3);
>>>> + OUT_RING(ring, lower_32_bits(preempt_postamble));
>>>> + OUT_RING(ring, upper_32_bits(preempt_postamble));
>>>> + OUT_RING(ring, CP_SET_AMBLE_2_DWORDS(
>>>> + a6xx_gpu->preempt_postamble_len) |
>>>> + CP_SET_AMBLE_2_TYPE(KMD_AMBLE_TYPE));
>>>> + } else {
>>>
>>> Why do we need this else part?
>>
>> Wouldn't the postmable remain set if we don't explicitly set it to 0?
>
> Aah, that is a genuine concern. I am not sure! Lets keep it.
>
>>
>>>
>>>> + OUT_PKT7(ring, CP_SET_AMBLE, 3);
>>>> + OUT_RING(ring, 0);
>>>> + OUT_RING(ring, 0);
>>>> + OUT_RING(ring, CP_SET_AMBLE_2_TYPE(KMD_AMBLE_TYPE));
>>>> + }
>>>> }
>>>> static void a7xx_submit(struct msm_gpu *gpu, struct msm_gem_submit *submit)
>>>> diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gpu.h b/drivers/gpu/drm/msm/adreno/a6xx_gpu.h
>>>> index da10060e38dc..b009732c08c5 100644
>>>> --- a/drivers/gpu/drm/msm/adreno/a6xx_gpu.h
>>>> +++ b/drivers/gpu/drm/msm/adreno/a6xx_gpu.h
>>>> @@ -71,6 +71,11 @@ struct a6xx_gpu {
>>>> bool uses_gmem;
>>>> bool skip_save_restore;
>>>> + struct drm_gem_object *preempt_postamble_bo;
>>>> + void *preempt_postamble_ptr;
>>>> + uint64_t preempt_postamble_iova;
>>>> + uint64_t preempt_postamble_len;
>>>> +
>>>> struct a6xx_gmu gmu;
>>>> struct drm_gem_object *shadow_bo;
>>>> diff --git a/drivers/gpu/drm/msm/adreno/a6xx_preempt.c b/drivers/gpu/drm/msm/adreno/a6xx_preempt.c
>>>> index 1caff76aca6e..ec44f44d925f 100644
>>>> --- a/drivers/gpu/drm/msm/adreno/a6xx_preempt.c
>>>> +++ b/drivers/gpu/drm/msm/adreno/a6xx_preempt.c
>>>> @@ -346,6 +346,28 @@ static int preempt_init_ring(struct a6xx_gpu *a6xx_gpu,
>>>> return 0;
>>>> }
>>>> +static void preempt_prepare_postamble(struct a6xx_gpu *a6xx_gpu)
>>>> +{
>>>> + u32 *postamble = a6xx_gpu->preempt_postamble_ptr;
>>>> + u32 count = 0;
>>>> +
>>>> + postamble[count++] = PKT7(CP_REG_RMW, 3);
>>>> + postamble[count++] = REG_A6XX_RBBM_PERFCTR_SRAM_INIT_CMD;
>>>> + postamble[count++] = 0;
>>>> + postamble[count++] = 1;
>>>> +
>>>> + postamble[count++] = PKT7(CP_WAIT_REG_MEM, 6);
>>>> + postamble[count++] = CP_WAIT_REG_MEM_0_FUNCTION(WRITE_EQ);
>>>> + postamble[count++] = CP_WAIT_REG_MEM_1_POLL_ADDR_LO(
>>>> + REG_A6XX_RBBM_PERFCTR_SRAM_INIT_STATUS);
>>>> + postamble[count++] = CP_WAIT_REG_MEM_2_POLL_ADDR_HI(0);
>>>> + postamble[count++] = CP_WAIT_REG_MEM_3_REF(0x1);
>>>> + postamble[count++] = CP_WAIT_REG_MEM_4_MASK(0x1);
>>>> + postamble[count++] = CP_WAIT_REG_MEM_5_DELAY_LOOP_CYCLES(0);
>>>
>>> Isn't it better to just replace this with NOP packets when sysprof is
>>> enabled, just before triggering preemption? It will help to have an
>>> immediate effect.
>>>
>>> -Akhil
>>>
>>
>> Mmm, this being a postamble I wonder whether we have the guarantee that it
>> finishes execution before the IRQ is called so updating it doesn't race with
>> the CP executing it.
>
> Yes, it will be complete. But on a second thought now, this suggestion from me
> looks like an overkill.
Thanks for confirming! I have actually already implemented something
similar to what you proposed
https://gitlab.com/pac85/inux/-/commit/8b8ab1d89b0f611cfdbac4c3edba4192be91a7f9
so we can chose between the two. Let me know your prefence.
>
> -Akhil.
>
>>
>>>> +
>>>> + a6xx_gpu->preempt_postamble_len = count;
>>>> +}
>>>> +
>>>> void a6xx_preempt_fini(struct msm_gpu *gpu)
>>>> {
>>>> struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu);
>>>> @@ -376,6 +398,16 @@ void a6xx_preempt_init(struct msm_gpu *gpu)
>>>> a6xx_gpu->uses_gmem = 1;
>>>> a6xx_gpu->skip_save_restore = 1;
>>>> + a6xx_gpu->preempt_postamble_ptr = msm_gem_kernel_new(gpu->dev,
>>>> + PAGE_SIZE, MSM_BO_WC | MSM_BO_MAP_PRIV,
>>>> + gpu->aspace, &a6xx_gpu->preempt_postamble_bo,
>>>> + &a6xx_gpu->preempt_postamble_iova);
>>>> +
>>>> + preempt_prepare_postamble(a6xx_gpu);
>>>> +
>>>> + if (IS_ERR(a6xx_gpu->preempt_postamble_ptr))
>>>> + goto fail;
>>>> +
>>>> timer_setup(&a6xx_gpu->preempt_timer, a6xx_preempt_timer, 0);
>>>> return;
>>>> diff --git a/drivers/gpu/drm/msm/adreno/adreno_gpu.h b/drivers/gpu/drm/msm/adreno/adreno_gpu.h
>>>> index 6b1888280a83..87098567483b 100644
>>>> --- a/drivers/gpu/drm/msm/adreno/adreno_gpu.h
>>>> +++ b/drivers/gpu/drm/msm/adreno/adreno_gpu.h
>>>> @@ -610,12 +610,15 @@ OUT_PKT4(struct msm_ringbuffer *ring, uint16_t regindx, uint16_t cnt)
>>>> OUT_RING(ring, PKT4(regindx, cnt));
>>>> }
>>>> +#define PKT7(opcode, cnt) \
>>>> + (CP_TYPE7_PKT | (cnt << 0) | (PM4_PARITY(cnt) << 15) | \
>>>> + ((opcode & 0x7F) << 16) | (PM4_PARITY(opcode) << 23))
>>>> +
>>>> static inline void
>>>> OUT_PKT7(struct msm_ringbuffer *ring, uint8_t opcode, uint16_t cnt)
>>>> {
>>>> adreno_wait_ring(ring, cnt + 1);
>>>> - OUT_RING(ring, CP_TYPE7_PKT | (cnt << 0) | (PM4_PARITY(cnt) << 15) |
>>>> - ((opcode & 0x7F) << 16) | (PM4_PARITY(opcode) << 23));
>>>> + OUT_RING(ring, PKT7(opcode, cnt));
>>>> }
>>>> struct msm_gpu *a2xx_gpu_init(struct drm_device *dev);
>>>>
>>>> --
>>>> 2.46.0
>>>>
>>
>> Best regards,
>> --
>> Antonino Maniscalco <antomani103@gmail.com>
>>
Best regards,
--
Antonino Maniscalco <antomani103@gmail.com>
^ permalink raw reply [flat|nested] 32+ messages in thread* Re: [PATCH v3 06/10] drm/msm/A6xx: Use posamble to reset counters on preemption
2024-09-10 22:35 ` Antonino Maniscalco
@ 2024-09-12 7:12 ` Akhil P Oommen
2024-09-12 11:15 ` Antonino Maniscalco
0 siblings, 1 reply; 32+ messages in thread
From: Akhil P Oommen @ 2024-09-12 7:12 UTC (permalink / raw)
To: Antonino Maniscalco
Cc: Rob Clark, Sean Paul, Konrad Dybcio, Abhinav Kumar,
Dmitry Baryshkov, Marijn Suijten, David Airlie, Daniel Vetter,
Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann,
Jonathan Corbet, linux-arm-msm, dri-devel, freedreno,
linux-kernel, linux-doc
On Wed, Sep 11, 2024 at 12:35:08AM +0200, Antonino Maniscalco wrote:
> On 9/10/24 11:34 PM, Akhil P Oommen wrote:
> > On Mon, Sep 09, 2024 at 05:07:42PM +0200, Antonino Maniscalco wrote:
> > > On 9/6/24 10:08 PM, Akhil P Oommen wrote:
> > > > On Thu, Sep 05, 2024 at 04:51:24PM +0200, Antonino Maniscalco wrote:
> > > > > Use the postamble to reset perf counters when switching between rings,
> > > > > except when sysprof is enabled, analogously to how they are reset
> > > > > between submissions when switching pagetables.
> > > > >
> > > > > Signed-off-by: Antonino Maniscalco <antomani103@gmail.com>
> > > > > ---
> > > > > drivers/gpu/drm/msm/adreno/a6xx_gpu.c | 20 ++++++++++++++++++-
> > > > > drivers/gpu/drm/msm/adreno/a6xx_gpu.h | 5 +++++
> > > > > drivers/gpu/drm/msm/adreno/a6xx_preempt.c | 32 +++++++++++++++++++++++++++++++
> > > > > drivers/gpu/drm/msm/adreno/adreno_gpu.h | 7 +++++--
> > > > > 4 files changed, 61 insertions(+), 3 deletions(-)
> > > > >
> > > > > diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
> > > > > index ed0b138a2d66..710ec3ce2923 100644
> > > > > --- a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
> > > > > +++ b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
> > > > > @@ -366,7 +366,8 @@ static void a6xx_submit(struct msm_gpu *gpu, struct msm_gem_submit *submit)
> > > > > static void a6xx_emit_set_pseudo_reg(struct msm_ringbuffer *ring,
> > > > > struct a6xx_gpu *a6xx_gpu, struct msm_gpu_submitqueue *queue)
> > > > > {
> > > > > - u64 preempt_offset_priv_secure;
> > > > > + bool sysprof = refcount_read(&a6xx_gpu->base.base.sysprof_active) > 1;
> > > > > + u64 preempt_offset_priv_secure, preempt_postamble;
> > > > > OUT_PKT7(ring, CP_SET_PSEUDO_REG, 15);
> > > > > @@ -398,6 +399,23 @@ static void a6xx_emit_set_pseudo_reg(struct msm_ringbuffer *ring,
> > > > > /* seems OK to set to 0 to disable it */
> > > > > OUT_RING(ring, 0);
> > > > > OUT_RING(ring, 0);
> > > > > +
> > > > > + /* if not profiling set postamble to clear perfcounters, else clear it */
> > > > > + if (!sysprof && a6xx_gpu->preempt_postamble_len) {
> >
> > Setting len = 0 is enough to skip processing postamble packets. So how
> > about a simpler:
> >
> > len = a6xx_gpu->preempt_postamble_len;
> > if (sysprof)
> > len = 0;
> >
> > OUT_PKT7(ring, CP_SET_AMBLE, 3);
> > OUT_RING(ring, lower_32_bits(preempt_postamble));
> > OUT_RING(ring, upper_32_bits(preempt_postamble));
> > OUT_RING(ring, CP_SET_AMBLE_2_DWORDS(len) |
> > CP_SET_AMBLE_2_TYPE(KMD_AMBLE_TYPE));
> >
> > > > > + preempt_postamble = a6xx_gpu->preempt_postamble_iova;
> > > > > +
> > > > > + OUT_PKT7(ring, CP_SET_AMBLE, 3);
> > > > > + OUT_RING(ring, lower_32_bits(preempt_postamble));
> > > > > + OUT_RING(ring, upper_32_bits(preempt_postamble));
> > > > > + OUT_RING(ring, CP_SET_AMBLE_2_DWORDS(
> > > > > + a6xx_gpu->preempt_postamble_len) |
> > > > > + CP_SET_AMBLE_2_TYPE(KMD_AMBLE_TYPE));
> > > > > + } else {
> > > >
> > > > Why do we need this else part?
> > >
> > > Wouldn't the postmable remain set if we don't explicitly set it to 0?
> >
> > Aah, that is a genuine concern. I am not sure! Lets keep it.
> >
> > >
> > > >
> > > > > + OUT_PKT7(ring, CP_SET_AMBLE, 3);
> > > > > + OUT_RING(ring, 0);
> > > > > + OUT_RING(ring, 0);
> > > > > + OUT_RING(ring, CP_SET_AMBLE_2_TYPE(KMD_AMBLE_TYPE));
> > > > > + }
> > > > > }
> > > > > static void a7xx_submit(struct msm_gpu *gpu, struct msm_gem_submit *submit)
> > > > > diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gpu.h b/drivers/gpu/drm/msm/adreno/a6xx_gpu.h
> > > > > index da10060e38dc..b009732c08c5 100644
> > > > > --- a/drivers/gpu/drm/msm/adreno/a6xx_gpu.h
> > > > > +++ b/drivers/gpu/drm/msm/adreno/a6xx_gpu.h
> > > > > @@ -71,6 +71,11 @@ struct a6xx_gpu {
> > > > > bool uses_gmem;
> > > > > bool skip_save_restore;
> > > > > + struct drm_gem_object *preempt_postamble_bo;
> > > > > + void *preempt_postamble_ptr;
> > > > > + uint64_t preempt_postamble_iova;
> > > > > + uint64_t preempt_postamble_len;
> > > > > +
> > > > > struct a6xx_gmu gmu;
> > > > > struct drm_gem_object *shadow_bo;
> > > > > diff --git a/drivers/gpu/drm/msm/adreno/a6xx_preempt.c b/drivers/gpu/drm/msm/adreno/a6xx_preempt.c
> > > > > index 1caff76aca6e..ec44f44d925f 100644
> > > > > --- a/drivers/gpu/drm/msm/adreno/a6xx_preempt.c
> > > > > +++ b/drivers/gpu/drm/msm/adreno/a6xx_preempt.c
> > > > > @@ -346,6 +346,28 @@ static int preempt_init_ring(struct a6xx_gpu *a6xx_gpu,
> > > > > return 0;
> > > > > }
> > > > > +static void preempt_prepare_postamble(struct a6xx_gpu *a6xx_gpu)
> > > > > +{
> > > > > + u32 *postamble = a6xx_gpu->preempt_postamble_ptr;
> > > > > + u32 count = 0;
> > > > > +
> > > > > + postamble[count++] = PKT7(CP_REG_RMW, 3);
> > > > > + postamble[count++] = REG_A6XX_RBBM_PERFCTR_SRAM_INIT_CMD;
> > > > > + postamble[count++] = 0;
> > > > > + postamble[count++] = 1;
> > > > > +
> > > > > + postamble[count++] = PKT7(CP_WAIT_REG_MEM, 6);
> > > > > + postamble[count++] = CP_WAIT_REG_MEM_0_FUNCTION(WRITE_EQ);
> > > > > + postamble[count++] = CP_WAIT_REG_MEM_1_POLL_ADDR_LO(
> > > > > + REG_A6XX_RBBM_PERFCTR_SRAM_INIT_STATUS);
> > > > > + postamble[count++] = CP_WAIT_REG_MEM_2_POLL_ADDR_HI(0);
> > > > > + postamble[count++] = CP_WAIT_REG_MEM_3_REF(0x1);
> > > > > + postamble[count++] = CP_WAIT_REG_MEM_4_MASK(0x1);
> > > > > + postamble[count++] = CP_WAIT_REG_MEM_5_DELAY_LOOP_CYCLES(0);
> > > >
> > > > Isn't it better to just replace this with NOP packets when sysprof is
> > > > enabled, just before triggering preemption? It will help to have an
> > > > immediate effect.
> > > >
> > > > -Akhil
> > > >
> > >
> > > Mmm, this being a postamble I wonder whether we have the guarantee that it
> > > finishes execution before the IRQ is called so updating it doesn't race with
> > > the CP executing it.
> >
> > Yes, it will be complete. But on a second thought now, this suggestion from me
> > looks like an overkill.
>
> Thanks for confirming! I have actually already implemented something similar
> to what you proposed https://gitlab.com/pac85/inux/-/commit/8b8ab1d89b0f611cfdbac4c3edba4192be91a7f9
> so we can chose between the two. Let me know your prefence.
That looks fine. Can we try to simplify that patch further? We can lean
towards readability instead of saving few writes. I don't think there
will be frequent sysprof toggles.
-Akhil
>
> >
> > -Akhil.
> >
> > >
> > > > > +
> > > > > + a6xx_gpu->preempt_postamble_len = count;
> > > > > +}
> > > > > +
> > > > > void a6xx_preempt_fini(struct msm_gpu *gpu)
> > > > > {
> > > > > struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu);
> > > > > @@ -376,6 +398,16 @@ void a6xx_preempt_init(struct msm_gpu *gpu)
> > > > > a6xx_gpu->uses_gmem = 1;
> > > > > a6xx_gpu->skip_save_restore = 1;
> > > > > + a6xx_gpu->preempt_postamble_ptr = msm_gem_kernel_new(gpu->dev,
> > > > > + PAGE_SIZE, MSM_BO_WC | MSM_BO_MAP_PRIV,
> > > > > + gpu->aspace, &a6xx_gpu->preempt_postamble_bo,
> > > > > + &a6xx_gpu->preempt_postamble_iova);
> > > > > +
> > > > > + preempt_prepare_postamble(a6xx_gpu);
> > > > > +
> > > > > + if (IS_ERR(a6xx_gpu->preempt_postamble_ptr))
> > > > > + goto fail;
> > > > > +
> > > > > timer_setup(&a6xx_gpu->preempt_timer, a6xx_preempt_timer, 0);
> > > > > return;
> > > > > diff --git a/drivers/gpu/drm/msm/adreno/adreno_gpu.h b/drivers/gpu/drm/msm/adreno/adreno_gpu.h
> > > > > index 6b1888280a83..87098567483b 100644
> > > > > --- a/drivers/gpu/drm/msm/adreno/adreno_gpu.h
> > > > > +++ b/drivers/gpu/drm/msm/adreno/adreno_gpu.h
> > > > > @@ -610,12 +610,15 @@ OUT_PKT4(struct msm_ringbuffer *ring, uint16_t regindx, uint16_t cnt)
> > > > > OUT_RING(ring, PKT4(regindx, cnt));
> > > > > }
> > > > > +#define PKT7(opcode, cnt) \
> > > > > + (CP_TYPE7_PKT | (cnt << 0) | (PM4_PARITY(cnt) << 15) | \
> > > > > + ((opcode & 0x7F) << 16) | (PM4_PARITY(opcode) << 23))
> > > > > +
> > > > > static inline void
> > > > > OUT_PKT7(struct msm_ringbuffer *ring, uint8_t opcode, uint16_t cnt)
> > > > > {
> > > > > adreno_wait_ring(ring, cnt + 1);
> > > > > - OUT_RING(ring, CP_TYPE7_PKT | (cnt << 0) | (PM4_PARITY(cnt) << 15) |
> > > > > - ((opcode & 0x7F) << 16) | (PM4_PARITY(opcode) << 23));
> > > > > + OUT_RING(ring, PKT7(opcode, cnt));
> > > > > }
> > > > > struct msm_gpu *a2xx_gpu_init(struct drm_device *dev);
> > > > >
> > > > > --
> > > > > 2.46.0
> > > > >
> > >
> > > Best regards,
> > > --
> > > Antonino Maniscalco <antomani103@gmail.com>
> > >
>
> Best regards,
> --
> Antonino Maniscalco <antomani103@gmail.com>
>
^ permalink raw reply [flat|nested] 32+ messages in thread* Re: [PATCH v3 06/10] drm/msm/A6xx: Use posamble to reset counters on preemption
2024-09-12 7:12 ` Akhil P Oommen
@ 2024-09-12 11:15 ` Antonino Maniscalco
0 siblings, 0 replies; 32+ messages in thread
From: Antonino Maniscalco @ 2024-09-12 11:15 UTC (permalink / raw)
To: Akhil P Oommen
Cc: Rob Clark, Sean Paul, Konrad Dybcio, Abhinav Kumar,
Dmitry Baryshkov, Marijn Suijten, David Airlie, Daniel Vetter,
Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann,
Jonathan Corbet, linux-arm-msm, dri-devel, freedreno,
linux-kernel, linux-doc
On 9/12/24 9:12 AM, Akhil P Oommen wrote:
> On Wed, Sep 11, 2024 at 12:35:08AM +0200, Antonino Maniscalco wrote:
>> On 9/10/24 11:34 PM, Akhil P Oommen wrote:
>>> On Mon, Sep 09, 2024 at 05:07:42PM +0200, Antonino Maniscalco wrote:
>>>> On 9/6/24 10:08 PM, Akhil P Oommen wrote:
>>>>> On Thu, Sep 05, 2024 at 04:51:24PM +0200, Antonino Maniscalco wrote:
>>>>>> Use the postamble to reset perf counters when switching between rings,
>>>>>> except when sysprof is enabled, analogously to how they are reset
>>>>>> between submissions when switching pagetables.
>>>>>>
>>>>>> Signed-off-by: Antonino Maniscalco <antomani103@gmail.com>
>>>>>> ---
>>>>>> drivers/gpu/drm/msm/adreno/a6xx_gpu.c | 20 ++++++++++++++++++-
>>>>>> drivers/gpu/drm/msm/adreno/a6xx_gpu.h | 5 +++++
>>>>>> drivers/gpu/drm/msm/adreno/a6xx_preempt.c | 32 +++++++++++++++++++++++++++++++
>>>>>> drivers/gpu/drm/msm/adreno/adreno_gpu.h | 7 +++++--
>>>>>> 4 files changed, 61 insertions(+), 3 deletions(-)
>>>>>>
>>>>>> diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
>>>>>> index ed0b138a2d66..710ec3ce2923 100644
>>>>>> --- a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
>>>>>> +++ b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
>>>>>> @@ -366,7 +366,8 @@ static void a6xx_submit(struct msm_gpu *gpu, struct msm_gem_submit *submit)
>>>>>> static void a6xx_emit_set_pseudo_reg(struct msm_ringbuffer *ring,
>>>>>> struct a6xx_gpu *a6xx_gpu, struct msm_gpu_submitqueue *queue)
>>>>>> {
>>>>>> - u64 preempt_offset_priv_secure;
>>>>>> + bool sysprof = refcount_read(&a6xx_gpu->base.base.sysprof_active) > 1;
>>>>>> + u64 preempt_offset_priv_secure, preempt_postamble;
>>>>>> OUT_PKT7(ring, CP_SET_PSEUDO_REG, 15);
>>>>>> @@ -398,6 +399,23 @@ static void a6xx_emit_set_pseudo_reg(struct msm_ringbuffer *ring,
>>>>>> /* seems OK to set to 0 to disable it */
>>>>>> OUT_RING(ring, 0);
>>>>>> OUT_RING(ring, 0);
>>>>>> +
>>>>>> + /* if not profiling set postamble to clear perfcounters, else clear it */
>>>>>> + if (!sysprof && a6xx_gpu->preempt_postamble_len) {
>>>
>>> Setting len = 0 is enough to skip processing postamble packets. So how
>>> about a simpler:
>>>
>>> len = a6xx_gpu->preempt_postamble_len;
>>> if (sysprof)
>>> len = 0;
>>>
>>> OUT_PKT7(ring, CP_SET_AMBLE, 3);
>>> OUT_RING(ring, lower_32_bits(preempt_postamble));
>>> OUT_RING(ring, upper_32_bits(preempt_postamble));
>>> OUT_RING(ring, CP_SET_AMBLE_2_DWORDS(len) |
>>> CP_SET_AMBLE_2_TYPE(KMD_AMBLE_TYPE));
>>>
>>>>>> + preempt_postamble = a6xx_gpu->preempt_postamble_iova;
>>>>>> +
>>>>>> + OUT_PKT7(ring, CP_SET_AMBLE, 3);
>>>>>> + OUT_RING(ring, lower_32_bits(preempt_postamble));
>>>>>> + OUT_RING(ring, upper_32_bits(preempt_postamble));
>>>>>> + OUT_RING(ring, CP_SET_AMBLE_2_DWORDS(
>>>>>> + a6xx_gpu->preempt_postamble_len) |
>>>>>> + CP_SET_AMBLE_2_TYPE(KMD_AMBLE_TYPE));
>>>>>> + } else {
>>>>>
>>>>> Why do we need this else part?
>>>>
>>>> Wouldn't the postmable remain set if we don't explicitly set it to 0?
>>>
>>> Aah, that is a genuine concern. I am not sure! Lets keep it.
>>>
>>>>
>>>>>
>>>>>> + OUT_PKT7(ring, CP_SET_AMBLE, 3);
>>>>>> + OUT_RING(ring, 0);
>>>>>> + OUT_RING(ring, 0);
>>>>>> + OUT_RING(ring, CP_SET_AMBLE_2_TYPE(KMD_AMBLE_TYPE));
>>>>>> + }
>>>>>> }
>>>>>> static void a7xx_submit(struct msm_gpu *gpu, struct msm_gem_submit *submit)
>>>>>> diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gpu.h b/drivers/gpu/drm/msm/adreno/a6xx_gpu.h
>>>>>> index da10060e38dc..b009732c08c5 100644
>>>>>> --- a/drivers/gpu/drm/msm/adreno/a6xx_gpu.h
>>>>>> +++ b/drivers/gpu/drm/msm/adreno/a6xx_gpu.h
>>>>>> @@ -71,6 +71,11 @@ struct a6xx_gpu {
>>>>>> bool uses_gmem;
>>>>>> bool skip_save_restore;
>>>>>> + struct drm_gem_object *preempt_postamble_bo;
>>>>>> + void *preempt_postamble_ptr;
>>>>>> + uint64_t preempt_postamble_iova;
>>>>>> + uint64_t preempt_postamble_len;
>>>>>> +
>>>>>> struct a6xx_gmu gmu;
>>>>>> struct drm_gem_object *shadow_bo;
>>>>>> diff --git a/drivers/gpu/drm/msm/adreno/a6xx_preempt.c b/drivers/gpu/drm/msm/adreno/a6xx_preempt.c
>>>>>> index 1caff76aca6e..ec44f44d925f 100644
>>>>>> --- a/drivers/gpu/drm/msm/adreno/a6xx_preempt.c
>>>>>> +++ b/drivers/gpu/drm/msm/adreno/a6xx_preempt.c
>>>>>> @@ -346,6 +346,28 @@ static int preempt_init_ring(struct a6xx_gpu *a6xx_gpu,
>>>>>> return 0;
>>>>>> }
>>>>>> +static void preempt_prepare_postamble(struct a6xx_gpu *a6xx_gpu)
>>>>>> +{
>>>>>> + u32 *postamble = a6xx_gpu->preempt_postamble_ptr;
>>>>>> + u32 count = 0;
>>>>>> +
>>>>>> + postamble[count++] = PKT7(CP_REG_RMW, 3);
>>>>>> + postamble[count++] = REG_A6XX_RBBM_PERFCTR_SRAM_INIT_CMD;
>>>>>> + postamble[count++] = 0;
>>>>>> + postamble[count++] = 1;
>>>>>> +
>>>>>> + postamble[count++] = PKT7(CP_WAIT_REG_MEM, 6);
>>>>>> + postamble[count++] = CP_WAIT_REG_MEM_0_FUNCTION(WRITE_EQ);
>>>>>> + postamble[count++] = CP_WAIT_REG_MEM_1_POLL_ADDR_LO(
>>>>>> + REG_A6XX_RBBM_PERFCTR_SRAM_INIT_STATUS);
>>>>>> + postamble[count++] = CP_WAIT_REG_MEM_2_POLL_ADDR_HI(0);
>>>>>> + postamble[count++] = CP_WAIT_REG_MEM_3_REF(0x1);
>>>>>> + postamble[count++] = CP_WAIT_REG_MEM_4_MASK(0x1);
>>>>>> + postamble[count++] = CP_WAIT_REG_MEM_5_DELAY_LOOP_CYCLES(0);
>>>>>
>>>>> Isn't it better to just replace this with NOP packets when sysprof is
>>>>> enabled, just before triggering preemption? It will help to have an
>>>>> immediate effect.
>>>>>
>>>>> -Akhil
>>>>>
>>>>
>>>> Mmm, this being a postamble I wonder whether we have the guarantee that it
>>>> finishes execution before the IRQ is called so updating it doesn't race with
>>>> the CP executing it.
>>>
>>> Yes, it will be complete. But on a second thought now, this suggestion from me
>>> looks like an overkill.
>>
>> Thanks for confirming! I have actually already implemented something similar
>> to what you proposed https://gitlab.com/pac85/inux/-/commit/8b8ab1d89b0f611cfdbac4c3edba4192be91a7f9
>> so we can chose between the two. Let me know your prefence.
>
> That looks fine. Can we try to simplify that patch further? We can lean
> towards readability instead of saving few writes. I don't think there
> will be frequent sysprof toggles.
>
Sure yeah, I removed the patch argument on preempt_prepare_postamble so
when we enable the postamble we just re-emit the entire IB.
> -Akhil
>
>>
>>>
>>> -Akhil.
>>>
>>>>
>>>>>> +
>>>>>> + a6xx_gpu->preempt_postamble_len = count;
>>>>>> +}
>>>>>> +
>>>>>> void a6xx_preempt_fini(struct msm_gpu *gpu)
>>>>>> {
>>>>>> struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu);
>>>>>> @@ -376,6 +398,16 @@ void a6xx_preempt_init(struct msm_gpu *gpu)
>>>>>> a6xx_gpu->uses_gmem = 1;
>>>>>> a6xx_gpu->skip_save_restore = 1;
>>>>>> + a6xx_gpu->preempt_postamble_ptr = msm_gem_kernel_new(gpu->dev,
>>>>>> + PAGE_SIZE, MSM_BO_WC | MSM_BO_MAP_PRIV,
>>>>>> + gpu->aspace, &a6xx_gpu->preempt_postamble_bo,
>>>>>> + &a6xx_gpu->preempt_postamble_iova);
>>>>>> +
>>>>>> + preempt_prepare_postamble(a6xx_gpu);
>>>>>> +
>>>>>> + if (IS_ERR(a6xx_gpu->preempt_postamble_ptr))
>>>>>> + goto fail;
>>>>>> +
>>>>>> timer_setup(&a6xx_gpu->preempt_timer, a6xx_preempt_timer, 0);
>>>>>> return;
>>>>>> diff --git a/drivers/gpu/drm/msm/adreno/adreno_gpu.h b/drivers/gpu/drm/msm/adreno/adreno_gpu.h
>>>>>> index 6b1888280a83..87098567483b 100644
>>>>>> --- a/drivers/gpu/drm/msm/adreno/adreno_gpu.h
>>>>>> +++ b/drivers/gpu/drm/msm/adreno/adreno_gpu.h
>>>>>> @@ -610,12 +610,15 @@ OUT_PKT4(struct msm_ringbuffer *ring, uint16_t regindx, uint16_t cnt)
>>>>>> OUT_RING(ring, PKT4(regindx, cnt));
>>>>>> }
>>>>>> +#define PKT7(opcode, cnt) \
>>>>>> + (CP_TYPE7_PKT | (cnt << 0) | (PM4_PARITY(cnt) << 15) | \
>>>>>> + ((opcode & 0x7F) << 16) | (PM4_PARITY(opcode) << 23))
>>>>>> +
>>>>>> static inline void
>>>>>> OUT_PKT7(struct msm_ringbuffer *ring, uint8_t opcode, uint16_t cnt)
>>>>>> {
>>>>>> adreno_wait_ring(ring, cnt + 1);
>>>>>> - OUT_RING(ring, CP_TYPE7_PKT | (cnt << 0) | (PM4_PARITY(cnt) << 15) |
>>>>>> - ((opcode & 0x7F) << 16) | (PM4_PARITY(opcode) << 23));
>>>>>> + OUT_RING(ring, PKT7(opcode, cnt));
>>>>>> }
>>>>>> struct msm_gpu *a2xx_gpu_init(struct drm_device *dev);
>>>>>>
>>>>>> --
>>>>>> 2.46.0
>>>>>>
>>>>
>>>> Best regards,
>>>> --
>>>> Antonino Maniscalco <antomani103@gmail.com>
>>>>
>>
>> Best regards,
>> --
>> Antonino Maniscalco <antomani103@gmail.com>
>>
Best regards,
--
Antonino Maniscalco <antomani103@gmail.com>
^ permalink raw reply [flat|nested] 32+ messages in thread
* [PATCH v3 07/10] drm/msm/A6xx: Add traces for preemption
2024-09-05 14:51 [PATCH v3 00/10] Preemption support for A7XX Antonino Maniscalco
` (5 preceding siblings ...)
2024-09-05 14:51 ` [PATCH v3 06/10] drm/msm/A6xx: Use posamble to reset counters on preemption Antonino Maniscalco
@ 2024-09-05 14:51 ` Antonino Maniscalco
2024-09-06 20:11 ` Akhil P Oommen
2024-09-05 14:51 ` [PATCH v3 08/10] drm/msm/A6XX: Add a flag to allow preemption to submitqueue_create Antonino Maniscalco
` (3 subsequent siblings)
10 siblings, 1 reply; 32+ messages in thread
From: Antonino Maniscalco @ 2024-09-05 14:51 UTC (permalink / raw)
To: Rob Clark, Sean Paul, Konrad Dybcio, Abhinav Kumar,
Dmitry Baryshkov, Marijn Suijten, David Airlie, Daniel Vetter,
Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann,
Jonathan Corbet
Cc: linux-arm-msm, dri-devel, freedreno, linux-kernel, linux-doc,
Antonino Maniscalco, Neil Armstrong
Add trace points corresponding to preemption being triggered and being
completed for latency measurement purposes.
Signed-off-by: Antonino Maniscalco <antomani103@gmail.com>
Tested-by: Neil Armstrong <neil.armstrong@linaro.org> # on SM8650-QRD
---
drivers/gpu/drm/msm/adreno/a6xx_preempt.c | 7 +++++++
drivers/gpu/drm/msm/msm_gpu_trace.h | 28 ++++++++++++++++++++++++++++
2 files changed, 35 insertions(+)
diff --git a/drivers/gpu/drm/msm/adreno/a6xx_preempt.c b/drivers/gpu/drm/msm/adreno/a6xx_preempt.c
index ec44f44d925f..ca9d36c107f2 100644
--- a/drivers/gpu/drm/msm/adreno/a6xx_preempt.c
+++ b/drivers/gpu/drm/msm/adreno/a6xx_preempt.c
@@ -7,6 +7,7 @@
#include "a6xx_gpu.h"
#include "a6xx_gmu.xml.h"
#include "msm_mmu.h"
+#include "msm_gpu_trace.h"
/*
* Try to transition the preemption state from old to new. Return
@@ -143,6 +144,8 @@ void a6xx_preempt_irq(struct msm_gpu *gpu)
set_preempt_state(a6xx_gpu, PREEMPT_NONE);
+ trace_msm_gpu_preemption_irq(a6xx_gpu->cur_ring->id);
+
/*
* Retrigger preemption to avoid a deadlock that might occur when preemption
* is skipped due to it being already in flight when requested.
@@ -264,6 +267,10 @@ void a6xx_preempt_trigger(struct msm_gpu *gpu)
*/
ring->skip_inline_wptr = false;
+ trace_msm_gpu_preemption_trigger(
+ a6xx_gpu->cur_ring ? a6xx_gpu->cur_ring->id : -1,
+ ring ? ring->id : -1);
+
spin_unlock_irqrestore(&ring->preempt_lock, flags);
gpu_write64(gpu,
diff --git a/drivers/gpu/drm/msm/msm_gpu_trace.h b/drivers/gpu/drm/msm/msm_gpu_trace.h
index ac40d857bc45..7f863282db0d 100644
--- a/drivers/gpu/drm/msm/msm_gpu_trace.h
+++ b/drivers/gpu/drm/msm/msm_gpu_trace.h
@@ -177,6 +177,34 @@ TRACE_EVENT(msm_gpu_resume,
TP_printk("%u", __entry->dummy)
);
+TRACE_EVENT(msm_gpu_preemption_trigger,
+ TP_PROTO(int ring_id_from, int ring_id_to),
+ TP_ARGS(ring_id_from, ring_id_to),
+ TP_STRUCT__entry(
+ __field(int, ring_id_from)
+ __field(int, ring_id_to)
+ ),
+ TP_fast_assign(
+ __entry->ring_id_from = ring_id_from;
+ __entry->ring_id_to = ring_id_to;
+ ),
+ TP_printk("preempting %u -> %u",
+ __entry->ring_id_from,
+ __entry->ring_id_to)
+);
+
+TRACE_EVENT(msm_gpu_preemption_irq,
+ TP_PROTO(u32 ring_id),
+ TP_ARGS(ring_id),
+ TP_STRUCT__entry(
+ __field(u32, ring_id)
+ ),
+ TP_fast_assign(
+ __entry->ring_id = ring_id;
+ ),
+ TP_printk("preempted to %u", __entry->ring_id)
+);
+
#endif
#undef TRACE_INCLUDE_PATH
--
2.46.0
^ permalink raw reply related [flat|nested] 32+ messages in thread* Re: [PATCH v3 07/10] drm/msm/A6xx: Add traces for preemption
2024-09-05 14:51 ` [PATCH v3 07/10] drm/msm/A6xx: Add traces for preemption Antonino Maniscalco
@ 2024-09-06 20:11 ` Akhil P Oommen
2024-09-09 15:08 ` Antonino Maniscalco
0 siblings, 1 reply; 32+ messages in thread
From: Akhil P Oommen @ 2024-09-06 20:11 UTC (permalink / raw)
To: Antonino Maniscalco
Cc: Rob Clark, Sean Paul, Konrad Dybcio, Abhinav Kumar,
Dmitry Baryshkov, Marijn Suijten, David Airlie, Daniel Vetter,
Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann,
Jonathan Corbet, linux-arm-msm, dri-devel, freedreno,
linux-kernel, linux-doc, Neil Armstrong
On Thu, Sep 05, 2024 at 04:51:25PM +0200, Antonino Maniscalco wrote:
> Add trace points corresponding to preemption being triggered and being
> completed for latency measurement purposes.
>
> Signed-off-by: Antonino Maniscalco <antomani103@gmail.com>
> Tested-by: Neil Armstrong <neil.armstrong@linaro.org> # on SM8650-QRD
> ---
> drivers/gpu/drm/msm/adreno/a6xx_preempt.c | 7 +++++++
> drivers/gpu/drm/msm/msm_gpu_trace.h | 28 ++++++++++++++++++++++++++++
> 2 files changed, 35 insertions(+)
>
> diff --git a/drivers/gpu/drm/msm/adreno/a6xx_preempt.c b/drivers/gpu/drm/msm/adreno/a6xx_preempt.c
> index ec44f44d925f..ca9d36c107f2 100644
> --- a/drivers/gpu/drm/msm/adreno/a6xx_preempt.c
> +++ b/drivers/gpu/drm/msm/adreno/a6xx_preempt.c
> @@ -7,6 +7,7 @@
> #include "a6xx_gpu.h"
> #include "a6xx_gmu.xml.h"
> #include "msm_mmu.h"
> +#include "msm_gpu_trace.h"
>
> /*
> * Try to transition the preemption state from old to new. Return
> @@ -143,6 +144,8 @@ void a6xx_preempt_irq(struct msm_gpu *gpu)
>
> set_preempt_state(a6xx_gpu, PREEMPT_NONE);
>
> + trace_msm_gpu_preemption_irq(a6xx_gpu->cur_ring->id);
> +
> /*
> * Retrigger preemption to avoid a deadlock that might occur when preemption
> * is skipped due to it being already in flight when requested.
> @@ -264,6 +267,10 @@ void a6xx_preempt_trigger(struct msm_gpu *gpu)
> */
> ring->skip_inline_wptr = false;
>
> + trace_msm_gpu_preemption_trigger(
> + a6xx_gpu->cur_ring ? a6xx_gpu->cur_ring->id : -1,
Can't we avoid this check?
-Akhil.
> + ring ? ring->id : -1);
> +
> spin_unlock_irqrestore(&ring->preempt_lock, flags);
>
> gpu_write64(gpu,
> diff --git a/drivers/gpu/drm/msm/msm_gpu_trace.h b/drivers/gpu/drm/msm/msm_gpu_trace.h
> index ac40d857bc45..7f863282db0d 100644
> --- a/drivers/gpu/drm/msm/msm_gpu_trace.h
> +++ b/drivers/gpu/drm/msm/msm_gpu_trace.h
> @@ -177,6 +177,34 @@ TRACE_EVENT(msm_gpu_resume,
> TP_printk("%u", __entry->dummy)
> );
>
> +TRACE_EVENT(msm_gpu_preemption_trigger,
> + TP_PROTO(int ring_id_from, int ring_id_to),
> + TP_ARGS(ring_id_from, ring_id_to),
> + TP_STRUCT__entry(
> + __field(int, ring_id_from)
> + __field(int, ring_id_to)
> + ),
> + TP_fast_assign(
> + __entry->ring_id_from = ring_id_from;
> + __entry->ring_id_to = ring_id_to;
> + ),
> + TP_printk("preempting %u -> %u",
> + __entry->ring_id_from,
> + __entry->ring_id_to)
> +);
> +
> +TRACE_EVENT(msm_gpu_preemption_irq,
> + TP_PROTO(u32 ring_id),
> + TP_ARGS(ring_id),
> + TP_STRUCT__entry(
> + __field(u32, ring_id)
> + ),
> + TP_fast_assign(
> + __entry->ring_id = ring_id;
> + ),
> + TP_printk("preempted to %u", __entry->ring_id)
> +);
> +
> #endif
>
> #undef TRACE_INCLUDE_PATH
>
> --
> 2.46.0
>
^ permalink raw reply [flat|nested] 32+ messages in thread* Re: [PATCH v3 07/10] drm/msm/A6xx: Add traces for preemption
2024-09-06 20:11 ` Akhil P Oommen
@ 2024-09-09 15:08 ` Antonino Maniscalco
0 siblings, 0 replies; 32+ messages in thread
From: Antonino Maniscalco @ 2024-09-09 15:08 UTC (permalink / raw)
To: Akhil P Oommen
Cc: Rob Clark, Sean Paul, Konrad Dybcio, Abhinav Kumar,
Dmitry Baryshkov, Marijn Suijten, David Airlie, Daniel Vetter,
Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann,
Jonathan Corbet, linux-arm-msm, dri-devel, freedreno,
linux-kernel, linux-doc, Neil Armstrong
On 9/6/24 10:11 PM, Akhil P Oommen wrote:
> On Thu, Sep 05, 2024 at 04:51:25PM +0200, Antonino Maniscalco wrote:
>> Add trace points corresponding to preemption being triggered and being
>> completed for latency measurement purposes.
>>
>> Signed-off-by: Antonino Maniscalco <antomani103@gmail.com>
>> Tested-by: Neil Armstrong <neil.armstrong@linaro.org> # on SM8650-QRD
>> ---
>> drivers/gpu/drm/msm/adreno/a6xx_preempt.c | 7 +++++++
>> drivers/gpu/drm/msm/msm_gpu_trace.h | 28 ++++++++++++++++++++++++++++
>> 2 files changed, 35 insertions(+)
>>
>> diff --git a/drivers/gpu/drm/msm/adreno/a6xx_preempt.c b/drivers/gpu/drm/msm/adreno/a6xx_preempt.c
>> index ec44f44d925f..ca9d36c107f2 100644
>> --- a/drivers/gpu/drm/msm/adreno/a6xx_preempt.c
>> +++ b/drivers/gpu/drm/msm/adreno/a6xx_preempt.c
>> @@ -7,6 +7,7 @@
>> #include "a6xx_gpu.h"
>> #include "a6xx_gmu.xml.h"
>> #include "msm_mmu.h"
>> +#include "msm_gpu_trace.h"
>>
>> /*
>> * Try to transition the preemption state from old to new. Return
>> @@ -143,6 +144,8 @@ void a6xx_preempt_irq(struct msm_gpu *gpu)
>>
>> set_preempt_state(a6xx_gpu, PREEMPT_NONE);
>>
>> + trace_msm_gpu_preemption_irq(a6xx_gpu->cur_ring->id);
>> +
>> /*
>> * Retrigger preemption to avoid a deadlock that might occur when preemption
>> * is skipped due to it being already in flight when requested.
>> @@ -264,6 +267,10 @@ void a6xx_preempt_trigger(struct msm_gpu *gpu)
>> */
>> ring->skip_inline_wptr = false;
>>
>> + trace_msm_gpu_preemption_trigger(
>> + a6xx_gpu->cur_ring ? a6xx_gpu->cur_ring->id : -1,
>
> Can't we avoid this check?
>
Sorry yeah you had requested this change but I had forgotten to do it.
> -Akhil.
>
>> + ring ? ring->id : -1);
>> +
>> spin_unlock_irqrestore(&ring->preempt_lock, flags);
>>
>> gpu_write64(gpu,
>> diff --git a/drivers/gpu/drm/msm/msm_gpu_trace.h b/drivers/gpu/drm/msm/msm_gpu_trace.h
>> index ac40d857bc45..7f863282db0d 100644
>> --- a/drivers/gpu/drm/msm/msm_gpu_trace.h
>> +++ b/drivers/gpu/drm/msm/msm_gpu_trace.h
>> @@ -177,6 +177,34 @@ TRACE_EVENT(msm_gpu_resume,
>> TP_printk("%u", __entry->dummy)
>> );
>>
>> +TRACE_EVENT(msm_gpu_preemption_trigger,
>> + TP_PROTO(int ring_id_from, int ring_id_to),
>> + TP_ARGS(ring_id_from, ring_id_to),
>> + TP_STRUCT__entry(
>> + __field(int, ring_id_from)
>> + __field(int, ring_id_to)
>> + ),
>> + TP_fast_assign(
>> + __entry->ring_id_from = ring_id_from;
>> + __entry->ring_id_to = ring_id_to;
>> + ),
>> + TP_printk("preempting %u -> %u",
>> + __entry->ring_id_from,
>> + __entry->ring_id_to)
>> +);
>> +
>> +TRACE_EVENT(msm_gpu_preemption_irq,
>> + TP_PROTO(u32 ring_id),
>> + TP_ARGS(ring_id),
>> + TP_STRUCT__entry(
>> + __field(u32, ring_id)
>> + ),
>> + TP_fast_assign(
>> + __entry->ring_id = ring_id;
>> + ),
>> + TP_printk("preempted to %u", __entry->ring_id)
>> +);
>> +
>> #endif
>>
>> #undef TRACE_INCLUDE_PATH
>>
>> --
>> 2.46.0
>>
Best regards,
--
Antonino Maniscalco <antomani103@gmail.com>
^ permalink raw reply [flat|nested] 32+ messages in thread
* [PATCH v3 08/10] drm/msm/A6XX: Add a flag to allow preemption to submitqueue_create
2024-09-05 14:51 [PATCH v3 00/10] Preemption support for A7XX Antonino Maniscalco
` (6 preceding siblings ...)
2024-09-05 14:51 ` [PATCH v3 07/10] drm/msm/A6xx: Add traces for preemption Antonino Maniscalco
@ 2024-09-05 14:51 ` Antonino Maniscalco
2024-09-05 14:51 ` [PATCH v3 09/10] drm/msm/A6xx: Enable preemption for A750 Antonino Maniscalco
` (2 subsequent siblings)
10 siblings, 0 replies; 32+ messages in thread
From: Antonino Maniscalco @ 2024-09-05 14:51 UTC (permalink / raw)
To: Rob Clark, Sean Paul, Konrad Dybcio, Abhinav Kumar,
Dmitry Baryshkov, Marijn Suijten, David Airlie, Daniel Vetter,
Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann,
Jonathan Corbet
Cc: linux-arm-msm, dri-devel, freedreno, linux-kernel, linux-doc,
Antonino Maniscalco, Neil Armstrong
Some userspace changes are necessary so add a flag for userspace to
advertise support for preemption when creating the submitqueue.
When this flag is not set preemption will not be allowed in the middle
of the submitted IBs therefore mantaining compatibility with older
userspace.
The flag is rejected if preemption is not supported on the target, this
allows userspace to know whether preemption is supported.
Signed-off-by: Antonino Maniscalco <antomani103@gmail.com>
Tested-by: Neil Armstrong <neil.armstrong@linaro.org> # on SM8650-QRD
---
drivers/gpu/drm/msm/adreno/a6xx_gpu.c | 12 ++++++++----
drivers/gpu/drm/msm/msm_submitqueue.c | 3 +++
include/uapi/drm/msm_drm.h | 5 ++++-
3 files changed, 15 insertions(+), 5 deletions(-)
diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
index 710ec3ce2923..512ff443bd2e 100644
--- a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
+++ b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
@@ -453,8 +453,10 @@ static void a7xx_submit(struct msm_gpu *gpu, struct msm_gem_submit *submit)
OUT_PKT7(ring, CP_SET_MARKER, 1);
OUT_RING(ring, 0x101); /* IFPC disable */
- OUT_PKT7(ring, CP_SET_MARKER, 1);
- OUT_RING(ring, 0x00d); /* IB1LIST start */
+ if (submit->queue->flags & MSM_SUBMITQUEUE_ALLOW_PREEMPT) {
+ OUT_PKT7(ring, CP_SET_MARKER, 1);
+ OUT_RING(ring, 0x00d); /* IB1LIST start */
+ }
/* Submit the commands */
for (i = 0; i < submit->nr_cmds; i++) {
@@ -485,8 +487,10 @@ static void a7xx_submit(struct msm_gpu *gpu, struct msm_gem_submit *submit)
update_shadow_rptr(gpu, ring);
}
- OUT_PKT7(ring, CP_SET_MARKER, 1);
- OUT_RING(ring, 0x00e); /* IB1LIST end */
+ if (submit->queue->flags & MSM_SUBMITQUEUE_ALLOW_PREEMPT) {
+ OUT_PKT7(ring, CP_SET_MARKER, 1);
+ OUT_RING(ring, 0x00e); /* IB1LIST end */
+ }
get_stats_counter(ring, REG_A7XX_RBBM_PERFCTR_CP(0),
rbmemptr_stats(ring, index, cpcycles_end));
diff --git a/drivers/gpu/drm/msm/msm_submitqueue.c b/drivers/gpu/drm/msm/msm_submitqueue.c
index 0e803125a325..9b3ffca3f3b4 100644
--- a/drivers/gpu/drm/msm/msm_submitqueue.c
+++ b/drivers/gpu/drm/msm/msm_submitqueue.c
@@ -170,6 +170,9 @@ int msm_submitqueue_create(struct drm_device *drm, struct msm_file_private *ctx,
if (!priv->gpu)
return -ENODEV;
+ if (flags & MSM_SUBMITQUEUE_ALLOW_PREEMPT && priv->gpu->nr_rings == 1)
+ return -EINVAL;
+
ret = msm_gpu_convert_priority(priv->gpu, prio, &ring_nr, &sched_prio);
if (ret)
return ret;
diff --git a/include/uapi/drm/msm_drm.h b/include/uapi/drm/msm_drm.h
index 3fca72f73861..f37858db34e6 100644
--- a/include/uapi/drm/msm_drm.h
+++ b/include/uapi/drm/msm_drm.h
@@ -345,7 +345,10 @@ struct drm_msm_gem_madvise {
* backwards compatibility as a "default" submitqueue
*/
-#define MSM_SUBMITQUEUE_FLAGS (0)
+#define MSM_SUBMITQUEUE_ALLOW_PREEMPT 0x00000001
+#define MSM_SUBMITQUEUE_FLAGS ( \
+ MSM_SUBMITQUEUE_ALLOW_PREEMPT | \
+ 0)
/*
* The submitqueue priority should be between 0 and MSM_PARAM_PRIORITIES-1,
--
2.46.0
^ permalink raw reply related [flat|nested] 32+ messages in thread* [PATCH v3 09/10] drm/msm/A6xx: Enable preemption for A750
2024-09-05 14:51 [PATCH v3 00/10] Preemption support for A7XX Antonino Maniscalco
` (7 preceding siblings ...)
2024-09-05 14:51 ` [PATCH v3 08/10] drm/msm/A6XX: Add a flag to allow preemption to submitqueue_create Antonino Maniscalco
@ 2024-09-05 14:51 ` Antonino Maniscalco
2024-09-05 14:51 ` [PATCH v3 10/10] Documentation: document adreno preemption Antonino Maniscalco
2024-09-06 19:58 ` [PATCH v3 00/10] Preemption support for A7XX Akhil P Oommen
10 siblings, 0 replies; 32+ messages in thread
From: Antonino Maniscalco @ 2024-09-05 14:51 UTC (permalink / raw)
To: Rob Clark, Sean Paul, Konrad Dybcio, Abhinav Kumar,
Dmitry Baryshkov, Marijn Suijten, David Airlie, Daniel Vetter,
Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann,
Jonathan Corbet
Cc: linux-arm-msm, dri-devel, freedreno, linux-kernel, linux-doc,
Antonino Maniscalco, Neil Armstrong
Initialize with 4 rings to enable preemption.
For now only on A750 as other targets require testing.
Add the "preemption_enabled" module parameter to override this for other
A7xx targets.
Signed-off-by: Antonino Maniscalco <antomani103@gmail.com>
Tested-by: Neil Armstrong <neil.armstrong@linaro.org> # on SM8650-QRD
---
drivers/gpu/drm/msm/adreno/a6xx_catalog.c | 3 ++-
drivers/gpu/drm/msm/adreno/a6xx_gpu.c | 6 +++++-
drivers/gpu/drm/msm/adreno/adreno_gpu.h | 1 +
drivers/gpu/drm/msm/msm_drv.c | 4 ++++
4 files changed, 12 insertions(+), 2 deletions(-)
diff --git a/drivers/gpu/drm/msm/adreno/a6xx_catalog.c b/drivers/gpu/drm/msm/adreno/a6xx_catalog.c
index 316f23ca9167..0e3041b29419 100644
--- a/drivers/gpu/drm/msm/adreno/a6xx_catalog.c
+++ b/drivers/gpu/drm/msm/adreno/a6xx_catalog.c
@@ -1240,7 +1240,8 @@ static const struct adreno_info a7xx_gpus[] = {
.gmem = 3 * SZ_1M,
.inactive_period = DRM_MSM_INACTIVE_PERIOD,
.quirks = ADRENO_QUIRK_HAS_CACHED_COHERENT |
- ADRENO_QUIRK_HAS_HW_APRIV,
+ ADRENO_QUIRK_HAS_HW_APRIV |
+ ADRENO_QUIRK_PREEMPTION,
.init = a6xx_gpu_init,
.zapfw = "gen70900_zap.mbn",
.a6xx = &(const struct a6xx_info) {
diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
index 512ff443bd2e..8b4fa17f6003 100644
--- a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
+++ b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
@@ -2547,6 +2547,7 @@ struct msm_gpu *a6xx_gpu_init(struct drm_device *dev)
struct a6xx_gpu *a6xx_gpu;
struct adreno_gpu *adreno_gpu;
struct msm_gpu *gpu;
+ extern int enable_preemption;
bool is_a7xx;
int ret;
@@ -2585,7 +2586,10 @@ struct msm_gpu *a6xx_gpu_init(struct drm_device *dev)
return ERR_PTR(ret);
}
- if (is_a7xx)
+ if ((enable_preemption == 1) || (enable_preemption == -1 &&
+ (config->info->quirks & ADRENO_QUIRK_PREEMPTION)))
+ ret = adreno_gpu_init(dev, pdev, adreno_gpu, &funcs_a7xx, 4);
+ else if (is_a7xx)
ret = adreno_gpu_init(dev, pdev, adreno_gpu, &funcs_a7xx, 1);
else if (adreno_has_gmu_wrapper(adreno_gpu))
ret = adreno_gpu_init(dev, pdev, adreno_gpu, &funcs_gmuwrapper, 1);
diff --git a/drivers/gpu/drm/msm/adreno/adreno_gpu.h b/drivers/gpu/drm/msm/adreno/adreno_gpu.h
index 87098567483b..d1cd53f05de6 100644
--- a/drivers/gpu/drm/msm/adreno/adreno_gpu.h
+++ b/drivers/gpu/drm/msm/adreno/adreno_gpu.h
@@ -56,6 +56,7 @@ enum adreno_family {
#define ADRENO_QUIRK_LMLOADKILL_DISABLE BIT(2)
#define ADRENO_QUIRK_HAS_HW_APRIV BIT(3)
#define ADRENO_QUIRK_HAS_CACHED_COHERENT BIT(4)
+#define ADRENO_QUIRK_PREEMPTION BIT(5)
/* Helper for formating the chip_id in the way that userspace tools like
* crashdec expect.
diff --git a/drivers/gpu/drm/msm/msm_drv.c b/drivers/gpu/drm/msm/msm_drv.c
index 9c33f4e3f822..7c64b20f5e3b 100644
--- a/drivers/gpu/drm/msm/msm_drv.c
+++ b/drivers/gpu/drm/msm/msm_drv.c
@@ -58,6 +58,10 @@ static bool modeset = true;
MODULE_PARM_DESC(modeset, "Use kernel modesetting [KMS] (1=on (default), 0=disable)");
module_param(modeset, bool, 0600);
+int enable_preemption = -1;
+MODULE_PARM_DESC(enable_preemption, "Enable preemption (A7xx only) (1=on , 0=disable, -1=auto (default))");
+module_param(enable_preemption, int, 0600);
+
#ifdef CONFIG_FAULT_INJECTION
DECLARE_FAULT_ATTR(fail_gem_alloc);
DECLARE_FAULT_ATTR(fail_gem_iova);
--
2.46.0
^ permalink raw reply related [flat|nested] 32+ messages in thread* [PATCH v3 10/10] Documentation: document adreno preemption
2024-09-05 14:51 [PATCH v3 00/10] Preemption support for A7XX Antonino Maniscalco
` (8 preceding siblings ...)
2024-09-05 14:51 ` [PATCH v3 09/10] drm/msm/A6xx: Enable preemption for A750 Antonino Maniscalco
@ 2024-09-05 14:51 ` Antonino Maniscalco
2024-09-06 19:58 ` [PATCH v3 00/10] Preemption support for A7XX Akhil P Oommen
10 siblings, 0 replies; 32+ messages in thread
From: Antonino Maniscalco @ 2024-09-05 14:51 UTC (permalink / raw)
To: Rob Clark, Sean Paul, Konrad Dybcio, Abhinav Kumar,
Dmitry Baryshkov, Marijn Suijten, David Airlie, Daniel Vetter,
Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann,
Jonathan Corbet
Cc: linux-arm-msm, dri-devel, freedreno, linux-kernel, linux-doc,
Antonino Maniscalco
Add documentation about the preemption feature supported by the msm
driver.
Signed-off-by: Antonino Maniscalco <antomani103@gmail.com>
---
Documentation/gpu/msm-preemption.rst | 98 ++++++++++++++++++++++++++++++++++++
1 file changed, 98 insertions(+)
diff --git a/Documentation/gpu/msm-preemption.rst b/Documentation/gpu/msm-preemption.rst
new file mode 100644
index 000000000000..c1203524da2e
--- /dev/null
+++ b/Documentation/gpu/msm-preemption.rst
@@ -0,0 +1,98 @@
+.. SPDX-License-Identifier: GPL-2.0
+
+:orphan:
+
+=============
+MSM Preemtion
+=============
+
+Preemption allows Adreno GPUs to switch to an higher priority ring when work is
+pushed to it, reducing latency for high priority submissions.
+
+When preemption is enabled 4 rings are initialized, corresponding to different
+priority levels. Having multiple rings is purely a software concept as the GPU
+only has registers to keep track of one graphics ring.
+The kernel is able to switch which ring is currently being processed by
+requesting preemption. When certain conditions are met, depending on the
+priority level, the GPU will save its current state in a series of buffers,
+then restores state from a similar set of buffers specified by the kernel. It
+then resumes execution and fires an IRQ to let the kernel know the context
+switch has completed.
+
+This mechanism can be used by the kernel to switch between rings. Whenever a
+submission occurs the kernel finds the highest priority ring which isn't empty
+and preempts to it if said ring is not the one being currently executed. This is
+also done whenever a submission completes to make sure execution resumes on a
+lower priority ring when a higher priority ring is done.
+
+Preemption levels
+-----------------
+
+Preemption can only occur at certain boundaries. The exact conditions can be
+configured by changing the preemption level, this allows to compromise between
+latency (ie. the time that passes between when the kernel requests preemption
+and when the SQE begins saving state) and overhead (the amount of state that
+needs to be saved).
+
+The GPU offers 3 levels:
+
+Level 0
+ Preemption only occurs at the submission level. This requires the least amount
+ of state to be saved as the execution of userspace submitted IBs is never
+ interrupted, however it offers very little benefit compared to not enabling
+ preemption of any kind.
+
+Level 1
+ Preemption occurs at either bin level, if using GMEM rendering, or draw level
+ in the sysmem rendering case.
+
+Level 2
+ Preemption occurs at draw level.
+
+Level 1 is the mode that is used by the msm driver.
+
+Additionally the GPU allows to specify a `skip_save_restore` option. This
+disables the saving and restoring of certain registers which lowers the
+overhead. When enabling this userspace is expected to set the state that isn't
+preserved whenever preemption occurs which is done by specifying preamble and
+postambles. Those are IBs that are executed before and after
+preemption.
+
+Preemption buffers
+------------------
+
+A series of buffers are necessary to store the state of rings while they are not
+being executed. There are different kinds of preemption records and most of
+those require one buffer per ring. This is because preemption never occurs
+between submissions on the same ring, which always run in sequence when the ring
+is active. This means that only one context per ring is effectively active.
+
+SMMU_INFO
+ This buffer contains info about the current SMMU configuration such as the
+ ttbr0 register. The SQE firmware isn't actually able to save this record.
+ As a result SMMU info must be saved manually from the CP to a buffer and the
+ SMMU record updated with info from said buffer before triggering
+ preemption.
+
+NON_SECURE
+ This is the main preemption record where most state is saved. It is mostly
+ opaque to the kernel except for the first few words that must be initialized
+ by the kernel.
+
+SECURE
+ This saves state related to the GPU's secure mode.
+
+NON_PRIV
+ The intended purpose of this record is unknown. The SQE firmware actually
+ ignores it and therefore msm doesn't handle it.
+
+COUNTER
+ This record is used to save and restore performance counters.
+
+Handling the permissions of those buffers is critical for security. All but the
+NON_PRIV records need to be inaccessible from userspace, so they must be mapped
+in the kernel address space with the MSM_BO_MAP_PRIV flag.
+For example, making the NON_SECURE record accessible from userspace would allow
+any process to manipulate a saved ring's RPTR which can be used to skip the
+execution of some packets in a ring and execute user commands with higher
+privileges.
--
2.46.0
^ permalink raw reply related [flat|nested] 32+ messages in thread* Re: [PATCH v3 00/10] Preemption support for A7XX
2024-09-05 14:51 [PATCH v3 00/10] Preemption support for A7XX Antonino Maniscalco
` (9 preceding siblings ...)
2024-09-05 14:51 ` [PATCH v3 10/10] Documentation: document adreno preemption Antonino Maniscalco
@ 2024-09-06 19:58 ` Akhil P Oommen
2024-09-09 15:36 ` Rob Clark
2024-09-12 13:49 ` Antonino Maniscalco
10 siblings, 2 replies; 32+ messages in thread
From: Akhil P Oommen @ 2024-09-06 19:58 UTC (permalink / raw)
To: Antonino Maniscalco
Cc: Rob Clark, Sean Paul, Konrad Dybcio, Abhinav Kumar,
Dmitry Baryshkov, Marijn Suijten, David Airlie, Daniel Vetter,
Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann,
Jonathan Corbet, linux-arm-msm, dri-devel, freedreno,
linux-kernel, linux-doc, Neil Armstrong, Sharat Masetty
On Thu, Sep 05, 2024 at 04:51:18PM +0200, Antonino Maniscalco wrote:
> This series implements preemption for A7XX targets, which allows the GPU to
> switch to an higher priority ring when work is pushed to it, reducing latency
> for high priority submissions.
>
> This series enables L1 preemption with skip_save_restore which requires
> the following userspace patches to function:
>
> https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/30544
>
> A flag is added to `msm_submitqueue_create` to only allow submissions
> from compatible userspace to be preempted, therefore maintaining
> compatibility.
>
> Preemption is currently only enabled by default on A750, it can be
> enabled on other targets through the `enable_preemption` module
> parameter. This is because more testing is required on other targets.
>
> For testing on other HW it is sufficient to set that parameter to a
> value of 1, then using the branch of mesa linked above, `TU_DEBUG=hiprio`
> allows to run any application as high priority therefore preempting
> submissions from other applications.
>
> The `msm_gpu_preemption_trigger` and `msm_gpu_preemption_irq` traces
> added in this series can be used to observe preemption's behavior as
> well as measuring preemption latency.
>
> Some commits from this series are based on a previous series to enable
> preemption on A6XX targets:
>
> https://lkml.kernel.org/1520489185-21828-1-git-send-email-smasetty@codeaurora.org
>
> Signed-off-by: Antonino Maniscalco <antomani103@gmail.com>
Antonino, can you please test this once with per-process pt disabled to
ensure that is not broken? It is handy sometimes while debugging.
We just need to remove "adreno-smmu" compatible string from gpu smmu
node in DT.
-Akhil.
> ---
> Changes in v3:
> - Added documentation about preemption
> - Use quirks to determine which target supports preemption
> - Add a module parameter to force disabling or enabling preemption
> - Clear postamble when profiling
> - Define A6XX_CP_CONTEXT_SWITCH_CNTL_LEVEL fields in a6xx.xml
> - Make preemption records MAP_PRIV
> - Removed user ctx record (NON_PRIV) and patch 2/9 as it's not needed
> anymore
> - Link to v2: https://lore.kernel.org/r/20240830-preemption-a750-t-v2-0-86aeead2cd80@gmail.com
>
> Changes in v2:
> - Added preept_record_size for X185 in PATCH 3/7
> - Added patches to reset perf counters
> - Dropped unused defines
> - Dropped unused variable (fixes warning)
> - Only enable preemption on a750
> - Reject MSM_SUBMITQUEUE_ALLOW_PREEMPT for unsupported targets
> - Added Akhil's Reviewed-By tags to patches 1/9,2/9,3/9
> - Added Neil's Tested-By tags
> - Added explanation for UAPI changes in commit message
> - Link to v1: https://lore.kernel.org/r/20240815-preemption-a750-t-v1-0-7bda26c34037@gmail.com
>
> ---
> Antonino Maniscalco (10):
> drm/msm: Fix bv_fence being used as bv_rptr
> drm/msm: Add a `preempt_record_size` field
> drm/msm: Add CONTEXT_SWITCH_CNTL bitfields
> drm/msm/A6xx: Implement preemption for A7XX targets
> drm/msm/A6xx: Sync relevant adreno_pm4.xml changes
> drm/msm/A6xx: Use posamble to reset counters on preemption
> drm/msm/A6xx: Add traces for preemption
> drm/msm/A6XX: Add a flag to allow preemption to submitqueue_create
> drm/msm/A6xx: Enable preemption for A750
> Documentation: document adreno preemption
>
> Documentation/gpu/msm-preemption.rst | 98 +++++
> drivers/gpu/drm/msm/Makefile | 1 +
> drivers/gpu/drm/msm/adreno/a6xx_catalog.c | 7 +-
> drivers/gpu/drm/msm/adreno/a6xx_gpu.c | 331 +++++++++++++++-
> drivers/gpu/drm/msm/adreno/a6xx_gpu.h | 166 ++++++++
> drivers/gpu/drm/msm/adreno/a6xx_preempt.c | 430 +++++++++++++++++++++
> drivers/gpu/drm/msm/adreno/adreno_gpu.h | 9 +-
> drivers/gpu/drm/msm/msm_drv.c | 4 +
> drivers/gpu/drm/msm/msm_gpu_trace.h | 28 ++
> drivers/gpu/drm/msm/msm_ringbuffer.h | 8 +
> drivers/gpu/drm/msm/msm_submitqueue.c | 3 +
> drivers/gpu/drm/msm/registers/adreno/a6xx.xml | 7 +-
> .../gpu/drm/msm/registers/adreno/adreno_pm4.xml | 39 +-
> include/uapi/drm/msm_drm.h | 5 +-
> 14 files changed, 1094 insertions(+), 42 deletions(-)
> ---
> base-commit: 7c626ce4bae1ac14f60076d00eafe71af30450ba
> change-id: 20240815-preemption-a750-t-fcee9a844b39
>
> Best regards,
> --
> Antonino Maniscalco <antomani103@gmail.com>
>
^ permalink raw reply [flat|nested] 32+ messages in thread* Re: [PATCH v3 00/10] Preemption support for A7XX
2024-09-06 19:58 ` [PATCH v3 00/10] Preemption support for A7XX Akhil P Oommen
@ 2024-09-09 15:36 ` Rob Clark
2024-09-12 13:49 ` Antonino Maniscalco
1 sibling, 0 replies; 32+ messages in thread
From: Rob Clark @ 2024-09-09 15:36 UTC (permalink / raw)
To: Akhil P Oommen
Cc: Antonino Maniscalco, Sean Paul, Konrad Dybcio, Abhinav Kumar,
Dmitry Baryshkov, Marijn Suijten, David Airlie, Daniel Vetter,
Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann,
Jonathan Corbet, linux-arm-msm, dri-devel, freedreno,
linux-kernel, linux-doc, Neil Armstrong, Sharat Masetty
On Fri, Sep 6, 2024 at 12:59 PM Akhil P Oommen <quic_akhilpo@quicinc.com> wrote:
>
> On Thu, Sep 05, 2024 at 04:51:18PM +0200, Antonino Maniscalco wrote:
> > This series implements preemption for A7XX targets, which allows the GPU to
> > switch to an higher priority ring when work is pushed to it, reducing latency
> > for high priority submissions.
> >
> > This series enables L1 preemption with skip_save_restore which requires
> > the following userspace patches to function:
> >
> > https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/30544
> >
> > A flag is added to `msm_submitqueue_create` to only allow submissions
> > from compatible userspace to be preempted, therefore maintaining
> > compatibility.
> >
> > Preemption is currently only enabled by default on A750, it can be
> > enabled on other targets through the `enable_preemption` module
> > parameter. This is because more testing is required on other targets.
> >
> > For testing on other HW it is sufficient to set that parameter to a
> > value of 1, then using the branch of mesa linked above, `TU_DEBUG=hiprio`
> > allows to run any application as high priority therefore preempting
> > submissions from other applications.
> >
> > The `msm_gpu_preemption_trigger` and `msm_gpu_preemption_irq` traces
> > added in this series can be used to observe preemption's behavior as
> > well as measuring preemption latency.
> >
> > Some commits from this series are based on a previous series to enable
> > preemption on A6XX targets:
> >
> > https://lkml.kernel.org/1520489185-21828-1-git-send-email-smasetty@codeaurora.org
> >
> > Signed-off-by: Antonino Maniscalco <antomani103@gmail.com>
>
> Antonino, can you please test this once with per-process pt disabled to
> ensure that is not broken? It is handy sometimes while debugging.
> We just need to remove "adreno-smmu" compatible string from gpu smmu
> node in DT.
fwiw, I'd be ok supporting preemption on devices that have per-process
pgtables. (And maybe we should be tainting the kernel if per-process
pgtables are disabled on a6xx+)
BR,
-R
> -Akhil.
>
> > ---
> > Changes in v3:
> > - Added documentation about preemption
> > - Use quirks to determine which target supports preemption
> > - Add a module parameter to force disabling or enabling preemption
> > - Clear postamble when profiling
> > - Define A6XX_CP_CONTEXT_SWITCH_CNTL_LEVEL fields in a6xx.xml
> > - Make preemption records MAP_PRIV
> > - Removed user ctx record (NON_PRIV) and patch 2/9 as it's not needed
> > anymore
> > - Link to v2: https://lore.kernel.org/r/20240830-preemption-a750-t-v2-0-86aeead2cd80@gmail.com
> >
> > Changes in v2:
> > - Added preept_record_size for X185 in PATCH 3/7
> > - Added patches to reset perf counters
> > - Dropped unused defines
> > - Dropped unused variable (fixes warning)
> > - Only enable preemption on a750
> > - Reject MSM_SUBMITQUEUE_ALLOW_PREEMPT for unsupported targets
> > - Added Akhil's Reviewed-By tags to patches 1/9,2/9,3/9
> > - Added Neil's Tested-By tags
> > - Added explanation for UAPI changes in commit message
> > - Link to v1: https://lore.kernel.org/r/20240815-preemption-a750-t-v1-0-7bda26c34037@gmail.com
> >
> > ---
> > Antonino Maniscalco (10):
> > drm/msm: Fix bv_fence being used as bv_rptr
> > drm/msm: Add a `preempt_record_size` field
> > drm/msm: Add CONTEXT_SWITCH_CNTL bitfields
> > drm/msm/A6xx: Implement preemption for A7XX targets
> > drm/msm/A6xx: Sync relevant adreno_pm4.xml changes
> > drm/msm/A6xx: Use posamble to reset counters on preemption
> > drm/msm/A6xx: Add traces for preemption
> > drm/msm/A6XX: Add a flag to allow preemption to submitqueue_create
> > drm/msm/A6xx: Enable preemption for A750
> > Documentation: document adreno preemption
> >
> > Documentation/gpu/msm-preemption.rst | 98 +++++
> > drivers/gpu/drm/msm/Makefile | 1 +
> > drivers/gpu/drm/msm/adreno/a6xx_catalog.c | 7 +-
> > drivers/gpu/drm/msm/adreno/a6xx_gpu.c | 331 +++++++++++++++-
> > drivers/gpu/drm/msm/adreno/a6xx_gpu.h | 166 ++++++++
> > drivers/gpu/drm/msm/adreno/a6xx_preempt.c | 430 +++++++++++++++++++++
> > drivers/gpu/drm/msm/adreno/adreno_gpu.h | 9 +-
> > drivers/gpu/drm/msm/msm_drv.c | 4 +
> > drivers/gpu/drm/msm/msm_gpu_trace.h | 28 ++
> > drivers/gpu/drm/msm/msm_ringbuffer.h | 8 +
> > drivers/gpu/drm/msm/msm_submitqueue.c | 3 +
> > drivers/gpu/drm/msm/registers/adreno/a6xx.xml | 7 +-
> > .../gpu/drm/msm/registers/adreno/adreno_pm4.xml | 39 +-
> > include/uapi/drm/msm_drm.h | 5 +-
> > 14 files changed, 1094 insertions(+), 42 deletions(-)
> > ---
> > base-commit: 7c626ce4bae1ac14f60076d00eafe71af30450ba
> > change-id: 20240815-preemption-a750-t-fcee9a844b39
> >
> > Best regards,
> > --
> > Antonino Maniscalco <antomani103@gmail.com>
> >
^ permalink raw reply [flat|nested] 32+ messages in thread
* Re: [PATCH v3 00/10] Preemption support for A7XX
2024-09-06 19:58 ` [PATCH v3 00/10] Preemption support for A7XX Akhil P Oommen
2024-09-09 15:36 ` Rob Clark
@ 2024-09-12 13:49 ` Antonino Maniscalco
1 sibling, 0 replies; 32+ messages in thread
From: Antonino Maniscalco @ 2024-09-12 13:49 UTC (permalink / raw)
To: Akhil P Oommen
Cc: Rob Clark, Sean Paul, Konrad Dybcio, Abhinav Kumar,
Dmitry Baryshkov, Marijn Suijten, David Airlie, Daniel Vetter,
Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann,
Jonathan Corbet, linux-arm-msm, dri-devel, freedreno,
linux-kernel, linux-doc, Neil Armstrong, Sharat Masetty
On 9/6/24 9:58 PM, Akhil P Oommen wrote:
> On Thu, Sep 05, 2024 at 04:51:18PM +0200, Antonino Maniscalco wrote:
>> This series implements preemption for A7XX targets, which allows the GPU to
>> switch to an higher priority ring when work is pushed to it, reducing latency
>> for high priority submissions.
>>
>> This series enables L1 preemption with skip_save_restore which requires
>> the following userspace patches to function:
>>
>> https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/30544
>>
>> A flag is added to `msm_submitqueue_create` to only allow submissions
>> from compatible userspace to be preempted, therefore maintaining
>> compatibility.
>>
>> Preemption is currently only enabled by default on A750, it can be
>> enabled on other targets through the `enable_preemption` module
>> parameter. This is because more testing is required on other targets.
>>
>> For testing on other HW it is sufficient to set that parameter to a
>> value of 1, then using the branch of mesa linked above, `TU_DEBUG=hiprio`
>> allows to run any application as high priority therefore preempting
>> submissions from other applications.
>>
>> The `msm_gpu_preemption_trigger` and `msm_gpu_preemption_irq` traces
>> added in this series can be used to observe preemption's behavior as
>> well as measuring preemption latency.
>>
>> Some commits from this series are based on a previous series to enable
>> preemption on A6XX targets:
>>
>> https://lkml.kernel.org/1520489185-21828-1-git-send-email-smasetty@codeaurora.org
>>
>> Signed-off-by: Antonino Maniscalco <antomani103@gmail.com>
>
> Antonino, can you please test this once with per-process pt disabled to
> ensure that is not broken? It is handy sometimes while debugging.
> We just need to remove "adreno-smmu" compatible string from gpu smmu
> node in DT.
>
Removing that from the DT causes a crash inside
`msm_iommu_pagetable_create` as it seems `create_private_address_space`
is assigned unconditionally in a6xx_gpu.c . I tested it with the
following change:
diff --git a/drivers/gpu/drm/msm/msm_gpu.c b/drivers/gpu/drm/msm/msm_gpu.c
index 9e5a83b885f0..4111f5fd9721 100644
--- a/drivers/gpu/drm/msm/msm_gpu.c
+++ b/drivers/gpu/drm/msm/msm_gpu.c
@@ -832,11 +832,11 @@ msm_gpu_create_private_address_space(struct
msm_gpu *gpu, struct task_struct *ta
* If the target doesn't support private address spaces then return
* the global one
*/
- if (gpu->funcs->create_private_address_space) {
- aspace = gpu->funcs->create_private_address_space(gpu);
- if (!IS_ERR(aspace))
- aspace->pid = get_pid(task_pid(task));
- }
+ /* if (gpu->funcs->create_private_address_space) { */
+ /* aspace = gpu->funcs->create_private_address_space(gpu); */
+ /* if (!IS_ERR(aspace)) */
+ /* aspace->pid = get_pid(task_pid(task)); */
+ /* } */
if (IS_ERR_OR_NULL(aspace))
aspace = msm_gem_address_space_get(gpu->aspace);
and it appears to work.
> -Akhil.
>
>> ---
>> Changes in v3:
>> - Added documentation about preemption
>> - Use quirks to determine which target supports preemption
>> - Add a module parameter to force disabling or enabling preemption
>> - Clear postamble when profiling
>> - Define A6XX_CP_CONTEXT_SWITCH_CNTL_LEVEL fields in a6xx.xml
>> - Make preemption records MAP_PRIV
>> - Removed user ctx record (NON_PRIV) and patch 2/9 as it's not needed
>> anymore
>> - Link to v2: https://lore.kernel.org/r/20240830-preemption-a750-t-v2-0-86aeead2cd80@gmail.com
>>
>> Changes in v2:
>> - Added preept_record_size for X185 in PATCH 3/7
>> - Added patches to reset perf counters
>> - Dropped unused defines
>> - Dropped unused variable (fixes warning)
>> - Only enable preemption on a750
>> - Reject MSM_SUBMITQUEUE_ALLOW_PREEMPT for unsupported targets
>> - Added Akhil's Reviewed-By tags to patches 1/9,2/9,3/9
>> - Added Neil's Tested-By tags
>> - Added explanation for UAPI changes in commit message
>> - Link to v1: https://lore.kernel.org/r/20240815-preemption-a750-t-v1-0-7bda26c34037@gmail.com
>>
>> ---
>> Antonino Maniscalco (10):
>> drm/msm: Fix bv_fence being used as bv_rptr
>> drm/msm: Add a `preempt_record_size` field
>> drm/msm: Add CONTEXT_SWITCH_CNTL bitfields
>> drm/msm/A6xx: Implement preemption for A7XX targets
>> drm/msm/A6xx: Sync relevant adreno_pm4.xml changes
>> drm/msm/A6xx: Use posamble to reset counters on preemption
>> drm/msm/A6xx: Add traces for preemption
>> drm/msm/A6XX: Add a flag to allow preemption to submitqueue_create
>> drm/msm/A6xx: Enable preemption for A750
>> Documentation: document adreno preemption
>>
>> Documentation/gpu/msm-preemption.rst | 98 +++++
>> drivers/gpu/drm/msm/Makefile | 1 +
>> drivers/gpu/drm/msm/adreno/a6xx_catalog.c | 7 +-
>> drivers/gpu/drm/msm/adreno/a6xx_gpu.c | 331 +++++++++++++++-
>> drivers/gpu/drm/msm/adreno/a6xx_gpu.h | 166 ++++++++
>> drivers/gpu/drm/msm/adreno/a6xx_preempt.c | 430 +++++++++++++++++++++
>> drivers/gpu/drm/msm/adreno/adreno_gpu.h | 9 +-
>> drivers/gpu/drm/msm/msm_drv.c | 4 +
>> drivers/gpu/drm/msm/msm_gpu_trace.h | 28 ++
>> drivers/gpu/drm/msm/msm_ringbuffer.h | 8 +
>> drivers/gpu/drm/msm/msm_submitqueue.c | 3 +
>> drivers/gpu/drm/msm/registers/adreno/a6xx.xml | 7 +-
>> .../gpu/drm/msm/registers/adreno/adreno_pm4.xml | 39 +-
>> include/uapi/drm/msm_drm.h | 5 +-
>> 14 files changed, 1094 insertions(+), 42 deletions(-)
>> ---
>> base-commit: 7c626ce4bae1ac14f60076d00eafe71af30450ba
>> change-id: 20240815-preemption-a750-t-fcee9a844b39
>>
>> Best regards,
>> --
>> Antonino Maniscalco <antomani103@gmail.com>
>>
Best regards,
--
Antonino Maniscalco <antomani103@gmail.com>
^ permalink raw reply related [flat|nested] 32+ messages in thread