* [PATCH v9 0/4] Add a few tracepoints to panthor
@ 2026-01-15 13:58 Nicolas Frattaroli
2026-01-15 13:58 ` [PATCH v9 1/4] drm/panthor: Rework panthor_irq::suspended into panthor_irq::state Nicolas Frattaroli
` (3 more replies)
0 siblings, 4 replies; 9+ messages in thread
From: Nicolas Frattaroli @ 2026-01-15 13:58 UTC (permalink / raw)
To: Boris Brezillon, Steven Price, Liviu Dudau, Maarten Lankhorst,
Maxime Ripard, Thomas Zimmermann, David Airlie, Simona Vetter,
Chia-I Wu, Karunika Choo
Cc: kernel, linux-kernel, dri-devel, Nicolas Frattaroli
This series adds two tracepoints to panthor.
The first tracepoint allows for inspecting the power status of the
hardware subdivisions, e.g. how many shader cores are powered on. This
is done by reading three hardware registers when a certain IRQ fires.
The second tracepoint instruments panthor's job IRQ handler. This is
more useful than the generic interrupt tracing functionality, as the
tracepoint includes the events bit mask, which indicates which command
stream group interfaces triggered the interrupt.
To test the tracepoints, the following can be used:
:~# echo 1 > /sys/kernel/tracing/events/panthor/gpu_power_status/enable
:~# echo 1 > /sys/kernel/tracing/events/panthor/gpu_job_irq/enable
:~# echo 1 > /sys/kernel/tracing/tracing_on
:~# cat /sys/kernel/tracing/trace_pipe
Signed-off-by: Nicolas Frattaroli <nicolas.frattaroli@collabora.com>
---
Changes in v9:
- Rebase onto drm-tip 2026y-01m-14d-17h-09m-04s
- Move the mask to pirq->mask change from the suspended refactor patch
  to the follow-up patch
- Move the INT_MASK restoration changes from the suspended refactor
  patch to the follow-up patch
- Move panthor_irq::mask_lock member right below the mask member
- panthor/mmu: Remove the enable_events calls before resume, as they're
redundant
- panthor/mmu: Rework the AS fault clearing logic. Drop the spurious
disable_events in panthor_vm_active, but use
disable_events/enable_events in as_disable/as_enable respectively. This
requires doing a forward declaration of the panthor_mmu_irq_handler to
get a definition of the helpers before it itself is defined. This
works out great, because it means I also no longer have to move the
entire panthor_vm_active function further down in the file.
- Drop an accidentally added redundant empty line
- Link to v8: https://lore.kernel.org/r/20260112-panthor-tracepoints-v8-0-63efcb421d22@collabora.com
Changes in v8:
- Reorder panthor_irq::state patch to be before the new mask
modification helpers. The full set of states was kept (rather than
just active/suspended) as they don't hurt and keep the follow-up patch
narrower in scope.
- Also bail out on STATE_SUSPENDING in irq_raw_handler
- Job irq: initialize start to 0 and don't emit a tracepoint if it is 0,
to fix passing an uninitialised stack variable to userspace if the
tracepoint was enabled while the handler was running
- Link to v7: https://lore.kernel.org/r/20260108-panthor-tracepoints-v7-0-afeae181f74a@collabora.com
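The uninitialised-variable fix above can be modelled with a small sketch
(invented names, not the actual handler code): the timestamp local always
starts at 0 and the tracepoint is skipped while it is still 0, so a
tracepoint enabled mid-handler never reads uninitialised stack data.

```c
#include <assert.h>
#include <stdint.h>

static uint64_t emitted_ts; /* what the "tracepoint" last saw; 0 = nothing */

/* Stand-in for the real trace_gpu_job_irq tracepoint. */
static void trace_gpu_job_irq(uint64_t start, uint32_t events)
{
	emitted_ts = start;
	(void)events;
}

static void job_irq_handler(int trace_enabled_at_entry, uint64_t now,
			    uint32_t events)
{
	uint64_t start = 0; /* always initialised, even if tracing is off */

	if (trace_enabled_at_entry)
		start = now; /* only sampled when tracing was on at entry */

	/* ... handle the interrupt ... */

	if (start) /* skip the event rather than emit garbage */
		trace_gpu_job_irq(start, events);
}
```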
Changes in v7:
- Get rid of old resume IRQ helper by reworking code throughout panthor,
and make what used to be resume_restore in v6 the new resume.
- Rename mask_enable/mask_disable to enable_events/disable_events.
- Turn panthor_irq::suspended into a multi-state value, and utilise it
in the IRQ helpers as appropriate.
- Link to v6: https://lore.kernel.org/r/20251223-panthor-tracepoints-v6-0-d3c998ee9efc@collabora.com
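The multi-state value replacing the old suspended boolean, and the v8
change to also bail on the suspending state, can be sketched like this
(enum names mirror the patch; the predicate is a simplified model):

```c
#include <assert.h>

/* Simplified model of panthor_irq::state replacing the old boolean. */
enum irq_state {
	IRQ_STATE_ACTIVE = 0,
	IRQ_STATE_PROCESSING,
	IRQ_STATE_SUSPENDED,
	IRQ_STATE_SUSPENDING,
};

/* The raw handler bails on both SUSPENDED and SUSPENDING, so an IRQ
 * racing with a suspend already in progress is ignored. */
static int raw_handler_should_bail(enum irq_state state)
{
	return state == IRQ_STATE_SUSPENDED || state == IRQ_STATE_SUSPENDING;
}
```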
Changes in v6:
- Read the mask member into a local while holding the lock in
irq_threaded_handler.
- Drop the lock before entering the while loop, letting the threaded
handler function run without holding a spinlock
- Re-acquire the spinlock at the end of irq_threaded_handler, OR'ing the
mask register's contents with the mask local ANDed with the member. This
avoids stomping over any other modified bits, or restoring ones that
have been disabled in the meantime.
- Link to v5: https://lore.kernel.org/r/20251221-panthor-tracepoints-v5-0-889ef78165d8@collabora.com
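The restore step described above can be modelled as follows (a
simplified sketch with invented helper names, not the driver code): the
register is ORed with the entry-time local mask ANDed with the current
member, so bits disabled while the handler ran stay disabled and bits
enabled by others are not stomped.

```c
#include <assert.h>
#include <stdint.h>

/* Simulated xxx_INT_MASK register contents. */
static uint32_t int_mask_reg;

/* Sketch of the v6 threaded-handler exit path. */
static void restore_int_mask(uint32_t local_mask, uint32_t member_mask)
{
	int_mask_reg |= (local_mask & member_mask);
}
```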
Changes in v5:
- Change the panthor IRQ helpers to guard the mask member and register
with a spinlock. The rationale behind using a spinlock, rather than
some constellation of atomics, is that we have to guarantee mutual
exclusion for state beyond just a single value, namely both the register
write, and writes to/reads from the mask member, including
reads-from-member-writes-to-register. Making the mask atomic does not do
anything to avoid concurrency issues in such a case.
- Change the IRQ mask member to not get zeroed when suspended. It's
possible something outside of the IRQ helpers depends on this
behaviour, but I'd argue the code should not access the mask outside
of the IRQ helpers, as it'll do so with no lock taken.
- Drop the mask_set function, but add mask_enable/mask_disable helpers
to enable/disable individual parts of the IRQ mask.
- Add a resume_restore IRQ helper that does the same thing as resume,
but does not overwrite the mask member. This avoids me having to
refactor whatever panthor_mmu.c is doing with that poor mask member.
- Link to v4: https://lore.kernel.org/r/20251217-panthor-tracepoints-v4-0-916186cb8d03@collabora.com
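The enable/disable helpers introduced above can be reduced to this
minimal model (invented names; the real helpers take pirq->mask_lock and
also check the IRQ state before touching the register):

```c
#include <assert.h>
#include <stdint.h>

static uint32_t irq_mask;     /* the mask member */
static uint32_t irq_mask_reg; /* the simulated xxx_INT_MASK register */

/* OR new event bits into the mask and mirror it to the register. */
static void enable_events(uint32_t bits)
{
	/* under pirq->mask_lock in the real code */
	irq_mask |= bits;
	irq_mask_reg = irq_mask;
}

/* Clear event bits from the mask and mirror it to the register. */
static void disable_events(uint32_t bits)
{
	irq_mask &= ~bits;
	irq_mask_reg = irq_mask;
}
```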
Changes in v4:
- Include "panthor_hw.h" in panthor_trace.h instead of duplicating the
reg/unreg function prototypes.
- Link to v3: https://lore.kernel.org/r/20251211-panthor-tracepoints-v3-0-924c9d356a5c@collabora.com
Changes in v3:
- Drop PWRFEATURES patch, as this register is no longer needed by this
series.
- Eliminate the rt_on field from the gpu_power_status register, as per
Steven Price's feedback.
- Make gpu_power_status tracepoint reg/unreg functions generic across
hardware generations by wrapping a hw op in panthor_hw.c.
- Reimplement the <= v13 IRQ mask modification functions as the new hw
ops functions. v14 can add its own ops in due time.
- Link to v2: https://lore.kernel.org/r/20251210-panthor-tracepoints-v2-0-ace2e29bad0f@collabora.com
Changes in v2:
- Only enable the GPU_IRQ_POWER_CHANGED_* IRQ mask bits when the
tracepoint is enabled. Necessitates the new irq helper patch.
- Only enable the GPU_IRQ_POWER_CHANGED_* IRQ mask bits if the hardware
architecture is <= v13, as v14 changes things.
- Use _READY instead of _PWRACTIVE registers, and rename the tracepoint
accordingly.
- Also read the status of the ray tracing unit's power. This is a global
flag for all shader cores, it seems. Necessitates the new register
definition patch.
- Move the POWER_CHANGED_* check to earlier in the interrupt handler.
- Also listen to POWER_CHANGED, not just POWER_CHANGED_ALL, as this
provides useful information with the _READY registers.
- Print the device name in both tracepoints, to disambiguate things on
systems with multiple Mali GPUs.
- Document the gpu_power_status tracepoint, so the meaning of the fields
is made clear.
- Link to v1: https://lore.kernel.org/r/20251203-panthor-tracepoints-v1-0-871c8917e084@collabora.com
---
Nicolas Frattaroli (4):
drm/panthor: Rework panthor_irq::suspended into panthor_irq::state
drm/panthor: Extend IRQ helpers for mask modification/restoration
drm/panthor: Add tracepoint for hardware utilisation changes
drm/panthor: Add gpu_job_irq tracepoint
drivers/gpu/drm/panthor/panthor_device.h | 96 ++++++++++++++++++++++++++------
drivers/gpu/drm/panthor/panthor_fw.c | 16 +++++-
drivers/gpu/drm/panthor/panthor_gpu.c | 30 +++++++++-
drivers/gpu/drm/panthor/panthor_gpu.h | 2 +
drivers/gpu/drm/panthor/panthor_hw.c | 62 +++++++++++++++++++++
drivers/gpu/drm/panthor/panthor_hw.h | 8 +++
drivers/gpu/drm/panthor/panthor_mmu.c | 47 ++++++++--------
drivers/gpu/drm/panthor/panthor_pwr.c | 2 +-
drivers/gpu/drm/panthor/panthor_trace.h | 86 ++++++++++++++++++++++++++++
9 files changed, 307 insertions(+), 42 deletions(-)
---
base-commit: 733664f1edf3c01cc68e6dd0bbdb135158a98a1d
change-id: 20251203-panthor-tracepoints-488af09d46e7
Best regards,
--
Nicolas Frattaroli <nicolas.frattaroli@collabora.com>
^ permalink raw reply [flat|nested] 9+ messages in thread* [PATCH v9 1/4] drm/panthor: Rework panthor_irq::suspended into panthor_irq::state 2026-01-15 13:58 [PATCH v9 0/4] Add a few tracepoints to panthor Nicolas Frattaroli @ 2026-01-15 13:58 ` Nicolas Frattaroli 2026-01-15 14:55 ` Boris Brezillon 2026-01-15 13:59 ` [PATCH v9 2/4] drm/panthor: Extend IRQ helpers for mask modification/restoration Nicolas Frattaroli ` (2 subsequent siblings) 3 siblings, 1 reply; 9+ messages in thread From: Nicolas Frattaroli @ 2026-01-15 13:58 UTC (permalink / raw) To: Boris Brezillon, Steven Price, Liviu Dudau, Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, David Airlie, Simona Vetter, Chia-I Wu, Karunika Choo Cc: kernel, linux-kernel, dri-devel, Nicolas Frattaroli To deal with the threaded interrupt handler and a suspend action overlapping, the boolean panthor_irq::suspended is not sufficient. Rework it into taking several different values depending on the current state, and check it and set it within the IRQ helper functions. Signed-off-by: Nicolas Frattaroli <nicolas.frattaroli@collabora.com> --- drivers/gpu/drm/panthor/panthor_device.h | 32 ++++++++++++++++++++++++++------ 1 file changed, 26 insertions(+), 6 deletions(-) diff --git a/drivers/gpu/drm/panthor/panthor_device.h b/drivers/gpu/drm/panthor/panthor_device.h index f35e52b9546a..2bf9a8434dc5 100644 --- a/drivers/gpu/drm/panthor/panthor_device.h +++ b/drivers/gpu/drm/panthor/panthor_device.h @@ -61,6 +61,17 @@ enum panthor_device_pm_state { PANTHOR_DEVICE_PM_STATE_SUSPENDING, }; +enum panthor_irq_state { + /** @PANTHOR_IRQ_STATE_ACTIVE: IRQ is active and ready to process events. */ + PANTHOR_IRQ_STATE_ACTIVE = 0, + /** @PANTHOR_IRQ_STATE_PROCESSING: IRQ is currently processing events. */ + PANTHOR_IRQ_STATE_PROCESSING, + /** @PANTHOR_IRQ_STATE_SUSPENDED: IRQ is suspended. */ + PANTHOR_IRQ_STATE_SUSPENDED, + /** @PANTHOR_IRQ_STATE_SUSPENDING: IRQ is being suspended. 
*/ + PANTHOR_IRQ_STATE_SUSPENDING, +}; + /** * struct panthor_irq - IRQ data * @@ -76,8 +87,8 @@ struct panthor_irq { /** @mask: Current mask being applied to xxx_INT_MASK. */ u32 mask; - /** @suspended: Set to true when the IRQ is suspended. */ - atomic_t suspended; + /** @state: one of &enum panthor_irq_state reflecting the current state. */ + atomic_t state; }; /** @@ -409,8 +420,10 @@ static irqreturn_t panthor_ ## __name ## _irq_raw_handler(int irq, void *data) { \ struct panthor_irq *pirq = data; \ struct panthor_device *ptdev = pirq->ptdev; \ + enum panthor_irq_state state; \ \ - if (atomic_read(&pirq->suspended)) \ + state = atomic_read(&pirq->state); \ + if (state == PANTHOR_IRQ_STATE_SUSPENDED || state == PANTHOR_IRQ_STATE_SUSPENDING) \ return IRQ_NONE; \ if (!gpu_read(ptdev, __reg_prefix ## _INT_STAT)) \ return IRQ_NONE; \ @@ -423,8 +436,11 @@ static irqreturn_t panthor_ ## __name ## _irq_threaded_handler(int irq, void *da { \ struct panthor_irq *pirq = data; \ struct panthor_device *ptdev = pirq->ptdev; \ + enum panthor_irq_state state; \ irqreturn_t ret = IRQ_NONE; \ \ + atomic_cmpxchg(&pirq->state, PANTHOR_IRQ_STATE_ACTIVE, PANTHOR_IRQ_STATE_PROCESSING); \ + \ while (true) { \ u32 status = gpu_read(ptdev, __reg_prefix ## _INT_RAWSTAT) & pirq->mask; \ \ @@ -435,8 +451,11 @@ static irqreturn_t panthor_ ## __name ## _irq_threaded_handler(int irq, void *da ret = IRQ_HANDLED; \ } \ \ - if (!atomic_read(&pirq->suspended)) \ + state = atomic_read(&pirq->state); \ + if (state != PANTHOR_IRQ_STATE_SUSPENDED && state != PANTHOR_IRQ_STATE_SUSPENDING) { \ gpu_write(ptdev, __reg_prefix ## _INT_MASK, pirq->mask); \ + atomic_set(&pirq->state, PANTHOR_IRQ_STATE_ACTIVE); \ + } \ \ return ret; \ } \ @@ -445,14 +464,15 @@ static inline void panthor_ ## __name ## _irq_suspend(struct panthor_irq *pirq) { \ pirq->mask = 0; \ gpu_write(pirq->ptdev, __reg_prefix ## _INT_MASK, 0); \ + atomic_set(&pirq->state, PANTHOR_IRQ_STATE_SUSPENDING); \ synchronize_irq(pirq->irq); \ - 
atomic_set(&pirq->suspended, true); \ + atomic_set(&pirq->state, PANTHOR_IRQ_STATE_SUSPENDED); \ } \ \ static inline void panthor_ ## __name ## _irq_resume(struct panthor_irq *pirq, u32 mask) \ { \ - atomic_set(&pirq->suspended, false); \ pirq->mask = mask; \ + atomic_set(&pirq->state, PANTHOR_IRQ_STATE_ACTIVE); \ gpu_write(pirq->ptdev, __reg_prefix ## _INT_CLEAR, mask); \ gpu_write(pirq->ptdev, __reg_prefix ## _INT_MASK, mask); \ } \ -- 2.52.0 ^ permalink raw reply related [flat|nested] 9+ messages in thread
* Re: [PATCH v9 1/4] drm/panthor: Rework panthor_irq::suspended into panthor_irq::state 2026-01-15 13:58 ` [PATCH v9 1/4] drm/panthor: Rework panthor_irq::suspended into panthor_irq::state Nicolas Frattaroli @ 2026-01-15 14:55 ` Boris Brezillon 0 siblings, 0 replies; 9+ messages in thread From: Boris Brezillon @ 2026-01-15 14:55 UTC (permalink / raw) To: Nicolas Frattaroli Cc: Steven Price, Liviu Dudau, Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, David Airlie, Simona Vetter, Chia-I Wu, Karunika Choo, kernel, linux-kernel, dri-devel On Thu, 15 Jan 2026 14:58:59 +0100 Nicolas Frattaroli <nicolas.frattaroli@collabora.com> wrote: > To deal with the threaded interrupt handler and a suspend action > overlapping, the boolean panthor_irq::suspended is not sufficient. > > Rework it into taking several different values depending on the current > state, and check it and set it within the IRQ helper functions. > > Signed-off-by: Nicolas Frattaroli <nicolas.frattaroli@collabora.com> > --- > drivers/gpu/drm/panthor/panthor_device.h | 32 ++++++++++++++++++++++++++------ > 1 file changed, 26 insertions(+), 6 deletions(-) > > diff --git a/drivers/gpu/drm/panthor/panthor_device.h b/drivers/gpu/drm/panthor/panthor_device.h > index f35e52b9546a..2bf9a8434dc5 100644 > --- a/drivers/gpu/drm/panthor/panthor_device.h > +++ b/drivers/gpu/drm/panthor/panthor_device.h > @@ -61,6 +61,17 @@ enum panthor_device_pm_state { > PANTHOR_DEVICE_PM_STATE_SUSPENDING, > }; > > +enum panthor_irq_state { > + /** @PANTHOR_IRQ_STATE_ACTIVE: IRQ is active and ready to process events. */ > + PANTHOR_IRQ_STATE_ACTIVE = 0, > + /** @PANTHOR_IRQ_STATE_PROCESSING: IRQ is currently processing events. */ > + PANTHOR_IRQ_STATE_PROCESSING, > + /** @PANTHOR_IRQ_STATE_SUSPENDED: IRQ is suspended. */ > + PANTHOR_IRQ_STATE_SUSPENDED, > + /** @PANTHOR_IRQ_STATE_SUSPENDING: IRQ is being suspended. 
*/ > + PANTHOR_IRQ_STATE_SUSPENDING, > +}; > + > /** > * struct panthor_irq - IRQ data > * > @@ -76,8 +87,8 @@ struct panthor_irq { > /** @mask: Current mask being applied to xxx_INT_MASK. */ > u32 mask; > > - /** @suspended: Set to true when the IRQ is suspended. */ > - atomic_t suspended; > + /** @state: one of &enum panthor_irq_state reflecting the current state. */ > + atomic_t state; > }; > > /** > @@ -409,8 +420,10 @@ static irqreturn_t panthor_ ## __name ## _irq_raw_handler(int irq, void *data) > { \ > struct panthor_irq *pirq = data; \ > struct panthor_device *ptdev = pirq->ptdev; \ > + enum panthor_irq_state state; \ > \ > - if (atomic_read(&pirq->suspended)) \ > + state = atomic_read(&pirq->state); \ > + if (state == PANTHOR_IRQ_STATE_SUSPENDED || state == PANTHOR_IRQ_STATE_SUSPENDING) \ > return IRQ_NONE; \ > if (!gpu_read(ptdev, __reg_prefix ## _INT_STAT)) \ > return IRQ_NONE; \ I was imagining something like: if (!gpu_read(ptdev, __reg_prefix ## _INT_STAT)) return IRQ_NONE; old_state = atomic_cmpxchg(&pirq->state, PANTHOR_IRQ_STATE_ACTIVE, PANTHOR_IRQ_STATE_PROCESSING); if (old_state != PANTHOR_IRQ_STATE_ACTIVE) return IRQ_NONE; such that the processing phase starts as soon as we know we're going to wake up the irq thread, not when we enter the threaded handler. > @@ -423,8 +436,11 @@ static irqreturn_t panthor_ ## __name ## _irq_threaded_handler(int irq, void *da > { \ > struct panthor_irq *pirq = data; \ > struct panthor_device *ptdev = pirq->ptdev; \ > + enum panthor_irq_state state; \ > irqreturn_t ret = IRQ_NONE; \ > \ > + atomic_cmpxchg(&pirq->state, PANTHOR_IRQ_STATE_ACTIVE, PANTHOR_IRQ_STATE_PROCESSING); \ I think this is better done in the raw_handler (see above). 
> + \ > while (true) { \ > u32 status = gpu_read(ptdev, __reg_prefix ## _INT_RAWSTAT) & pirq->mask; \ > \ > @@ -435,8 +451,11 @@ static irqreturn_t panthor_ ## __name ## _irq_threaded_handler(int irq, void *da > ret = IRQ_HANDLED; \ > } \ > \ > - if (!atomic_read(&pirq->suspended)) \ > + state = atomic_read(&pirq->state); \ > + if (state != PANTHOR_IRQ_STATE_SUSPENDED && state != PANTHOR_IRQ_STATE_SUSPENDING) { \ > gpu_write(ptdev, __reg_prefix ## _INT_MASK, pirq->mask); \ > + atomic_set(&pirq->state, PANTHOR_IRQ_STATE_ACTIVE); \ > + } \ Similarly, I think I'd go for old_state = atomic_cmpxchg(&pirq->state, PANTHOR_IRQ_STATE_PROCESSING, PANTHOR_IRQ_STATE_ACTIVE); if (old_state == PANTHOR_IRQ_STATE_PROCESSING) gpu_write(ptdev, __reg_prefix ## _INT_MASK, pirq->mask); > \ > return ret; \ > } \ > @@ -445,14 +464,15 @@ static inline void panthor_ ## __name ## _irq_suspend(struct panthor_irq *pirq) > { \ > pirq->mask = 0; \ > gpu_write(pirq->ptdev, __reg_prefix ## _INT_MASK, 0); \ > + atomic_set(&pirq->state, PANTHOR_IRQ_STATE_SUSPENDING); \ > synchronize_irq(pirq->irq); \ > - atomic_set(&pirq->suspended, true); \ > + atomic_set(&pirq->state, PANTHOR_IRQ_STATE_SUSPENDED); \ > } \ > \ > static inline void panthor_ ## __name ## _irq_resume(struct panthor_irq *pirq, u32 mask) \ > { \ > - atomic_set(&pirq->suspended, false); \ > pirq->mask = mask; \ > + atomic_set(&pirq->state, PANTHOR_IRQ_STATE_ACTIVE); \ > gpu_write(pirq->ptdev, __reg_prefix ## _INT_CLEAR, mask); \ > gpu_write(pirq->ptdev, __reg_prefix ## _INT_MASK, mask); \ > } \ > ^ permalink raw reply [flat|nested] 9+ messages in thread
* [PATCH v9 2/4] drm/panthor: Extend IRQ helpers for mask modification/restoration 2026-01-15 13:58 [PATCH v9 0/4] Add a few tracepoints to panthor Nicolas Frattaroli 2026-01-15 13:58 ` [PATCH v9 1/4] drm/panthor: Rework panthor_irq::suspended into panthor_irq::state Nicolas Frattaroli @ 2026-01-15 13:59 ` Nicolas Frattaroli 2026-01-15 15:17 ` Boris Brezillon 2026-01-15 13:59 ` [PATCH v9 3/4] drm/panthor: Add tracepoint for hardware utilisation changes Nicolas Frattaroli 2026-01-15 13:59 ` [PATCH v9 4/4] drm/panthor: Add gpu_job_irq tracepoint Nicolas Frattaroli 3 siblings, 1 reply; 9+ messages in thread From: Nicolas Frattaroli @ 2026-01-15 13:59 UTC (permalink / raw) To: Boris Brezillon, Steven Price, Liviu Dudau, Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, David Airlie, Simona Vetter, Chia-I Wu, Karunika Choo Cc: kernel, linux-kernel, dri-devel, Nicolas Frattaroli The current IRQ helpers do not guarantee mutual exclusion that covers the entire transaction from accessing the mask member and modifying the mask register. This makes it hard, if not impossible, to implement mask modification helpers that may change one of these outside the normal suspend/resume/isr code paths. Add a spinlock to struct panthor_irq that protects both the mask member and register. Acquire it in all code paths that access these, but drop it before processing the threaded handler function. Then, add the aforementioned new helpers: enable_events, and disable_events. They work by ORing and NANDing the mask bits. resume is changed to no longer have a mask passed, as pirq->mask is supposed to be the user-requested mask now, rather than a mirror of the INT_MASK register contents. Users of the resume helper are adjusted accordingly, including a rather painful refactor in panthor_mmu.c. In panthor_mmu.c, the bespoke mask modification is excised, and replaced with enable_events/disable_events in as_enable/as_disable. 
Signed-off-by: Nicolas Frattaroli <nicolas.frattaroli@collabora.com> --- drivers/gpu/drm/panthor/panthor_device.h | 74 +++++++++++++++++++++++++------- drivers/gpu/drm/panthor/panthor_fw.c | 3 +- drivers/gpu/drm/panthor/panthor_gpu.c | 2 +- drivers/gpu/drm/panthor/panthor_mmu.c | 47 ++++++++++---------- drivers/gpu/drm/panthor/panthor_pwr.c | 2 +- 5 files changed, 87 insertions(+), 41 deletions(-) diff --git a/drivers/gpu/drm/panthor/panthor_device.h b/drivers/gpu/drm/panthor/panthor_device.h index 2bf9a8434dc5..42580968adb3 100644 --- a/drivers/gpu/drm/panthor/panthor_device.h +++ b/drivers/gpu/drm/panthor/panthor_device.h @@ -84,9 +84,12 @@ struct panthor_irq { /** @irq: IRQ number. */ int irq; - /** @mask: Current mask being applied to xxx_INT_MASK. */ + /** @mask: Values to write to xxx_INT_MASK if active. */ u32 mask; + /** @mask_lock: protects modifications to _INT_MASK and @mask */ + spinlock_t mask_lock; + /** @state: one of &enum panthor_irq_state reflecting the current state. */ atomic_t state; }; @@ -422,6 +425,8 @@ static irqreturn_t panthor_ ## __name ## _irq_raw_handler(int irq, void *data) struct panthor_device *ptdev = pirq->ptdev; \ enum panthor_irq_state state; \ \ + guard(spinlock_irqsave)(&pirq->mask_lock); \ + \ state = atomic_read(&pirq->state); \ if (state == PANTHOR_IRQ_STATE_SUSPENDED || state == PANTHOR_IRQ_STATE_SUSPENDING) \ return IRQ_NONE; \ @@ -438,11 +443,16 @@ static irqreturn_t panthor_ ## __name ## _irq_threaded_handler(int irq, void *da struct panthor_device *ptdev = pirq->ptdev; \ enum panthor_irq_state state; \ irqreturn_t ret = IRQ_NONE; \ + u32 mask; \ \ - atomic_cmpxchg(&pirq->state, PANTHOR_IRQ_STATE_ACTIVE, PANTHOR_IRQ_STATE_PROCESSING); \ + scoped_guard(spinlock_irqsave, &pirq->mask_lock) { \ + mask = pirq->mask; \ + atomic_cmpxchg(&pirq->state, PANTHOR_IRQ_STATE_ACTIVE, \ + PANTHOR_IRQ_STATE_PROCESSING); \ + } \ \ while (true) { \ - u32 status = gpu_read(ptdev, __reg_prefix ## _INT_RAWSTAT) & pirq->mask; \ + u32 status = 
(gpu_read(ptdev, __reg_prefix ## _INT_RAWSTAT) & mask); \ \ if (!status) \ break; \ @@ -451,10 +461,16 @@ static irqreturn_t panthor_ ## __name ## _irq_threaded_handler(int irq, void *da ret = IRQ_HANDLED; \ } \ \ - state = atomic_read(&pirq->state); \ - if (state != PANTHOR_IRQ_STATE_SUSPENDED && state != PANTHOR_IRQ_STATE_SUSPENDING) { \ - gpu_write(ptdev, __reg_prefix ## _INT_MASK, pirq->mask); \ - atomic_set(&pirq->state, PANTHOR_IRQ_STATE_ACTIVE); \ + scoped_guard(spinlock_irqsave, &pirq->mask_lock) { \ + state = atomic_read(&pirq->state); \ + if (state != PANTHOR_IRQ_STATE_SUSPENDED && \ + state != PANTHOR_IRQ_STATE_SUSPENDING) { \ + /* Only restore the bits that were used and are still enabled */ \ + gpu_write(ptdev, __reg_prefix ## _INT_MASK, \ + gpu_read(ptdev, __reg_prefix ## _INT_MASK) | \ + (mask & pirq->mask)); \ + atomic_set(&pirq->state, PANTHOR_IRQ_STATE_ACTIVE); \ + } \ } \ \ return ret; \ @@ -462,19 +478,21 @@ static irqreturn_t panthor_ ## __name ## _irq_threaded_handler(int irq, void *da \ static inline void panthor_ ## __name ## _irq_suspend(struct panthor_irq *pirq) \ { \ - pirq->mask = 0; \ - gpu_write(pirq->ptdev, __reg_prefix ## _INT_MASK, 0); \ - atomic_set(&pirq->state, PANTHOR_IRQ_STATE_SUSPENDING); \ + scoped_guard(spinlock_irqsave, &pirq->mask_lock) { \ + gpu_write(pirq->ptdev, __reg_prefix ## _INT_MASK, 0); \ + atomic_set(&pirq->state, PANTHOR_IRQ_STATE_SUSPENDING); \ + } \ synchronize_irq(pirq->irq); \ atomic_set(&pirq->state, PANTHOR_IRQ_STATE_SUSPENDED); \ } \ \ -static inline void panthor_ ## __name ## _irq_resume(struct panthor_irq *pirq, u32 mask) \ +static inline void panthor_ ## __name ## _irq_resume(struct panthor_irq *pirq) \ { \ - pirq->mask = mask; \ + guard(spinlock_irqsave)(&pirq->mask_lock); \ + \ atomic_set(&pirq->state, PANTHOR_IRQ_STATE_ACTIVE); \ - gpu_write(pirq->ptdev, __reg_prefix ## _INT_CLEAR, mask); \ - gpu_write(pirq->ptdev, __reg_prefix ## _INT_MASK, mask); \ + gpu_write(pirq->ptdev, __reg_prefix ## 
_INT_CLEAR, pirq->mask); \ + gpu_write(pirq->ptdev, __reg_prefix ## _INT_MASK, pirq->mask); \ } \ \ static int panthor_request_ ## __name ## _irq(struct panthor_device *ptdev, \ @@ -483,13 +501,39 @@ static int panthor_request_ ## __name ## _irq(struct panthor_device *ptdev, \ { \ pirq->ptdev = ptdev; \ pirq->irq = irq; \ - panthor_ ## __name ## _irq_resume(pirq, mask); \ + pirq->mask = mask; \ + spin_lock_init(&pirq->mask_lock); \ + panthor_ ## __name ## _irq_resume(pirq); \ \ return devm_request_threaded_irq(ptdev->base.dev, irq, \ panthor_ ## __name ## _irq_raw_handler, \ panthor_ ## __name ## _irq_threaded_handler, \ IRQF_SHARED, KBUILD_MODNAME "-" # __name, \ pirq); \ +} \ + \ +static inline void panthor_ ## __name ## _irq_enable_events(struct panthor_irq *pirq, u32 mask) \ +{ \ + enum panthor_irq_state state; \ + \ + guard(spinlock_irqsave)(&pirq->mask_lock); \ + \ + state = atomic_read(&pirq->state); \ + pirq->mask |= mask; \ + if (state != PANTHOR_IRQ_STATE_SUSPENDED || state != PANTHOR_IRQ_STATE_SUSPENDING) \ + gpu_write(pirq->ptdev, __reg_prefix ## _INT_MASK, pirq->mask); \ +} \ + \ +static inline void panthor_ ## __name ## _irq_disable_events(struct panthor_irq *pirq, u32 mask)\ +{ \ + enum panthor_irq_state state; \ + \ + guard(spinlock_irqsave)(&pirq->mask_lock); \ + \ + state = atomic_read(&pirq->state); \ + pirq->mask &= ~mask; \ + if (state != PANTHOR_IRQ_STATE_SUSPENDED || state != PANTHOR_IRQ_STATE_SUSPENDING) \ + gpu_write(pirq->ptdev, __reg_prefix ## _INT_MASK, pirq->mask); \ } extern struct workqueue_struct *panthor_cleanup_wq; diff --git a/drivers/gpu/drm/panthor/panthor_fw.c b/drivers/gpu/drm/panthor/panthor_fw.c index a64ec8756bed..0e46625f7621 100644 --- a/drivers/gpu/drm/panthor/panthor_fw.c +++ b/drivers/gpu/drm/panthor/panthor_fw.c @@ -1080,7 +1080,8 @@ static int panthor_fw_start(struct panthor_device *ptdev) bool timedout = false; ptdev->fw->booted = false; - panthor_job_irq_resume(&ptdev->fw->irq, ~0); + 
panthor_job_irq_enable_events(&ptdev->fw->irq, ~0); + panthor_job_irq_resume(&ptdev->fw->irq); gpu_write(ptdev, MCU_CONTROL, MCU_CONTROL_AUTO); if (!wait_event_timeout(ptdev->fw->req_waitqueue, diff --git a/drivers/gpu/drm/panthor/panthor_gpu.c b/drivers/gpu/drm/panthor/panthor_gpu.c index 057e167468d0..9304469a711a 100644 --- a/drivers/gpu/drm/panthor/panthor_gpu.c +++ b/drivers/gpu/drm/panthor/panthor_gpu.c @@ -395,7 +395,7 @@ void panthor_gpu_suspend(struct panthor_device *ptdev) */ void panthor_gpu_resume(struct panthor_device *ptdev) { - panthor_gpu_irq_resume(&ptdev->gpu->irq, GPU_INTERRUPTS_MASK); + panthor_gpu_irq_resume(&ptdev->gpu->irq); panthor_hw_l2_power_on(ptdev); } diff --git a/drivers/gpu/drm/panthor/panthor_mmu.c b/drivers/gpu/drm/panthor/panthor_mmu.c index 198d59f42578..a1b7917a31b1 100644 --- a/drivers/gpu/drm/panthor/panthor_mmu.c +++ b/drivers/gpu/drm/panthor/panthor_mmu.c @@ -562,9 +562,21 @@ static u64 pack_region_range(struct panthor_device *ptdev, u64 *region_start, u6 return region_width | *region_start; } +static u32 panthor_mmu_as_fault_mask(struct panthor_device *ptdev, u32 as) +{ + return BIT(as); +} + +/* Forward declaration to call helpers within as_enable/disable */ +static void panthor_mmu_irq_handler(struct panthor_device *ptdev, u32 status); +PANTHOR_IRQ_HANDLER(mmu, MMU, panthor_mmu_irq_handler); + static int panthor_mmu_as_enable(struct panthor_device *ptdev, u32 as_nr, u64 transtab, u64 transcfg, u64 memattr) { + panthor_mmu_irq_enable_events(&ptdev->mmu->irq, + panthor_mmu_as_fault_mask(ptdev, as_nr)); + gpu_write64(ptdev, AS_TRANSTAB(as_nr), transtab); gpu_write64(ptdev, AS_MEMATTR(as_nr), memattr); gpu_write64(ptdev, AS_TRANSCFG(as_nr), transcfg); @@ -580,6 +592,9 @@ static int panthor_mmu_as_disable(struct panthor_device *ptdev, u32 as_nr, lockdep_assert_held(&ptdev->mmu->as.slots_lock); + panthor_mmu_irq_disable_events(&ptdev->mmu->irq, + panthor_mmu_as_fault_mask(ptdev, as_nr)); + /* Flush+invalidate RW caches, 
invalidate RO ones. */ ret = panthor_gpu_flush_caches(ptdev, CACHE_CLEAN | CACHE_INV, CACHE_CLEAN | CACHE_INV, CACHE_INV); @@ -612,11 +627,6 @@ static u32 panthor_mmu_fault_mask(struct panthor_device *ptdev, u32 value) return value & GENMASK(15, 0); } -static u32 panthor_mmu_as_fault_mask(struct panthor_device *ptdev, u32 as) -{ - return BIT(as); -} - /** * panthor_vm_has_unhandled_faults() - Check if a VM has unhandled faults * @vm: VM to check. @@ -670,6 +680,7 @@ int panthor_vm_active(struct panthor_vm *vm) struct io_pgtable_cfg *cfg = &io_pgtable_ops_to_pgtable(vm->pgtbl_ops)->cfg; int ret = 0, as, cookie; u64 transtab, transcfg; + u32 fault_mask; if (!drm_dev_enter(&ptdev->base, &cookie)) return -ENODEV; @@ -743,14 +754,13 @@ int panthor_vm_active(struct panthor_vm *vm) /* If the VM is re-activated, we clear the fault. */ vm->unhandled_fault = false; - /* Unhandled pagefault on this AS, clear the fault and re-enable interrupts - * before enabling the AS. + /* Unhandled pagefault on this AS, clear the fault and enable the AS, + * which re-enables interrupts. 
*/ - if (ptdev->mmu->as.faulty_mask & panthor_mmu_as_fault_mask(ptdev, as)) { - gpu_write(ptdev, MMU_INT_CLEAR, panthor_mmu_as_fault_mask(ptdev, as)); - ptdev->mmu->as.faulty_mask &= ~panthor_mmu_as_fault_mask(ptdev, as); - ptdev->mmu->irq.mask |= panthor_mmu_as_fault_mask(ptdev, as); - gpu_write(ptdev, MMU_INT_MASK, ~ptdev->mmu->as.faulty_mask); + fault_mask = panthor_mmu_as_fault_mask(ptdev, as); + if (ptdev->mmu->as.faulty_mask & fault_mask) { + gpu_write(ptdev, MMU_INT_CLEAR, fault_mask); + ptdev->mmu->as.faulty_mask &= ~fault_mask; } /* The VM update is guarded by ::op_lock, which we take at the beginning @@ -1698,7 +1708,6 @@ static void panthor_mmu_irq_handler(struct panthor_device *ptdev, u32 status) while (status) { u32 as = ffs(status | (status >> 16)) - 1; u32 mask = panthor_mmu_as_fault_mask(ptdev, as); - u32 new_int_mask; u64 addr; u32 fault_status; u32 exception_type; @@ -1716,8 +1725,6 @@ static void panthor_mmu_irq_handler(struct panthor_device *ptdev, u32 status) mutex_lock(&ptdev->mmu->as.slots_lock); ptdev->mmu->as.faulty_mask |= mask; - new_int_mask = - panthor_mmu_fault_mask(ptdev, ~ptdev->mmu->as.faulty_mask); /* terminal fault, print info about the fault */ drm_err(&ptdev->base, @@ -1741,11 +1748,6 @@ static void panthor_mmu_irq_handler(struct panthor_device *ptdev, u32 status) */ gpu_write(ptdev, MMU_INT_CLEAR, mask); - /* Ignore MMU interrupts on this AS until it's been - * re-enabled. 
- */ - ptdev->mmu->irq.mask = new_int_mask; - if (ptdev->mmu->as.slots[as].vm) ptdev->mmu->as.slots[as].vm->unhandled_fault = true; @@ -1760,7 +1762,6 @@ static void panthor_mmu_irq_handler(struct panthor_device *ptdev, u32 status) if (has_unhandled_faults) panthor_sched_report_mmu_fault(ptdev); } -PANTHOR_IRQ_HANDLER(mmu, MMU, panthor_mmu_irq_handler); /** * panthor_mmu_suspend() - Suspend the MMU logic @@ -1805,7 +1806,7 @@ void panthor_mmu_resume(struct panthor_device *ptdev) ptdev->mmu->as.faulty_mask = 0; mutex_unlock(&ptdev->mmu->as.slots_lock); - panthor_mmu_irq_resume(&ptdev->mmu->irq, panthor_mmu_fault_mask(ptdev, ~0)); + panthor_mmu_irq_resume(&ptdev->mmu->irq); } /** @@ -1859,7 +1860,7 @@ void panthor_mmu_post_reset(struct panthor_device *ptdev) mutex_unlock(&ptdev->mmu->as.slots_lock); - panthor_mmu_irq_resume(&ptdev->mmu->irq, panthor_mmu_fault_mask(ptdev, ~0)); + panthor_mmu_irq_resume(&ptdev->mmu->irq); /* Restart the VM_BIND queues. */ mutex_lock(&ptdev->mmu->vm.lock); diff --git a/drivers/gpu/drm/panthor/panthor_pwr.c b/drivers/gpu/drm/panthor/panthor_pwr.c index 57cfc7ce715b..ed3b2b4479ca 100644 --- a/drivers/gpu/drm/panthor/panthor_pwr.c +++ b/drivers/gpu/drm/panthor/panthor_pwr.c @@ -545,5 +545,5 @@ void panthor_pwr_resume(struct panthor_device *ptdev) if (!ptdev->pwr) return; - panthor_pwr_irq_resume(&ptdev->pwr->irq, PWR_INTERRUPTS_MASK); + panthor_pwr_irq_resume(&ptdev->pwr->irq); } -- 2.52.0 ^ permalink raw reply related [flat|nested] 9+ messages in thread
* Re: [PATCH v9 2/4] drm/panthor: Extend IRQ helpers for mask modification/restoration 2026-01-15 13:59 ` [PATCH v9 2/4] drm/panthor: Extend IRQ helpers for mask modification/restoration Nicolas Frattaroli @ 2026-01-15 15:17 ` Boris Brezillon 2026-01-15 20:05 ` Nicolas Frattaroli 0 siblings, 1 reply; 9+ messages in thread From: Boris Brezillon @ 2026-01-15 15:17 UTC (permalink / raw) To: Nicolas Frattaroli Cc: Steven Price, Liviu Dudau, Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, David Airlie, Simona Vetter, Chia-I Wu, Karunika Choo, kernel, linux-kernel, dri-devel On Thu, 15 Jan 2026 14:59:00 +0100 Nicolas Frattaroli <nicolas.frattaroli@collabora.com> wrote: > The current IRQ helpers do not guarantee mutual exclusion that covers > the entire transaction from accessing the mask member and modifying the > mask register. > > This makes it hard, if not impossible, to implement mask modification > helpers that may change one of these outside the normal > suspend/resume/isr code paths. > > Add a spinlock to struct panthor_irq that protects both the mask member > and register. Acquire it in all code paths that access these, but drop > it before processing the threaded handler function. Then, add the > aforementioned new helpers: enable_events, and disable_events. They work > by ORing and NANDing the mask bits. > > resume is changed to no longer have a mask passed, as pirq->mask is > supposed to be the user-requested mask now, rather than a mirror of the > INT_MASK register contents. Users of the resume helper are adjusted > accordingly, including a rather painful refactor in panthor_mmu.c. > > In panthor_mmu.c, the bespoke mask modification is excised, and replaced > with enable_events/disable_events in as_enable/as_disable. 
> > Signed-off-by: Nicolas Frattaroli <nicolas.frattaroli@collabora.com> > --- > drivers/gpu/drm/panthor/panthor_device.h | 74 +++++++++++++++++++++++++------- > drivers/gpu/drm/panthor/panthor_fw.c | 3 +- > drivers/gpu/drm/panthor/panthor_gpu.c | 2 +- > drivers/gpu/drm/panthor/panthor_mmu.c | 47 ++++++++++---------- > drivers/gpu/drm/panthor/panthor_pwr.c | 2 +- > 5 files changed, 87 insertions(+), 41 deletions(-) > > diff --git a/drivers/gpu/drm/panthor/panthor_device.h b/drivers/gpu/drm/panthor/panthor_device.h > index 2bf9a8434dc5..42580968adb3 100644 > --- a/drivers/gpu/drm/panthor/panthor_device.h > +++ b/drivers/gpu/drm/panthor/panthor_device.h > @@ -84,9 +84,12 @@ struct panthor_irq { > /** @irq: IRQ number. */ > int irq; > > - /** @mask: Current mask being applied to xxx_INT_MASK. */ > + /** @mask: Values to write to xxx_INT_MASK if active. */ > u32 mask; > > + /** @mask_lock: protects modifications to _INT_MASK and @mask */ > + spinlock_t mask_lock; > + > /** @state: one of &enum panthor_irq_state reflecting the current state. */ > atomic_t state; > }; > @@ -422,6 +425,8 @@ static irqreturn_t panthor_ ## __name ## _irq_raw_handler(int irq, void *data) > struct panthor_device *ptdev = pirq->ptdev; \ > enum panthor_irq_state state; \ > \ > + guard(spinlock_irqsave)(&pirq->mask_lock); \ That's me being old school, but I prefer to have a scoped_guard() around the INT_MASK write happening at the end to reflect exactly the data you're protecting against concurrent access. Unless mask_lock also covers the state changes, in which case the documentation should reflect that. Also, _irqsave() in an interrupt context is redundant, I think, but it's less of an issue, I guess. 
> + \ > state = atomic_read(&pirq->state); \ > if (state == PANTHOR_IRQ_STATE_SUSPENDED || state == PANTHOR_IRQ_STATE_SUSPENDING) \ > return IRQ_NONE; \ > @@ -438,11 +443,16 @@ static irqreturn_t panthor_ ## __name ## _irq_threaded_handler(int irq, void *da > struct panthor_device *ptdev = pirq->ptdev; \ > enum panthor_irq_state state; \ > irqreturn_t ret = IRQ_NONE; \ > + u32 mask; \ > \ > - atomic_cmpxchg(&pirq->state, PANTHOR_IRQ_STATE_ACTIVE, PANTHOR_IRQ_STATE_PROCESSING); \ > + scoped_guard(spinlock_irqsave, &pirq->mask_lock) { \ > + mask = pirq->mask; \ > + atomic_cmpxchg(&pirq->state, PANTHOR_IRQ_STATE_ACTIVE, \ > + PANTHOR_IRQ_STATE_PROCESSING); \ Okay, it seems like it's the <mask,state> pair the lock protects. There's probably a good reason which I'm missing, so it's probably worth adding a comment somewhere to explain the locking scheme. > + } \ > \ > while (true) { \ > - u32 status = gpu_read(ptdev, __reg_prefix ## _INT_RAWSTAT) & pirq->mask; \ > + u32 status = (gpu_read(ptdev, __reg_prefix ## _INT_RAWSTAT) & mask); \ > \ > if (!status) \ > break; \ > @@ -451,10 +461,16 @@ static irqreturn_t panthor_ ## __name ## _irq_threaded_handler(int irq, void *da > ret = IRQ_HANDLED; \ > } \ > \ > - state = atomic_read(&pirq->state); \ > - if (state != PANTHOR_IRQ_STATE_SUSPENDED && state != PANTHOR_IRQ_STATE_SUSPENDING) { \ > - gpu_write(ptdev, __reg_prefix ## _INT_MASK, pirq->mask); \ > - atomic_set(&pirq->state, PANTHOR_IRQ_STATE_ACTIVE); \ > + scoped_guard(spinlock_irqsave, &pirq->mask_lock) { \ > + state = atomic_read(&pirq->state); \ > + if (state != PANTHOR_IRQ_STATE_SUSPENDED && \ > + state != PANTHOR_IRQ_STATE_SUSPENDING) { \ > + /* Only restore the bits that were used and are still enabled */ \ > + gpu_write(ptdev, __reg_prefix ## _INT_MASK, \ > + gpu_read(ptdev, __reg_prefix ## _INT_MASK) | \ > + (mask & pirq->mask)); \ > + atomic_set(&pirq->state, PANTHOR_IRQ_STATE_ACTIVE); \ > + } \ > } \ > \ > return ret; \ > @@ -462,19 +478,21 @@ static irqreturn_t 
panthor_ ## __name ## _irq_threaded_handler(int irq, void *da > \ > static inline void panthor_ ## __name ## _irq_suspend(struct panthor_irq *pirq) \ > { \ > - pirq->mask = 0; \ > - gpu_write(pirq->ptdev, __reg_prefix ## _INT_MASK, 0); \ > - atomic_set(&pirq->state, PANTHOR_IRQ_STATE_SUSPENDING); \ > + scoped_guard(spinlock_irqsave, &pirq->mask_lock) { \ > + gpu_write(pirq->ptdev, __reg_prefix ## _INT_MASK, 0); \ > + atomic_set(&pirq->state, PANTHOR_IRQ_STATE_SUSPENDING); \ > + } \ > synchronize_irq(pirq->irq); \ > atomic_set(&pirq->state, PANTHOR_IRQ_STATE_SUSPENDED); \ > } \ > \ > -static inline void panthor_ ## __name ## _irq_resume(struct panthor_irq *pirq, u32 mask) \ > +static inline void panthor_ ## __name ## _irq_resume(struct panthor_irq *pirq) \ > { \ > - pirq->mask = mask; \ > + guard(spinlock_irqsave)(&pirq->mask_lock); \ > + \ > atomic_set(&pirq->state, PANTHOR_IRQ_STATE_ACTIVE); \ > - gpu_write(pirq->ptdev, __reg_prefix ## _INT_CLEAR, mask); \ > - gpu_write(pirq->ptdev, __reg_prefix ## _INT_MASK, mask); \ > + gpu_write(pirq->ptdev, __reg_prefix ## _INT_CLEAR, pirq->mask); \ > + gpu_write(pirq->ptdev, __reg_prefix ## _INT_MASK, pirq->mask); \ > } \ > \ > static int panthor_request_ ## __name ## _irq(struct panthor_device *ptdev, \ > @@ -483,13 +501,39 @@ static int panthor_request_ ## __name ## _irq(struct panthor_device *ptdev, \ > { \ > pirq->ptdev = ptdev; \ > pirq->irq = irq; \ > - panthor_ ## __name ## _irq_resume(pirq, mask); \ > + pirq->mask = mask; \ > + spin_lock_init(&pirq->mask_lock); \ > + panthor_ ## __name ## _irq_resume(pirq); \ > \ > return devm_request_threaded_irq(ptdev->base.dev, irq, \ > panthor_ ## __name ## _irq_raw_handler, \ > panthor_ ## __name ## _irq_threaded_handler, \ > IRQF_SHARED, KBUILD_MODNAME "-" # __name, \ > pirq); \ > +} \ > + \ > +static inline void panthor_ ## __name ## _irq_enable_events(struct panthor_irq *pirq, u32 mask) \ > +{ \ > + enum panthor_irq_state state; \ > + \ > + 
guard(spinlock_irqsave)(&pirq->mask_lock); \ > + \ > + state = atomic_read(&pirq->state); \ > + pirq->mask |= mask; \ > + if (state != PANTHOR_IRQ_STATE_SUSPENDED || state != PANTHOR_IRQ_STATE_SUSPENDING) \ > + gpu_write(pirq->ptdev, __reg_prefix ## _INT_MASK, pirq->mask); \ > +} \ > + \ > +static inline void panthor_ ## __name ## _irq_disable_events(struct panthor_irq *pirq, u32 mask)\ > +{ \ > + enum panthor_irq_state state; \ > + \ > + guard(spinlock_irqsave)(&pirq->mask_lock); \ > + \ > + state = atomic_read(&pirq->state); \ > + pirq->mask &= ~mask; \ > + if (state != PANTHOR_IRQ_STATE_SUSPENDED || state != PANTHOR_IRQ_STATE_SUSPENDING) \ > + gpu_write(pirq->ptdev, __reg_prefix ## _INT_MASK, pirq->mask); \ > } > > extern struct workqueue_struct *panthor_cleanup_wq; > diff --git a/drivers/gpu/drm/panthor/panthor_fw.c b/drivers/gpu/drm/panthor/panthor_fw.c > index a64ec8756bed..0e46625f7621 100644 > --- a/drivers/gpu/drm/panthor/panthor_fw.c > +++ b/drivers/gpu/drm/panthor/panthor_fw.c > @@ -1080,7 +1080,8 @@ static int panthor_fw_start(struct panthor_device *ptdev) > bool timedout = false; > > ptdev->fw->booted = false; > - panthor_job_irq_resume(&ptdev->fw->irq, ~0); > + panthor_job_irq_enable_events(&ptdev->fw->irq, ~0); We don't change the mask after initialization AFAICT, so I'm not sure this _enable_events() is needed. 
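[Editor's note: the mask-restore expression this patch adds to the threaded handler, `gpu_read(INT_MASK) | (mask & pirq->mask)`, is worth unpacking: bits from the handler's snapshot come back only if they are still requested, while bits enabled concurrently (already present in the register) survive untouched. A sketch of just that computation; `restore_int_mask` and the parameter names are hypothetical:]

```c
#include <assert.h>
#include <stdint.h>

/* hw_mask: current INT_MASK contents (may include concurrently enabled bits)
 * snapshot: pirq->mask as sampled at the start of the threaded handler
 * current_mask: pirq->mask now, at restore time */
static uint32_t restore_int_mask(uint32_t hw_mask, uint32_t snapshot,
				 uint32_t current_mask)
{
	return hw_mask | (snapshot & current_mask);
}
```

The three cases of interest: a bit disabled while the handler ran stays off, a bit enabled concurrently stays on, and an unchanged mask is restored wholesale.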
> + panthor_job_irq_resume(&ptdev->fw->irq); > gpu_write(ptdev, MCU_CONTROL, MCU_CONTROL_AUTO); > > if (!wait_event_timeout(ptdev->fw->req_waitqueue, > diff --git a/drivers/gpu/drm/panthor/panthor_gpu.c b/drivers/gpu/drm/panthor/panthor_gpu.c > index 057e167468d0..9304469a711a 100644 > --- a/drivers/gpu/drm/panthor/panthor_gpu.c > +++ b/drivers/gpu/drm/panthor/panthor_gpu.c > @@ -395,7 +395,7 @@ void panthor_gpu_suspend(struct panthor_device *ptdev) > */ > void panthor_gpu_resume(struct panthor_device *ptdev) > { > - panthor_gpu_irq_resume(&ptdev->gpu->irq, GPU_INTERRUPTS_MASK); > + panthor_gpu_irq_resume(&ptdev->gpu->irq); > panthor_hw_l2_power_on(ptdev); > } > > diff --git a/drivers/gpu/drm/panthor/panthor_mmu.c b/drivers/gpu/drm/panthor/panthor_mmu.c > index 198d59f42578..a1b7917a31b1 100644 > --- a/drivers/gpu/drm/panthor/panthor_mmu.c > +++ b/drivers/gpu/drm/panthor/panthor_mmu.c > @@ -562,9 +562,21 @@ static u64 pack_region_range(struct panthor_device *ptdev, u64 *region_start, u6 > return region_width | *region_start; > } > > +static u32 panthor_mmu_as_fault_mask(struct panthor_device *ptdev, u32 as) > +{ > + return BIT(as); > +} > + > +/* Forward declaration to call helpers within as_enable/disable */ > +static void panthor_mmu_irq_handler(struct panthor_device *ptdev, u32 status); > +PANTHOR_IRQ_HANDLER(mmu, MMU, panthor_mmu_irq_handler); > + > static int panthor_mmu_as_enable(struct panthor_device *ptdev, u32 as_nr, > u64 transtab, u64 transcfg, u64 memattr) > { > + panthor_mmu_irq_enable_events(&ptdev->mmu->irq, > + panthor_mmu_as_fault_mask(ptdev, as_nr)); > + > gpu_write64(ptdev, AS_TRANSTAB(as_nr), transtab); > gpu_write64(ptdev, AS_MEMATTR(as_nr), memattr); > gpu_write64(ptdev, AS_TRANSCFG(as_nr), transcfg); > @@ -580,6 +592,9 @@ static int panthor_mmu_as_disable(struct panthor_device *ptdev, u32 as_nr, > > lockdep_assert_held(&ptdev->mmu->as.slots_lock); > > + panthor_mmu_irq_disable_events(&ptdev->mmu->irq, > + panthor_mmu_as_fault_mask(ptdev, 
as_nr)); I'd move that at the end, when everything else in this function succeeded. > + > /* Flush+invalidate RW caches, invalidate RO ones. */ > ret = panthor_gpu_flush_caches(ptdev, CACHE_CLEAN | CACHE_INV, > CACHE_CLEAN | CACHE_INV, CACHE_INV); > @@ -612,11 +627,6 @@ static u32 panthor_mmu_fault_mask(struct panthor_device *ptdev, u32 value) > return value & GENMASK(15, 0); > } > > -static u32 panthor_mmu_as_fault_mask(struct panthor_device *ptdev, u32 as) > -{ > - return BIT(as); > -} > - > /** > * panthor_vm_has_unhandled_faults() - Check if a VM has unhandled faults > * @vm: VM to check. > @@ -670,6 +680,7 @@ int panthor_vm_active(struct panthor_vm *vm) > struct io_pgtable_cfg *cfg = &io_pgtable_ops_to_pgtable(vm->pgtbl_ops)->cfg; > int ret = 0, as, cookie; > u64 transtab, transcfg; > + u32 fault_mask; > > if (!drm_dev_enter(&ptdev->base, &cookie)) > return -ENODEV; > @@ -743,14 +754,13 @@ int panthor_vm_active(struct panthor_vm *vm) > /* If the VM is re-activated, we clear the fault. */ > vm->unhandled_fault = false; > > - /* Unhandled pagefault on this AS, clear the fault and re-enable interrupts > - * before enabling the AS. > + /* Unhandled pagefault on this AS, clear the fault and enable the AS, > + * which re-enables interrupts. 
> */ > - if (ptdev->mmu->as.faulty_mask & panthor_mmu_as_fault_mask(ptdev, as)) { > - gpu_write(ptdev, MMU_INT_CLEAR, panthor_mmu_as_fault_mask(ptdev, as)); > - ptdev->mmu->as.faulty_mask &= ~panthor_mmu_as_fault_mask(ptdev, as); > - ptdev->mmu->irq.mask |= panthor_mmu_as_fault_mask(ptdev, as); > - gpu_write(ptdev, MMU_INT_MASK, ~ptdev->mmu->as.faulty_mask); > + fault_mask = panthor_mmu_as_fault_mask(ptdev, as); > + if (ptdev->mmu->as.faulty_mask & fault_mask) { > + gpu_write(ptdev, MMU_INT_CLEAR, fault_mask); > + ptdev->mmu->as.faulty_mask &= ~fault_mask; > } > > /* The VM update is guarded by ::op_lock, which we take at the beginning > @@ -1698,7 +1708,6 @@ static void panthor_mmu_irq_handler(struct panthor_device *ptdev, u32 status) > while (status) { > u32 as = ffs(status | (status >> 16)) - 1; > u32 mask = panthor_mmu_as_fault_mask(ptdev, as); > - u32 new_int_mask; > u64 addr; > u32 fault_status; > u32 exception_type; > @@ -1716,8 +1725,6 @@ static void panthor_mmu_irq_handler(struct panthor_device *ptdev, u32 status) > mutex_lock(&ptdev->mmu->as.slots_lock); > > ptdev->mmu->as.faulty_mask |= mask; > - new_int_mask = > - panthor_mmu_fault_mask(ptdev, ~ptdev->mmu->as.faulty_mask); > > /* terminal fault, print info about the fault */ > drm_err(&ptdev->base, > @@ -1741,11 +1748,6 @@ static void panthor_mmu_irq_handler(struct panthor_device *ptdev, u32 status) > */ > gpu_write(ptdev, MMU_INT_CLEAR, mask); > > - /* Ignore MMU interrupts on this AS until it's been > - * re-enabled. > - */ > - ptdev->mmu->irq.mask = new_int_mask; > - I guess we need a _diable_events(mask) to replace this. 
> if (ptdev->mmu->as.slots[as].vm) > ptdev->mmu->as.slots[as].vm->unhandled_fault = true; > > @@ -1760,7 +1762,6 @@ static void panthor_mmu_irq_handler(struct panthor_device *ptdev, u32 status) > if (has_unhandled_faults) > panthor_sched_report_mmu_fault(ptdev); > } > -PANTHOR_IRQ_HANDLER(mmu, MMU, panthor_mmu_irq_handler); > > /** > * panthor_mmu_suspend() - Suspend the MMU logic > @@ -1805,7 +1806,7 @@ void panthor_mmu_resume(struct panthor_device *ptdev) > ptdev->mmu->as.faulty_mask = 0; > mutex_unlock(&ptdev->mmu->as.slots_lock); > > - panthor_mmu_irq_resume(&ptdev->mmu->irq, panthor_mmu_fault_mask(ptdev, ~0)); > + panthor_mmu_irq_resume(&ptdev->mmu->irq); > } > > /** > @@ -1859,7 +1860,7 @@ void panthor_mmu_post_reset(struct panthor_device *ptdev) > > mutex_unlock(&ptdev->mmu->as.slots_lock); > > - panthor_mmu_irq_resume(&ptdev->mmu->irq, panthor_mmu_fault_mask(ptdev, ~0)); > + panthor_mmu_irq_resume(&ptdev->mmu->irq); > > /* Restart the VM_BIND queues. */ > mutex_lock(&ptdev->mmu->vm.lock); > diff --git a/drivers/gpu/drm/panthor/panthor_pwr.c b/drivers/gpu/drm/panthor/panthor_pwr.c > index 57cfc7ce715b..ed3b2b4479ca 100644 > --- a/drivers/gpu/drm/panthor/panthor_pwr.c > +++ b/drivers/gpu/drm/panthor/panthor_pwr.c > @@ -545,5 +545,5 @@ void panthor_pwr_resume(struct panthor_device *ptdev) > if (!ptdev->pwr) > return; > > - panthor_pwr_irq_resume(&ptdev->pwr->irq, PWR_INTERRUPTS_MASK); > + panthor_pwr_irq_resume(&ptdev->pwr->irq); > } > ^ permalink raw reply [flat|nested] 9+ messages in thread
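[Editor's note: the placement question Boris raises for panthor_mmu_as_disable() — disable the AS's fault events before or after the cache flush — only shows a difference on the error path, which Nicolas's follow-up in the next message ties to the pre-series behaviour. A userspace sketch of the ordering as posted, with the patch's disable-first placement; all names and the -EIO stand-in are assumptions:]

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

static uint32_t event_mask = 0xffff;	/* stand-in for pirq->mask */

static int flush_caches(bool fail)
{
	return fail ? -5 /* -EIO */ : 0;
}

static int model_as_disable(uint32_t as, bool flush_fails)
{
	event_mask &= ~(1u << as);	/* disable_events() first, as in the patch */

	int ret = flush_caches(flush_fails);
	if (ret)
		return ret;		/* fault events for this AS are already masked */

	/* AS_COMMAND disable + wait for readiness would follow here */
	return 0;
}
```

With disable-first, a failed flush still leaves the AS's fault bit masked; moving the call after the flush would leave it enabled on that path, which is the functional difference the two replies are weighing.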
* Re: [PATCH v9 2/4] drm/panthor: Extend IRQ helpers for mask modification/restoration 2026-01-15 15:17 ` Boris Brezillon @ 2026-01-15 20:05 ` Nicolas Frattaroli 2026-01-16 8:38 ` Boris Brezillon 0 siblings, 1 reply; 9+ messages in thread From: Nicolas Frattaroli @ 2026-01-15 20:05 UTC (permalink / raw) To: Boris Brezillon Cc: Steven Price, Liviu Dudau, Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, David Airlie, Simona Vetter, Chia-I Wu, Karunika Choo, kernel, linux-kernel, dri-devel On Thursday, 15 January 2026 16:17:57 Central European Standard Time Boris Brezillon wrote: > On Thu, 15 Jan 2026 14:59:00 +0100 > Nicolas Frattaroli <nicolas.frattaroli@collabora.com> wrote: > > > The current IRQ helpers do not guarantee mutual exclusion that covers > > the entire transaction from accessing the mask member and modifying the > > mask register. > > > > This makes it hard, if not impossible, to implement mask modification > > helpers that may change one of these outside the normal > > suspend/resume/isr code paths. > > > > Add a spinlock to struct panthor_irq that protects both the mask member > > and register. Acquire it in all code paths that access these, but drop > > it before processing the threaded handler function. Then, add the > > aforementioned new helpers: enable_events, and disable_events. They work > > by ORing and NANDing the mask bits. > > > > resume is changed to no longer have a mask passed, as pirq->mask is > > supposed to be the user-requested mask now, rather than a mirror of the > > INT_MASK register contents. Users of the resume helper are adjusted > > accordingly, including a rather painful refactor in panthor_mmu.c. > > > > In panthor_mmu.c, the bespoke mask modification is excised, and replaced > > with enable_events/disable_events in as_enable/as_disable. 
> > > > Signed-off-by: Nicolas Frattaroli <nicolas.frattaroli@collabora.com> > > --- > > drivers/gpu/drm/panthor/panthor_device.h | 74 +++++++++++++++++++++++++------- > > drivers/gpu/drm/panthor/panthor_fw.c | 3 +- > > drivers/gpu/drm/panthor/panthor_gpu.c | 2 +- > > drivers/gpu/drm/panthor/panthor_mmu.c | 47 ++++++++++---------- > > drivers/gpu/drm/panthor/panthor_pwr.c | 2 +- > > 5 files changed, 87 insertions(+), 41 deletions(-) > > > > diff --git a/drivers/gpu/drm/panthor/panthor_device.h b/drivers/gpu/drm/panthor/panthor_device.h > > index 2bf9a8434dc5..42580968adb3 100644 > > --- a/drivers/gpu/drm/panthor/panthor_device.h > > +++ b/drivers/gpu/drm/panthor/panthor_device.h > > @@ -84,9 +84,12 @@ struct panthor_irq { > > /** @irq: IRQ number. */ > > int irq; > > > > - /** @mask: Current mask being applied to xxx_INT_MASK. */ > > + /** @mask: Values to write to xxx_INT_MASK if active. */ > > u32 mask; > > > > + /** @mask_lock: protects modifications to _INT_MASK and @mask */ > > + spinlock_t mask_lock; > > + > > /** @state: one of &enum panthor_irq_state reflecting the current state. */ > > atomic_t state; > > }; > > @@ -422,6 +425,8 @@ static irqreturn_t panthor_ ## __name ## _irq_raw_handler(int irq, void *data) > > struct panthor_device *ptdev = pirq->ptdev; \ > > enum panthor_irq_state state; \ > > \ > > + guard(spinlock_irqsave)(&pirq->mask_lock); \ > > That's me being old school, but I prefer to have a scoped_guard() > around the INT_MASK write happening at the end to reflect exactly the > data you're protecting against concurrent access. Unless mask_lock also > covers the state changes, in which case the documentation should > reflect that. > > > Also, _irqsave() in an interrupt context is redundant, I think, but > it's less of an issue, I guess. 
> > > + \ > > state = atomic_read(&pirq->state); \ > > if (state == PANTHOR_IRQ_STATE_SUSPENDED || state == PANTHOR_IRQ_STATE_SUSPENDING) \ > > return IRQ_NONE; \ > > @@ -438,11 +443,16 @@ static irqreturn_t panthor_ ## __name ## _irq_threaded_handler(int irq, void *da > > struct panthor_device *ptdev = pirq->ptdev; \ > > enum panthor_irq_state state; \ > > irqreturn_t ret = IRQ_NONE; \ > > + u32 mask; \ > > \ > > - atomic_cmpxchg(&pirq->state, PANTHOR_IRQ_STATE_ACTIVE, PANTHOR_IRQ_STATE_PROCESSING); \ > > + scoped_guard(spinlock_irqsave, &pirq->mask_lock) { \ > > + mask = pirq->mask; \ > > + atomic_cmpxchg(&pirq->state, PANTHOR_IRQ_STATE_ACTIVE, \ > > + PANTHOR_IRQ_STATE_PROCESSING); \ > > Okay, it seems like it's the <mask,state> pair the lock protects. > There's probably a good reason which I'm missing, so it's probably > worth adding a comment somewhere to explain the locking scheme. The reason why I try to do state changes inside the lock rather than keeping the critical section to a minimum is that it reduces the number of <mask,state> combinations that are visible to other code paths to a minimum. If I didn't do it here for example, I'd have to deal with the cognitive load of a situation where one thread is going through _suspend, and I need to make sure that no matter the order the state changes become visible to this thread, we don't misbehave. I'm fairly sure that example would be okay in this case, but the problem is more that I have to think about it at all. Doing a state change within the lock means that we know we're not going to spin at the lock after already proclaiming to the world that our state changed. Similarly, doing it within the lock rather than after the lock means that we won't get an inopportune interleave with something else after modifying the mask and dropping the lock but before changing the state atomic. Saying that the lock protects the state atomic also isn't 100% true because suspend writes to it outside the lock. 
It's allowed to do that as a SUSPENDING->SUSPENDED transition that isn't mutually exclusive with someone else doing mask related things is fine, as we're already done with the mask by then, and nobody will mess with it if we're SUSPENDING. > > + } \ > > \ > > while (true) { \ > > - u32 status = gpu_read(ptdev, __reg_prefix ## _INT_RAWSTAT) & pirq->mask; \ > > + u32 status = (gpu_read(ptdev, __reg_prefix ## _INT_RAWSTAT) & mask); \ > > \ > > if (!status) \ > > break; \ > > @@ -451,10 +461,16 @@ static irqreturn_t panthor_ ## __name ## _irq_threaded_handler(int irq, void *da > > ret = IRQ_HANDLED; \ > > } \ > > \ > > - state = atomic_read(&pirq->state); \ > > - if (state != PANTHOR_IRQ_STATE_SUSPENDED && state != PANTHOR_IRQ_STATE_SUSPENDING) { \ > > - gpu_write(ptdev, __reg_prefix ## _INT_MASK, pirq->mask); \ > > - atomic_set(&pirq->state, PANTHOR_IRQ_STATE_ACTIVE); \ > > + scoped_guard(spinlock_irqsave, &pirq->mask_lock) { \ > > + state = atomic_read(&pirq->state); \ > > + if (state != PANTHOR_IRQ_STATE_SUSPENDED && \ > > + state != PANTHOR_IRQ_STATE_SUSPENDING) { \ > > + /* Only restore the bits that were used and are still enabled */ \ > > + gpu_write(ptdev, __reg_prefix ## _INT_MASK, \ > > + gpu_read(ptdev, __reg_prefix ## _INT_MASK) | \ > > + (mask & pirq->mask)); \ > > + atomic_set(&pirq->state, PANTHOR_IRQ_STATE_ACTIVE); \ > > + } \ > > } \ > > \ > > return ret; \ > > @@ -462,19 +478,21 @@ static irqreturn_t panthor_ ## __name ## _irq_threaded_handler(int irq, void *da > > \ > > static inline void panthor_ ## __name ## _irq_suspend(struct panthor_irq *pirq) \ > > { \ > > - pirq->mask = 0; \ > > - gpu_write(pirq->ptdev, __reg_prefix ## _INT_MASK, 0); \ > > - atomic_set(&pirq->state, PANTHOR_IRQ_STATE_SUSPENDING); \ > > + scoped_guard(spinlock_irqsave, &pirq->mask_lock) { \ > > + gpu_write(pirq->ptdev, __reg_prefix ## _INT_MASK, 0); \ > > + atomic_set(&pirq->state, PANTHOR_IRQ_STATE_SUSPENDING); \ > > + } \ > > synchronize_irq(pirq->irq); \ > > 
atomic_set(&pirq->state, PANTHOR_IRQ_STATE_SUSPENDED); \ > > } \ > > \ > > -static inline void panthor_ ## __name ## _irq_resume(struct panthor_irq *pirq, u32 mask) \ > > +static inline void panthor_ ## __name ## _irq_resume(struct panthor_irq *pirq) \ > > { \ > > - pirq->mask = mask; \ > > + guard(spinlock_irqsave)(&pirq->mask_lock); \ > > + \ > > atomic_set(&pirq->state, PANTHOR_IRQ_STATE_ACTIVE); \ > > - gpu_write(pirq->ptdev, __reg_prefix ## _INT_CLEAR, mask); \ > > - gpu_write(pirq->ptdev, __reg_prefix ## _INT_MASK, mask); \ > > + gpu_write(pirq->ptdev, __reg_prefix ## _INT_CLEAR, pirq->mask); \ > > + gpu_write(pirq->ptdev, __reg_prefix ## _INT_MASK, pirq->mask); \ > > } \ > > \ > > static int panthor_request_ ## __name ## _irq(struct panthor_device *ptdev, \ > > @@ -483,13 +501,39 @@ static int panthor_request_ ## __name ## _irq(struct panthor_device *ptdev, \ > > { \ > > pirq->ptdev = ptdev; \ > > pirq->irq = irq; \ > > - panthor_ ## __name ## _irq_resume(pirq, mask); \ > > + pirq->mask = mask; \ > > + spin_lock_init(&pirq->mask_lock); \ > > + panthor_ ## __name ## _irq_resume(pirq); \ > > \ > > return devm_request_threaded_irq(ptdev->base.dev, irq, \ > > panthor_ ## __name ## _irq_raw_handler, \ > > panthor_ ## __name ## _irq_threaded_handler, \ > > IRQF_SHARED, KBUILD_MODNAME "-" # __name, \ > > pirq); \ > > +} \ > > + \ > > +static inline void panthor_ ## __name ## _irq_enable_events(struct panthor_irq *pirq, u32 mask) \ > > +{ \ > > + enum panthor_irq_state state; \ > > + \ > > + guard(spinlock_irqsave)(&pirq->mask_lock); \ > > + \ > > + state = atomic_read(&pirq->state); \ > > + pirq->mask |= mask; \ > > + if (state != PANTHOR_IRQ_STATE_SUSPENDED || state != PANTHOR_IRQ_STATE_SUSPENDING) \ > > + gpu_write(pirq->ptdev, __reg_prefix ## _INT_MASK, pirq->mask); \ > > +} \ > > + \ > > +static inline void panthor_ ## __name ## _irq_disable_events(struct panthor_irq *pirq, u32 mask)\ > > +{ \ > > + enum panthor_irq_state state; \ > > + \ > > + 
guard(spinlock_irqsave)(&pirq->mask_lock); \ > > + \ > > + state = atomic_read(&pirq->state); \ > > + pirq->mask &= ~mask; \ > > + if (state != PANTHOR_IRQ_STATE_SUSPENDED || state != PANTHOR_IRQ_STATE_SUSPENDING) \ > > + gpu_write(pirq->ptdev, __reg_prefix ## _INT_MASK, pirq->mask); \ > > } > > > > extern struct workqueue_struct *panthor_cleanup_wq; > > diff --git a/drivers/gpu/drm/panthor/panthor_fw.c b/drivers/gpu/drm/panthor/panthor_fw.c > > index a64ec8756bed..0e46625f7621 100644 > > --- a/drivers/gpu/drm/panthor/panthor_fw.c > > +++ b/drivers/gpu/drm/panthor/panthor_fw.c > > @@ -1080,7 +1080,8 @@ static int panthor_fw_start(struct panthor_device *ptdev) > > bool timedout = false; > > > > ptdev->fw->booted = false; > > - panthor_job_irq_resume(&ptdev->fw->irq, ~0); > > + panthor_job_irq_enable_events(&ptdev->fw->irq, ~0); > > We don't change the mask after initialization AFAICT, so I'm not sure > this _enable_events() is needed. The mask is initialized to 0. I think I can drop this, as long as it's fine that the mask bits are enabled as soon as the IRQ is requested, since the IRQ request helper resumes it. 
> > + panthor_job_irq_resume(&ptdev->fw->irq); > > gpu_write(ptdev, MCU_CONTROL, MCU_CONTROL_AUTO); > > > > if (!wait_event_timeout(ptdev->fw->req_waitqueue, > > diff --git a/drivers/gpu/drm/panthor/panthor_gpu.c b/drivers/gpu/drm/panthor/panthor_gpu.c > > index 057e167468d0..9304469a711a 100644 > > --- a/drivers/gpu/drm/panthor/panthor_gpu.c > > +++ b/drivers/gpu/drm/panthor/panthor_gpu.c > > @@ -395,7 +395,7 @@ void panthor_gpu_suspend(struct panthor_device *ptdev) > > */ > > void panthor_gpu_resume(struct panthor_device *ptdev) > > { > > - panthor_gpu_irq_resume(&ptdev->gpu->irq, GPU_INTERRUPTS_MASK); > > + panthor_gpu_irq_resume(&ptdev->gpu->irq); > > panthor_hw_l2_power_on(ptdev); > > } > > > > diff --git a/drivers/gpu/drm/panthor/panthor_mmu.c b/drivers/gpu/drm/panthor/panthor_mmu.c > > index 198d59f42578..a1b7917a31b1 100644 > > --- a/drivers/gpu/drm/panthor/panthor_mmu.c > > +++ b/drivers/gpu/drm/panthor/panthor_mmu.c > > @@ -562,9 +562,21 @@ static u64 pack_region_range(struct panthor_device *ptdev, u64 *region_start, u6 > > return region_width | *region_start; > > } > > > > +static u32 panthor_mmu_as_fault_mask(struct panthor_device *ptdev, u32 as) > > +{ > > + return BIT(as); > > +} > > + > > +/* Forward declaration to call helpers within as_enable/disable */ > > +static void panthor_mmu_irq_handler(struct panthor_device *ptdev, u32 status); > > +PANTHOR_IRQ_HANDLER(mmu, MMU, panthor_mmu_irq_handler); > > + > > static int panthor_mmu_as_enable(struct panthor_device *ptdev, u32 as_nr, > > u64 transtab, u64 transcfg, u64 memattr) > > { > > + panthor_mmu_irq_enable_events(&ptdev->mmu->irq, > > + panthor_mmu_as_fault_mask(ptdev, as_nr)); > > + > > gpu_write64(ptdev, AS_TRANSTAB(as_nr), transtab); > > gpu_write64(ptdev, AS_MEMATTR(as_nr), memattr); > > gpu_write64(ptdev, AS_TRANSCFG(as_nr), transcfg); > > @@ -580,6 +592,9 @@ static int panthor_mmu_as_disable(struct panthor_device *ptdev, u32 as_nr, > > > > lockdep_assert_held(&ptdev->mmu->as.slots_lock); > > 
> > + panthor_mmu_irq_disable_events(&ptdev->mmu->irq, > > + panthor_mmu_as_fault_mask(ptdev, as_nr)); > > I'd move that at the end, when everything else in this function > succeeded. That would be a functional change from what the behaviour was prior to moving disable_events into as_disable. > > > + > > /* Flush+invalidate RW caches, invalidate RO ones. */ > > ret = panthor_gpu_flush_caches(ptdev, CACHE_CLEAN | CACHE_INV, > > CACHE_CLEAN | CACHE_INV, CACHE_INV); > > @@ -612,11 +627,6 @@ static u32 panthor_mmu_fault_mask(struct panthor_device *ptdev, u32 value) > > return value & GENMASK(15, 0); > > } > > > > -static u32 panthor_mmu_as_fault_mask(struct panthor_device *ptdev, u32 as) > > -{ > > - return BIT(as); > > -} > > - > > /** > > * panthor_vm_has_unhandled_faults() - Check if a VM has unhandled faults > > * @vm: VM to check. > > @@ -670,6 +680,7 @@ int panthor_vm_active(struct panthor_vm *vm) > > struct io_pgtable_cfg *cfg = &io_pgtable_ops_to_pgtable(vm->pgtbl_ops)->cfg; > > int ret = 0, as, cookie; > > u64 transtab, transcfg; > > + u32 fault_mask; > > > > if (!drm_dev_enter(&ptdev->base, &cookie)) > > return -ENODEV; > > @@ -743,14 +754,13 @@ int panthor_vm_active(struct panthor_vm *vm) > > /* If the VM is re-activated, we clear the fault. */ > > vm->unhandled_fault = false; > > > > - /* Unhandled pagefault on this AS, clear the fault and re-enable interrupts > > - * before enabling the AS. > > + /* Unhandled pagefault on this AS, clear the fault and enable the AS, > > + * which re-enables interrupts. 
> > */ > > - if (ptdev->mmu->as.faulty_mask & panthor_mmu_as_fault_mask(ptdev, as)) { > > - gpu_write(ptdev, MMU_INT_CLEAR, panthor_mmu_as_fault_mask(ptdev, as)); > > - ptdev->mmu->as.faulty_mask &= ~panthor_mmu_as_fault_mask(ptdev, as); > > - ptdev->mmu->irq.mask |= panthor_mmu_as_fault_mask(ptdev, as); > > - gpu_write(ptdev, MMU_INT_MASK, ~ptdev->mmu->as.faulty_mask); > > + fault_mask = panthor_mmu_as_fault_mask(ptdev, as); > > + if (ptdev->mmu->as.faulty_mask & fault_mask) { > > + gpu_write(ptdev, MMU_INT_CLEAR, fault_mask); > > + ptdev->mmu->as.faulty_mask &= ~fault_mask; > > } > > > > /* The VM update is guarded by ::op_lock, which we take at the beginning > > @@ -1698,7 +1708,6 @@ static void panthor_mmu_irq_handler(struct panthor_device *ptdev, u32 status) > > while (status) { > > u32 as = ffs(status | (status >> 16)) - 1; > > u32 mask = panthor_mmu_as_fault_mask(ptdev, as); > > - u32 new_int_mask; > > u64 addr; > > u32 fault_status; > > u32 exception_type; > > @@ -1716,8 +1725,6 @@ static void panthor_mmu_irq_handler(struct panthor_device *ptdev, u32 status) > > mutex_lock(&ptdev->mmu->as.slots_lock); > > > > ptdev->mmu->as.faulty_mask |= mask; > > - new_int_mask = > > - panthor_mmu_fault_mask(ptdev, ~ptdev->mmu->as.faulty_mask); > > > > /* terminal fault, print info about the fault */ > > drm_err(&ptdev->base, > > @@ -1741,11 +1748,6 @@ static void panthor_mmu_irq_handler(struct panthor_device *ptdev, u32 status) > > */ > > gpu_write(ptdev, MMU_INT_CLEAR, mask); > > > > - /* Ignore MMU interrupts on this AS until it's been > > - * re-enabled. > > - */ > > - ptdev->mmu->irq.mask = new_int_mask; > > - > > I guess we need a _diable_events(mask) to replace this. Nope! It lives in as_disable now, which is called a bit further down (outside of the patch context window. Maybe I should play with b4's settings to ensure the end of functions is visible in the context). 
> > if (ptdev->mmu->as.slots[as].vm) > > ptdev->mmu->as.slots[as].vm->unhandled_fault = true; > > > > @@ -1760,7 +1762,6 @@ static void panthor_mmu_irq_handler(struct panthor_device *ptdev, u32 status) > > if (has_unhandled_faults) > > panthor_sched_report_mmu_fault(ptdev); > > } > > -PANTHOR_IRQ_HANDLER(mmu, MMU, panthor_mmu_irq_handler); > > > > /** > > * panthor_mmu_suspend() - Suspend the MMU logic > > @@ -1805,7 +1806,7 @@ void panthor_mmu_resume(struct panthor_device *ptdev) > > ptdev->mmu->as.faulty_mask = 0; > > mutex_unlock(&ptdev->mmu->as.slots_lock); > > > > - panthor_mmu_irq_resume(&ptdev->mmu->irq, panthor_mmu_fault_mask(ptdev, ~0)); > > + panthor_mmu_irq_resume(&ptdev->mmu->irq); > > } > > > > /** > > @@ -1859,7 +1860,7 @@ void panthor_mmu_post_reset(struct panthor_device *ptdev) > > > > mutex_unlock(&ptdev->mmu->as.slots_lock); > > > > - panthor_mmu_irq_resume(&ptdev->mmu->irq, panthor_mmu_fault_mask(ptdev, ~0)); > > + panthor_mmu_irq_resume(&ptdev->mmu->irq); > > > > /* Restart the VM_BIND queues. */ > > mutex_lock(&ptdev->mmu->vm.lock); > > diff --git a/drivers/gpu/drm/panthor/panthor_pwr.c b/drivers/gpu/drm/panthor/panthor_pwr.c > > index 57cfc7ce715b..ed3b2b4479ca 100644 > > --- a/drivers/gpu/drm/panthor/panthor_pwr.c > > +++ b/drivers/gpu/drm/panthor/panthor_pwr.c > > @@ -545,5 +545,5 @@ void panthor_pwr_resume(struct panthor_device *ptdev) > > if (!ptdev->pwr) > > return; > > > > - panthor_pwr_irq_resume(&ptdev->pwr->irq, PWR_INTERRUPTS_MASK); > > + panthor_pwr_irq_resume(&ptdev->pwr->irq); > > } > > > > ^ permalink raw reply [flat|nested] 9+ messages in thread
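[Editor's note: Nicolas's rationale above — performing the state transition while still holding mask_lock, so that any other lock holder only ever observes consistent <mask, state> pairs — can be exercised with a small pthread model. The invariant `state == (mask != 0)` is an illustrative stand-in for that pairing, not something the driver literally enforces, and suspend here zeroes the mask only to make the invariant checkable:]

```c
#include <assert.h>
#include <pthread.h>
#include <stdbool.h>

static pthread_mutex_t mask_lock = PTHREAD_MUTEX_INITIALIZER;
static unsigned int mask = 0xffff;
static int state = 1;		/* 1 = "active", 0 = "suspended" */

static void *suspend_resume_loop(void *arg)
{
	(void)arg;
	for (int i = 0; i < 100000; i++) {
		pthread_mutex_lock(&mask_lock);
		if (state) {		/* "suspend": both fields change under the lock */
			mask = 0;
			state = 0;
		} else {		/* "resume" */
			mask = 0xffff;
			state = 1;
		}
		pthread_mutex_unlock(&mask_lock);
	}
	return NULL;
}

static bool pairs_always_consistent(int samples)
{
	bool ok = true;

	for (int i = 0; i < samples; i++) {
		pthread_mutex_lock(&mask_lock);
		/* a half-updated <mask, state> pair is never visible here */
		ok = ok && (state == (mask != 0));
		pthread_mutex_unlock(&mask_lock);
	}
	return ok;
}
```

If the state write were moved outside the critical section, an observer could see the new mask paired with the old state between the two writes — the extra interleaving Nicolas is arguing against having to reason about.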
* Re: [PATCH v9 2/4] drm/panthor: Extend IRQ helpers for mask modification/restoration 2026-01-15 20:05 ` Nicolas Frattaroli @ 2026-01-16 8:38 ` Boris Brezillon 0 siblings, 0 replies; 9+ messages in thread From: Boris Brezillon @ 2026-01-16 8:38 UTC (permalink / raw) To: Nicolas Frattaroli Cc: Steven Price, Liviu Dudau, Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, David Airlie, Simona Vetter, Chia-I Wu, Karunika Choo, kernel, linux-kernel, dri-devel On Thu, 15 Jan 2026 21:05:54 +0100 Nicolas Frattaroli <nicolas.frattaroli@collabora.com> wrote: > On Thursday, 15 January 2026 16:17:57 Central European Standard Time Boris Brezillon wrote: > > On Thu, 15 Jan 2026 14:59:00 +0100 > > Nicolas Frattaroli <nicolas.frattaroli@collabora.com> wrote: > > > > > The current IRQ helpers do not guarantee mutual exclusion that covers > > > the entire transaction from accessing the mask member and modifying the > > > mask register. > > > > > > This makes it hard, if not impossible, to implement mask modification > > > helpers that may change one of these outside the normal > > > suspend/resume/isr code paths. > > > > > > Add a spinlock to struct panthor_irq that protects both the mask member > > > and register. Acquire it in all code paths that access these, but drop > > > it before processing the threaded handler function. Then, add the > > > aforementioned new helpers: enable_events, and disable_events. They work > > > by ORing and NANDing the mask bits. > > > > > > resume is changed to no longer have a mask passed, as pirq->mask is > > > supposed to be the user-requested mask now, rather than a mirror of the > > > INT_MASK register contents. Users of the resume helper are adjusted > > > accordingly, including a rather painful refactor in panthor_mmu.c. > > > > > > In panthor_mmu.c, the bespoke mask modification is excised, and replaced > > > with enable_events/disable_events in as_enable/as_disable. 
> > > > > > Signed-off-by: Nicolas Frattaroli <nicolas.frattaroli@collabora.com> > > > --- > > > drivers/gpu/drm/panthor/panthor_device.h | 74 +++++++++++++++++++++++++------- > > > drivers/gpu/drm/panthor/panthor_fw.c | 3 +- > > > drivers/gpu/drm/panthor/panthor_gpu.c | 2 +- > > > drivers/gpu/drm/panthor/panthor_mmu.c | 47 ++++++++++---------- > > > drivers/gpu/drm/panthor/panthor_pwr.c | 2 +- > > > 5 files changed, 87 insertions(+), 41 deletions(-) > > > > > > diff --git a/drivers/gpu/drm/panthor/panthor_device.h b/drivers/gpu/drm/panthor/panthor_device.h > > > index 2bf9a8434dc5..42580968adb3 100644 > > > --- a/drivers/gpu/drm/panthor/panthor_device.h > > > +++ b/drivers/gpu/drm/panthor/panthor_device.h > > > @@ -84,9 +84,12 @@ struct panthor_irq { > > > /** @irq: IRQ number. */ > > > int irq; > > > > > > - /** @mask: Current mask being applied to xxx_INT_MASK. */ > > > + /** @mask: Values to write to xxx_INT_MASK if active. */ > > > u32 mask; > > > > > > + /** @mask_lock: protects modifications to _INT_MASK and @mask */ > > > + spinlock_t mask_lock; > > > + > > > /** @state: one of &enum panthor_irq_state reflecting the current state. */ > > > atomic_t state; > > > }; > > > @@ -422,6 +425,8 @@ static irqreturn_t panthor_ ## __name ## _irq_raw_handler(int irq, void *data) > > > struct panthor_device *ptdev = pirq->ptdev; \ > > > enum panthor_irq_state state; \ > > > \ > > > + guard(spinlock_irqsave)(&pirq->mask_lock); \ > > > > That's me being old school, but I prefer to have a scoped_guard() > > around the INT_MASK write happening at the end to reflect exactly the > > data you're protecting against concurrent access. Unless mask_lock also > > covers the state changes, in which case the documentation should > > reflect that. > > > > > > Also, _irqsave() in an interrupt context is redundant, I think, but > > it's less of an issue, I guess. 
> > > > > + \ > > > state = atomic_read(&pirq->state); \ > > > if (state == PANTHOR_IRQ_STATE_SUSPENDED || state == PANTHOR_IRQ_STATE_SUSPENDING) \ > > > return IRQ_NONE; \ > > > @@ -438,11 +443,16 @@ static irqreturn_t panthor_ ## __name ## _irq_threaded_handler(int irq, void *da > > > struct panthor_device *ptdev = pirq->ptdev; \ > > > enum panthor_irq_state state; \ > > > irqreturn_t ret = IRQ_NONE; \ > > > + u32 mask; \ > > > \ > > > - atomic_cmpxchg(&pirq->state, PANTHOR_IRQ_STATE_ACTIVE, PANTHOR_IRQ_STATE_PROCESSING); \ > > > + scoped_guard(spinlock_irqsave, &pirq->mask_lock) { \ > > > + mask = pirq->mask; \ > > > + atomic_cmpxchg(&pirq->state, PANTHOR_IRQ_STATE_ACTIVE, \ > > > + PANTHOR_IRQ_STATE_PROCESSING); \ > > > > Okay, it seems like it's the <mask,state> pair the lock protects. > > There's probably a good reason which I'm missing, so it's probably > > worth adding a comment somewhere to explain the locking scheme. > > The reason why I try to do state changes inside the lock rather than > keeping the critical section to a minimum is that it reduces the number > of <mask,state> combinations that are visible to other code paths to > a minimum. If I didn't do it here for example, I'd have to deal with > the cognitive load of a situation where one thread is going through > _suspend, and I need to make sure that no matter the order the state > changes become visible to this thread, we don't misbehave. > > I'm fairly sure that example would be okay in this case, but the > problem is more that I have to think about it at all. Doing a state > change within the lock means that we know we're not going to spin > at the lock after already proclaiming to the world that our state > changed. Similarly, doing it within the lock rather than after the > lock means that we won't get an inopportune interleave with something > else after modifying the mask and dropping the lock but before changing > the state atomic. 
> > Saying that the lock protects the state atomic also isn't 100% true > because suspend writes to it outside the lock. It's allowed to do that > as a SUSPENDING->SUSPENDED transition that isn't mutually exclusive > with someone else doing mask related things is fine, as we're > already done with the mask by then, and nobody will mess with it > if we're SUSPENDING. Okay, makes sense. Can we document that somewhere, maybe something like this added to the mask_lock doc? + /** + * @mask_lock: protects modifications to _INT_MASK and @mask. + * + * In paths where _INT_MASK is updated based on a state + * transition/check it's crucial for the state update/check to be + * inside the locked section, otherwise it introduces a race window + * leading to potential _INT_MASK inconsistencies. + */ > > > > + } \ > > > \ > > > while (true) { \ > > > - u32 status = gpu_read(ptdev, __reg_prefix ## _INT_RAWSTAT) & pirq->mask; \ > > > + u32 status = (gpu_read(ptdev, __reg_prefix ## _INT_RAWSTAT) & mask); \ > > > \ > > > if (!status) \ > > > break; \ > > > @@ -451,10 +461,16 @@ static irqreturn_t panthor_ ## __name ## _irq_threaded_handler(int irq, void *da > > > ret = IRQ_HANDLED; \ > > > } \ > > > \ > > > - state = atomic_read(&pirq->state); \ > > > - if (state != PANTHOR_IRQ_STATE_SUSPENDED && state != PANTHOR_IRQ_STATE_SUSPENDING) { \ > > > - gpu_write(ptdev, __reg_prefix ## _INT_MASK, pirq->mask); \ > > > - atomic_set(&pirq->state, PANTHOR_IRQ_STATE_ACTIVE); \ > > > + scoped_guard(spinlock_irqsave, &pirq->mask_lock) { \ > > > + state = atomic_read(&pirq->state); \ > > > + if (state != PANTHOR_IRQ_STATE_SUSPENDED && \ > > > + state != PANTHOR_IRQ_STATE_SUSPENDING) { \ > > > + /* Only restore the bits that were used and are still enabled */ \ > > > + gpu_write(ptdev, __reg_prefix ## _INT_MASK, \ > > > + gpu_read(ptdev, __reg_prefix ## _INT_MASK) | \ > > > + (mask & pirq->mask)); \ > > > + atomic_set(&pirq->state, PANTHOR_IRQ_STATE_ACTIVE); \ > > > + } \ > > > } \ > > > \ > > > 
return ret; \ > > > @@ -462,19 +478,21 @@ static irqreturn_t panthor_ ## __name ## _irq_threaded_handler(int irq, void *da > > > \ > > > static inline void panthor_ ## __name ## _irq_suspend(struct panthor_irq *pirq) \ > > > { \ > > > - pirq->mask = 0; \ > > > - gpu_write(pirq->ptdev, __reg_prefix ## _INT_MASK, 0); \ > > > - atomic_set(&pirq->state, PANTHOR_IRQ_STATE_SUSPENDING); \ > > > + scoped_guard(spinlock_irqsave, &pirq->mask_lock) { \ > > > + gpu_write(pirq->ptdev, __reg_prefix ## _INT_MASK, 0); \ > > > + atomic_set(&pirq->state, PANTHOR_IRQ_STATE_SUSPENDING); \ > > > + } \ > > > synchronize_irq(pirq->irq); \ > > > atomic_set(&pirq->state, PANTHOR_IRQ_STATE_SUSPENDED); \ > > > } \ > > > \ > > > -static inline void panthor_ ## __name ## _irq_resume(struct panthor_irq *pirq, u32 mask) \ > > > +static inline void panthor_ ## __name ## _irq_resume(struct panthor_irq *pirq) \ > > > { \ > > > - pirq->mask = mask; \ > > > + guard(spinlock_irqsave)(&pirq->mask_lock); \ > > > + \ > > > atomic_set(&pirq->state, PANTHOR_IRQ_STATE_ACTIVE); \ > > > - gpu_write(pirq->ptdev, __reg_prefix ## _INT_CLEAR, mask); \ > > > - gpu_write(pirq->ptdev, __reg_prefix ## _INT_MASK, mask); \ > > > + gpu_write(pirq->ptdev, __reg_prefix ## _INT_CLEAR, pirq->mask); \ > > > + gpu_write(pirq->ptdev, __reg_prefix ## _INT_MASK, pirq->mask); \ > > > } \ > > > \ > > > static int panthor_request_ ## __name ## _irq(struct panthor_device *ptdev, \ > > > @@ -483,13 +501,39 @@ static int panthor_request_ ## __name ## _irq(struct panthor_device *ptdev, \ > > > { \ > > > pirq->ptdev = ptdev; \ > > > pirq->irq = irq; \ > > > - panthor_ ## __name ## _irq_resume(pirq, mask); \ > > > + pirq->mask = mask; \ > > > + spin_lock_init(&pirq->mask_lock); \ > > > + panthor_ ## __name ## _irq_resume(pirq); \ > > > \ > > > return devm_request_threaded_irq(ptdev->base.dev, irq, \ > > > panthor_ ## __name ## _irq_raw_handler, \ > > > panthor_ ## __name ## _irq_threaded_handler, \ > > > IRQF_SHARED, KBUILD_MODNAME "-" # 
__name, \ > > > pirq); \ > > > +} \ > > > + \ > > > +static inline void panthor_ ## __name ## _irq_enable_events(struct panthor_irq *pirq, u32 mask) \ > > > +{ \ > > > + enum panthor_irq_state state; \ > > > + \ > > > + guard(spinlock_irqsave)(&pirq->mask_lock); \ > > > + \ > > > + state = atomic_read(&pirq->state); \ > > > + pirq->mask |= mask; \ > > > + if (state != PANTHOR_IRQ_STATE_SUSPENDED || state != PANTHOR_IRQ_STATE_SUSPENDING) \ > > > + gpu_write(pirq->ptdev, __reg_prefix ## _INT_MASK, pirq->mask); \ Hm, I don't see a situation where _INT_MASK should be written when the state is not PANTHOR_IRQ_STATE_ACTIVE. If I'm correct, we can probably go for something like this: /* The only situation where we need to write the new mask is if the IRQ is active. \ * If it's being processed, the mask will be restored for us in _irq_threaded_handler() \ * on the PROCESSING -> ACTIVE transition. \ * If the IRQ is suspended/suspending, the mask is restored at resume time. \ */ \ if (atomic_read(&pirq->state) == PANTHOR_IRQ_STATE_ACTIVE) \ gpu_write(pirq->ptdev, __reg_prefix ## _INT_MASK, pirq->mask); \ > > > +} \ > > > + \ > > > +static inline void panthor_ ## __name ## _irq_disable_events(struct panthor_irq *pirq, u32 mask)\ > > > +{ \ > > > + enum panthor_irq_state state; \ > > > + \ > > > + guard(spinlock_irqsave)(&pirq->mask_lock); \ > > > + \ > > > + state = atomic_read(&pirq->state); \ > > > + pirq->mask &= ~mask; \ > > > + if (state != PANTHOR_IRQ_STATE_SUSPENDED || state != PANTHOR_IRQ_STATE_SUSPENDING) \ > > > + gpu_write(pirq->ptdev, __reg_prefix ## _INT_MASK, pirq->mask); \ > > > } > > > > > > extern struct workqueue_struct *panthor_cleanup_wq; > > > diff --git a/drivers/gpu/drm/panthor/panthor_fw.c b/drivers/gpu/drm/panthor/panthor_fw.c > > > index a64ec8756bed..0e46625f7621 100644 > > > --- a/drivers/gpu/drm/panthor/panthor_fw.c > > > +++ b/drivers/gpu/drm/panthor/panthor_fw.c > > > @@ -1080,7 +1080,8 @@ static int panthor_fw_start(struct panthor_device 
*ptdev) > > > bool timedout = false; > > > > > > ptdev->fw->booted = false; > > > - panthor_job_irq_resume(&ptdev->fw->irq, ~0); > > > + panthor_job_irq_enable_events(&ptdev->fw->irq, ~0); > > > > We don't change the mask after initialization AFAICT, so I'm not sure > > this _enable_events() is needed. > > The mask is initialized to 0. I think I can drop this, as long as it's > fine that the mask bits are enabled as soon as the IRQ is requested, > since the IRQ request helper resumes it. Ah right. We should keep the panthor_job_irq_enable_events() here, my bad. > > > > + panthor_job_irq_resume(&ptdev->fw->irq); > > > gpu_write(ptdev, MCU_CONTROL, MCU_CONTROL_AUTO); > > > > > > if (!wait_event_timeout(ptdev->fw->req_waitqueue, > > > diff --git a/drivers/gpu/drm/panthor/panthor_gpu.c b/drivers/gpu/drm/panthor/panthor_gpu.c > > > index 057e167468d0..9304469a711a 100644 > > > --- a/drivers/gpu/drm/panthor/panthor_gpu.c > > > +++ b/drivers/gpu/drm/panthor/panthor_gpu.c > > > @@ -395,7 +395,7 @@ void panthor_gpu_suspend(struct panthor_device *ptdev) > > > */ > > > void panthor_gpu_resume(struct panthor_device *ptdev) > > > { > > > - panthor_gpu_irq_resume(&ptdev->gpu->irq, GPU_INTERRUPTS_MASK); > > > + panthor_gpu_irq_resume(&ptdev->gpu->irq); > > > panthor_hw_l2_power_on(ptdev); > > > } > > > > > > diff --git a/drivers/gpu/drm/panthor/panthor_mmu.c b/drivers/gpu/drm/panthor/panthor_mmu.c > > > index 198d59f42578..a1b7917a31b1 100644 > > > --- a/drivers/gpu/drm/panthor/panthor_mmu.c > > > +++ b/drivers/gpu/drm/panthor/panthor_mmu.c > > > @@ -562,9 +562,21 @@ static u64 pack_region_range(struct panthor_device *ptdev, u64 *region_start, u6 > > > return region_width | *region_start; > > > } > > > > > > +static u32 panthor_mmu_as_fault_mask(struct panthor_device *ptdev, u32 as) > > > +{ > > > + return BIT(as); > > > +} > > > + > > > +/* Forward declaration to call helpers within as_enable/disable */ > > > +static void panthor_mmu_irq_handler(struct panthor_device *ptdev, u32 
status); > > > +PANTHOR_IRQ_HANDLER(mmu, MMU, panthor_mmu_irq_handler); > > > + > > > static int panthor_mmu_as_enable(struct panthor_device *ptdev, u32 as_nr, > > > u64 transtab, u64 transcfg, u64 memattr) > > > { > > > + panthor_mmu_irq_enable_events(&ptdev->mmu->irq, > > > + panthor_mmu_as_fault_mask(ptdev, as_nr)); > > > + > > > gpu_write64(ptdev, AS_TRANSTAB(as_nr), transtab); > > > gpu_write64(ptdev, AS_MEMATTR(as_nr), memattr); > > > gpu_write64(ptdev, AS_TRANSCFG(as_nr), transcfg); > > > @@ -580,6 +592,9 @@ static int panthor_mmu_as_disable(struct panthor_device *ptdev, u32 as_nr, > > > > > > lockdep_assert_held(&ptdev->mmu->as.slots_lock); > > > > > > + panthor_mmu_irq_disable_events(&ptdev->mmu->irq, > > > + panthor_mmu_as_fault_mask(ptdev, as_nr)); > > > > I'd move that at the end, when everything else in this function > > succeeded. > > That would be a functional change from what the behaviour was prior to > moving disable_events into as_disable. Fair enough. The rational here was that we would disable fault events before even knowing if the disable-AS operation passed, but I guess this sort of situation will lead to a GPU reset, and that's also the order we're doing it right now, so I guess we're good. > > > > > > + > > > /* Flush+invalidate RW caches, invalidate RO ones. */ > > > ret = panthor_gpu_flush_caches(ptdev, CACHE_CLEAN | CACHE_INV, > > > CACHE_CLEAN | CACHE_INV, CACHE_INV); > > > @@ -612,11 +627,6 @@ static u32 panthor_mmu_fault_mask(struct panthor_device *ptdev, u32 value) > > > return value & GENMASK(15, 0); > > > } > > > > > > -static u32 panthor_mmu_as_fault_mask(struct panthor_device *ptdev, u32 as) > > > -{ > > > - return BIT(as); > > > -} > > > - > > > /** > > > * panthor_vm_has_unhandled_faults() - Check if a VM has unhandled faults > > > * @vm: VM to check. 
> > > @@ -670,6 +680,7 @@ int panthor_vm_active(struct panthor_vm *vm) > > > struct io_pgtable_cfg *cfg = &io_pgtable_ops_to_pgtable(vm->pgtbl_ops)->cfg; > > > int ret = 0, as, cookie; > > > u64 transtab, transcfg; > > > + u32 fault_mask; > > > > > > if (!drm_dev_enter(&ptdev->base, &cookie)) > > > return -ENODEV; > > > @@ -743,14 +754,13 @@ int panthor_vm_active(struct panthor_vm *vm) > > > /* If the VM is re-activated, we clear the fault. */ > > > vm->unhandled_fault = false; > > > > > > - /* Unhandled pagefault on this AS, clear the fault and re-enable interrupts > > > - * before enabling the AS. > > > + /* Unhandled pagefault on this AS, clear the fault and enable the AS, > > > + * which re-enables interrupts. > > > */ > > > - if (ptdev->mmu->as.faulty_mask & panthor_mmu_as_fault_mask(ptdev, as)) { > > > - gpu_write(ptdev, MMU_INT_CLEAR, panthor_mmu_as_fault_mask(ptdev, as)); > > > - ptdev->mmu->as.faulty_mask &= ~panthor_mmu_as_fault_mask(ptdev, as); > > > - ptdev->mmu->irq.mask |= panthor_mmu_as_fault_mask(ptdev, as); > > > - gpu_write(ptdev, MMU_INT_MASK, ~ptdev->mmu->as.faulty_mask); > > > + fault_mask = panthor_mmu_as_fault_mask(ptdev, as); > > > + if (ptdev->mmu->as.faulty_mask & fault_mask) { > > > + gpu_write(ptdev, MMU_INT_CLEAR, fault_mask); > > > + ptdev->mmu->as.faulty_mask &= ~fault_mask; > > > } > > > > > > /* The VM update is guarded by ::op_lock, which we take at the beginning > > > @@ -1698,7 +1708,6 @@ static void panthor_mmu_irq_handler(struct panthor_device *ptdev, u32 status) > > > while (status) { > > > u32 as = ffs(status | (status >> 16)) - 1; > > > u32 mask = panthor_mmu_as_fault_mask(ptdev, as); > > > - u32 new_int_mask; > > > u64 addr; > > > u32 fault_status; > > > u32 exception_type; > > > @@ -1716,8 +1725,6 @@ static void panthor_mmu_irq_handler(struct panthor_device *ptdev, u32 status) > > > mutex_lock(&ptdev->mmu->as.slots_lock); > > > > > > ptdev->mmu->as.faulty_mask |= mask; > > > - new_int_mask = > > > - 
panthor_mmu_fault_mask(ptdev, ~ptdev->mmu->as.faulty_mask); > > > > > > /* terminal fault, print info about the fault */ > > > drm_err(&ptdev->base, > > > @@ -1741,11 +1748,6 @@ static void panthor_mmu_irq_handler(struct panthor_device *ptdev, u32 status) > > > */ > > > gpu_write(ptdev, MMU_INT_CLEAR, mask); > > > > > > - /* Ignore MMU interrupts on this AS until it's been > > > - * re-enabled. > > > - */ > > > - ptdev->mmu->irq.mask = new_int_mask; > > > - > > > > I guess we need a _diable_events(mask) to replace this. > > Nope! It lives in as_disable now, which is called a bit further down > (outside of the patch context window. Indeed, sorry for the noise. > Maybe I should play with b4's > settings to ensure the end of functions is visible in the context). Nah, now that I know it's there, that's fine. > > > > if (ptdev->mmu->as.slots[as].vm) > > > ptdev->mmu->as.slots[as].vm->unhandled_fault = true; > > > > > > @@ -1760,7 +1762,6 @@ static void panthor_mmu_irq_handler(struct panthor_device *ptdev, u32 status) > > > if (has_unhandled_faults) > > > panthor_sched_report_mmu_fault(ptdev); > > > } > > > -PANTHOR_IRQ_HANDLER(mmu, MMU, panthor_mmu_irq_handler); > > > > > > /** > > > * panthor_mmu_suspend() - Suspend the MMU logic > > > @@ -1805,7 +1806,7 @@ void panthor_mmu_resume(struct panthor_device *ptdev) > > > ptdev->mmu->as.faulty_mask = 0; > > > mutex_unlock(&ptdev->mmu->as.slots_lock); > > > > > > - panthor_mmu_irq_resume(&ptdev->mmu->irq, panthor_mmu_fault_mask(ptdev, ~0)); > > > + panthor_mmu_irq_resume(&ptdev->mmu->irq); > > > } > > > > > > /** > > > @@ -1859,7 +1860,7 @@ void panthor_mmu_post_reset(struct panthor_device *ptdev) > > > > > > mutex_unlock(&ptdev->mmu->as.slots_lock); > > > > > > - panthor_mmu_irq_resume(&ptdev->mmu->irq, panthor_mmu_fault_mask(ptdev, ~0)); > > > + panthor_mmu_irq_resume(&ptdev->mmu->irq); > > > > > > /* Restart the VM_BIND queues. 
*/ > > > mutex_lock(&ptdev->mmu->vm.lock); > > > diff --git a/drivers/gpu/drm/panthor/panthor_pwr.c b/drivers/gpu/drm/panthor/panthor_pwr.c > > > index 57cfc7ce715b..ed3b2b4479ca 100644 > > > --- a/drivers/gpu/drm/panthor/panthor_pwr.c > > > +++ b/drivers/gpu/drm/panthor/panthor_pwr.c > > > @@ -545,5 +545,5 @@ void panthor_pwr_resume(struct panthor_device *ptdev) > > > if (!ptdev->pwr) > > > return; > > > > > > - panthor_pwr_irq_resume(&ptdev->pwr->irq, PWR_INTERRUPTS_MASK); > > > + panthor_pwr_irq_resume(&ptdev->pwr->irq); > > > } > > > > > > > > > > > ^ permalink raw reply [flat|nested] 9+ messages in thread
* [PATCH v9 3/4] drm/panthor: Add tracepoint for hardware utilisation changes 2026-01-15 13:58 [PATCH v9 0/4] Add a few tracepoints to panthor Nicolas Frattaroli 2026-01-15 13:58 ` [PATCH v9 1/4] drm/panthor: Rework panthor_irq::suspended into panthor_irq::state Nicolas Frattaroli 2026-01-15 13:59 ` [PATCH v9 2/4] drm/panthor: Extend IRQ helpers for mask modification/restoration Nicolas Frattaroli @ 2026-01-15 13:59 ` Nicolas Frattaroli 2026-01-15 13:59 ` [PATCH v9 4/4] drm/panthor: Add gpu_job_irq tracepoint Nicolas Frattaroli 3 siblings, 0 replies; 9+ messages in thread From: Nicolas Frattaroli @ 2026-01-15 13:59 UTC (permalink / raw) To: Boris Brezillon, Steven Price, Liviu Dudau, Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, David Airlie, Simona Vetter, Chia-I Wu, Karunika Choo Cc: kernel, linux-kernel, dri-devel, Nicolas Frattaroli Mali GPUs have three registers that indicate which parts of the hardware are powered at any moment. These take the form of bitmaps. In the case of SHADER_READY for example, a high bit indicates that the shader core corresponding to that bit index is powered on. These bitmaps aren't solely contiguous bits, as it's common to have holes in the sequence of shader core indices, and the actual set of which cores are present is defined by the "shader present" register. When the GPU finishes a power state transition, it fires a GPU_IRQ_POWER_CHANGED_ALL interrupt. After such an interrupt is received, the _READY registers will contain new interesting data. During power transitions, the GPU_IRQ_POWER_CHANGED interrupt will fire, and the registers will likewise contain potentially changed data. This is not to be confused with the PWR_IRQ_POWER_CHANGED_ALL interrupt, which is something related to Mali v14+'s power control logic. The _READY registers and corresponding interrupts are already available in v9 and onwards. Expose the data as a tracepoint to userspace. 
This allows users to debug various scenarios and gather interesting information, such as: knowing how much hardware is lit up at any given time, correlating graphics corruption with a specific powered shader core, measuring when hardware is allowed to go to a powered off state again, and so on. The registration/unregistration functions for the tracepoint go through a wrapper in panthor_hw.c, so that v14+ can implement the same tracepoint by adding its hardware specific IRQ on/off callbacks to the panthor_hw.ops member. Signed-off-by: Nicolas Frattaroli <nicolas.frattaroli@collabora.com> --- drivers/gpu/drm/panthor/panthor_gpu.c | 28 +++++++++++++++ drivers/gpu/drm/panthor/panthor_gpu.h | 2 ++ drivers/gpu/drm/panthor/panthor_hw.c | 62 +++++++++++++++++++++++++++++++++ drivers/gpu/drm/panthor/panthor_hw.h | 8 +++++ drivers/gpu/drm/panthor/panthor_trace.h | 58 ++++++++++++++++++++++++++++++ 5 files changed, 158 insertions(+) diff --git a/drivers/gpu/drm/panthor/panthor_gpu.c b/drivers/gpu/drm/panthor/panthor_gpu.c index 9304469a711a..2ab444ee8c71 100644 --- a/drivers/gpu/drm/panthor/panthor_gpu.c +++ b/drivers/gpu/drm/panthor/panthor_gpu.c @@ -22,6 +22,9 @@ #include "panthor_hw.h" #include "panthor_regs.h" +#define CREATE_TRACE_POINTS +#include "panthor_trace.h" + /** * struct panthor_gpu - GPU block management data. 
*/ @@ -48,6 +51,9 @@ struct panthor_gpu { GPU_IRQ_RESET_COMPLETED | \ GPU_IRQ_CLEAN_CACHES_COMPLETED) +#define GPU_POWER_INTERRUPTS_MASK \ + (GPU_IRQ_POWER_CHANGED | GPU_IRQ_POWER_CHANGED_ALL) + static void panthor_gpu_coherency_set(struct panthor_device *ptdev) { gpu_write(ptdev, GPU_COHERENCY_PROTOCOL, @@ -80,6 +86,12 @@ static void panthor_gpu_irq_handler(struct panthor_device *ptdev, u32 status) { gpu_write(ptdev, GPU_INT_CLEAR, status); + if (tracepoint_enabled(gpu_power_status) && (status & GPU_POWER_INTERRUPTS_MASK)) + trace_gpu_power_status(ptdev->base.dev, + gpu_read64(ptdev, SHADER_READY), + gpu_read64(ptdev, TILER_READY), + gpu_read64(ptdev, L2_READY)); + if (status & GPU_IRQ_FAULT) { u32 fault_status = gpu_read(ptdev, GPU_FAULT_STATUS); u64 address = gpu_read64(ptdev, GPU_FAULT_ADDR); @@ -157,6 +169,22 @@ int panthor_gpu_init(struct panthor_device *ptdev) return 0; } +int panthor_gpu_power_changed_on(struct panthor_device *ptdev) +{ + guard(pm_runtime_active)(ptdev->base.dev); + + panthor_gpu_irq_enable_events(&ptdev->gpu->irq, GPU_POWER_INTERRUPTS_MASK); + + return 0; +} + +void panthor_gpu_power_changed_off(struct panthor_device *ptdev) +{ + guard(pm_runtime_active)(ptdev->base.dev); + + panthor_gpu_irq_disable_events(&ptdev->gpu->irq, GPU_POWER_INTERRUPTS_MASK); +} + /** * panthor_gpu_block_power_off() - Power-off a specific block of the GPU * @ptdev: Device. 
diff --git a/drivers/gpu/drm/panthor/panthor_gpu.h b/drivers/gpu/drm/panthor/panthor_gpu.h index 12e66f48ced1..12c263a39928 100644 --- a/drivers/gpu/drm/panthor/panthor_gpu.h +++ b/drivers/gpu/drm/panthor/panthor_gpu.h @@ -51,5 +51,7 @@ int panthor_gpu_l2_power_on(struct panthor_device *ptdev); int panthor_gpu_flush_caches(struct panthor_device *ptdev, u32 l2, u32 lsc, u32 other); int panthor_gpu_soft_reset(struct panthor_device *ptdev); +void panthor_gpu_power_changed_off(struct panthor_device *ptdev); +int panthor_gpu_power_changed_on(struct panthor_device *ptdev); #endif diff --git a/drivers/gpu/drm/panthor/panthor_hw.c b/drivers/gpu/drm/panthor/panthor_hw.c index 80c521784cd3..d135aa6724fa 100644 --- a/drivers/gpu/drm/panthor/panthor_hw.c +++ b/drivers/gpu/drm/panthor/panthor_hw.c @@ -2,6 +2,8 @@ /* Copyright 2025 ARM Limited. All rights reserved. */ #include <linux/nvmem-consumer.h> +#include <linux/platform_device.h> + #include <drm/drm_print.h> #include "panthor_device.h" @@ -30,6 +32,8 @@ static struct panthor_hw panthor_hw_arch_v10 = { .soft_reset = panthor_gpu_soft_reset, .l2_power_off = panthor_gpu_l2_power_off, .l2_power_on = panthor_gpu_l2_power_on, + .power_changed_off = panthor_gpu_power_changed_off, + .power_changed_on = panthor_gpu_power_changed_on, }, }; @@ -54,6 +58,64 @@ static struct panthor_hw_entry panthor_hw_match[] = { }, }; +static int panthor_hw_set_power_tracing(struct device *dev, void *data) +{ + struct panthor_device *ptdev = dev_get_drvdata(dev); + + if (!ptdev) + return -ENODEV; + + if (!ptdev->hw) + return 0; + + if (data) { + if (ptdev->hw->ops.power_changed_on) + return ptdev->hw->ops.power_changed_on(ptdev); + } else { + if (ptdev->hw->ops.power_changed_off) + ptdev->hw->ops.power_changed_off(ptdev); + } + + return 0; +} + +int panthor_hw_power_status_register(void) +{ + struct device_driver *drv; + int ret; + + drv = driver_find("panthor", &platform_bus_type); + if (!drv) + return -ENODEV; + + ret = driver_for_each_device(drv, 
NULL, (void *)true, + panthor_hw_set_power_tracing); + + return ret; +} + +void panthor_hw_power_status_unregister(void) +{ + struct device_driver *drv; + int ret; + + drv = driver_find("panthor", &platform_bus_type); + if (!drv) + return; + + ret = driver_for_each_device(drv, NULL, NULL, panthor_hw_set_power_tracing); + + /* + * Ideally, it'd be possible to ask driver_for_each_device to hand us + * another "start" to keep going after the failing device, but it + * doesn't do that. Minor inconvenience in what is probably a bad day + * on the computer already though. + */ + if (ret) + pr_warn("Couldn't mask power IRQ for at least one device: %pe\n", + ERR_PTR(ret)); +} + static char *get_gpu_model_name(struct panthor_device *ptdev) { const u32 gpu_id = ptdev->gpu_info.gpu_id; diff --git a/drivers/gpu/drm/panthor/panthor_hw.h b/drivers/gpu/drm/panthor/panthor_hw.h index 56c68c1e9c26..2c28aea82841 100644 --- a/drivers/gpu/drm/panthor/panthor_hw.h +++ b/drivers/gpu/drm/panthor/panthor_hw.h @@ -19,6 +19,12 @@ struct panthor_hw_ops { /** @l2_power_on: L2 power on function pointer */ int (*l2_power_on)(struct panthor_device *ptdev); + + /** @power_changed_on: Start listening to power change IRQs */ + int (*power_changed_on)(struct panthor_device *ptdev); + + /** @power_changed_off: Stop listening to power change IRQs */ + void (*power_changed_off)(struct panthor_device *ptdev); }; /** @@ -32,6 +38,8 @@ struct panthor_hw { }; int panthor_hw_init(struct panthor_device *ptdev); +int panthor_hw_power_status_register(void); +void panthor_hw_power_status_unregister(void); static inline int panthor_hw_soft_reset(struct panthor_device *ptdev) { diff --git a/drivers/gpu/drm/panthor/panthor_trace.h b/drivers/gpu/drm/panthor/panthor_trace.h new file mode 100644 index 000000000000..5bd420894745 --- /dev/null +++ b/drivers/gpu/drm/panthor/panthor_trace.h @@ -0,0 +1,58 @@ +/* SPDX-License-Identifier: GPL-2.0 or MIT */ +/* Copyright 2025 Collabora ltd. 
*/ + +#undef TRACE_SYSTEM +#define TRACE_SYSTEM panthor + +#if !defined(__PANTHOR_TRACE_H__) || defined(TRACE_HEADER_MULTI_READ) +#define __PANTHOR_TRACE_H__ + +#include <linux/tracepoint.h> +#include <linux/types.h> + +#include "panthor_hw.h" + +/** + * gpu_power_status - called whenever parts of GPU hardware are turned on or off + * @dev: pointer to the &struct device, for printing the device name + * @shader_bitmap: bitmap where a high bit indicates the shader core at a given + * bit index is on, and a low bit indicates a shader core is + * either powered off or absent + * @tiler_bitmap: bitmap where a high bit indicates the tiler unit at a given + * bit index is on, and a low bit indicates a tiler unit is + * either powered off or absent + * @l2_bitmap: bitmap where a high bit indicates the L2 cache at a given bit + * index is on, and a low bit indicates the L2 cache is either + * powered off or absent + */ +TRACE_EVENT_FN(gpu_power_status, + TP_PROTO(const struct device *dev, u64 shader_bitmap, u64 tiler_bitmap, + u64 l2_bitmap), + TP_ARGS(dev, shader_bitmap, tiler_bitmap, l2_bitmap), + TP_STRUCT__entry( + __string(dev_name, dev_name(dev)) + __field(u64, shader_bitmap) + __field(u64, tiler_bitmap) + __field(u64, l2_bitmap) + ), + TP_fast_assign( + __assign_str(dev_name); + __entry->shader_bitmap = shader_bitmap; + __entry->tiler_bitmap = tiler_bitmap; + __entry->l2_bitmap = l2_bitmap; + ), + TP_printk("%s: shader_bitmap=0x%llx tiler_bitmap=0x%llx l2_bitmap=0x%llx", + __get_str(dev_name), __entry->shader_bitmap, __entry->tiler_bitmap, + __entry->l2_bitmap + ), + panthor_hw_power_status_register, panthor_hw_power_status_unregister +); + +#endif /* __PANTHOR_TRACE_H__ */ + +#undef TRACE_INCLUDE_PATH +#define TRACE_INCLUDE_PATH . +#undef TRACE_INCLUDE_FILE +#define TRACE_INCLUDE_FILE panthor_trace + +#include <trace/define_trace.h> -- 2.52.0 ^ permalink raw reply related [flat|nested] 9+ messages in thread
* [PATCH v9 4/4] drm/panthor: Add gpu_job_irq tracepoint 2026-01-15 13:58 [PATCH v9 0/4] Add a few tracepoints to panthor Nicolas Frattaroli ` (2 preceding siblings ...) 2026-01-15 13:59 ` [PATCH v9 3/4] drm/panthor: Add tracepoint for hardware utilisation changes Nicolas Frattaroli @ 2026-01-15 13:59 ` Nicolas Frattaroli 3 siblings, 0 replies; 9+ messages in thread From: Nicolas Frattaroli @ 2026-01-15 13:59 UTC (permalink / raw) To: Boris Brezillon, Steven Price, Liviu Dudau, Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, David Airlie, Simona Vetter, Chia-I Wu, Karunika Choo Cc: kernel, linux-kernel, dri-devel, Nicolas Frattaroli Mali's CSF firmware triggers the job IRQ whenever there's new firmware events for processing. While this can be a global event (BIT(31) of the status register), it's usually an event relating to a command stream group (the other bit indices). Panthor throws these events onto a workqueue for processing outside the IRQ handler. It's therefore useful to have an instrumented tracepoint that goes beyond the generic IRQ tracepoint for this specific case, as it can be augmented with additional data, namely the events bit mask. This can then be used to debug problems relating to GPU jobs events not being processed quickly enough. The duration_ns field can be used to work backwards from when the tracepoint fires (at the end of the IRQ handler) to figure out when the interrupt itself landed, providing not just information on how long the work queueing took, but also when the actual interrupt itself arrived. With this information in hand, the IRQ handler itself being slow can be excluded as a possible source of problems, and attention can be directed to the workqueue processing instead. 
Signed-off-by: Nicolas Frattaroli <nicolas.frattaroli@collabora.com> --- drivers/gpu/drm/panthor/panthor_fw.c | 13 +++++++++++++ drivers/gpu/drm/panthor/panthor_trace.h | 28 ++++++++++++++++++++++++++++ 2 files changed, 41 insertions(+) diff --git a/drivers/gpu/drm/panthor/panthor_fw.c b/drivers/gpu/drm/panthor/panthor_fw.c index 0e46625f7621..5a904ca64525 100644 --- a/drivers/gpu/drm/panthor/panthor_fw.c +++ b/drivers/gpu/drm/panthor/panthor_fw.c @@ -26,6 +26,7 @@ #include "panthor_mmu.h" #include "panthor_regs.h" #include "panthor_sched.h" +#include "panthor_trace.h" #define CSF_FW_NAME "mali_csffw.bin" @@ -1060,6 +1061,12 @@ static void panthor_fw_init_global_iface(struct panthor_device *ptdev) static void panthor_job_irq_handler(struct panthor_device *ptdev, u32 status) { + u32 duration; + u64 start = 0; + + if (tracepoint_enabled(gpu_job_irq)) + start = ktime_get_ns(); + gpu_write(ptdev, JOB_INT_CLEAR, status); if (!ptdev->fw->booted && (status & JOB_INT_GLOBAL_IF)) @@ -1072,6 +1079,12 @@ static void panthor_job_irq_handler(struct panthor_device *ptdev, u32 status) return; panthor_sched_report_fw_events(ptdev, status); + + if (tracepoint_enabled(gpu_job_irq) && start) { + if (check_sub_overflow(ktime_get_ns(), start, &duration)) + duration = U32_MAX; + trace_gpu_job_irq(ptdev->base.dev, status, duration); + } } PANTHOR_IRQ_HANDLER(job, JOB, panthor_job_irq_handler); diff --git a/drivers/gpu/drm/panthor/panthor_trace.h b/drivers/gpu/drm/panthor/panthor_trace.h index 5bd420894745..6ffeb4fe6599 100644 --- a/drivers/gpu/drm/panthor/panthor_trace.h +++ b/drivers/gpu/drm/panthor/panthor_trace.h @@ -48,6 +48,34 @@ TRACE_EVENT_FN(gpu_power_status, panthor_hw_power_status_register, panthor_hw_power_status_unregister ); +/** + * gpu_job_irq - called after a job interrupt from firmware completes + * @dev: pointer to the &struct device, for printing the device name + * @events: bitmask of BIT(CSG id) | BIT(31) for a global event + * @duration_ns: Nanoseconds between job 
IRQ handler entry and exit + * + * The panthor_job_irq_handler() function instrumented by this tracepoint exits + * once it has queued the firmware interrupts for processing, not when the + * firmware interrupts are fully processed. This tracepoint allows for debugging + * issues with delays in the workqueue's processing of events. + */ +TRACE_EVENT(gpu_job_irq, + TP_PROTO(const struct device *dev, u32 events, u32 duration_ns), + TP_ARGS(dev, events, duration_ns), + TP_STRUCT__entry( + __string(dev_name, dev_name(dev)) + __field(u32, events) + __field(u32, duration_ns) + ), + TP_fast_assign( + __assign_str(dev_name); + __entry->events = events; + __entry->duration_ns = duration_ns; + ), + TP_printk("%s: events=0x%x duration_ns=%d", __get_str(dev_name), + __entry->events, __entry->duration_ns) +); + #endif /* __PANTHOR_TRACE_H__ */ #undef TRACE_INCLUDE_PATH -- 2.52.0 ^ permalink raw reply related [flat|nested] 9+ messages in thread