From: "Onur Özkan" <work@onurozkan.dev>
To: Alice Ryhl <aliceryhl@google.com>
Cc: Daniel Almeida <daniel.almeida@collabora.com>,
Boris Brezillon <boris.brezillon@collabora.com>,
linux-kernel@vger.kernel.org, dakr@kernel.org, airlied@gmail.com,
simona@ffwll.ch, dri-devel@lists.freedesktop.org,
rust-for-linux@vger.kernel.org,
Deborah Brouwer <deborah.brouwer@collabora.com>
Subject: Re: [PATCH v1 RESEND 4/4] drm/tyr: add GPU reset handling
Date: Sat, 11 Apr 2026 08:52:15 +0300
Message-ID: <20260411055215.120719-1-work@onurozkan.dev>
In-Reply-To: <aditNn-iQc3QXRfK@google.com>

On Fri, 10 Apr 2026 07:56:38 +0000
Alice Ryhl <aliceryhl@google.com> wrote:
> On Thu, Apr 09, 2026 at 10:44:24AM -0300, Daniel Almeida wrote:
> > Hi Onur,
> >
> > > On 9 Apr 2026, at 08:41, Onur Özkan <work@onurozkan.dev> wrote:
> > >
> > > On Fri, 03 Apr 2026 12:01:09 -0300
> > > Daniel Almeida <daniel.almeida@collabora.com> wrote:
> > >
> > >>
> > >>
> > >>> On 19 Mar 2026, at 08:08, Boris Brezillon <boris.brezillon@collabora.com> wrote:
> > >>>
> > >>> On Fri, 13 Mar 2026 12:16:44 +0300
> > >>> Onur Özkan <work@onurozkan.dev> wrote:
> > >>>
> > >>>> +impl Controller {
> > >>>> +    /// Creates a [`Controller`] instance.
> > >>>> +    fn new(pdev: ARef<platform::Device>, iomem: Arc<Devres<IoMem>>) -> Result<Arc<Self>> {
> > >>>> +        let wq = workqueue::OrderedQueue::new(c"tyr-reset-wq", 0)?;
> > >>>> +
> > >>>> +        Arc::pin_init(
> > >>>> +            try_pin_init!(Self {
> > >>>> +                pdev,
> > >>>> +                iomem,
> > >>>> +                pending: Atomic::new(false),
> > >>>> +                wq,
> > >>>> +                work <- kernel::new_work!("tyr::reset"),
> > >>>> +            }),
> > >>>> +            GFP_KERNEL,
> > >>>> +        )
> > >>>> +    }
> > >>>> +
> > >>>> +    /// Processes one scheduled reset request.
> > >>>> +    ///
> > >>>> +    /// Panthor reference:
> > >>>> +    /// - drivers/gpu/drm/panthor/panthor_device.c::panthor_device_reset_work()
> > >>>> +    fn reset_work(self: &Arc<Self>) {
> > >>>> +        dev_info!(self.pdev.as_ref(), "GPU reset work is started.\n");
> > >>>> +
> > >>>> +        // SAFETY: `Controller` is part of driver-private data and only exists
> > >>>> +        // while the platform device is bound.
> > >>>> +        let pdev = unsafe { self.pdev.as_ref().as_bound() };
> > >>>> +        if let Err(e) = run_reset(pdev, &self.iomem) {
> > >>>> +            dev_err!(self.pdev.as_ref(), "GPU reset failed: {:?}\n", e);
> > >>>> +        } else {
> > >>>> +            dev_info!(self.pdev.as_ref(), "GPU reset work is done.\n");
> > >>>> +        }
> > >>>
> > >>> Unfortunately, the reset operation is not as simple as instructing the
> > >>> GPU to reset, it's a complex synchronization process where you need to
> > >>> try to gracefully put various components on hold before you reset, and
> > >>> then resume those after the reset is effective. Of course, with what we
> > >>> currently have in-tree, there's not much to suspend/resume, but I think
> > >>> I'd prefer to design the thing so we can progressively add more
> > >>> components without changing the reset logic too much.
> > >>>
> > >>> I would probably start with a Resettable trait that has the
> > >>> {pre,post}_reset() methods that exist in Panthor.
> > >>>
> > >>> The other thing we need is a way for those components to know when a
> > >>> reset is about to happen so they can postpone some actions they were
> > >>> planning in order to not further delay the reset, or end up with
> > >>> actions that fail because the HW is already unusable. Not too sure how
> > >>> we want to handle that though. Panthor is currently sprinkled with
> > >>> panthor_device_reset_is_pending() calls in key places, but that's still
> > >>> very manual, maybe we can automate that a bit more in Tyr, dunno.
> > >>>
> > >>
> > >>
> > >> We could have an enum where one of the variants is Resetting, and the other one
> > >> gives access to whatever state is not accessible while resets are in progress.
> > >>
> > >> Something like
> > >>
> > >> pub enum TyrData {
> > >>     Active(ActiveTyrData),
> > >>     ResetInProgress(ActiveTyrData),
> > >> }
> > >>
> > >> fn access() -> Option<&ActiveTyrData> {
> > >>     … // if the “ResetInProgress” variant is active, return None
> > >> }
> > >>
> > >
> > > That's an interesting approach, but if it's all about the `fn access`
> > > function, we can already do that with a simple atomic state, e.g.:
> > >
> > > // The state flag in reset::Controller
> > > state: Atomic<ResetState>
> > >
> > > fn access(&self) -> Option<&Arc<Devres<IoMem>>> {
> > >     match self.state.load(Relaxed) {
> > >         ResetState::Idle => Some(&self.iomem),
> > >         _ => None,
> > >     }
> > > }
> > >
> > > What do you think? Would this be sufficient?
> > >
> > > Btw, a sample code snippet from the caller side would be very helpful for
> > > designing this further. That part is kind of blurry for me.
> > >
> > > Thanks,
> > > Onur
> > >
> > >>
> > >>>> +
> > >>>> +        self.pending.store(false, Release);
> > >>>> +    }
> > >>>> +}
> >
> > I think that there are two things we're trying to implement correctly:
> >
> > 1) Deny access to a subset of the state while a reset is in progress
> > 2) Wait for anyone accessing 1) to finish before starting a reset
> >
> > IIUC, using Atomic<T> can solve 1 by bailing if the "reset in progress"
> > flag/variant is set, but I don't think it implements 2? One would have to
> > implement more logic to block until the state is not being actively used.
> >
> > Now, there are probably easier ways to solve this, but I propose that we do the
> > extra legwork to make this explicit and enforceable by the type system.
> >
> > How about introducing a r/w semaphore abstraction? It seems to correctly encode
> > the logic we want:
> >
> > a) multiple users can access the state if no reset is pending ("read" side)
> > b) the reset code can block until the state is no longer being accessed (the "write" side)
> >
> > In Tyr, this would roughly map to something like:
> >
> > struct TyrData {
> >     reset_gate: RwSem<ActiveHwState>,
> >     // other, always accessible members
> > }
> >
> > impl TyrData {
> >     fn try_access(&self) -> Option<ReadGuard<'_, ActiveHwState>> { ... }
> > }
> >
> > Where ActiveHwState contains the fw/mmu/sched blocks (these are not upstream
> > yet, Deborah has a series that will introduce the fw block that should land
> > soon) and perhaps more.
> >
> > Now, the reset logic would be roughly:
> >
> > fn reset_work(tdev: Arc<TyrDevice>) {
> >     // Block until nobody else is accessing the hw, and prevent others
> >     // from initiating new accesses too.
> >     let _guard = tdev.reset_gate.write();
> >
> >     // pre_reset() all Resettable implementors
> >
> >     // ... reset
> >
> >     // post_reset() all Resettable implementors
> > }
> >
> > Now, for every block that might touch a resource that would be unavailable
> > during a reset, we enforce a try_access() via the type system, and ensure that
> > the reset cannot start while the guard is alive. In particular, ioctls would
> > look like:
> >
> > fn ioctl_foo(...) {
> >     let hw = tdev.reset_gate.try_access()?;
> >     // resets are blocked while the guard is alive; there is no other way
> >     // to access that state
> > }
> >
> > The code will not compile otherwise, so long as we keep the state in ActiveHwState, i.e.:
> > protected by the r/w sem.
> >
> > This looks like an improvement over Panthor, since Panthor relies on manually
> > canceling work that access hw state via cancel_work_sync(), and gating new work
> > submissions on the "reset_in_progress" flag, i.e.:
> >
> > /**
> >  * sched_queue_work() - Queue a scheduler work.
> >  * @sched: Scheduler object.
> >  * @wname: Work name.
> >  *
> >  * Conditionally queues a scheduler work if no reset is pending/in-progress.
> >  */
> > #define sched_queue_work(sched, wname) \
> >     do { \
> >         if (!atomic_read(&(sched)->reset.in_progress) && \
> >             !panthor_device_reset_is_pending((sched)->ptdev)) \
> >             queue_work((sched)->wq, &(sched)->wname ## _work); \
> >     } while (0)
> >
> >
> > Thoughts?
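
Interjecting here, since it helped me reason about this proposal: the r/w gate
can be modeled in plain userspace Rust, with std::sync::RwLock standing in for
the proposed kernel RwSem abstraction. All type and field names below are
illustrative, not the actual Tyr types:

```rust
use std::sync::{RwLock, RwLockReadGuard};

// Stand-in for the fw/mmu/sched state that is unavailable during a reset.
struct ActiveHwState {
    counter: u32,
}

struct TyrData {
    reset_gate: RwLock<ActiveHwState>,
}

impl TyrData {
    // "Read" side: bail out instead of blocking if a reset holds the gate.
    fn try_access(&self) -> Option<RwLockReadGuard<'_, ActiveHwState>> {
        self.reset_gate.try_read().ok()
    }

    // "Write" side: blocks until every outstanding read guard is dropped.
    fn reset(&self) {
        let mut hw = self.reset_gate.write().unwrap();
        // pre_reset() / actual reset / post_reset() would happen here.
        hw.counter = 0;
    }
}
```

try_read() gives the non-blocking "bail if a reset is in progress" behavior of
try_access(), while write() gives the "block until the state is no longer being
accessed" behavior the reset path needs.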
>
> I don't think a semaphore is the right answer. I think SRCU plus a
> counter makes more sense. (With a sentinel value reserved for "currently
> resetting".)
>
> When you begin using the hardware, you start an SRCU critical region and
> read the counter. If the counter has the sentinel value, you know the
> hardware is resetting and you fail. Otherwise you record the counter and
> proceed.
>
> If at any point you release the srcu critical region and want to
> re-acquire it to continue the same ongoing work, then you must ensure
> that the counter still has the same value. This ensures that if the GPU
> is reset, then even if the reset has finished by the time you come back,
> you still fail because the counter has changed.
>
>
> Another option could be to rely on the existing Device unbind logic.
> Entirely remove the class device and run the full unbind procedure,
> cleaning up all devm work etc. Then once that has finished, probe the
> device again and start from scratch. After all, GPU reset does not have
> to be a cheap operation.
>
> Alice
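
To make sure I follow the counter scheme, here is how I would model it in
userspace Rust. The SRCU grace-period wait is elided (std has no equivalent),
and `RESETTING` is just a name I picked for the sentinel:

```rust
use std::sync::atomic::{AtomicU64, Ordering};

// Sentinel value reserved for "currently resetting".
const RESETTING: u64 = u64::MAX;

struct ResetEpoch {
    counter: AtomicU64,
}

impl ResetEpoch {
    fn new() -> Self {
        Self { counter: AtomicU64::new(0) }
    }

    // Begin using the hw: fail if a reset is in progress, otherwise
    // return the current epoch so the caller can record it.
    fn begin(&self) -> Option<u64> {
        match self.counter.load(Ordering::Acquire) {
            RESETTING => None,
            epoch => Some(epoch),
        }
    }

    // Re-acquire for ongoing work: fail if the epoch changed, i.e. a
    // reset happened in between, even if it has already finished.
    fn resume(&self, epoch: u64) -> Option<u64> {
        match self.counter.load(Ordering::Acquire) {
            e if e == epoch => Some(e),
            _ => None,
        }
    }

    fn reset(&self) {
        let old = self.counter.swap(RESETTING, Ordering::AcqRel);
        // ... wait for the SRCU grace period, then do the actual reset ...
        self.counter.store(old + 1, Ordering::Release);
    }
}
```

A job that drops its critical region midway would stash the value returned by
begin() and pass it to resume() before touching the hardware again.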
Is anybody working on SRCU support, or do we need to write it?

Ignoring the sync primitive for a moment (since there is no clear consensus
yet), I was trying to figure out what the HW access API could look like built
around a closure.

I wonder if something like this (rough sketch, not implemented or compiled
yet) would make sense:
struct HwGate {
    // srcu, I guess?
}

struct HwGuard<'a> {
    gate: &'a HwGate,
    hw: &'a ActiveHw,
}

struct ActiveHw {
    // mmu, fw, ...
}

struct Controller {
    // ...
    gate: HwGate,
    hw: ActiveHw,
}

impl Controller {
    fn with_hw<R>(&self, f: impl FnOnce(&HwGuard<'_>) -> Result<R>) -> Result<R> {
        let guard = self.gate.lock(&self.hw)?;
        f(&guard)
    }
}
and TyrDrmDeviceData would have a helper wrapper:

impl TyrDrmDeviceData {
    fn with_hw<R>(&self, f: impl FnOnce(&HwGuard<'_>) -> Result<R>) -> Result<R> {
        self.reset.with_hw(f) // maybe find a better name for "reset"
    }
}
Then the user can use it like this:

let driver: TyrDrmDeviceData = ...;

driver.with_hw(|hw| {
    let mmu = &hw.hw.mmu;
    ...
})
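
For completeness, here is a compilable userspace model of the same shape; a
plain RwLock stands in for whatever sync primitive we end up with, and the
Mmu/ActiveHw fields are made up for illustration:

```rust
use std::sync::RwLock;

type Result<T> = std::result::Result<T, &'static str>;

struct Mmu {
    mapped_pages: usize,
}

struct ActiveHw {
    mmu: Mmu,
}

// Guard handed to the closure; resets are blocked while it is alive.
struct HwGuard<'a> {
    hw: &'a ActiveHw,
}

struct Controller {
    gate: RwLock<()>, // the reset path takes the write side
    hw: ActiveHw,
}

impl Controller {
    fn with_hw<R>(&self, f: impl FnOnce(&HwGuard<'_>) -> Result<R>) -> Result<R> {
        // try_read(): bail out instead of blocking if a reset holds the gate.
        let _lock = self.gate.try_read().map_err(|_| "reset in progress")?;
        f(&HwGuard { hw: &self.hw })
    }
}
```

Calling ctrl.with_hw(|hw| Ok(hw.hw.mmu.mapped_pages)) then fails with an
error whenever the reset path holds the write side of the gate.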
Any thoughts?
- Onur