From: Boris Brezillon <boris.brezillon@collabora.com>
To: "Adrián Larumbe" <adrian.larumbe@collabora.com>
Cc: Steven Price <steven.price@arm.com>,
Liviu Dudau <liviu.dudau@arm.com>,
Maarten Lankhorst <maarten.lankhorst@linux.intel.com>,
Maxime Ripard <mripard@kernel.org>,
Thomas Zimmermann <tzimmermann@suse.de>,
David Airlie <airlied@gmail.com>, Simona Vetter <simona@ffwll.ch>,
kernel@collabora.com, dri-devel@lists.freedesktop.org,
linux-kernel@vger.kernel.org
Subject: Re: [PATCH v2 1/2] drm/panthor: Replace sleep locks with spinlocks in fdinfo path
Date: Sat, 15 Feb 2025 10:28:07 +0100 [thread overview]
Message-ID: <20250215102807.7502e7ba@collabora.com> (raw)
In-Reply-To: <20250214210009.1994543-1-adrian.larumbe@collabora.com>
On Fri, 14 Feb 2025 20:55:20 +0000
Adrián Larumbe <adrian.larumbe@collabora.com> wrote:
> Commit 0590c94c3596 ("drm/panthor: Fix race condition when gathering fdinfo
> group samples") introduced an xarray lock to deal with potential
> use-after-free errors when accessing groups fdinfo figures. However, this
> toggles the kernel's atomic context status, so the next nested mutex lock
> will raise a warning when the kernel is compiled with mutex debug options:
>
> CONFIG_DEBUG_RT_MUTEXES=y
> CONFIG_DEBUG_MUTEXES=y
>
> Replace Panthor's group fdinfo data mutex with a guarded spinlock.
>
> Signed-off-by: Adrián Larumbe <adrian.larumbe@collabora.com>
> Fixes: 0590c94c3596 ("drm/panthor: Fix race condition when gathering fdinfo group samples")
My previous
Reviewed-by: Boris Brezillon <boris.brezillon@collabora.com>
stands.
> ---
> drivers/gpu/drm/panthor/panthor_sched.c | 26 ++++++++++++-------------
> 1 file changed, 12 insertions(+), 14 deletions(-)
>
> diff --git a/drivers/gpu/drm/panthor/panthor_sched.c b/drivers/gpu/drm/panthor/panthor_sched.c
> index 1a276db095ff..4d31d1967716 100644
> --- a/drivers/gpu/drm/panthor/panthor_sched.c
> +++ b/drivers/gpu/drm/panthor/panthor_sched.c
> @@ -9,6 +9,7 @@
> #include <drm/panthor_drm.h>
>
> #include <linux/build_bug.h>
> +#include <linux/cleanup.h>
> #include <linux/clk.h>
> #include <linux/delay.h>
> #include <linux/dma-mapping.h>
> @@ -631,10 +632,10 @@ struct panthor_group {
> struct panthor_gpu_usage data;
>
> /**
> - * @lock: Mutex to govern concurrent access from drm file's fdinfo callback
> - * and job post-completion processing function
> + * @fdinfo.lock: Spinlock to govern concurrent access from drm file's fdinfo
> + * callback and job post-completion processing function
> */
> - struct mutex lock;
> + spinlock_t lock;
>
> /** @fdinfo.kbo_sizes: Aggregate size of private kernel BO's held by the group. */
> size_t kbo_sizes;
> @@ -910,8 +911,6 @@ static void group_release_work(struct work_struct *work)
> release_work);
> u32 i;
>
> - mutex_destroy(&group->fdinfo.lock);
> -
> for (i = 0; i < group->queue_count; i++)
> group_free_queue(group, group->queues[i]);
>
> @@ -2861,12 +2860,12 @@ static void update_fdinfo_stats(struct panthor_job *job)
> struct panthor_job_profiling_data *slots = queue->profiling.slots->kmap;
> struct panthor_job_profiling_data *data = &slots[job->profiling.slot];
>
> - mutex_lock(&group->fdinfo.lock);
> - if (job->profiling.mask & PANTHOR_DEVICE_PROFILING_CYCLES)
> - fdinfo->cycles += data->cycles.after - data->cycles.before;
> - if (job->profiling.mask & PANTHOR_DEVICE_PROFILING_TIMESTAMP)
> - fdinfo->time += data->time.after - data->time.before;
> - mutex_unlock(&group->fdinfo.lock);
> + scoped_guard(spinlock, &group->fdinfo.lock) {
> + if (job->profiling.mask & PANTHOR_DEVICE_PROFILING_CYCLES)
> + fdinfo->cycles += data->cycles.after - data->cycles.before;
> + if (job->profiling.mask & PANTHOR_DEVICE_PROFILING_TIMESTAMP)
> + fdinfo->time += data->time.after - data->time.before;
> + }
> }
>
> void panthor_fdinfo_gather_group_samples(struct panthor_file *pfile)
> @@ -2880,12 +2879,11 @@ void panthor_fdinfo_gather_group_samples(struct panthor_file *pfile)
>
> xa_lock(&gpool->xa);
> xa_for_each(&gpool->xa, i, group) {
> - mutex_lock(&group->fdinfo.lock);
> + guard(spinlock)(&group->fdinfo.lock);
> pfile->stats.cycles += group->fdinfo.data.cycles;
> pfile->stats.time += group->fdinfo.data.time;
> group->fdinfo.data.cycles = 0;
> group->fdinfo.data.time = 0;
> - mutex_unlock(&group->fdinfo.lock);
> }
> xa_unlock(&gpool->xa);
> }
> @@ -3537,7 +3535,7 @@ int panthor_group_create(struct panthor_file *pfile,
> mutex_unlock(&sched->reset.lock);
>
> add_group_kbo_sizes(group->ptdev, group);
> - mutex_init(&group->fdinfo.lock);
> + spin_lock_init(&group->fdinfo.lock);
>
> return gid;
>