Intel-XE Archive on lore.kernel.org
From: Danilo Krummrich <dakr@redhat.com>
To: Boris Brezillon <boris.brezillon@collabora.com>
Cc: robdclark@chromium.org, sarah.walker@imgtec.com,
	ketil.johnsen@arm.com, lina@asahilina.net, Liviu.Dudau@arm.com,
	dri-devel@lists.freedesktop.org, intel-xe@lists.freedesktop.org,
	luben.tuikov@amd.com, donald.robson@imgtec.com,
	"Christian König" <christian.koenig@amd.com>,
	faith.ekstrand@collabora.com
Subject: Re: [Intel-xe] [PATCH v2 4/9] drm/sched: Split free_job into own work item
Date: Tue, 12 Sep 2023 15:34:41 +0200
Message-ID: <ZQBo8YbjFigHyKc5@pollux>
In-Reply-To: <20230912152705.70064f9f@collabora.com>

On Tue, Sep 12, 2023 at 03:27:05PM +0200, Boris Brezillon wrote:
> On Fri, 25 Aug 2023 15:45:49 +0200
> Christian König <christian.koenig@amd.com> wrote:
> 
> > >>>> I tried this patch with Nouveau and found a race condition:
> > >>>>
> > >>>> In drm_sched_run_job_work() the job is added to the pending_list via
> > >>>> drm_sched_job_begin(), then the run_job() callback is called and the scheduled
> > >>>> fence is signaled.
> > >>>>
> > >>>> However, in parallel drm_sched_get_cleanup_job() might be called from
> > >>>> drm_sched_free_job_work(), which picks the first job from the pending_list and
> > >>>> for the next job on the pending_list sets the scheduled fence's timestamp field.  
> > >> Well why can this happen in parallel? Either the work items are scheduled to
> > >> a single threaded work queue or you have protected the pending list with
> > >> some locks.
> > >>  
> > > Xe uses a single-threaded work queue, Nouveau does not (desired
> > > behavior).
> > >
> > > The list of pending jobs is protected by a lock (safe), the race is:
> > >
> > > add job to pending list
> > > run_job
> > > signal scheduled fence
> > >
> > > dequeue from pending list
> > > free_job
> > > update timestamp
> > >
> > > Once a job is on the pending list its timestamp can be accessed, which
> > > can blow up if the scheduled fence isn't signaled, or more specifically
> > > unless DMA_FENCE_FLAG_TIMESTAMP_BIT is set.  
> 
> I'm a bit lost. How can this lead to a NULL deref? Timestamp is a
> ktime_t embedded in dma_fence, and finished/scheduled are both
> dma_fence objects embedded in drm_sched_fence. So, unless
> {job,next_job}->s_fence is NULL, or {job,next_job} itself is NULL, I
> don't really see where the NULL deref is. If s_fence is NULL, that means
> drm_sched_job_init() wasn't called (unlikely to be detected that late),
> or ->free_job()/drm_sched_job_cleanup() was called while the job was
> still in the pending list. I don't really see a situation where job
> could be NULL to be honest.

I think the problem here was that a dma_fence's timestamp field is in a union
together with its cb_list list_head [1]. If the timestamp is set before the
fence is actually signaled, dma_fence_signal_timestamp_locked() will still walk
the cb_list to run the callbacks registered on this dma_fence. However, writing
the timestamp overwrites that list_head, since they share storage in the union,
so the list walk ends up dereferencing the timestamp value as a pointer.

[1] https://elixir.bootlin.com/linux/latest/source/include/linux/dma-fence.h#L87

> 
> While I agree that updating the timestamp before the fence has been
> flagged as signaled/timestamped is broken (timestamp will be
> overwritten when dma_fence_signal(scheduled) is called) I don't see a
> situation where it would cause a NULL/invalid pointer deref. So I
> suspect there's another race causing jobs to be cleaned up while
> they're still in the pending_list.
> 
> > 
> > Ah, that problem again. No that is actually quite harmless.
> > 
> > You just need to double check if the DMA_FENCE_FLAG_TIMESTAMP_BIT is 
> > already set and if it's not set don't do anything.
> 



Thread overview: 83+ messages
2023-08-11  2:31 [Intel-xe] [PATCH v2 0/9] DRM scheduler changes for Xe Matthew Brost
2023-08-11  2:31 ` [Intel-xe] [PATCH v2 1/9] drm/sched: Convert drm scheduler to use a work queue rather than kthread Matthew Brost
2023-08-16 11:30   ` Danilo Krummrich
2023-08-16 14:05     ` Christian König
2023-08-16 12:30       ` Danilo Krummrich
2023-08-16 14:38         ` Matthew Brost
2023-08-16 15:40           ` Danilo Krummrich
2023-08-16 14:59         ` Christian König
2023-08-16 16:33           ` Danilo Krummrich
2023-08-17  5:33             ` Christian König
2023-08-17 11:13               ` Danilo Krummrich
2023-08-17 13:35                 ` Christian König
2023-08-17 12:48                   ` Danilo Krummrich
2023-08-17 16:17                     ` Christian König
2023-08-18 11:58                       ` Danilo Krummrich
2023-08-21 14:07                         ` Christian König
2023-08-21 18:01                           ` Danilo Krummrich
2023-08-21 18:12                             ` Christian König
2023-08-21 19:07                               ` Danilo Krummrich
2023-08-22  9:35                                 ` Christian König
2023-08-21 19:46                               ` Faith Ekstrand
2023-08-22  9:51                                 ` Christian König
2023-08-22 16:55                                   ` Faith Ekstrand
2023-08-24 11:50                                     ` Bas Nieuwenhuizen
2023-08-18  3:08                 ` Matthew Brost
2023-08-18  5:40                   ` Christian König
2023-08-18 12:49                     ` Matthew Brost
2023-08-18 12:06                       ` Danilo Krummrich
2023-09-12 14:28                 ` Boris Brezillon
2023-09-12 14:33                   ` Danilo Krummrich
2023-09-12 14:49                     ` Boris Brezillon
2023-09-12 15:13                       ` Boris Brezillon
2023-09-12 16:58                         ` Danilo Krummrich
2023-09-12 16:52                       ` Danilo Krummrich
2023-08-11  2:31 ` [Intel-xe] [PATCH v2 2/9] drm/sched: Move schedule policy to scheduler / entity Matthew Brost
2023-08-11 21:43   ` Maira Canal
2023-08-12  3:20     ` Matthew Brost
2023-08-11  2:31 ` [Intel-xe] [PATCH v2 3/9] drm/sched: Add DRM_SCHED_POLICY_SINGLE_ENTITY scheduling policy Matthew Brost
2023-08-29 17:37   ` Danilo Krummrich
2023-09-05 11:10     ` Danilo Krummrich
2023-09-11 19:44       ` Matthew Brost
2023-08-11  2:31 ` [Intel-xe] [PATCH v2 4/9] drm/sched: Split free_job into own work item Matthew Brost
2023-08-17 13:39   ` Christian König
2023-08-17 17:54     ` Matthew Brost
2023-08-18  5:27       ` Christian König
2023-08-18 13:13         ` Matthew Brost
2023-08-21 13:17           ` Christian König
2023-08-23  3:27             ` Matthew Brost
2023-08-23  7:10               ` Christian König
2023-08-23 15:24                 ` Matthew Brost
2023-08-23 15:41                   ` Alex Deucher
2023-08-23 17:26                     ` Rodrigo Vivi
2023-08-23 23:12                       ` Matthew Brost
2023-08-24 11:44                         ` Christian König
2023-08-24 14:30                           ` Matthew Brost
2023-08-24 23:04   ` Danilo Krummrich
2023-08-25  2:58     ` Matthew Brost
2023-08-25  8:02       ` Christian König
2023-08-25 13:36         ` Matthew Brost
2023-08-25 13:45           ` Christian König
2023-09-12 10:13             ` Boris Brezillon
2023-09-12 10:46               ` Danilo Krummrich
2023-09-12 12:18                 ` Boris Brezillon
2023-09-12 12:56                   ` Danilo Krummrich
2023-09-12 13:52                     ` Boris Brezillon
2023-09-12 14:10                       ` Danilo Krummrich
2023-09-12 13:27             ` Boris Brezillon
2023-09-12 13:34               ` Danilo Krummrich [this message]
2023-09-12 13:53                 ` Boris Brezillon
2023-08-28 18:04   ` Danilo Krummrich
2023-08-28 18:41     ` Matthew Brost
2023-08-29  1:20       ` Danilo Krummrich
2023-08-11  2:31 ` [Intel-xe] [PATCH v2 5/9] drm/sched: Add generic scheduler message interface Matthew Brost
2023-08-11  2:31 ` [Intel-xe] [PATCH v2 6/9] drm/sched: Add drm_sched_start_timeout_unlocked helper Matthew Brost
2023-08-11  2:31 ` [Intel-xe] [PATCH v2 7/9] drm/sched: Start run wq before TDR in drm_sched_start Matthew Brost
2023-08-11  2:31 ` [Intel-xe] [PATCH v2 8/9] drm/sched: Submit job before starting TDR Matthew Brost
2023-08-11  2:31 ` [Intel-xe] [PATCH v2 9/9] drm/sched: Add helper to set TDR timeout Matthew Brost
2023-08-11  2:34 ` [Intel-xe] ✗ CI.Patch_applied: failure for DRM scheduler changes for Xe (rev2) Patchwork
2023-08-24  0:08 ` [Intel-xe] [PATCH v2 0/9] DRM scheduler changes for Xe Danilo Krummrich
2023-08-24  3:23   ` Matthew Brost
2023-08-24 14:51     ` Danilo Krummrich
2023-08-25  3:01 ` [Intel-xe] ✗ CI.Patch_applied: failure for DRM scheduler changes for Xe (rev3) Patchwork
2023-09-05 11:13 ` [Intel-xe] ✗ CI.Patch_applied: failure for DRM scheduler changes for Xe (rev4) Patchwork
