* [PATCH] drm/xe/vm: prevent UAF in rebind_work_func()
From: Matthew Auld @ 2024-04-17 16:31 UTC (permalink / raw)
To: intel-xe; +Cc: Matthew Brost, stable
We flush the rebind worker during the vm close phase; however, in places
like preempt_fence_work_func() we seem to queue the rebind worker
without first checking whether the vm has already been closed. The
concern here is the vm being closed, with the worker flushed, but then
the worker being rearmed later, which looks like a potential UAF, since
there is no actual refcounting to track the queued worker. To ensure
this can't happen, prevent queueing the rebind worker once the vm has
been closed.
Fixes: dd08ebf6c352 ("drm/xe: Introduce a new DRM driver for Intel GPUs")
Closes: https://gitlab.freedesktop.org/drm/xe/kernel/-/issues/1591
Closes: https://gitlab.freedesktop.org/drm/xe/kernel/-/issues/1304
Closes: https://gitlab.freedesktop.org/drm/xe/kernel/-/issues/1249
Signed-off-by: Matthew Auld <matthew.auld@intel.com>
Cc: Matthew Brost <matthew.brost@intel.com>
Cc: <stable@vger.kernel.org> # v6.8+
---
drivers/gpu/drm/xe/xe_pt.c | 2 +-
drivers/gpu/drm/xe/xe_vm.h | 17 ++++++++++++++---
2 files changed, 15 insertions(+), 4 deletions(-)
diff --git a/drivers/gpu/drm/xe/xe_pt.c b/drivers/gpu/drm/xe/xe_pt.c
index 5b7930f46cf3..e21461be904f 100644
--- a/drivers/gpu/drm/xe/xe_pt.c
+++ b/drivers/gpu/drm/xe/xe_pt.c
@@ -1327,7 +1327,7 @@ __xe_pt_bind_vma(struct xe_tile *tile, struct xe_vma *vma, struct xe_exec_queue
}
if (!rebind && last_munmap_rebind &&
xe_vm_in_preempt_fence_mode(vm))
- xe_vm_queue_rebind_worker(vm);
+ xe_vm_queue_rebind_worker_locked(vm);
} else {
kfree(rfence);
kfree(ifence);
diff --git a/drivers/gpu/drm/xe/xe_vm.h b/drivers/gpu/drm/xe/xe_vm.h
index 306cd0934a19..8420fbf19f6d 100644
--- a/drivers/gpu/drm/xe/xe_vm.h
+++ b/drivers/gpu/drm/xe/xe_vm.h
@@ -211,10 +211,20 @@ int xe_vm_rebind(struct xe_vm *vm, bool rebind_worker);
int xe_vm_invalidate_vma(struct xe_vma *vma);
-static inline void xe_vm_queue_rebind_worker(struct xe_vm *vm)
+static inline void xe_vm_queue_rebind_worker_locked(struct xe_vm *vm)
{
xe_assert(vm->xe, xe_vm_in_preempt_fence_mode(vm));
- queue_work(vm->xe->ordered_wq, &vm->preempt.rebind_work);
+ lockdep_assert_held(&vm->lock);
+
+ if (!xe_vm_is_closed(vm))
+ queue_work(vm->xe->ordered_wq, &vm->preempt.rebind_work);
+}
+
+static inline void xe_vm_queue_rebind_worker(struct xe_vm *vm)
+{
+ down_read(&vm->lock);
+ xe_vm_queue_rebind_worker_locked(vm);
+ up_read(&vm->lock);
}
/**
@@ -225,12 +235,13 @@ static inline void xe_vm_queue_rebind_worker(struct xe_vm *vm)
* If the rebind functionality on a compute vm was disabled due
* to nothing to execute. Reactivate it and run the rebind worker.
* This function should be called after submitting a batch to a compute vm.
+ *
*/
static inline void xe_vm_reactivate_rebind(struct xe_vm *vm)
{
if (xe_vm_in_preempt_fence_mode(vm) && vm->preempt.rebind_deactivated) {
vm->preempt.rebind_deactivated = false;
- xe_vm_queue_rebind_worker(vm);
+ xe_vm_queue_rebind_worker_locked(vm);
}
}
--
2.44.0
* Re: [PATCH] drm/xe/vm: prevent UAF in rebind_work_func()
From: Matthew Brost @ 2024-04-17 18:01 UTC (permalink / raw)
To: Matthew Auld; +Cc: intel-xe, stable
On Wed, Apr 17, 2024 at 05:31:08PM +0100, Matthew Auld wrote:
> We flush the rebind worker during the vm close phase; however, in places
> like preempt_fence_work_func() we seem to queue the rebind worker
> without first checking whether the vm has already been closed. The
> concern here is the vm being closed, with the worker flushed, but then
> the worker being rearmed later, which looks like a potential UAF, since
> there is no actual refcounting to track the queued worker. To ensure
> this can't happen, prevent queueing the rebind worker once the vm has
> been closed.
>
> Fixes: dd08ebf6c352 ("drm/xe: Introduce a new DRM driver for Intel GPUs")
> Closes: https://gitlab.freedesktop.org/drm/xe/kernel/-/issues/1591
> Closes: https://gitlab.freedesktop.org/drm/xe/kernel/-/issues/1304
> Closes: https://gitlab.freedesktop.org/drm/xe/kernel/-/issues/1249
> Signed-off-by: Matthew Auld <matthew.auld@intel.com>
> Cc: Matthew Brost <matthew.brost@intel.com>
> Cc: <stable@vger.kernel.org> # v6.8+
> ---
> drivers/gpu/drm/xe/xe_pt.c | 2 +-
> drivers/gpu/drm/xe/xe_vm.h | 17 ++++++++++++++---
> 2 files changed, 15 insertions(+), 4 deletions(-)
>
> diff --git a/drivers/gpu/drm/xe/xe_pt.c b/drivers/gpu/drm/xe/xe_pt.c
> index 5b7930f46cf3..e21461be904f 100644
> --- a/drivers/gpu/drm/xe/xe_pt.c
> +++ b/drivers/gpu/drm/xe/xe_pt.c
> @@ -1327,7 +1327,7 @@ __xe_pt_bind_vma(struct xe_tile *tile, struct xe_vma *vma, struct xe_exec_queue
> }
> if (!rebind && last_munmap_rebind &&
> xe_vm_in_preempt_fence_mode(vm))
> - xe_vm_queue_rebind_worker(vm);
> + xe_vm_queue_rebind_worker_locked(vm);
> } else {
> kfree(rfence);
> kfree(ifence);
> diff --git a/drivers/gpu/drm/xe/xe_vm.h b/drivers/gpu/drm/xe/xe_vm.h
> index 306cd0934a19..8420fbf19f6d 100644
> --- a/drivers/gpu/drm/xe/xe_vm.h
> +++ b/drivers/gpu/drm/xe/xe_vm.h
> @@ -211,10 +211,20 @@ int xe_vm_rebind(struct xe_vm *vm, bool rebind_worker);
>
> int xe_vm_invalidate_vma(struct xe_vma *vma);
>
> -static inline void xe_vm_queue_rebind_worker(struct xe_vm *vm)
> +static inline void xe_vm_queue_rebind_worker_locked(struct xe_vm *vm)
> {
> xe_assert(vm->xe, xe_vm_in_preempt_fence_mode(vm));
> - queue_work(vm->xe->ordered_wq, &vm->preempt.rebind_work);
> + lockdep_assert_held(&vm->lock);
> +
> + if (!xe_vm_is_closed(vm))
xe_vm_is_closed_or_banned
Otherwise LGTM. With the above changed:
Reviewed-by: Matthew Brost <matthew.brost@intel.com>
> + queue_work(vm->xe->ordered_wq, &vm->preempt.rebind_work);
> +}
> +
> +static inline void xe_vm_queue_rebind_worker(struct xe_vm *vm)
> +{
> + down_read(&vm->lock);
> + xe_vm_queue_rebind_worker_locked(vm);
> + up_read(&vm->lock);
> }
>
> /**
> @@ -225,12 +235,13 @@ static inline void xe_vm_queue_rebind_worker(struct xe_vm *vm)
> * If the rebind functionality on a compute vm was disabled due
> * to nothing to execute. Reactivate it and run the rebind worker.
> * This function should be called after submitting a batch to a compute vm.
> + *
> */
> static inline void xe_vm_reactivate_rebind(struct xe_vm *vm)
> {
> if (xe_vm_in_preempt_fence_mode(vm) && vm->preempt.rebind_deactivated) {
> vm->preempt.rebind_deactivated = false;
> - xe_vm_queue_rebind_worker(vm);
> + xe_vm_queue_rebind_worker_locked(vm);
> }
> }
>
> --
> 2.44.0
>
* Re: [PATCH] drm/xe/vm: prevent UAF in rebind_work_func()
From: Matthew Brost @ 2024-04-17 18:49 UTC (permalink / raw)
To: Matthew Auld; +Cc: intel-xe, stable
On Wed, Apr 17, 2024 at 06:01:04PM +0000, Matthew Brost wrote:
> On Wed, Apr 17, 2024 at 05:31:08PM +0100, Matthew Auld wrote:
> > We flush the rebind worker during the vm close phase; however, in places
> > like preempt_fence_work_func() we seem to queue the rebind worker
> > without first checking whether the vm has already been closed. The
> > concern here is the vm being closed, with the worker flushed, but then
> > the worker being rearmed later, which looks like a potential UAF, since
> > there is no actual refcounting to track the queued worker. To ensure
> > this can't happen, prevent queueing the rebind worker once the vm has
> > been closed.
> >
> > Fixes: dd08ebf6c352 ("drm/xe: Introduce a new DRM driver for Intel GPUs")
> > Closes: https://gitlab.freedesktop.org/drm/xe/kernel/-/issues/1591
> > Closes: https://gitlab.freedesktop.org/drm/xe/kernel/-/issues/1304
> > Closes: https://gitlab.freedesktop.org/drm/xe/kernel/-/issues/1249
> > Signed-off-by: Matthew Auld <matthew.auld@intel.com>
> > Cc: Matthew Brost <matthew.brost@intel.com>
> > Cc: <stable@vger.kernel.org> # v6.8+
> > ---
> > drivers/gpu/drm/xe/xe_pt.c | 2 +-
> > drivers/gpu/drm/xe/xe_vm.h | 17 ++++++++++++++---
> > 2 files changed, 15 insertions(+), 4 deletions(-)
> >
> > diff --git a/drivers/gpu/drm/xe/xe_pt.c b/drivers/gpu/drm/xe/xe_pt.c
> > index 5b7930f46cf3..e21461be904f 100644
> > --- a/drivers/gpu/drm/xe/xe_pt.c
> > +++ b/drivers/gpu/drm/xe/xe_pt.c
> > @@ -1327,7 +1327,7 @@ __xe_pt_bind_vma(struct xe_tile *tile, struct xe_vma *vma, struct xe_exec_queue
> > }
> > if (!rebind && last_munmap_rebind &&
> > xe_vm_in_preempt_fence_mode(vm))
> > - xe_vm_queue_rebind_worker(vm);
> > + xe_vm_queue_rebind_worker_locked(vm);
> > } else {
> > kfree(rfence);
> > kfree(ifence);
> > diff --git a/drivers/gpu/drm/xe/xe_vm.h b/drivers/gpu/drm/xe/xe_vm.h
> > index 306cd0934a19..8420fbf19f6d 100644
> > --- a/drivers/gpu/drm/xe/xe_vm.h
> > +++ b/drivers/gpu/drm/xe/xe_vm.h
> > @@ -211,10 +211,20 @@ int xe_vm_rebind(struct xe_vm *vm, bool rebind_worker);
> >
> > int xe_vm_invalidate_vma(struct xe_vma *vma);
> >
> > -static inline void xe_vm_queue_rebind_worker(struct xe_vm *vm)
> > +static inline void xe_vm_queue_rebind_worker_locked(struct xe_vm *vm)
> > {
> > xe_assert(vm->xe, xe_vm_in_preempt_fence_mode(vm));
> > - queue_work(vm->xe->ordered_wq, &vm->preempt.rebind_work);
> > + lockdep_assert_held(&vm->lock);
> > +
> > + if (!xe_vm_is_closed(vm))
>
> xe_vm_is_closed_or_banned
>
> Otherwise LGTM. With the above changed:
> Reviewed-by: Matthew Brost <matthew.brost@intel.com>
Revoking RB, should have looked at CI first.
This doesn't work as it deadlocks [1]:
- The VMA invalidate notifier waits on the VM dma-resv (preempt fences)
- Preempt fences are signaled via a worker which queues the rebind worker
  (taking the VM read lock); both also share the ordered WQ, so they run
  serially
- The preempt rebind worker waits on the VMA invalidate under the VM
  write lock
Going to have to rethink this. Aside from this new bug, it is actually
rather dangerous too, as taking the VM lock in any path that blocks the
signaling of a fence can seemingly deadlock. Wondering if we can use
lockdep to catch bugs like this?
In the middle of a few other things right now so I haven't thought a ton
about this, but open to ideas.
Matt
[1] https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-132571v1/bat-atsm-2/igt@xe_exec_threads@threads-mixed-userptr-invalidate.html
>
> > + queue_work(vm->xe->ordered_wq, &vm->preempt.rebind_work);
> > +}
> > +
> > +static inline void xe_vm_queue_rebind_worker(struct xe_vm *vm)
> > +{
> > + down_read(&vm->lock);
> > + xe_vm_queue_rebind_worker_locked(vm);
> > + up_read(&vm->lock);
> > }
> >
> > /**
> > @@ -225,12 +235,13 @@ static inline void xe_vm_queue_rebind_worker(struct xe_vm *vm)
> > * If the rebind functionality on a compute vm was disabled due
> > * to nothing to execute. Reactivate it and run the rebind worker.
> > * This function should be called after submitting a batch to a compute vm.
> > + *
> > */
> > static inline void xe_vm_reactivate_rebind(struct xe_vm *vm)
> > {
> > if (xe_vm_in_preempt_fence_mode(vm) && vm->preempt.rebind_deactivated) {
> > vm->preempt.rebind_deactivated = false;
> > - xe_vm_queue_rebind_worker(vm);
> > + xe_vm_queue_rebind_worker_locked(vm);
> > }
> > }
> >
> > --
> > 2.44.0
> >