From: Nirmoy Das <nirmoy.das@linux.intel.com>
To: Matthew Auld <matthew.auld@intel.com>, intel-xe@lists.freedesktop.org
Subject: Re: [Intel-xe] [PATCH 4/4] drm/xe: add missing bulk_move reset
Date: Thu, 13 Jul 2023 18:05:02 +0200
Message-ID: <bc7b9c55-0726-0406-4e32-a9c7cd1d031a@linux.intel.com>
In-Reply-To: <20230713094125.326709-8-matthew.auld@intel.com>
Hi Matt,
On 7/13/2023 11:41 AM, Matthew Auld wrote:
> It looks like bulk_move is set during object construction, but is only
> removed on object close, however in various places we might not yet have
> an actual fd to close, like on the error paths for the gem_create ioctl,
> and also one internal user for the evict_test_run_gt() selftest. Try to
> handle those cases by manually resetting the bulk_move. This should
> prevent triggering:
>
> WARNING: CPU: 7 PID: 8252 at drivers/gpu/drm/ttm/ttm_bo.c:327
> ttm_bo_release+0x25e/0x2a0 [ttm]
>
> Signed-off-by: Matthew Auld <matthew.auld@intel.com>
> Cc: Matthew Brost <matthew.brost@intel.com>
> ---
> drivers/gpu/drm/xe/tests/xe_bo.c | 7 +++++++
> drivers/gpu/drm/xe/xe_bo.c | 27 ++++++++++++++++++---------
> drivers/gpu/drm/xe/xe_bo.h | 6 ++++++
> 3 files changed, 31 insertions(+), 9 deletions(-)
>
> diff --git a/drivers/gpu/drm/xe/tests/xe_bo.c b/drivers/gpu/drm/xe/tests/xe_bo.c
> index 21c6dfef8dc7..1a5b48d60c80 100644
> --- a/drivers/gpu/drm/xe/tests/xe_bo.c
> +++ b/drivers/gpu/drm/xe/tests/xe_bo.c
> @@ -285,6 +285,10 @@ static int evict_test_run_gt(struct xe_device *xe, struct xe_gt *gt, struct kuni
> xe_bo_unlock(external, &ww);
>
> xe_bo_put(external);
> +
> + xe_bo_lock(bo, &ww, 0, false);
> + __xe_bo_unset_bulk_move(bo);
> + xe_bo_unlock(bo, &ww);
> xe_bo_put(bo);
> continue;
>
> @@ -295,6 +299,9 @@ static int evict_test_run_gt(struct xe_device *xe, struct xe_gt *gt, struct kuni
> cleanup_external:
> xe_bo_put(external);
> cleanup_bo:
> + xe_bo_lock(bo, &ww, 0, false);
> + __xe_bo_unset_bulk_move(bo);
> + xe_bo_unlock(bo, &ww);
> xe_bo_put(bo);
> break;
> }
> diff --git a/drivers/gpu/drm/xe/xe_bo.c b/drivers/gpu/drm/xe/xe_bo.c
> index 6353afa8d846..2ce09ae1d614 100644
> --- a/drivers/gpu/drm/xe/xe_bo.c
> +++ b/drivers/gpu/drm/xe/xe_bo.c
> @@ -1317,6 +1317,8 @@ xe_bo_create_locked_range(struct xe_device *xe,
> return bo;
>
> err_unlock_put_bo:
> + if (vm)
We only create the bulk move when "vm && !xe_vm_in_fault_mode(vm) &&
flags & XE_BO_CREATE_USER_BIT", so I think we should rather check for
that full condition (or for bo->ttm.bulk_move), or even just call
__xe_bo_unset_bulk_move() unconditionally.
Otherwise, Reviewed-by: Nirmoy Das <nirmoy.das@intel.com>
Regards,
Nirmoy
> + __xe_bo_unset_bulk_move(bo);
> xe_bo_unlock_vm_held(bo);
> xe_bo_put(bo);
> return ERR_PTR(err);
> @@ -1760,22 +1762,29 @@ int xe_gem_create_ioctl(struct drm_device *dev, void *data,
> bo_flags |= args->flags << (ffs(XE_BO_CREATE_SYSTEM_BIT) - 1);
> bo = xe_bo_create(xe, NULL, vm, args->size, ttm_bo_type_device,
> bo_flags);
> - if (vm) {
> - xe_vm_unlock(vm, &ww);
> - xe_vm_put(vm);
> + if (IS_ERR(bo)) {
> + err = PTR_ERR(bo);
> + goto out_vm;
> }
>
> - if (IS_ERR(bo))
> - return PTR_ERR(bo);
> -
> err = drm_gem_handle_create(file, &bo->ttm.base, &handle);
> - xe_bo_put(bo);
> if (err)
> - return err;
> + goto out_bulk;
>
> args->handle = handle;
> + goto out_put;
>
> - return 0;
> +out_bulk:
> + if (vm)
> + __xe_bo_unset_bulk_move(bo);
> +out_put:
> + xe_bo_put(bo);
> +out_vm:
> + if (vm) {
> + xe_vm_unlock(vm, &ww);
> + xe_vm_put(vm);
> + }
> + return err;
> }
>
> int xe_gem_mmap_offset_ioctl(struct drm_device *dev, void *data,
> diff --git a/drivers/gpu/drm/xe/xe_bo.h b/drivers/gpu/drm/xe/xe_bo.h
> index 08ca1d06bf77..72c68facd481 100644
> --- a/drivers/gpu/drm/xe/xe_bo.h
> +++ b/drivers/gpu/drm/xe/xe_bo.h
> @@ -135,6 +135,12 @@ static inline void xe_bo_put(struct xe_bo *bo)
> drm_gem_object_put(&bo->ttm.base);
> }
>
> +static inline void __xe_bo_unset_bulk_move(struct xe_bo *bo)
> +{
> + if (bo)
> + ttm_bo_set_bulk_move(&bo->ttm, NULL);
> +}
> +
> static inline void xe_bo_assert_held(struct xe_bo *bo)
> {
> if (bo)