From: Jianxun Zhang <jianxun.zhang@intel.com>
To: Jonathan Cavitt <jonathan.cavitt@intel.com>,
<intel-xe@lists.freedesktop.org>
Cc: <saurabhg.gupta@intel.com>, <alex.zuo@intel.com>,
<joonas.lahtinen@linux.intel.com>, <matthew.brost@intel.com>,
<shuicheng.lin@intel.com>, <dri-devel@lists.freedesktop.org>,
<Michal.Wajdeczko@intel.com>, <michal.mrozek@intel.com>,
<raag.jadav@intel.com>, <john.c.harrison@intel.com>
Subject: Re: [PATCH v13 6/6] drm/xe/xe_vm: Implement xe_vm_get_property_ioctl
Date: Tue, 25 Mar 2025 20:16:40 -0700 [thread overview]
Message-ID: <05b7f08e-c6af-4a4c-beb3-1f90e1ac05f2@intel.com> (raw)
In-Reply-To: <20250325222724.93226-7-jonathan.cavitt@intel.com>
On 3/25/25 15:27, Jonathan Cavitt wrote:
> Add support for userspace to request a list of observed faults
> from a specified VM.
>
> v2:
> - Only allow querying of failed pagefaults (Matt Brost)
>
> v3:
> - Remove unnecessary size parameter from helper function, as it
> is a property of the arguments. (jcavitt)
> - Remove unnecessary copy_from_user (Jianxun)
> - Set address_precision to 1 (Jianxun)
> - Report max size instead of dynamic size for memory allocation
> purposes. Total memory usage is reported separately.
>
> v4:
> - Return int from xe_vm_get_property_size (Shuicheng)
> - Fix memory leak (Shuicheng)
> - Remove unnecessary size variable (jcavitt)
>
> v5:
> - Rename ioctl to xe_vm_get_faults_ioctl (jcavitt)
> - Update fill_property_pfs to eliminate need for kzalloc (Jianxun)
>
> v6:
> - Repair and move fill_faults break condition (Dan Carpenter)
> - Free vm after use (jcavitt)
> - Combine assertions (jcavitt)
> - Expand size check in xe_vm_get_faults_ioctl (jcavitt)
> - Remove return mask from fill_faults, as return is already -EFAULT or 0
> (jcavitt)
>
> v7:
> - Revert back to using xe_vm_get_property_ioctl
> - Apply better copy_to_user logic (jcavitt)
>
> v8:
> - Fix and clean up error value handling in ioctl (jcavitt)
> - Reapply return mask for fill_faults (jcavitt)
>
> v9:
> - Future-proof size logic for zero-size properties (jcavitt)
> - Add access and fault types (Jianxun)
> - Remove address type (Jianxun)
>
> v10:
> - Remove unnecessary switch case logic (Raag)
> - Compress size get, size validation, and property fill functions into a
> single helper function (jcavitt)
> - Assert valid size (jcavitt)
>
> v11:
> - Remove unnecessary else condition
> - Correct backwards helper function size logic (jcavitt)
>
> Signed-off-by: Jonathan Cavitt <jonathan.cavitt@intel.com>
> Suggested-by: Matthew Brost <matthew.brost@intel.com>
> Cc: Jianxun Zhang <jianxun.zhang@intel.com>
> Cc: Shuicheng Lin <shuicheng.lin@intel.com>
> Cc: Raag Jadav <raag.jadav@intel.com>
> ---
> drivers/gpu/drm/xe/xe_device.c | 3 ++
> drivers/gpu/drm/xe/xe_vm.c | 88 ++++++++++++++++++++++++++++++++++
> drivers/gpu/drm/xe/xe_vm.h | 2 +
> 3 files changed, 93 insertions(+)
>
> diff --git a/drivers/gpu/drm/xe/xe_device.c b/drivers/gpu/drm/xe/xe_device.c
> index 1ffb7d1f6be6..02f84a855502 100644
> --- a/drivers/gpu/drm/xe/xe_device.c
> +++ b/drivers/gpu/drm/xe/xe_device.c
> @@ -195,6 +195,9 @@ static const struct drm_ioctl_desc xe_ioctls[] = {
> DRM_IOCTL_DEF_DRV(XE_WAIT_USER_FENCE, xe_wait_user_fence_ioctl,
> DRM_RENDER_ALLOW),
> DRM_IOCTL_DEF_DRV(XE_OBSERVATION, xe_observation_ioctl, DRM_RENDER_ALLOW),
> + DRM_IOCTL_DEF_DRV(XE_VM_GET_PROPERTY, xe_vm_get_property_ioctl,
> + DRM_RENDER_ALLOW),
> +
> };
>
> static long xe_drm_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
> diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
> index 2c89af125a90..d57aa24a5de8 100644
> --- a/drivers/gpu/drm/xe/xe_vm.c
> +++ b/drivers/gpu/drm/xe/xe_vm.c
> @@ -3553,6 +3553,94 @@ int xe_vm_bind_ioctl(struct drm_device *dev, void *data, struct drm_file *file)
> return err;
> }
>
> +static int fill_faults(struct xe_vm *vm,
> + struct drm_xe_vm_get_property *args)
> +{
> + struct xe_vm_fault __user *usr_ptr = u64_to_user_ptr(args->data);
> + struct xe_vm_fault store = { 0 };
> + struct xe_vm_fault_entry *entry;
> + int ret = 0, i = 0, count, entry_size;
> +
> + entry_size = sizeof(struct xe_vm_fault);
> + count = args->size / entry_size;
> +
> + spin_lock(&vm->faults.lock);
> + list_for_each_entry(entry, &vm->faults.list, list) {
> + if (i++ == count)
> + break;
> +
> + memset(&store, 0, entry_size);
> +
> + store.address = entry->address;
> + store.address_precision = entry->address_precision;
> + store.access_type = entry->access_type;
> + store.fault_type = entry->fault_type;
> + store.fault_level = entry->fault_level;
> + store.engine_class = xe_to_user_engine_class[entry->engine_class];
> + store.engine_instance = entry->engine_instance;
> +
> + ret = copy_to_user(usr_ptr, &store, entry_size);
> + if (ret)
> + break;
> +
> + usr_ptr++;
> + }
> + spin_unlock(&vm->faults.lock);
> +
> + return ret ? -EFAULT : 0;
> +}
> +
> +static int xe_vm_get_property_helper(struct xe_vm *vm,
> + struct drm_xe_vm_get_property *args)
> +{
> + int size;
> +
> + switch (args->property) {
> + case DRM_XE_VM_GET_PROPERTY_FAULTS:
> + spin_lock(&vm->faults.lock);
> + size = size_mul(sizeof(struct xe_vm_fault), vm->faults.len);
> + spin_unlock(&vm->faults.lock);
> +
> + if (args->size)
> + /*
> + * Number of faults may increase between calls to
> + * xe_vm_get_property_ioctl, so just report the
> + * number of faults the user requests if it's less
> + * than or equal to the number of faults in the VM
> + * fault array.
> + */
> + return args->size <= size ? fill_faults(vm, args) : -EINVAL;
> +
> + args->size = size;
> + return 0;
> + }
> + return -EINVAL;
> +}
> +
> +int xe_vm_get_property_ioctl(struct drm_device *drm, void *data,
> + struct drm_file *file)
> +{
I'm placing the test result here because this function shows up in a
lock bug in dmesg. The good news is that fault saving has started
running. The issue also occurs in the previous revision (patchwork rev
13), but I don't see an obvious mismatch of lock primitives there.
watchdog: BUG: soft lockup - CPU#4 stuck for 26s! [deqp-vk:4031]
...
Backtrace:
[ 227.950361] _raw_spin_lock+0x3f/0x60
[ 227.950364] xe_vm_get_property_ioctl+0x11f/0x2e0 [xe]
[ 227.950530] ? __pfx_xe_vm_get_property_ioctl+0x10/0x10 [xe]
[ 227.950664] drm_ioctl_kernel+0xaf/0x110
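One pattern that would sidestep a lockup like this is to avoid doing the
user copy while holding the spinlock at all: snapshot the fault list into
a temporary buffer under the lock, release the lock, and only then copy
the buffer out to userspace (copy_to_user() can fault and sleep, which is
not allowed in atomic context). Below is a minimal userspace sketch of
that pattern, not the actual xe driver code: the names (fault_entry,
fault_list, snapshot_faults) are illustrative, and a pthread mutex stands
in for vm->faults.lock. In the kernel, the allocation under the lock
would need GFP_ATOMIC or would have to be hoisted out of the critical
section.

```c
#include <pthread.h>
#include <stdlib.h>

/* Illustrative stand-in for the driver's fault record; this is not
 * the real struct xe_vm_fault layout. */
struct fault_entry {
	unsigned long address;
	int fault_level;
	struct fault_entry *next;
};

struct fault_list {
	pthread_mutex_t lock;	/* stands in for vm->faults.lock */
	struct fault_entry *head;
	int len;
};

/*
 * Copy the list into a private buffer while holding the lock, then
 * return the buffer so the caller can perform the (possibly faulting,
 * possibly sleeping) copy to userspace with the lock released.
 * Returns NULL and sets *count to 0 on allocation failure.
 */
static struct fault_entry *snapshot_faults(struct fault_list *fl, int *count)
{
	struct fault_entry *buf, *e;
	int i = 0;

	pthread_mutex_lock(&fl->lock);
	buf = calloc(fl->len ? fl->len : 1, sizeof(*buf));
	if (!buf) {
		pthread_mutex_unlock(&fl->lock);
		*count = 0;
		return NULL;
	}
	for (e = fl->head; e && i < fl->len; e = e->next)
		buf[i++] = *e;	/* struct copy under the lock */
	pthread_mutex_unlock(&fl->lock);

	*count = i;
	return buf;	/* copy_to_user(usr_ptr, buf, ...) would go here */
}
```

The caller frees the snapshot after the user copy; the lock is only held
for the in-kernel memcpy-sized work, never across a page fault.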
> + struct xe_device *xe = to_xe_device(drm);
> + struct xe_file *xef = to_xe_file(file);
> + struct drm_xe_vm_get_property *args = data;
> + struct xe_vm *vm;
> + int ret = 0;
> +
> + if (XE_IOCTL_DBG(xe, args->reserved[0] || args->reserved[1]))
> + return -EINVAL;
> + if (XE_IOCTL_DBG(xe, args->size < 0))
> + return -EINVAL;
> +
> + vm = xe_vm_lookup(xef, args->vm_id);
> + if (XE_IOCTL_DBG(xe, !vm))
> + return -ENOENT;
> +
> + ret = xe_vm_get_property_helper(vm, args);
> +
> + xe_vm_put(vm);
> + return ret;
> +}
> +
> /**
> * xe_vm_bind_kernel_bo - bind a kernel BO to a VM
> * @vm: VM to bind the BO to
> diff --git a/drivers/gpu/drm/xe/xe_vm.h b/drivers/gpu/drm/xe/xe_vm.h
> index 9bd7e93824da..63ec22458e04 100644
> --- a/drivers/gpu/drm/xe/xe_vm.h
> +++ b/drivers/gpu/drm/xe/xe_vm.h
> @@ -196,6 +196,8 @@ int xe_vm_destroy_ioctl(struct drm_device *dev, void *data,
> struct drm_file *file);
> int xe_vm_bind_ioctl(struct drm_device *dev, void *data,
> struct drm_file *file);
> +int xe_vm_get_property_ioctl(struct drm_device *dev, void *data,
> + struct drm_file *file);
>
> void xe_vm_close_and_put(struct xe_vm *vm);
>