Intel-XE Archive on lore.kernel.org
From: Jianxun Zhang <jianxun.zhang@intel.com>
To: "Cavitt, Jonathan" <jonathan.cavitt@intel.com>,
	"intel-xe@lists.freedesktop.org" <intel-xe@lists.freedesktop.org>
Cc: "Gupta, saurabhg" <saurabhg.gupta@intel.com>,
	"Zuo, Alex" <alex.zuo@intel.com>,
	"joonas.lahtinen@linux.intel.com"
	<joonas.lahtinen@linux.intel.com>,
	"Brost, Matthew" <matthew.brost@intel.com>,
	"Lin, Shuicheng" <shuicheng.lin@intel.com>,
	"dri-devel@lists.freedesktop.org"
	<dri-devel@lists.freedesktop.org>,
	"Wajdeczko, Michal" <Michal.Wajdeczko@intel.com>,
	"michal.mzorek@intel.com" <michal.mzorek@intel.com>
Subject: Re: [PATCH v8 6/6] drm/xe/xe_vm: Implement xe_vm_get_property_ioctl
Date: Tue, 18 Mar 2025 13:21:37 -0700	[thread overview]
Message-ID: <c1fa3db3-662e-446f-8e37-f2f637536b44@intel.com> (raw)
In-Reply-To: <CH0PR11MB54448F266357E1D5C1B34108E5DE2@CH0PR11MB5444.namprd11.prod.outlook.com>



On 3/18/25 11:12, Cavitt, Jonathan wrote:
> -----Original Message-----
> From: Zhang, Jianxun <jianxun.zhang@intel.com>
> Sent: Tuesday, March 18, 2025 10:48 AM
> To: Cavitt, Jonathan <jonathan.cavitt@intel.com>; intel-xe@lists.freedesktop.org
> Cc: Gupta, saurabhg <saurabhg.gupta@intel.com>; Zuo, Alex <alex.zuo@intel.com>; joonas.lahtinen@linux.intel.com; Brost, Matthew <matthew.brost@intel.com>; Lin, Shuicheng <shuicheng.lin@intel.com>; dri-devel@lists.freedesktop.org; Wajdeczko, Michal <Michal.Wajdeczko@intel.com>; michal.mzorek@intel.com
> Subject: Re: [PATCH v8 6/6] drm/xe/xe_vm: Implement xe_vm_get_property_ioctl
>>
>> On 3/13/25 11:34, Jonathan Cavitt wrote:
>>> Add support for userspace to request a list of observed faults
>>> from a specified VM.
>>>
>>> v2:
>>> - Only allow querying of failed pagefaults (Matt Brost)
>>>
>>> v3:
>>> - Remove unnecessary size parameter from helper function, as it
>>>     is a property of the arguments. (jcavitt)
>>> - Remove unnecessary copy_from_user (Jianxun)
>>> - Set address_precision to 1 (Jianxun)
>>> - Report max size instead of dynamic size for memory allocation
>>>     purposes.  Total memory usage is reported separately.
>>>
>>> v4:
>>> - Return int from xe_vm_get_property_size (Shuicheng)
>>> - Fix memory leak (Shuicheng)
>>> - Remove unnecessary size variable (jcavitt)
>>>
>>> v5:
>>> - Rename ioctl to xe_vm_get_faults_ioctl (jcavitt)
>>> - Update fill_property_pfs to eliminate need for kzalloc (Jianxun)
>>>
>>> v6:
>>> - Repair and move fill_faults break condition (Dan Carpenter)
>>> - Free vm after use (jcavitt)
>>> - Combine assertions (jcavitt)
>>> - Expand size check in xe_vm_get_faults_ioctl (jcavitt)
>>> - Remove return mask from fill_faults, as return is already -EFAULT or 0
>>>     (jcavitt)
>>>
>>> v7:
>>> - Revert back to using xe_vm_get_property_ioctl
>>> - Apply better copy_to_user logic (jcavitt)
>>>
>>> Signed-off-by: Jonathan Cavitt <jonathan.cavitt@intel.com>
>>> Suggested-by: Matthew Brost <matthew.brost@intel.com>
>>> Cc: Jianxun Zhang <jianxun.zhang@intel.com>
>>> Cc: Shuicheng Lin <shuicheng.lin@intel.com>
>>> ---
>>>    drivers/gpu/drm/xe/xe_device.c |   3 +
>>>    drivers/gpu/drm/xe/xe_vm.c     | 134 +++++++++++++++++++++++++++++++++
>>>    drivers/gpu/drm/xe/xe_vm.h     |   2 +
>>>    3 files changed, 139 insertions(+)
>>>
>>> diff --git a/drivers/gpu/drm/xe/xe_device.c b/drivers/gpu/drm/xe/xe_device.c
>>> index b2f656b2a563..74e510cb0e47 100644
>>> --- a/drivers/gpu/drm/xe/xe_device.c
>>> +++ b/drivers/gpu/drm/xe/xe_device.c
>>> @@ -194,6 +194,9 @@ static const struct drm_ioctl_desc xe_ioctls[] = {
>>>    	DRM_IOCTL_DEF_DRV(XE_WAIT_USER_FENCE, xe_wait_user_fence_ioctl,
>>>    			  DRM_RENDER_ALLOW),
>>>    	DRM_IOCTL_DEF_DRV(XE_OBSERVATION, xe_observation_ioctl, DRM_RENDER_ALLOW),
>>> +	DRM_IOCTL_DEF_DRV(XE_VM_GET_PROPERTY, xe_vm_get_property_ioctl,
>>> +			  DRM_RENDER_ALLOW),
>>> +
>>>    };
>>>    
>>>    static long xe_drm_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
>>> diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
>>> index 067a9cedcad9..521f0032d9e2 100644
>>> --- a/drivers/gpu/drm/xe/xe_vm.c
>>> +++ b/drivers/gpu/drm/xe/xe_vm.c
>>> @@ -42,6 +42,14 @@
>>>    #include "xe_wa.h"
>>>    #include "xe_hmm.h"
>>>    
>>> +static const u16 xe_to_user_engine_class[] = {
>>> +	[XE_ENGINE_CLASS_RENDER] = DRM_XE_ENGINE_CLASS_RENDER,
>>> +	[XE_ENGINE_CLASS_COPY] = DRM_XE_ENGINE_CLASS_COPY,
>>> +	[XE_ENGINE_CLASS_VIDEO_DECODE] = DRM_XE_ENGINE_CLASS_VIDEO_DECODE,
>>> +	[XE_ENGINE_CLASS_VIDEO_ENHANCE] = DRM_XE_ENGINE_CLASS_VIDEO_ENHANCE,
>>> +	[XE_ENGINE_CLASS_COMPUTE] = DRM_XE_ENGINE_CLASS_COMPUTE,
>>> +};
>>> +
>>>    static struct drm_gem_object *xe_vm_obj(struct xe_vm *vm)
>>>    {
>>>    	return vm->gpuvm.r_obj;
>>> @@ -3551,6 +3559,132 @@ int xe_vm_bind_ioctl(struct drm_device *dev, void *data, struct drm_file *file)
>>>    	return err;
>>>    }
>>>    
>>> +static int xe_vm_get_property_size(struct xe_vm *vm, u32 property)
>>> +{
>>> +	int size = -EINVAL;
>>> +
>>> +	switch (property) {
>>> +	case DRM_XE_VM_GET_PROPERTY_FAULTS:
>>> +		spin_lock(&vm->faults.lock);
>>> +		size = vm->faults.len * sizeof(struct xe_vm_fault);
>>> +		spin_unlock(&vm->faults.lock);
>>> +		break;
>>> +	default:
>>> +		break;
>>> +	}
>>> +	return size;
>>> +}
>>> +
>>> +static int xe_vm_get_property_verify_size(struct xe_vm *vm, u32 property,
>>> +					  int expected, int actual)
>>> +{
>>> +	switch (property) {
>>> +	case DRM_XE_VM_GET_PROPERTY_FAULTS:
>>> +		/*
>>> +		 * Number of faults may increase between calls to
>>> +		 * xe_vm_get_property_ioctl, so just report the
>>> +		 * number of faults the user requests if it's less
>>> +		 * than or equal to the number of faults in the VM
>>> +		 * fault array.
>>> +		 */
>>> +		if (actual < expected)
>>> +			return -EINVAL;
>> Application should be allowed to provide more memory than needed. In
>> return, the driver should update args->size with the number of bytes
>> actually written to the memory.
> 
> In an earlier discussion, we had decided that we should allow for the args->size
> value to be less than the actual size value, since it would be impossible to prevent
> additional faults from filling the fault list between calls to the get property ioctl.
> And in such a case, we would only return in total the faults that could be stored
> in the memory of said size.  In a sense, we had agreed that args->size would
> express how many faults the user expects to read.
> 
> So, if args->size is greater than the total size of the fault array, then the
> user is requesting more faults than are available to be returned.  This
> request cannot be satisfied, so we return -EINVAL.

Yes. You are correct. We captured this in 4) in the section "Execution 
Flow" in the design doc. There shouldn't be a valid case for UMD to 
query more than what the first query on size reported, considering the 
fault list in KMD can only grow and a query on size is always issued by 
UMD before fetching.
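To make the rule concrete, the accepted size combinations can be sketched as a 
tiny userspace model of xe_vm_get_property_verify_size (an illustration of the 
semantics discussed above, not the driver code):

```c
#include <errno.h>

/*
 * Sketch of the DRM_XE_VM_GET_PROPERTY_FAULTS size rule: 'expected' is
 * the user-supplied args->size, 'actual' the current size of the VM
 * fault array.  Requesting fewer bytes than are available is fine
 * (faults may have been appended since the size query); requesting
 * more than exist cannot be satisfied.
 */
static int verify_faults_size(int expected, int actual)
{
	if (actual < expected)
		return -EINVAL;
	return 0;
}
```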

> 
> -Jonathan Cavitt
> 
>>> +		break;
>>> +	default:
>>> +		if (actual != expected)
>>> +			return -EINVAL;
>>> +		break;
>>> +	}
>>> +	return 0;
>>> +}
>>> +
>>> +static int fill_faults(struct xe_vm *vm,
>>> +		       struct drm_xe_vm_get_property *args)
>>> +{
>>> +	struct xe_vm_fault __user *usr_ptr = u64_to_user_ptr(args->data);
>>> +	struct xe_vm_fault store = { 0 };
>>> +	struct xe_vm_fault_entry *entry;
>>> +	int ret = 0, i = 0, count;
>>> +
>>> +	count = args->size / sizeof(struct xe_vm_fault);
>>> +
>>> +	spin_lock(&vm->faults.lock);
>>> +	list_for_each_entry(entry, &vm->faults.list, list) {
>>> +		if (i++ == count)
>>> +			break;
>>> +
>>> +		memset(&store, 0, sizeof(struct xe_vm_fault));
>>> +
>>> +		store.address = entry->address;
>>> +		store.address_type = entry->address_type;
>>> +		store.address_precision = entry->address_precision;
>>> +		store.fault_level = entry->fault_level;
>>> +		store.engine_class = xe_to_user_engine_class[entry->engine_class];
>>> +		store.engine_instance = entry->engine_instance;
>>> +
>>> +		ret = copy_to_user(usr_ptr, &store, sizeof(struct xe_vm_fault));
>>> +		if (ret)
>>> +			break;
>>> +
>>> +		usr_ptr++;
>>> +	}
>>> +	spin_unlock(&vm->faults.lock);
>>> +
>> Update args->size with the number of bytes actually written. Refer to my
>> other comment above. Also document this in the comment of the interface.
>>> +	return ret;
>>> +}
>>> +
>>> +static int xe_vm_get_property_fill_data(struct xe_vm *vm,
>>> +					struct drm_xe_vm_get_property *args)
>>> +{
>>> +	switch (args->property) {
>>> +	case DRM_XE_VM_GET_PROPERTY_FAULTS:
>>> +		return fill_faults(vm, args);
>>> +	default:
>>> +		break;
>>> +	}
>>> +	return -EINVAL;
>>> +}
>>> +
>>> +int xe_vm_get_property_ioctl(struct drm_device *drm, void *data,
>>> +			     struct drm_file *file)
>>> +{
>>> +	struct xe_device *xe = to_xe_device(drm);
>>> +	struct xe_file *xef = to_xe_file(file);
>>> +	struct drm_xe_vm_get_property *args = data;
>>> +	struct xe_vm *vm;
>>> +	int size, ret = 0;
>>> +
>>> +	if (XE_IOCTL_DBG(xe, args->reserved[0] || args->reserved[1]))
>>> +		return -EINVAL;
>>> +
>>> +	vm = xe_vm_lookup(xef, args->vm_id);
>>> +	if (XE_IOCTL_DBG(xe, !vm))
>>> +		return -ENOENT;
>>> +
>>> +	size = xe_vm_get_property_size(vm, args->property);
>>> +
>>> +	if (size < 0) {
>>> +		ret = size;
>>> +		goto put_vm;
>>> +	} else if (!args->size) {
>>> +		args->size = size;
>>> +		goto put_vm;
>>> +	}
>>> +
>>> +	ret = xe_vm_get_property_verify_size(vm, args->property,
>>> +					     args->size, size);
>>> +	if (XE_IOCTL_DBG(xe, ret)) {
>>> +		ret = -EINVAL;
>>> +		goto put_vm;
>>> +	}
>>> +
>>> +	ret = xe_vm_get_property_fill_data(vm, args);
>>> +
>>> +put_vm:
>>> +	xe_vm_put(vm);
>>> +	return ret;
>>> +}
>>> +
>>>    /**
>>>     * xe_vm_bind_kernel_bo - bind a kernel BO to a VM
>>>     * @vm: VM to bind the BO to
>>> diff --git a/drivers/gpu/drm/xe/xe_vm.h b/drivers/gpu/drm/xe/xe_vm.h
>>> index 9bd7e93824da..63ec22458e04 100644
>>> --- a/drivers/gpu/drm/xe/xe_vm.h
>>> +++ b/drivers/gpu/drm/xe/xe_vm.h
>>> @@ -196,6 +196,8 @@ int xe_vm_destroy_ioctl(struct drm_device *dev, void *data,
>>>    			struct drm_file *file);
>>>    int xe_vm_bind_ioctl(struct drm_device *dev, void *data,
>>>    		     struct drm_file *file);
>>> +int xe_vm_get_property_ioctl(struct drm_device *dev, void *data,
>>> +			     struct drm_file *file);
>>>    
>>>    void xe_vm_close_and_put(struct xe_vm *vm);
>>>    
>>
>>
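
As an aside, the truncation behavior of fill_faults() in the patch above can be 
modeled in plain userspace C. This is a sketch only; the struct and function 
names here are illustrative stand-ins, not part of the driver or its uapi:

```c
#include <stddef.h>

/* Stand-in for the uapi fault record; the real layout lives in the series. */
struct fault_rec {
	unsigned long address;
};

/*
 * Model of fill_faults(): derive a count from the user-supplied buffer
 * size and copy at most that many entries, even when more faults exist.
 * The plain assignment stands in for copy_to_user(); the driver itself
 * returns 0 or -EFAULT rather than a count.
 */
static int fill_faults_model(const struct fault_rec *list, int list_len,
			     struct fault_rec *out, size_t out_bytes)
{
	int count = (int)(out_bytes / sizeof(struct fault_rec));
	int copied = 0;

	for (int i = 0; i < list_len && copied < count; i++)
		out[copied++] = list[i];

	return copied;
}
```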


