From: "Ghimiray, Himal Prasad" <himal.prasad.ghimiray@intel.com>
To: Matthew Brost <matthew.brost@intel.com>,
<intel-xe@lists.freedesktop.org>
Cc: <dri-devel@lists.freedesktop.org>, <thomas.hellstrom@linux.intel.com>
Subject: Re: [PATCH v4 5/5] drm/xe: Add atomic_svm_timeslice_ms debugfs entry
Date: Wed, 23 Apr 2025 11:07:30 +0530
Message-ID: <9a71fc97-aa69-4228-8abb-8b43aee58106@intel.com>
In-Reply-To: <20250422170415.584662-6-matthew.brost@intel.com>
On 22-04-2025 22:34, Matthew Brost wrote:
> Add an informal control for the atomic SVM fault GPU timeslice so the
> value can be tweaked when experimenting with performance.
>
> v2:
> - Reduce timeslice default value to 5ms
>
> Signed-off-by: Matthew Brost <matthew.brost@intel.com>
> ---
> drivers/gpu/drm/xe/xe_debugfs.c | 38 ++++++++++++++++++++++++++++
> drivers/gpu/drm/xe/xe_device.c | 1 +
> drivers/gpu/drm/xe/xe_device_types.h | 3 +++
> drivers/gpu/drm/xe/xe_svm.c | 3 ++-
> 4 files changed, 44 insertions(+), 1 deletion(-)
>
> diff --git a/drivers/gpu/drm/xe/xe_debugfs.c b/drivers/gpu/drm/xe/xe_debugfs.c
> index d0503959a8ed..d83cd6ed3fa8 100644
> --- a/drivers/gpu/drm/xe/xe_debugfs.c
> +++ b/drivers/gpu/drm/xe/xe_debugfs.c
> @@ -191,6 +191,41 @@ static const struct file_operations wedged_mode_fops = {
> .write = wedged_mode_set,
> };
>
> +static ssize_t atomic_svm_timeslice_ms_show(struct file *f, char __user *ubuf,
> + size_t size, loff_t *pos)
> +{
> + struct xe_device *xe = file_inode(f)->i_private;
> + char buf[32];
> + int len = 0;
> +
> + len = scnprintf(buf, sizeof(buf), "%d\n", xe->atomic_svm_timeslice_ms);
> +
> + return simple_read_from_buffer(ubuf, size, pos, buf, len);
> +}
> +
> +static ssize_t atomic_svm_timeslice_ms_set(struct file *f,
> + const char __user *ubuf,
> + size_t size, loff_t *pos)
> +{
> + struct xe_device *xe = file_inode(f)->i_private;
> + u32 atomic_svm_timeslice_ms;
> + ssize_t ret;
> +
> + ret = kstrtouint_from_user(ubuf, size, 0, &atomic_svm_timeslice_ms);
> + if (ret)
> + return ret;
> +
> + xe->atomic_svm_timeslice_ms = atomic_svm_timeslice_ms;
> +
> + return size;
> +}
> +
> +static const struct file_operations atomic_svm_timeslice_ms_fops = {
> + .owner = THIS_MODULE,
> + .read = atomic_svm_timeslice_ms_show,
> + .write = atomic_svm_timeslice_ms_set,
> +};
> +
> void xe_debugfs_register(struct xe_device *xe)
> {
> struct ttm_device *bdev = &xe->ttm;
> @@ -211,6 +246,9 @@ void xe_debugfs_register(struct xe_device *xe)
> debugfs_create_file("wedged_mode", 0600, root, xe,
> &wedged_mode_fops);
>
> + debugfs_create_file("atomic_svm_timeslice_ms", 0600, root, xe,
> + &atomic_svm_timeslice_ms_fops);
> +
> for (mem_type = XE_PL_VRAM0; mem_type <= XE_PL_VRAM1; ++mem_type) {
> man = ttm_manager_type(bdev, mem_type);
>
> diff --git a/drivers/gpu/drm/xe/xe_device.c b/drivers/gpu/drm/xe/xe_device.c
> index 75e753e0a682..abf3c72baaa6 100644
> --- a/drivers/gpu/drm/xe/xe_device.c
> +++ b/drivers/gpu/drm/xe/xe_device.c
> @@ -444,6 +444,7 @@ struct xe_device *xe_device_create(struct pci_dev *pdev,
> xe->info.devid = pdev->device;
> xe->info.revid = pdev->revision;
> xe->info.force_execlist = xe_modparam.force_execlist;
> + xe->atomic_svm_timeslice_ms = 5;
>
> err = xe_irq_init(xe);
> if (err)
> diff --git a/drivers/gpu/drm/xe/xe_device_types.h b/drivers/gpu/drm/xe/xe_device_types.h
> index b9a892c44c67..6f5222f42410 100644
> --- a/drivers/gpu/drm/xe/xe_device_types.h
> +++ b/drivers/gpu/drm/xe/xe_device_types.h
> @@ -567,6 +567,9 @@ struct xe_device {
> /** @pmu: performance monitoring unit */
> struct xe_pmu pmu;
>
> + /** @atomic_svm_timeslice_ms: Atomic SVM fault timeslice in ms */
> + u32 atomic_svm_timeslice_ms;
> +
> #ifdef TEST_VM_OPS_ERROR
> /**
> * @vm_inject_error_position: inject errors at different places in VM
> diff --git a/drivers/gpu/drm/xe/xe_svm.c b/drivers/gpu/drm/xe/xe_svm.c
> index d5376a76cdd1..de4eb04fc78e 100644
> --- a/drivers/gpu/drm/xe/xe_svm.c
> +++ b/drivers/gpu/drm/xe/xe_svm.c
> @@ -784,7 +784,8 @@ int xe_svm_handle_pagefault(struct xe_vm *vm, struct xe_vma *vma,
> .devmem_only = atomic && IS_DGFX(vm->xe) &&
> IS_ENABLED(CONFIG_DRM_XE_DEVMEM_MIRROR),
> .timeslice_ms = atomic && IS_DGFX(vm->xe) &&
> - IS_ENABLED(CONFIG_DRM_XE_DEVMEM_MIRROR) ? 5 : 0,
> + IS_ENABLED(CONFIG_DRM_XE_DEVMEM_MIRROR) ?
> + vm->xe->atomic_svm_timeslice_ms : 0,
LGTM
Reviewed-by: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
> };
> struct xe_svm_range *range;
> struct drm_gpusvm_range *r;
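For anyone who wants to poke at the new knob, a quick smoke test from the shell. The /sys/kernel/debug/dri/0 path is an assumption here: the DRM minor number and the debugfs mount point depend on the system, so adjust as needed. The snippet falls back to a scratch directory when the real entry is absent, purely so the read/write pattern can be tried anywhere:

```shell
# Debugfs entry added by this patch; the minor number (0) is an assumption.
D=/sys/kernel/debug/dri/0
# Fall back to a scratch dir seeded with the default (5 ms) if the real
# entry is not present, so the commands below still demonstrate the flow.
[ -w "$D/atomic_svm_timeslice_ms" ] || { D=$(mktemp -d); echo 5 > "$D/atomic_svm_timeslice_ms"; }

# Read the current atomic SVM fault timeslice (default after this series: 5)
cat "$D/atomic_svm_timeslice_ms"

# Raise it to 10 ms for an experiment, then confirm the write took effect
echo 10 > "$D/atomic_svm_timeslice_ms"
cat "$D/atomic_svm_timeslice_ms"
```

Writes are parsed with kstrtouint_from_user(), so decimal, 0x-prefixed hex, and 0-prefixed octal values are all accepted.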