From: "Thomas Hellström" <thomas.hellstrom@linux.intel.com>
To: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>,
intel-xe@lists.freedesktop.org
Cc: Matthew Brost <matthew.brost@intel.com>
Subject: Re: [PATCH 2/2] drm/xe/uapi: Enable madvise to pass multiple pat indeces
Date: Mon, 22 Sep 2025 13:09:15 +0200 [thread overview]
Message-ID: <a390df38c880bdbd163d9c5efd6d418d29df2259.camel@linux.intel.com> (raw)
In-Reply-To: <20250918080003.153906-3-himal.prasad.ghimiray@intel.com>
On Thu, 2025-09-18 at 13:30 +0530, Himal Prasad Ghimiray wrote:
> Allow users to pass multiple PAT indices via madvise, which can be
> used to encode PTEs based on the actual memory location: system
> memory, local VRAM, or remote GPU memory.
>
> Cc: Matthew Brost <matthew.brost@intel.com>
> Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>
> Signed-off-by: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
Based on the feedback from UMD, it might make sense to add two special
values:

XE_PAT_INDEX_INITIAL - set the PAT index to the initial value from the
VM_BIND.
XE_PAT_INDEX_KEEP - keep the current value.

Then perhaps we can use __s16 values where needed.
Let's see where the discussion lands. If we're not able to resolve this
before your vacation, I can pick up the patches from where you left
off.
Thanks,
Thomas
> ---
>  drivers/gpu/drm/xe/xe_vm.c         |  7 +++--
>  drivers/gpu/drm/xe/xe_vm_madvise.c | 33 ++++++++++++++++-------
>  include/uapi/drm/xe_drm.h          | 42 +++++++++++++++++++++++++-----
>  3 files changed, 62 insertions(+), 20 deletions(-)
>
> diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
> index 6122061786f6..a4906ae94bd4 100644
> --- a/drivers/gpu/drm/xe/xe_vm.c
> +++ b/drivers/gpu/drm/xe/xe_vm.c
> @@ -2003,10 +2003,9 @@ static int get_mem_attrs(struct xe_vm *vm, u32 *num_vmas, u64 start,
>  		attrs[i].start = xe_vma_start(vma);
>  		attrs[i].end = xe_vma_end(vma);
>  		attrs[i].atomic.val = vma->attr.atomic_access;
> -
> -		/* TODO: Modify drm_xe_mem_range_attr for all pats */
> -		attrs[i].pat_index.val = vma->attr.pat_index.initial;
> -
> +		attrs[i].pat_index.smem = vma->attr.pat_index.smem;
> +		attrs[i].pat_index.devmem = vma->attr.pat_index.devmem;
> +		attrs[i].pat_index.remote = vma->attr.pat_index.remote;
>  		attrs[i].preferred_mem_loc.devmem_fd = vma->attr.preferred_loc.devmem_fd;
>  		attrs[i].preferred_mem_loc.migration_policy = vma->attr.preferred_loc.migration_policy;
> diff --git a/drivers/gpu/drm/xe/xe_vm_madvise.c b/drivers/gpu/drm/xe/xe_vm_madvise.c
> index e3f0cf23a3a9..3116036e6d24 100644
> --- a/drivers/gpu/drm/xe/xe_vm_madvise.c
> +++ b/drivers/gpu/drm/xe/xe_vm_madvise.c
> @@ -149,16 +149,15 @@ static void madvise_pat_index(struct xe_device *xe, struct xe_vm *vm,
>  	xe_assert(vm->xe, op->type == DRM_XE_MEM_RANGE_ATTR_PAT);
>
>  	for (i = 0; i < num_vmas; i++) {
> -		/* TODO: Pass different pat_indexes from ioctl */
> -		if (vmas[i]->attr.pat_index.smem == op->pat_index.val &&
> -		    vmas[i]->attr.pat_index.devmem == op->pat_index.val &&
> -		    vmas[i]->attr.pat_index.remote == op->pat_index.val) {
> +		if (vmas[i]->attr.pat_index.smem == op->pat_index.smem &&
> +		    vmas[i]->attr.pat_index.devmem == op->pat_index.devmem &&
> +		    vmas[i]->attr.pat_index.remote == op->pat_index.remote) {
>  			vmas[i]->skip_invalidation = true;
>  		} else {
>  			vmas[i]->skip_invalidation = false;
> -			vmas[i]->attr.pat_index.smem = op->pat_index.val;
> -			vmas[i]->attr.pat_index.devmem = op->pat_index.val;
> -			vmas[i]->attr.pat_index.remote = op->pat_index.val;
> +			vmas[i]->attr.pat_index.smem = op->pat_index.smem;
> +			vmas[i]->attr.pat_index.devmem = op->pat_index.devmem;
> +			vmas[i]->attr.pat_index.remote = op->pat_index.remote;
>  		}
>  	}
>  }
> @@ -273,12 +272,26 @@ static bool madvise_args_are_sane(struct xe_device *xe, const struct drm_xe_madv
>  		break;
>  	case DRM_XE_MEM_RANGE_ATTR_PAT:
>  	{
> -		u16 coh_mode = xe_pat_index_get_coh_mode(xe, args->pat_index.val);
> +		u16 coh_mode_smem = xe_pat_index_get_coh_mode(xe, args->pat_index.smem);
> +		u16 coh_mode_devmem = xe_pat_index_get_coh_mode(xe, args->pat_index.devmem);
> +		u16 coh_mode_remote = xe_pat_index_get_coh_mode(xe, args->pat_index.remote);
>
> -		if (XE_IOCTL_DBG(xe, !coh_mode))
> +		if (XE_IOCTL_DBG(xe, !coh_mode_smem))
>  			return false;
>
> -		if (XE_WARN_ON(coh_mode > XE_COH_AT_LEAST_1WAY))
> +		if (XE_IOCTL_DBG(xe, !coh_mode_devmem))
> +			return false;
> +
> +		if (XE_IOCTL_DBG(xe, !coh_mode_remote))
> +			return false;
> +
> +		if (XE_WARN_ON(coh_mode_smem > XE_COH_AT_LEAST_1WAY))
> +			return false;
> +
> +		if (XE_WARN_ON(coh_mode_devmem > XE_COH_AT_LEAST_1WAY))
> +			return false;
> +
> +		if (XE_WARN_ON(coh_mode_remote > XE_COH_AT_LEAST_1WAY))
>  			return false;
>
>  		if (XE_IOCTL_DBG(xe, args->pat_index.pad))
> diff --git a/include/uapi/drm/xe_drm.h b/include/uapi/drm/xe_drm.h
> index 40ff19f52a8d..09d9a9cf02e8 100644
> --- a/include/uapi/drm/xe_drm.h
> +++ b/include/uapi/drm/xe_drm.h
> @@ -2101,11 +2101,26 @@ struct drm_xe_madvise {
>  		 * Used when @type == DRM_XE_MEM_RANGE_ATTR_PAT.
>  		 */
>  		struct {
> -			/** @pat_index.val: PAT index value */
> -			__u32 val;
> +			/**
> +			 * @pat_index.smem: PAT index value to be used when
> +			 * location is system memory during GPU access time.
> +			 */
> +			__u16 smem;
> +
> +			/**
> +			 * @pat_index.devmem: PAT index value to be used when
> +			 * location is local vram during GPU access time.
> +			 */
> +			__u16 devmem;
> +
> +			/**
> +			 * @pat_index.remote: PAT index value to be used when
> +			 * location is remote gpu vram during GPU access time.
> +			 */
> +			__u16 remote;
>
>  			/** @pat_index.pad: MBZ */
> -			__u32 pad;
> +			__u16 pad;
>
>  			/** @pat_index.reserved: Reserved */
>  			__u64 reserved;
> @@ -2163,11 +2178,26 @@ struct drm_xe_mem_range_attr {
>
>  	/** @pat_index: Page attribute table index */
>  	struct {
> -		/** @pat_index.val: PAT index */
> -		__u32 val;
> +		/**
> +		 * @pat_index.smem: PAT index value to be used when
> +		 * location is system memory during GPU access time.
> +		 */
> +		__u16 smem;
> +
> +		/**
> +		 * @pat_index.devmem: PAT index value to be used when
> +		 * location is local vram during GPU access time.
> +		 */
> +		__u16 devmem;
> +
> +		/**
> +		 * @pat_index.remote: PAT index value to be used when
> +		 * location is remote gpu tiles during GPU access time.
> +		 */
> +		__u16 remote;
>
>  		/** @pat_index.reserved: Reserved */
> -		__u32 reserved;
> +		__u16 reserved;
>  	} pat_index;
>
>  	/** @reserved: Reserved */
Thread overview: 8+ messages
2025-09-18 8:00 [PATCH 0/2] MADVISE to support multiple PAT indeces Himal Prasad Ghimiray
2025-09-18 8:00 ` [PATCH 1/2] drm/xe/pat: Improve PAT index handling in xe_vma_mem_attr Himal Prasad Ghimiray
2025-09-22 11:11 ` Thomas Hellström
2025-09-18 8:00 ` [PATCH 2/2] drm/xe/uapi: Enable madvise to pass multiple pat indeces Himal Prasad Ghimiray
2025-09-22 11:09 ` Thomas Hellström [this message]
2025-09-18 8:15 ` ✓ CI.KUnit: success for MADVISE to support multiple PAT indeces Patchwork
2025-09-18 8:49 ` ✓ Xe.CI.BAT: " Patchwork
2025-09-18 14:52 ` ✗ Xe.CI.Full: failure " Patchwork