Intel-XE Archive on lore.kernel.org
* [RFC PATCH] drm/xe/bo: Honor madvise(2) advices
@ 2025-11-28 10:46 Thomas Hellström
  2025-11-28 10:53 ` ✓ CI.KUnit: success for " Patchwork
  2025-11-28 12:57 ` [RFC PATCH] " Matthew Auld
  0 siblings, 2 replies; 8+ messages in thread
From: Thomas Hellström @ 2025-11-28 10:46 UTC (permalink / raw)
  To: intel-xe; +Cc: Thomas Hellström, Matthew Brost, Matthew Auld

The user can give advice on how the CPU will access an
address range. Use that advice to determine the number of
bo pages to prefault on a page fault.

Do this regardless of whether we can find a way to avoid the
fairly slow vm_insert_pfn_prot() call used to populate buffer
object mappings.

Initially, fault up to 512 pages on sequential access and
a single page on random access.

Cc: Matthew Brost <matthew.brost@intel.com>
Cc: Matthew Auld <matthew.auld@intel.com>
Signed-off-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
---
 drivers/gpu/drm/xe/xe_bo.c | 18 +++++++++++++++++-
 1 file changed, 17 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/xe/xe_bo.c b/drivers/gpu/drm/xe/xe_bo.c
index 6fd6ce6c6586..07d0d954f826 100644
--- a/drivers/gpu/drm/xe/xe_bo.c
+++ b/drivers/gpu/drm/xe/xe_bo.c
@@ -1821,15 +1821,31 @@ static int xe_bo_fault_migrate(struct xe_bo *bo, struct ttm_operation_ctx *ctx,
 	return err;
 }
 
+/*
+ * Number of prefaulted pages for the MADV_SEQUENTIAL and
+ * MADV_RANDOM madvise() advices.
+ */
+#define XE_BO_VM_NUM_PREFAULT_SEQ  512
+#define XE_BO_VM_NUM_PREFAULT_RAND 1
+
 /* Call into TTM to populate PTEs, and register bo for PTE removal on runtime suspend. */
 static vm_fault_t __xe_bo_cpu_fault(struct vm_fault *vmf, struct xe_device *xe, struct xe_bo *bo)
 {
+	const struct vm_area_struct *vma = vmf->vma;
+	pgoff_t num_prefault;
 	vm_fault_t ret;
 
 	trace_xe_bo_cpu_fault(bo);
 
+	if (vma->vm_flags & VM_SEQ_READ)
+		num_prefault = XE_BO_VM_NUM_PREFAULT_SEQ;
+	else if (vma->vm_flags & VM_RAND_READ)
+		num_prefault = XE_BO_VM_NUM_PREFAULT_RAND;
+	else
+		num_prefault = TTM_BO_VM_NUM_PREFAULT;
+
 	ret = ttm_bo_vm_fault_reserved(vmf, vmf->vma->vm_page_prot,
-				       TTM_BO_VM_NUM_PREFAULT);
+				       num_prefault);
 	/*
 	 * When TTM is actually called to insert PTEs, ensure no blocking conditions
 	 * remain, in which case TTM may drop locks and return VM_FAULT_RETRY.
-- 
2.51.1



end of thread (newest message: 2025-11-29 16:18 UTC)

Thread overview: 8+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2025-11-28 10:46 [RFC PATCH] drm/xe/bo: Honor madvise(2) advices Thomas Hellström
2025-11-28 10:53 ` ✓ CI.KUnit: success for " Patchwork
2025-11-28 12:57 ` [RFC PATCH] " Matthew Auld
2025-11-28 21:01   ` Matthew Brost
2025-11-29 12:51     ` Thomas Hellström
2025-11-29 15:55       ` Matthew Brost
2025-11-29 16:18         ` Thomas Hellström
2025-11-29 12:40   ` Thomas Hellström
