From: "Thomas Hellström" <thomas.hellstrom@linux.intel.com>
To: intel-xe@lists.freedesktop.org
Cc: "Thomas Hellström" <thomas.hellstrom@linux.intel.com>,
"Matthew Brost" <matthew.brost@intel.com>,
"Joonas Lahtinen" <joonas.lahtinen@linux.intel.com>,
"Jani Nikula" <jani.nikula@intel.com>,
"Maarten Lankhorst" <maarten.lankhorst@linux.intel.com>,
"Matthew Auld" <matthew.auld@intel.com>
Subject: [PATCH v4 12/13] drm/xe: Rework instances of variants of xe_bo_create_locked()
Date: Tue, 2 Sep 2025 14:40:20 +0200
Message-ID: <20250902124021.70211-13-thomas.hellstrom@linux.intel.com>
In-Reply-To: <20250902124021.70211-1-thomas.hellstrom@linux.intel.com>
A common pattern is to create a locked bo, pin it without mapping
it, and then unlock it. Add a function that does this, internally
using xe_validation_guard().
With that we can remove xe_bo_create_locked_range() and add
exhaustive eviction to stolen, pf_provision_vf_lmem and
psmi_alloc_object.
v4:
- New patch after reorganization.
Signed-off-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
---
| 13 +--
drivers/gpu/drm/xe/xe_bo.c | 87 +++++++++++--------
drivers/gpu/drm/xe/xe_bo.h | 9 +-
drivers/gpu/drm/xe/xe_gt_sriov_pf_config.c | 23 ++---
drivers/gpu/drm/xe/xe_psmi.c | 26 ++----
5 files changed, 71 insertions(+), 87 deletions(-)
diff --git a/drivers/gpu/drm/xe/compat-i915-headers/gem/i915_gem_stolen.h b/drivers/gpu/drm/xe/compat-i915-headers/gem/i915_gem_stolen.h
index 1ce1e9da975b..a2d1f684a3a8 100644
--- a/drivers/gpu/drm/xe/compat-i915-headers/gem/i915_gem_stolen.h
+++ b/drivers/gpu/drm/xe/compat-i915-headers/gem/i915_gem_stolen.h
@@ -21,7 +21,6 @@ static inline int i915_gem_stolen_insert_node_in_range(struct xe_device *xe,
u32 size, u32 align,
u32 start, u32 end)
{
- struct drm_exec *exec = XE_VALIDATION_UNIMPLEMENTED;
struct xe_bo *bo;
int err;
u32 flags = XE_BO_FLAG_PINNED | XE_BO_FLAG_STOLEN;
@@ -34,21 +33,13 @@ static inline int i915_gem_stolen_insert_node_in_range(struct xe_device *xe,
start = ALIGN(start, align);
}
- bo = xe_bo_create_locked_range(xe, xe_device_get_root_tile(xe),
- NULL, size, start, end,
- ttm_bo_type_kernel, flags, 0, exec);
+ bo = xe_bo_create_pin_range_novm(xe, xe_device_get_root_tile(xe),
+ size, start, end, ttm_bo_type_kernel, flags);
if (IS_ERR(bo)) {
err = PTR_ERR(bo);
bo = NULL;
return err;
}
- err = xe_bo_pin(bo, exec);
- xe_bo_unlock_vm_held(bo);
-
- if (err) {
- xe_bo_put(fb->bo);
- bo = NULL;
- }
fb->bo = bo;
diff --git a/drivers/gpu/drm/xe/xe_bo.c b/drivers/gpu/drm/xe/xe_bo.c
index 583d48d5d240..03494736a018 100644
--- a/drivers/gpu/drm/xe/xe_bo.c
+++ b/drivers/gpu/drm/xe/xe_bo.c
@@ -2297,37 +2297,6 @@ __xe_bo_create_locked(struct xe_device *xe,
return ERR_PTR(err);
}
-/**
- * xe_bo_create_locked_range() - Create a BO with range- and alignment options
- * @xe: The xe device.
- * @tile: The tile to select for migration of this bo, and the tile used for
- * GGTT binding if any. Only to be non-NULL for ttm_bo_type_kernel bos.
- * @vm: The local vm or NULL for external objects.
- * @size: The storage size to use for the bo.
- * @start: Start of fixed VRAM range or 0.
- * @end: End of fixed VRAM range or ~0ULL.
- * @type: The TTM buffer object type.
- * @flags: XE_BO_FLAG_ flags.
- * @alignment: For GGTT buffer objects, the minimum GGTT alignment.
- * @exec: The drm_exec transaction to use for exhaustive eviction.
- *
- * Create an Xe BO with range- and alignment options. If @start and @end indicate
- * a fixed VRAM range, this must be a ttm_bo_type_kernel bo with VRAM placement
- * only. The @alignment parameter can be used for GGTT alignment.
- *
- * Return: The buffer object on success. Negative error pointer on failure.
- */
-struct xe_bo *
-xe_bo_create_locked_range(struct xe_device *xe,
- struct xe_tile *tile, struct xe_vm *vm,
- size_t size, u64 start, u64 end,
- enum ttm_bo_type type, u32 flags, u64 alignment,
- struct drm_exec *exec)
-{
- return __xe_bo_create_locked(xe, tile, vm, size, start, end, 0, type,
- flags, alignment, exec);
-}
-
/**
* xe_bo_create_locked() - Create a BO
* @xe: The xe device.
@@ -2416,6 +2385,55 @@ struct xe_bo *xe_bo_create_user(struct xe_device *xe,
return bo;
}
+/**
+ * xe_bo_create_pin_range_novm() - Create and pin a BO with range options.
+ * @xe: The xe device.
+ * @tile: The tile to select for migration of this bo, and the tile used for
+ * GGTT binding if any. Only to be non-NULL for ttm_bo_type_kernel bos.
+ * @size: The storage size to use for the bo.
+ * @start: Start of fixed VRAM range or 0.
+ * @end: End of fixed VRAM range or ~0ULL.
+ * @type: The TTM buffer object type.
+ * @flags: XE_BO_FLAG_ flags.
+ *
+ * Create an Xe BO with range options. If @start and @end indicate
+ * a fixed VRAM range, this must be a ttm_bo_type_kernel bo with VRAM placement
+ * only.
+ *
+ * Return: The buffer object on success. Negative error pointer on failure.
+ */
+struct xe_bo *xe_bo_create_pin_range_novm(struct xe_device *xe, struct xe_tile *tile,
+ size_t size, u64 start, u64 end,
+ enum ttm_bo_type type, u32 flags)
+{
+ struct xe_validation_ctx ctx;
+ struct drm_exec exec;
+ struct xe_bo *bo;
+ int err = 0;
+
+ xe_validation_guard(&ctx, &xe->val, &exec, (struct xe_val_flags) {}, err) {
+ bo = __xe_bo_create_locked(xe, tile, NULL, size, start, end,
+ DRM_XE_GEM_CPU_CACHING_WB, type, flags, 0, &exec);
+ if (IS_ERR(bo)) {
+ drm_exec_retry_on_contention(&exec);
+ err = PTR_ERR(bo);
+ xe_validation_retry_on_oom(&ctx, &err);
+ break;
+ }
+
+ err = xe_bo_pin(bo, &exec);
+ xe_bo_unlock(bo);
+ if (err) {
+ xe_bo_put(bo);
+ drm_exec_retry_on_contention(&exec);
+ xe_validation_retry_on_oom(&ctx, &err);
+ break;
+ }
+ }
+
+ return err ? ERR_PTR(err) : bo;
+}
+
static struct xe_bo *xe_bo_create_pin_map_at_aligned(struct xe_device *xe,
struct xe_tile *tile,
struct xe_vm *vm,
@@ -2432,9 +2450,10 @@ static struct xe_bo *xe_bo_create_pin_map_at_aligned(struct xe_device *xe,
xe_ttm_stolen_cpu_access_needs_ggtt(xe))
flags |= XE_BO_FLAG_GGTT;
- bo = xe_bo_create_locked_range(xe, tile, vm, size, start, end, type,
- flags | XE_BO_FLAG_NEEDS_CPU_ACCESS | XE_BO_FLAG_PINNED,
- alignment, exec);
+ bo = __xe_bo_create_locked(xe, tile, vm, size, start, end,
+ DRM_XE_GEM_CPU_CACHING_WB, type,
+ flags | XE_BO_FLAG_NEEDS_CPU_ACCESS | XE_BO_FLAG_PINNED,
+ alignment, exec);
if (IS_ERR(bo))
return bo;
diff --git a/drivers/gpu/drm/xe/xe_bo.h b/drivers/gpu/drm/xe/xe_bo.h
index 7de0f5a166d5..e2be6cbc35d6 100644
--- a/drivers/gpu/drm/xe/xe_bo.h
+++ b/drivers/gpu/drm/xe/xe_bo.h
@@ -94,12 +94,6 @@ struct xe_bo *xe_bo_init_locked(struct xe_device *xe, struct xe_bo *bo,
struct ttm_lru_bulk_move *bulk, size_t size,
u16 cpu_caching, enum ttm_bo_type type,
u32 flags, struct drm_exec *exec);
-struct xe_bo *
-xe_bo_create_locked_range(struct xe_device *xe,
- struct xe_tile *tile, struct xe_vm *vm,
- size_t size, u64 start, u64 end,
- enum ttm_bo_type type, u32 flags, u64 alignment,
- struct drm_exec *exec);
struct xe_bo *xe_bo_create_locked(struct xe_device *xe, struct xe_tile *tile,
struct xe_vm *vm, size_t size,
enum ttm_bo_type type, u32 flags,
@@ -113,6 +107,9 @@ struct xe_bo *xe_bo_create_pin_map(struct xe_device *xe, struct xe_tile *tile,
struct xe_bo *xe_bo_create_pin_map_novm(struct xe_device *xe, struct xe_tile *tile,
size_t size, enum ttm_bo_type type, u32 flags,
bool intr);
+struct xe_bo *xe_bo_create_pin_range_novm(struct xe_device *xe, struct xe_tile *tile,
+ size_t size, u64 start, u64 end,
+ enum ttm_bo_type type, u32 flags);
struct xe_bo *
xe_bo_create_pin_map_at_novm(struct xe_device *xe, struct xe_tile *tile,
size_t size, u64 offset, enum ttm_bo_type type,
diff --git a/drivers/gpu/drm/xe/xe_gt_sriov_pf_config.c b/drivers/gpu/drm/xe/xe_gt_sriov_pf_config.c
index 906011671b60..6a9a752fe24a 100644
--- a/drivers/gpu/drm/xe/xe_gt_sriov_pf_config.c
+++ b/drivers/gpu/drm/xe/xe_gt_sriov_pf_config.c
@@ -1452,7 +1452,6 @@ static bool pf_release_vf_config_lmem(struct xe_gt *gt, struct xe_gt_sriov_confi
static int pf_provision_vf_lmem(struct xe_gt *gt, unsigned int vfid, u64 size)
{
struct xe_gt_sriov_config *config = pf_pick_vf_config(gt, vfid);
- struct drm_exec *exec = XE_VALIDATION_UNIMPLEMENTED;
struct xe_device *xe = gt_to_xe(gt);
struct xe_tile *tile = gt_to_tile(gt);
struct xe_bo *bo;
@@ -1479,24 +1478,16 @@ static int pf_provision_vf_lmem(struct xe_gt *gt, unsigned int vfid, u64 size)
return 0;
xe_gt_assert(gt, pf_get_lmem_alignment(gt) == SZ_2M);
- bo = xe_bo_create_locked(xe, tile, NULL,
- ALIGN(size, PAGE_SIZE),
- ttm_bo_type_kernel,
- XE_BO_FLAG_VRAM_IF_DGFX(tile) |
- XE_BO_FLAG_NEEDS_2M |
- XE_BO_FLAG_PINNED |
- XE_BO_FLAG_PINNED_LATE_RESTORE,
- exec);
+ bo = xe_bo_create_pin_range_novm(xe, tile,
+ ALIGN(size, PAGE_SIZE), 0, ~0ull,
+ ttm_bo_type_kernel,
+ XE_BO_FLAG_VRAM_IF_DGFX(tile) |
+ XE_BO_FLAG_NEEDS_2M |
+ XE_BO_FLAG_PINNED |
+ XE_BO_FLAG_PINNED_LATE_RESTORE);
if (IS_ERR(bo))
return PTR_ERR(bo);
- err = xe_bo_pin(bo, exec);
- xe_bo_unlock(bo);
- if (unlikely(err)) {
- xe_bo_put(bo);
- return err;
- }
-
config->lmem_obj = bo;
if (xe_device_has_lmtt(xe)) {
diff --git a/drivers/gpu/drm/xe/xe_psmi.c b/drivers/gpu/drm/xe/xe_psmi.c
index 2815987c190d..45d142191d60 100644
--- a/drivers/gpu/drm/xe/xe_psmi.c
+++ b/drivers/gpu/drm/xe/xe_psmi.c
@@ -68,10 +68,7 @@ static void psmi_cleanup(struct xe_device *xe)
static struct xe_bo *psmi_alloc_object(struct xe_device *xe,
unsigned int id, size_t bo_size)
{
- struct drm_exec *exec = XE_VALIDATION_UNIMPLEMENTED;
- struct xe_bo *bo = NULL;
struct xe_tile *tile;
- int err;
if (!id || !bo_size)
return NULL;
@@ -79,23 +76,12 @@ static struct xe_bo *psmi_alloc_object(struct xe_device *xe,
tile = &xe->tiles[id - 1];
/* VRAM: Allocate GEM object for the capture buffer */
- bo = xe_bo_create_locked(xe, tile, NULL, bo_size,
- ttm_bo_type_kernel,
- XE_BO_FLAG_VRAM_IF_DGFX(tile) |
- XE_BO_FLAG_PINNED |
- XE_BO_FLAG_PINNED_LATE_RESTORE |
- XE_BO_FLAG_NEEDS_CPU_ACCESS,
- exec);
-
- if (!IS_ERR(bo)) {
- /* Buffer written by HW, ensure stays resident */
- err = xe_bo_pin(bo, exec);
- if (err)
- bo = ERR_PTR(err);
- xe_bo_unlock(bo);
- }
-
- return bo;
+ return xe_bo_create_pin_range_novm(xe, tile, bo_size, 0, ~0ull,
+ ttm_bo_type_kernel,
+ XE_BO_FLAG_VRAM_IF_DGFX(tile) |
+ XE_BO_FLAG_PINNED |
+ XE_BO_FLAG_PINNED_LATE_RESTORE |
+ XE_BO_FLAG_NEEDS_CPU_ACCESS);
}
/*
--
2.50.1