From mboxrd@z Thu Jan  1 00:00:00 1970
From: Thomas Hellström
To: intel-xe@lists.freedesktop.org
Cc: Thomas Hellström, dri-devel@lists.freedesktop.org,
 himal.prasad.ghimiray@intel.com, apopple@nvidia.com, airlied@gmail.com,
 Simona Vetter, felix.kuehling@amd.com, Matthew Brost,
 Christian König, dakr@kernel.org, "Mrozek, Michal", Joonas Lahtinen
Subject: [PATCH v2 10/17] drm/xe: Pass a drm_pagemap pointer around with
 the memory advise attributes
Date: Tue, 11 Nov 2025 17:44:00 +0100
Message-ID: <20251111164408.113070-11-thomas.hellstrom@linux.intel.com>
In-Reply-To: <20251111164408.113070-1-thomas.hellstrom@linux.intel.com>
References: <20251111164408.113070-1-thomas.hellstrom@linux.intel.com>

As a consequence, struct xe_vma_mem_attr can't simply be assigned or
freed without taking the reference counts of individual members into
account.
Also add helpers to do that.

v2:
- Move some calls to xe_vma_mem_attr_fini() to xe_vma_free(). (Matt Brost)

Signed-off-by: Thomas Hellström
---
 drivers/gpu/drm/xe/xe_svm.c        |  2 +-
 drivers/gpu/drm/xe/xe_vm.c         | 34 +++++++++++++++++++++++++-----
 drivers/gpu/drm/xe/xe_vm.h         |  1 +
 drivers/gpu/drm/xe/xe_vm_madvise.c |  1 +
 drivers/gpu/drm/xe/xe_vm_types.h   |  9 ++++++++
 5 files changed, 41 insertions(+), 6 deletions(-)
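
Aside, before the diff itself: the ownership rule the two new helpers
enforce can be modeled in plain userspace C. The sketch below is
illustrative only, not kernel code: the struct layouts are trimmed down
and a bare counter stands in for the kref that the real
drm_pagemap_get()/drm_pagemap_put() operate on. It shows why a plain
struct assignment would either leak or double-put the dpagemap
reference.

#include <assert.h>
#include <stdio.h>
#include <stdlib.h>

struct drm_pagemap {
	int ref;			/* stand-in for the real kref */
};

static struct drm_pagemap *drm_pagemap_get(struct drm_pagemap *dpagemap)
{
	if (dpagemap)
		dpagemap->ref++;
	return dpagemap;
}

static void drm_pagemap_put(struct drm_pagemap *dpagemap)
{
	if (dpagemap && --dpagemap->ref == 0)
		free(dpagemap);
}

struct xe_vma_mem_attr {
	struct {
		struct drm_pagemap *dpagemap;	/* NULL or one owned reference */
	} preferred_loc;
};

/* Drop the reference owned by @attr; mirrors the helper in the patch. */
static void xe_vma_mem_attr_fini(struct xe_vma_mem_attr *attr)
{
	drm_pagemap_put(attr->preferred_loc.dpagemap);
}

/* Refcount-aware copy: release what @to held, reference what @from holds. */
static void xe_vma_mem_attr_copy(struct xe_vma_mem_attr *to,
				 struct xe_vma_mem_attr *from)
{
	xe_vma_mem_attr_fini(to);
	*to = *from;
	drm_pagemap_get(to->preferred_loc.dpagemap);
}

int main(void)
{
	struct drm_pagemap *pm = calloc(1, sizeof(*pm));
	struct xe_vma_mem_attr a = {0}, b = {0};

	pm->ref = 1;
	a.preferred_loc.dpagemap = pm;	/* a owns the initial reference */

	xe_vma_mem_attr_copy(&b, &a);	/* b takes its own reference */
	assert(pm->ref == 2);

	xe_vma_mem_attr_fini(&a);	/* a's reference dropped */
	assert(pm->ref == 1);

	xe_vma_mem_attr_fini(&b);	/* last put frees the pagemap */
	printf("refcounted copy/fini OK\n");
	return 0;
}
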
diff --git a/drivers/gpu/drm/xe/xe_svm.c b/drivers/gpu/drm/xe/xe_svm.c
index 7db9eafec66b..4a3853a5cd64 100644
--- a/drivers/gpu/drm/xe/xe_svm.c
+++ b/drivers/gpu/drm/xe/xe_svm.c
@@ -329,7 +329,7 @@ static int xe_svm_range_set_default_attr(struct xe_vm *vm, u64 range_start, u64
 	if (xe_vma_start(vma) == range_start && xe_vma_end(vma) == range_end) {
 		default_attr.pat_index = vma->attr.default_pat_index;
 		default_attr.default_pat_index = vma->attr.default_pat_index;
-		vma->attr = default_attr;
+		xe_vma_mem_attr_copy(&vma->attr, &default_attr);
 	} else {
 		vm_dbg(&vm->xe->drm, "Split VMA start=0x%016llx, vma_end=0x%016llx",
 		       range_start, range_end);
diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
index 2321e7c8ae76..27669f80b7ff 100644
--- a/drivers/gpu/drm/xe/xe_vm.c
+++ b/drivers/gpu/drm/xe/xe_vm.c
@@ -957,14 +957,37 @@ struct dma_fence *xe_vm_range_unbind(struct xe_vm *vm,
 	return fence;
 }
 
+static void xe_vma_mem_attr_fini(struct xe_vma_mem_attr *attr)
+{
+	drm_pagemap_put(attr->preferred_loc.dpagemap);
+}
+
 static void xe_vma_free(struct xe_vma *vma)
 {
+	xe_vma_mem_attr_fini(&vma->attr);
+
 	if (xe_vma_is_userptr(vma))
 		kfree(to_userptr_vma(vma));
 	else
 		kfree(vma);
 }
 
+/**
+ * xe_vma_mem_attr_copy() - copy an xe_vma_mem_attr structure.
+ * @to: Destination.
+ * @from: Source.
+ *
+ * Copies an xe_vma_mem_attr structure taking care to get reference
+ * counting of individual members right.
+ */
+void xe_vma_mem_attr_copy(struct xe_vma_mem_attr *to, struct xe_vma_mem_attr *from)
+{
+	xe_vma_mem_attr_fini(to);
+	*to = *from;
+	if (to->preferred_loc.dpagemap)
+		drm_pagemap_get(to->preferred_loc.dpagemap);
+}
+
 static struct xe_vma *xe_vma_create(struct xe_vm *vm,
 				    struct xe_bo *bo,
 				    u64 bo_offset_or_userptr,
@@ -1015,8 +1038,7 @@ static struct xe_vma *xe_vma_create(struct xe_vm *vm,
 	if (vm->xe->info.has_atomic_enable_pte_bit)
 		vma->gpuva.flags |= XE_VMA_ATOMIC_PTE_BIT;
 
-	vma->attr = *attr;
-
+	xe_vma_mem_attr_copy(&vma->attr, attr);
 	if (bo) {
 		struct drm_gpuvm_bo *vm_bo;
 
@@ -4240,7 +4262,7 @@ static int xe_vm_alloc_vma(struct xe_vm *vm,
 	struct drm_gpuva_op *__op;
 	unsigned int vma_flags = 0;
 	bool remap_op = false;
-	struct xe_vma_mem_attr tmp_attr;
+	struct xe_vma_mem_attr tmp_attr = {};
 	u16 default_pat;
 	int err;
 
@@ -4333,7 +4355,7 @@ static int xe_vm_alloc_vma(struct xe_vm *vm,
 			 * VMA, so they can be assigned to newly MAP created vma.
 			 */
 			if (is_madvise)
-				tmp_attr = vma->attr;
+				xe_vma_mem_attr_copy(&tmp_attr, &vma->attr);
 
 			xe_vma_destroy(gpuva_to_vma(op->base.remap.unmap->va), NULL);
 		} else if (__op->op == DRM_GPUVA_OP_MAP) {
@@ -4343,12 +4365,13 @@ static int xe_vm_alloc_vma(struct xe_vm *vm,
 			 * copy them to new vma.
 			 */
 			if (is_madvise)
-				vma->attr = tmp_attr;
+				xe_vma_mem_attr_copy(&vma->attr, &tmp_attr);
 		}
 	}
 
 	xe_vm_unlock(vm);
 	drm_gpuva_ops_free(&vm->gpuvm, ops);
+	xe_vma_mem_attr_fini(&tmp_attr);
 	return 0;
 
 unwind_ops:
@@ -4406,3 +4429,4 @@ int xe_vm_alloc_cpu_addr_mirror_vma(struct xe_vm *vm, uint64_t start, uint64_t r
 
 	return xe_vm_alloc_vma(vm, &map_req, false);
 }
+
diff --git a/drivers/gpu/drm/xe/xe_vm.h b/drivers/gpu/drm/xe/xe_vm.h
index ef8a5019574e..d328d31afe8e 100644
--- a/drivers/gpu/drm/xe/xe_vm.h
+++ b/drivers/gpu/drm/xe/xe_vm.h
@@ -411,4 +411,5 @@ static inline struct drm_exec *xe_vm_validation_exec(struct xe_vm *vm)
 #define xe_vm_has_valid_gpu_mapping(tile, tile_present, tile_invalidated) \
 	((READ_ONCE(tile_present) & ~READ_ONCE(tile_invalidated)) & BIT((tile)->id))
 
+void xe_vma_mem_attr_copy(struct xe_vma_mem_attr *to, struct xe_vma_mem_attr *from);
 #endif
diff --git a/drivers/gpu/drm/xe/xe_vm_madvise.c b/drivers/gpu/drm/xe/xe_vm_madvise.c
index cad3cf627c3f..9553008409d1 100644
--- a/drivers/gpu/drm/xe/xe_vm_madvise.c
+++ b/drivers/gpu/drm/xe/xe_vm_madvise.c
@@ -95,6 +95,7 @@ static void madvise_preferred_mem_loc(struct xe_device *xe, struct xe_vm *vm,
 			 */
 			vmas[i]->attr.preferred_loc.migration_policy =
 				op->preferred_mem_loc.migration_policy;
+			vmas[i]->attr.preferred_loc.dpagemap = NULL;
 		}
 	}
 }
diff --git a/drivers/gpu/drm/xe/xe_vm_types.h b/drivers/gpu/drm/xe/xe_vm_types.h
index 0d09a322199d..ca489aa7c652 100644
--- a/drivers/gpu/drm/xe/xe_vm_types.h
+++ b/drivers/gpu/drm/xe/xe_vm_types.h
@@ -20,6 +20,8 @@
 #include "xe_range_fence.h"
 #include "xe_userptr.h"
 
+struct drm_pagemap;
+
 struct xe_bo;
 struct xe_svm_range;
 struct xe_sync_entry;
@@ -65,6 +67,13 @@ struct xe_vma_mem_attr {
 		 * closest device memory respectively.
 		 */
 		u32 devmem_fd;
+		/**
+		 * @preferred_loc.dpagemap: Reference-counted pointer to the drm_pagemap preferred
+		 * for migration on a SVM page-fault. The pointer is protected by the
+		 * vm lock, and is %NULL if @devmem_fd should be consulted for special
+		 * values.
+		 */
+		struct drm_pagemap *dpagemap;
 	} preferred_loc;
 
 	/**
-- 
2.51.1
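
A closing note on the xe_vm_alloc_vma() hunks: tmp_attr is now
zero-initialized ("= {}") precisely so that the xe_vma_mem_attr_fini()
call on the success path is safe when is_madvise is false and nothing
was ever copied into it. The fragment below reuses the userspace model
from the note above; that drm_pagemap_put(), like most kernel put
helpers, ignores NULL is an assumption of this sketch.

/* The xe_vm_alloc_vma() pattern, reusing the model above (illustrative). */
static void alloc_vma_pattern(int is_madvise, struct xe_vma_mem_attr *vma_attr)
{
	struct xe_vma_mem_attr tmp_attr = {0};	/* dpagemap starts out NULL */

	if (is_madvise)
		xe_vma_mem_attr_copy(&tmp_attr, vma_attr);	/* takes a ref */

	/* ... remap/map processing, possibly copying tmp_attr back ... */

	/* Safe on exit: a no-op when nothing was ever copied in. */
	xe_vma_mem_attr_fini(&tmp_attr);
}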