From mboxrd@z Thu Jan  1 00:00:00 1970
From: Thomas Hellström <thomas.hellstrom@linux.intel.com>
To: intel-xe@lists.freedesktop.org
Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>, Matthew Brost,
	dri-devel@lists.freedesktop.org, himal.prasad.ghimiray@intel.com,
	apopple@nvidia.com, airlied@gmail.com, Simona Vetter,
	felix.kuehling@amd.com, Christian König, dakr@kernel.org,
	"Mrozek, Michal", Joonas Lahtinen
Subject: [PATCH v4 10/22] drm/xe: Use the drm_pagemap_util helper to get a svm pagemap owner
Date: Thu, 11 Dec 2025 17:58:57 +0100
Message-ID: <20251211165909.219710-11-thomas.hellstrom@linux.intel.com>
In-Reply-To: <20251211165909.219710-1-thomas.hellstrom@linux.intel.com>
References: <20251211165909.219710-1-thomas.hellstrom@linux.intel.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

Register a driver-wide owner list, provide a callback to identify fast
interconnects, and use the drm_pagemap_util helper to allocate or reuse
a suitable owner struct. For now, pagemaps on different tiles of the
same device are considered to have a fast interconnect and thus share
the same owner.

v2:
- Fix up the error onion unwind in xe_pagemap_create(). (Matt Brost)

Signed-off-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Reviewed-by: Matthew Brost <matthew.brost@intel.com>
---
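A short sketch of the pattern the patch applies (illustration only, not
part of the diff below: the example_* names are made up, while the
drm_pagemap_* identifiers are the ones the diff actually uses):

	/* A made-up object embedding a peer, as xe_vm and xe_pagemap do. */
	struct example_object {
		struct drm_pagemap_peer peer;
	};

	/* One driver-wide list of owners, shared by all potential peers. */
	static DRM_PAGEMAP_OWNER_LIST_DEFINE(example_owner_list);

	/* Map a peer to the device it sits on; made-up helper. */
	static struct device *example_peer_to_dev(struct drm_pagemap_peer *peer);

	/* Peers for which this returns true end up with the same owner. */
	static bool example_has_interconnect(struct drm_pagemap_peer *p1,
					     struct drm_pagemap_peer *p2)
	{
		return example_peer_to_dev(p1) == example_peer_to_dev(p2);
	}

	int example_init(struct example_object *obj)
	{
		int err;

		/*
		 * Join the owner list: reuse the owner of an already
		 * registered peer that has a fast interconnect to us,
		 * or allocate a fresh one.
		 */
		err = drm_pagemap_acquire_owner(&obj->peer, &example_owner_list,
						example_has_interconnect);
		if (err)
			return err;

		/*
		 * obj->peer.owner is now the cookie to use as
		 * dev_pagemap::owner and as the device-private page owner
		 * for hmm_range_fault().
		 */
		return 0;
	}

	void example_fini(struct example_object *obj)
	{
		/* Drop this peer's reference to the owner. */
		drm_pagemap_release_owner(&obj->peer);
	}
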
 drivers/gpu/drm/xe/xe_svm.c      | 64 ++++++++++++++++++++++++++++----
 drivers/gpu/drm/xe/xe_svm.h      | 24 +++++-------
 drivers/gpu/drm/xe/xe_userptr.c  |  2 +-
 drivers/gpu/drm/xe/xe_vm.c       |  2 +-
 drivers/gpu/drm/xe/xe_vm_types.h |  3 ++
 5 files changed, 71 insertions(+), 24 deletions(-)

diff --git a/drivers/gpu/drm/xe/xe_svm.c b/drivers/gpu/drm/xe/xe_svm.c
index 90df87b78c3a..82335b942252 100644
--- a/drivers/gpu/drm/xe/xe_svm.c
+++ b/drivers/gpu/drm/xe/xe_svm.c
@@ -22,8 +22,17 @@
 #include "xe_vm_types.h"
 #include "xe_vram_types.h"
 
+/* Identifies subclasses of struct drm_pagemap_peer */
+#define XE_PEER_PAGEMAP ((void *)0ul)
+#define XE_PEER_VM ((void *)1ul)
+
 static int xe_svm_get_pagemaps(struct xe_vm *vm);
 
+void *xe_svm_private_page_owner(struct xe_vm *vm, bool force_smem)
+{
+	return force_smem ? NULL : vm->svm.peer.owner;
+}
+
 static bool xe_svm_range_in_vram(struct xe_svm_range *range)
 {
 	/*
@@ -812,6 +821,25 @@ static void xe_svm_put_pagemaps(struct xe_vm *vm)
 	}
 }
 
+static struct device *xe_peer_to_dev(struct drm_pagemap_peer *peer)
+{
+	if (peer->private == XE_PEER_PAGEMAP)
+		return container_of(peer, struct xe_pagemap, peer)->dpagemap.drm->dev;
+
+	return container_of(peer, struct xe_vm, svm.peer)->xe->drm.dev;
+}
+
+static bool xe_has_interconnect(struct drm_pagemap_peer *peer1,
+				struct drm_pagemap_peer *peer2)
+{
+	struct device *dev1 = xe_peer_to_dev(peer1);
+	struct device *dev2 = xe_peer_to_dev(peer2);
+
+	return dev1 == dev2;
+}
+
+static DRM_PAGEMAP_OWNER_LIST_DEFINE(xe_owner_list);
+
 /**
  * xe_svm_init() - SVM initialize
  * @vm: The VM.
@@ -830,10 +858,18 @@ int xe_svm_init(struct xe_vm *vm)
 		INIT_WORK(&vm->svm.garbage_collector.work,
 			  xe_svm_garbage_collector_work_func);
 
-		err = xe_svm_get_pagemaps(vm);
+		vm->svm.peer.private = XE_PEER_VM;
+		err = drm_pagemap_acquire_owner(&vm->svm.peer, &xe_owner_list,
+						xe_has_interconnect);
 		if (err)
 			return err;
 
+		err = xe_svm_get_pagemaps(vm);
+		if (err) {
+			drm_pagemap_release_owner(&vm->svm.peer);
+			return err;
+		}
+
 		err = drm_gpusvm_init(&vm->svm.gpusvm, "Xe SVM", &vm->xe->drm,
				      current->mm, 0, vm->size,
				      xe_modparam.svm_notifier_size * SZ_1M,
@@ -843,6 +879,7 @@ int xe_svm_init(struct xe_vm *vm)
 
 		if (err) {
 			xe_svm_put_pagemaps(vm);
+			drm_pagemap_release_owner(&vm->svm.peer);
 			return err;
 		}
 	} else {
@@ -865,6 +902,7 @@ void xe_svm_close(struct xe_vm *vm)
 	xe_assert(vm->xe, xe_vm_is_closed(vm));
 	flush_work(&vm->svm.garbage_collector.work);
 	xe_svm_put_pagemaps(vm);
+	drm_pagemap_release_owner(&vm->svm.peer);
 }
 
 /**
@@ -1012,7 +1050,7 @@ static int xe_drm_pagemap_populate_mm(struct drm_pagemap *dpagemap,
 	xe_pm_runtime_get_noresume(xe);
 	err = drm_pagemap_migrate_to_devmem(&bo->devmem_allocation, mm,
					    start, end, timeslice_ms,
-					    xe_svm_devm_owner(xe));
+					    xpagemap->pagemap.owner);
 	if (err)
 		xe_svm_devmem_release(&bo->devmem_allocation);
 	xe_bo_unlock(bo);
@@ -1127,7 +1165,6 @@ static int __xe_svm_handle_pagefault(struct xe_vm *vm, struct xe_vma *vma,
 		.devmem_only = need_vram && devmem_possible,
 		.timeslice_ms = need_vram && devmem_possible ?
			vm->xe->atomic_svm_timeslice_ms : 0,
-		.device_private_page_owner = xe_svm_devm_owner(vm->xe),
 	};
 	struct xe_validation_ctx vctx;
 	struct drm_exec exec;
@@ -1151,8 +1188,8 @@ static int __xe_svm_handle_pagefault(struct xe_vm *vm, struct xe_vma *vma,
 		return err;
 
 	dpagemap = xe_vma_resolve_pagemap(vma, tile);
-	if (!dpagemap && !ctx.devmem_only)
-		ctx.device_private_page_owner = NULL;
+	ctx.device_private_page_owner =
+		xe_svm_private_page_owner(vm, !dpagemap && !ctx.devmem_only);
 
 	range = xe_svm_range_find_or_insert(vm, fault_addr, vma, &ctx);
 	if (IS_ERR(range))
@@ -1576,6 +1613,8 @@ static void xe_pagemap_destroy_work(struct work_struct *work)
					pagemap->range.end - pagemap->range.start + 1);
		drm_dev_exit(idx);
	}
+
+	drm_pagemap_release_owner(&xpagemap->peer);
	kfree(xpagemap);
 }
 
@@ -1626,6 +1665,7 @@ static struct xe_pagemap *xe_pagemap_create(struct xe_device *xe, struct xe_vram
	dpagemap = &xpagemap->dpagemap;
	INIT_WORK(&xpagemap->destroy_work, xe_pagemap_destroy_work);
	xpagemap->vr = vr;
+	xpagemap->peer.private = XE_PEER_PAGEMAP;
 
	err = drm_pagemap_init(dpagemap, pagemap, &xe->drm, &xe_drm_pagemap_ops);
	if (err)
@@ -1638,21 +1678,29 @@ static struct xe_pagemap *xe_pagemap_create(struct xe_device *xe, struct xe_vram
		goto out_err;
	}
 
+	err = drm_pagemap_acquire_owner(&xpagemap->peer, &xe_owner_list,
+					xe_has_interconnect);
+	if (err)
+		goto out_no_owner;
+
	pagemap->type = MEMORY_DEVICE_PRIVATE;
	pagemap->range.start = res->start;
	pagemap->range.end = res->end;
	pagemap->nr_range = 1;
-	pagemap->owner = xe_svm_devm_owner(xe);
+	pagemap->owner = xpagemap->peer.owner;
	pagemap->ops = drm_pagemap_pagemap_ops_get();
 
	addr = devm_memremap_pages(dev, pagemap);
	if (IS_ERR(addr)) {
		err = PTR_ERR(addr);
-		devm_release_mem_region(dev, res->start, res->end - res->start + 1);
-		goto out_err;
+		goto out_no_pages;
	}
 
	xpagemap->hpa_base = res->start;
	return xpagemap;
 
+out_no_pages:
+	drm_pagemap_release_owner(&xpagemap->peer);
+out_no_owner:
+	devm_release_mem_region(dev, res->start, res->end - res->start + 1);
 out_err:
	drm_pagemap_put(dpagemap);
	return ERR_PTR(err);
diff --git a/drivers/gpu/drm/xe/xe_svm.h b/drivers/gpu/drm/xe/xe_svm.h
index 8a49ff17ef0c..5adce108f7eb 100644
--- a/drivers/gpu/drm/xe/xe_svm.h
+++ b/drivers/gpu/drm/xe/xe_svm.h
@@ -6,24 +6,11 @@
 #ifndef _XE_SVM_H_
 #define _XE_SVM_H_
 
-struct xe_device;
-
-/**
- * xe_svm_devm_owner() - Return the owner of device private memory
- * @xe: The xe device.
- *
- * Return: The owner of this device's device private memory to use in
- * hmm_range_fault()-
- */
-static inline void *xe_svm_devm_owner(struct xe_device *xe)
-{
-	return xe;
-}
-
 #if IS_ENABLED(CONFIG_DRM_XE_GPUSVM)
 
 #include <drm/drm_pagemap.h>
 #include <drm/drm_gpusvm.h>
+#include <drm/drm_pagemap_util.h>
 
 #define XE_INTERCONNECT_VRAM DRM_INTERCONNECT_DRIVER
@@ -65,6 +52,7 @@ struct xe_svm_range {
  * @pagemap: The struct dev_pagemap providing the struct pages.
  * @dpagemap: The drm_pagemap managing allocation and migration.
  * @destroy_work: Handles asnynchronous destruction and caching.
+ * @peer: Used for pagemap owner computation.
  * @hpa_base: The host physical address base for the managemd memory.
  * @vr: Backpointer to the xe_vram region.
  */
@@ -72,6 +60,7 @@ struct xe_pagemap {
	struct dev_pagemap pagemap;
	struct drm_pagemap dpagemap;
	struct work_struct destroy_work;
+	struct drm_pagemap_peer peer;
	resource_size_t hpa_base;
	struct xe_vram_region *vr;
 };
@@ -131,6 +120,8 @@ u8 xe_svm_ranges_zap_ptes_in_range(struct xe_vm *vm, u64 start, u64 end);
 
 struct drm_pagemap *xe_vma_resolve_pagemap(struct xe_vma *vma, struct xe_tile *tile);
 
+void *xe_svm_private_page_owner(struct xe_vm *vm, bool force_smem);
+
 /**
  * xe_svm_range_has_dma_mapping() - SVM range has DMA mapping
  * @range: SVM range
@@ -368,6 +359,11 @@ struct drm_pagemap *xe_vma_resolve_pagemap(struct xe_vma *vma, struct xe_tile *t
	return NULL;
 }
 
+static inline void *xe_svm_private_page_owner(struct xe_vm *vm, bool force_smem)
+{
+	return NULL;
+}
+
 static inline void xe_svm_flush(struct xe_vm *vm)
 {
 }
diff --git a/drivers/gpu/drm/xe/xe_userptr.c b/drivers/gpu/drm/xe/xe_userptr.c
index 0d9130b1958a..e120323c43bc 100644
--- a/drivers/gpu/drm/xe/xe_userptr.c
+++ b/drivers/gpu/drm/xe/xe_userptr.c
@@ -55,7 +55,7 @@ int xe_vma_userptr_pin_pages(struct xe_userptr_vma *uvma)
	struct xe_device *xe = vm->xe;
	struct drm_gpusvm_ctx ctx = {
		.read_only = xe_vma_read_only(vma),
-		.device_private_page_owner = xe_svm_devm_owner(xe),
+		.device_private_page_owner = xe_svm_private_page_owner(vm, false),
		.allow_mixed = true,
	};
 
diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
index c2012d20faa6..743f45727252 100644
--- a/drivers/gpu/drm/xe/xe_vm.c
+++ b/drivers/gpu/drm/xe/xe_vm.c
@@ -2905,7 +2905,7 @@ static int prefetch_ranges(struct xe_vm *vm, struct xe_vma_op *op)
		ctx.read_only = xe_vma_read_only(vma);
		ctx.devmem_possible = devmem_possible;
		ctx.check_pages_threshold = devmem_possible ? SZ_64K : 0;
-		ctx.device_private_page_owner = xe_svm_devm_owner(vm->xe);
+		ctx.device_private_page_owner = xe_svm_private_page_owner(vm, !tile);
 
		/* TODO: Threading the migration */
		xa_for_each(&op->prefetch_range.range, i, svm_range) {
diff --git a/drivers/gpu/drm/xe/xe_vm_types.h b/drivers/gpu/drm/xe/xe_vm_types.h
index 42c5510f12d5..62a9e16352ba 100644
--- a/drivers/gpu/drm/xe/xe_vm_types.h
+++ b/drivers/gpu/drm/xe/xe_vm_types.h
@@ -8,6 +8,7 @@
 
 #include <drm/drm_gpusvm.h>
 #include <drm/drm_gpuvm.h>
+#include <drm/drm_pagemap_util.h>
 
 #include <linux/dma-resv.h>
 #include <linux/kref.h>
@@ -192,6 +193,8 @@ struct xe_vm {
			struct work_struct work;
		} garbage_collector;
		struct xe_pagemap *pagemaps[XE_MAX_TILES_PER_DEVICE];
+		/** @svm.peer: Used for pagemap connectivity computations. */
+		struct drm_pagemap_peer peer;
	} svm;
 
	struct xe_device *xe;
-- 
2.51.1