From: Junhua Shen <Junhua.Shen@amd.com>
To: <Alexander.Deucher@amd.com>, <Felix.Kuehling@amd.com>,
<Christian.Koenig@amd.com>, <Oak.Zeng@amd.com>,
<Jenny-Jing.Liu@amd.com>, <Philip.Yang@amd.com>,
<Xiaogang.Chen@amd.com>, <Ray.Huang@amd.com>,
<honglei1.huang@amd.com>, <Lingshan.Zhu@amd.com>
Cc: <amd-gfx@lists.freedesktop.org>,
<dri-devel@lists.freedesktop.org>,
Junhua Shen <Junhua.Shen@amd.com>
Subject: [PATCH v3 1/5] drm/amdgpu: add VRAM migration infrastructure for drm_pagemap
Date: Mon, 27 Apr 2026 18:05:18 +0800
Message-ID: <20260427100522.7014-2-Junhua.Shen@amd.com>
In-Reply-To: <20260427100522.7014-1-Junhua.Shen@amd.com>

Add the drm_pagemap-based VRAM migration infrastructure:
- Define struct amdgpu_pagemap wrapping dev_pagemap + drm_pagemap
- Define AMDGPU_SVM_PGMAP_OWNER() and AMDGPU_INTERCONNECT_VRAM macros
- Implement amdgpu_svm_migration_init() to register ZONE_DEVICE via
devm_memremap_pages() and initialize the drm_pagemap
- Add amdgpu_pagemap pointer (apagemap) to struct amdgpu_device
Signed-off-by: Junhua Shen <Junhua.Shen@amd.com>
---
drivers/gpu/drm/amd/amdgpu/Makefile | 6 +-
drivers/gpu/drm/amd/amdgpu/amdgpu.h | 8 +
drivers/gpu/drm/amd/amdgpu/amdgpu_migrate.c | 179 ++++++++++++++++++++
drivers/gpu/drm/amd/amdgpu/amdgpu_migrate.h | 98 +++++++++++
4 files changed, 289 insertions(+), 2 deletions(-)
create mode 100644 drivers/gpu/drm/amd/amdgpu/amdgpu_migrate.c
create mode 100644 drivers/gpu/drm/amd/amdgpu/amdgpu_migrate.h
diff --git a/drivers/gpu/drm/amd/amdgpu/Makefile b/drivers/gpu/drm/amd/amdgpu/Makefile
index 7700f81a246e..e64abb5c8ab8 100644
--- a/drivers/gpu/drm/amd/amdgpu/Makefile
+++ b/drivers/gpu/drm/amd/amdgpu/Makefile
@@ -323,12 +323,14 @@ amdgpu-$(CONFIG_HMM_MIRROR) += amdgpu_hmm.o
# svm support
amdgpu-$(CONFIG_DRM_AMDGPU_SVM) += amdgpu_svm.o amdgpu_svm_attr.o \
- amdgpu_svm_range.o
+ amdgpu_svm_range.o amdgpu_migrate.o
.PHONY: clean-svm
clean-svm:
rm -f $(obj)/amdgpu_svm.o $(obj)/amdgpu_svm_attr.o $(obj)/amdgpu_svm_range.o \
- $(obj)/.amdgpu_svm.o.cmd $(obj)/.amdgpu_svm_attr.o.cmd $(obj)/.amdgpu_svm_range.o.cmd
+ $(obj)/amdgpu_migrate.o \
+ $(obj)/.amdgpu_svm.o.cmd $(obj)/.amdgpu_svm_attr.o.cmd $(obj)/.amdgpu_svm_range.o.cmd \
+ $(obj)/.amdgpu_migrate.o.cmd
include $(FULL_AMD_PATH)/pm/Makefile
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu.h b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
index 49e7881750fa..fe6ba9911d9f 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
@@ -325,6 +325,7 @@ struct amdgpu_fpriv;
struct amdgpu_bo_va_mapping;
struct kfd_vm_fault_info;
struct amdgpu_hive_info;
+struct amdgpu_pagemap;
struct amdgpu_reset_context;
struct amdgpu_reset_control;
struct amdgpu_coredump_info;
@@ -1200,6 +1201,13 @@ struct amdgpu_device {
struct amdgpu_uma_carveout_info uma_info;
+#if IS_ENABLED(CONFIG_DRM_AMDGPU_SVM)
+	/* SVM VRAM migration via drm_pagemap (drm_gpusvm path).
+	 * Allocated in amdgpu_svm_migration_init(); NULL if SVM is disabled.
+	 */
+ struct amdgpu_pagemap *apagemap;
+#endif
+
/* KFD
* Must be last --ends in a flexible-array member.
*/
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_migrate.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_migrate.c
new file mode 100644
index 000000000000..170e2eadc106
--- /dev/null
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_migrate.c
@@ -0,0 +1,179 @@
+// SPDX-License-Identifier: GPL-2.0 OR MIT
+/*
+ * Copyright 2026 Advanced Micro Devices, Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ *
+ */
+
+/**
+ * DOC: AMDGPU SVM Migration
+ *
+ * This file implements the drm_pagemap-based migration infrastructure for
+ * AMDGPU SVM. It provides the callbacks that the DRM GPUSVM / drm_pagemap
+ * framework needs to:
+ *
+ * 1. Map ZONE_DEVICE pages to GPU-visible VRAM MC addresses (device_map)
+ * 2. Allocate VRAM and migrate pages from system memory (populate_mm)
+ * 3. Copy data between RAM and VRAM using SDMA (copy_to_devmem / copy_to_ram)
+ * 4. Release VRAM backing when pages migrate back to system memory (devmem_release)
+ *
+ * Architecture overview::
+ *
+ *   adev->apagemap->dpagemap (struct drm_pagemap)
+ *     .ops = &amdgpu_svm_drm_pagemap_ops
+ *          |
+ *          +---------------------+
+ *          |                     |
+ *     .populate_mm          .device_map
+ *     (alloc BO + migrate)  (page -> VRAM MC addr)
+ *          |
+ *          v
+ *   drm_pagemap_devmem_ops (per-BO migration mechanics)
+ *     .populate_devmem_pfn -> BO buddy blocks -> PFN array
+ *     .copy_to_devmem      -> SDMA copy RAM -> VRAM
+ *     .copy_to_ram         -> SDMA copy VRAM -> RAM
+ *     .devmem_release      -> release BO reference
+ *
+ * The three address spaces involved::
+ *
+ *   VRAM offset [0, real_vram_size)          - buddy allocator managed
+ *     + hpa_base
+ *   HPA / PFN   [hpa_base, hpa_base + size)  - ZONE_DEVICE struct page management
+ *     + vm_manager.vram_base_offset
+ *   PTE address [vram_base_offset, ...)      - GPU page table entries (from MMHUB FB_OFFSET)
+ */
+
+#include <drm/drm_pagemap.h>
+#include <linux/memremap.h>
+#include <linux/migrate.h>
+
+#include "amdgpu_amdkfd.h"
+#include "amdgpu_migrate.h"
+#include "amdgpu.h"
+
+static inline struct amdgpu_pagemap *
+dpagemap_to_apagemap(struct drm_pagemap *dpagemap)
+{
+ return container_of(dpagemap, struct amdgpu_pagemap, dpagemap);
+}
+
+static inline struct amdgpu_device *
+dpagemap_to_adev(struct drm_pagemap *dpagemap)
+{
+ return drm_to_adev(dpagemap->drm);
+}
+
+/**
+ * amdgpu_svm_page_to_apagemap - Get amdgpu_pagemap from a ZONE_DEVICE page
+ * @page: A ZONE_DEVICE page backed by VRAM
+ *
+ * Follows: page -> pgmap -> container_of(apagemap)
+ */
+static inline struct amdgpu_pagemap *
+amdgpu_svm_page_to_apagemap(struct page *page)
+{
+ struct dev_pagemap *pgmap = page_pgmap(page);
+
+ return container_of(pgmap, struct amdgpu_pagemap, pgmap);
+}
+
+/* Callbacks are filled in by later patches in this series. */
+const struct drm_pagemap_ops amdgpu_svm_drm_pagemap_ops = { };
+
+/**
+ * amdgpu_svm_migration_init - Register ZONE_DEVICE and initialize drm_pagemap
+ * @adev: AMDGPU device to set up VRAM migration for
+ *
+ * Allocates a ZONE_DEVICE region covering the GPU's VRAM, registers it
+ * via devm_memremap_pages() with drm_pagemap's generic dev_pagemap_ops,
+ * and then initializes the drm_pagemap (dpagemap) that provides the
+ * device_map / populate_mm callbacks for the DRM GPUSVM migration path.
+ *
+ * For GPUs XGMI-connected to the CPU, the device aperture is used
+ * directly (MEMORY_DEVICE_COHERENT). For discrete GPUs, a free iomem
+ * region is requested for MEMORY_DEVICE_PRIVATE pages.
+ *
+ * Return: 0 on success, or a negative error code (-EINVAL if the GPU generation is too old)
+ */
+int amdgpu_svm_migration_init(struct amdgpu_device *adev)
+{
+ struct amdgpu_pagemap *svm_dm;
+ struct dev_pagemap *pgmap;
+ struct resource *res = NULL;
+ unsigned long size;
+ void *r;
+
+ if (amdgpu_ip_version(adev, GC_HWIP, 0) < IP_VERSION(9, 0, 1))
+ return -EINVAL;
+
+ if (adev->apu_prefer_gtt)
+ return 0;
+
+ if (adev->apagemap && adev->apagemap->initialized)
+ return 0;
+
+ svm_dm = devm_kzalloc(adev->dev, sizeof(*svm_dm), GFP_KERNEL);
+ if (!svm_dm)
+ return -ENOMEM;
+
+ pgmap = &svm_dm->pgmap;
+
+ size = ALIGN(adev->gmc.real_vram_size, 2ULL << 20);
+ if (adev->gmc.xgmi.connected_to_cpu) {
+ pgmap->range.start = adev->gmc.aper_base;
+ pgmap->range.end = adev->gmc.aper_base + adev->gmc.aper_size - 1;
+ pgmap->type = MEMORY_DEVICE_COHERENT;
+ } else {
+ res = devm_request_free_mem_region(adev->dev, &iomem_resource, size);
+ if (IS_ERR(res))
+ return PTR_ERR(res);
+ pgmap->range.start = res->start;
+ pgmap->range.end = res->end;
+ pgmap->type = MEMORY_DEVICE_PRIVATE;
+ }
+
+ pgmap->nr_range = 1;
+ pgmap->flags = 0;
+ pgmap->ops = drm_pagemap_pagemap_ops_get();
+ pgmap->owner = AMDGPU_SVM_PGMAP_OWNER(adev);
+
+ r = devm_memremap_pages(adev->dev, pgmap);
+ if (IS_ERR(r)) {
+ dev_err(adev->dev, "SVM: failed to register HMM device memory\n");
+ if (pgmap->type == MEMORY_DEVICE_PRIVATE && res)
+ devm_release_mem_region(adev->dev, res->start, resource_size(res));
+ pgmap->type = 0;
+ return PTR_ERR(r);
+ }
+
+ if (drm_pagemap_init(&svm_dm->dpagemap, pgmap, adev_to_drm(adev),
+ &amdgpu_svm_drm_pagemap_ops)) {
+ dev_err(adev->dev, "SVM: failed to init drm_pagemap\n");
+ return -EINVAL;
+ }
+ svm_dm->adev = adev;
+ svm_dm->hpa_base = pgmap->range.start;
+ svm_dm->initialized = true;
+ adev->apagemap = svm_dm;
+
+	dev_info(adev->dev, "SVM: registered %luMB device memory, hpa_base=%pa\n",
+		 size >> 20, &svm_dm->hpa_base);
+ return 0;
+}
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_migrate.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_migrate.h
new file mode 100644
index 000000000000..e20698fb1597
--- /dev/null
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_migrate.h
@@ -0,0 +1,98 @@
+/* SPDX-License-Identifier: GPL-2.0 OR MIT */
+/*
+ * Copyright 2026 Advanced Micro Devices, Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ *
+ */
+
+#ifndef __AMDGPU_MIGRATE_H__
+#define __AMDGPU_MIGRATE_H__
+
+#include <drm/drm_pagemap.h>
+#include <linux/memremap.h>
+
+struct amdgpu_device;
+
+/*
+ * AMDGPU_INTERCONNECT_VRAM - Protocol identifier for local VRAM access.
+ *
+ * Used in drm_pagemap_addr to distinguish device-local VRAM addresses from
+ * DMA-mapped system memory addresses. drm_gpusvm_get_pages() uses this to
+ * identify pages that are already in device memory and need no DMA mapping.
+ *
+ * Value must be non-zero (0 == DRM_INTERCONNECT_SYSTEM).
+ */
+#define AMDGPU_INTERCONNECT_VRAM 1
+
+/*
+ * AMDGPU_SVM_PGMAP_OWNER - Unique owner token for dev_pagemap registration.
+ *
+ * migrate_vma_setup() uses pgmap->owner to distinguish "own" device pages
+ * from "foreign" device pages (e.g., another GPU in an XGMI hive).
+ * Pages whose page->pgmap->owner matches the migration source are skipped
+ * (they're already in the right place).
+ *
+ * For XGMI hive: all GPUs in the hive share the same owner (the hive pointer)
+ * so intra-hive pages are treated as local.
+ * For standalone GPU: use the adev pointer itself as a unique per-device token.
+ */
+#define AMDGPU_SVM_PGMAP_OWNER(adev) \
+ ((adev)->hive ? (void *)(adev)->hive : (void *)(adev))
+
+/**
+ * struct amdgpu_pagemap - VRAM migration infrastructure for drm_pagemap
+ * @dpagemap: DRM pagemap wrapper providing device_map/populate_mm callbacks
+ * @adev: back-pointer to the owning amdgpu_device
+ * @hpa_base: host physical address base of the ZONE_DEVICE region
+ * @initialized: set to true after successful registration
+ * @pgmap: the dev_pagemap registered with devm_memremap_pages();
+ * must be last — contains a flexible-array member (ranges[])
+ *
+ * Allocated with devm_kzalloc() in amdgpu_svm_migration_init() and stored
+ * as adev->apagemap. Lifetime is tied to the device via devres.
+ */
+struct amdgpu_pagemap {
+ struct drm_pagemap dpagemap;
+ struct amdgpu_device *adev;
+ resource_size_t hpa_base;
+ bool initialized;
+ struct dev_pagemap pgmap; /* must be last — flex-array */
+};
+
+#if IS_ENABLED(CONFIG_DRM_AMDGPU_SVM)
+int amdgpu_svm_migration_init(struct amdgpu_device *adev);
+#else
+static inline int
+amdgpu_svm_migration_init(struct amdgpu_device *adev)
+{
+	return 0;
+}
+#endif
+
+/*
+ * amdgpu_svm_drm_pagemap_ops - drm_pagemap_ops for AMDGPU VRAM migration.
+ *
+ * The callbacks are added by later patches in this series:
+ *   .device_map  - convert a ZONE_DEVICE page to a VRAM address
+ *   .populate_mm - allocate a VRAM BO and migrate pages from system memory
+ */
+extern const struct drm_pagemap_ops amdgpu_svm_drm_pagemap_ops;
+
+#endif /* __AMDGPU_MIGRATE_H__ */
--
2.34.1