From mboxrd@z Thu Jan 1 00:00:00 1970
From: "Huang, Ying" <ying.huang@linux.alibaba.com>
To: Shivank Garg
Subject: Re: [PATCH 0/7] Accelerate page migration with batch copying and hardware offload
In-Reply-To: <20260428155043.39251-2-shivankg@amd.com> (Shivank Garg's message of "Tue, 28 Apr 2026 15:50:37 +0000")
References: <20260428155043.39251-2-shivankg@amd.com>
Date: Thu, 07 May 2026 17:58:17 +0800
Message-ID: <87a4ub35ja.fsf@DESKTOP-5N7EMDA>

Shivank Garg writes:

> This is the fifth RFC of the patchset to enhance page migration by
> batching folio-copy operations and enabling acceleration via DMA offload.
>
> Single-threaded, folio-by-folio copying bottlenecks page migration in
> modern systems with deep memory hierarchies, especially for large folios
> where copy overhead dominates, leaving significant hardware potential
> untapped.
>
> By batching the copy phase, we create an opportunity for hardware
> acceleration. This series builds the framework and provides a DMA
> offload driver (dcbm) as a reference implementation, targeting bulk
> migration workloads where offloading the copy improves throughput
> and latency while freeing up CPU cycles.
>
> See the RFC V3 cover letter [2] for motivation.
>
> Changelog since V4:
> -------------------
>
> 1. Renamed the PAGE_* migration state flags to FOLIO_*. (David)
> 2. Use the new folio->migrate_info field instead of folio->private
>    for migration state. (David)
> 3. Folded the folios_mc_copy patch into the batch-copy implementation
>    patch. (David)
> 4. Renamed migrate_offload_start()/stop() to register()/unregister().
>    (Huang, Ying)
> 5. Dropped the should_batch() callback from struct migrator.
>    Reason-based policy now lives in migrate_pages_batch(). Migrators
>    can still skip a batch they don't want (size-based policy).
>    (Huang, Ying)
> 6. CONFIG_MIGRATION_COPY_OFFLOAD is now hidden and selected by the
>    migrator driver. CONFIG_DCBM_DMA is tristate. (Huang, Ying;
>    Gregory Price)
> 7. Wrapped the SRCU + static_call dispatch in a small helper.
>    (Huang, Ying)
> 8. Require m->owner in migrate_offload_register(); the SRCU sync at
>    unregister relies on it. Counters are atomic_long_t to avoid a
>    lock-ordering issue.
> 9. Moved the DCBM sysfs from /sys/kernel/dcbm to /sys/module/dcbm.
>    (Huang, Ying)
> 10. Rebased on v7.1-rc1.
>
>
> DESIGN:
> -------
>
> New Migration Flow:
>
> [ migrate_pages_batch() ]
>   |
>   |--> do_batch = migrate_offload_do_batch(reason)  // core filters by migration reason
>   |
>   |--> for each folio:
>   |      migrate_folio_unmap()  // unmap the folio
>   |        |
>   |        +--> (success):
>   |             if do_batch && folio_supports_batch_copy():
>   |                 -> unmap_batch / dst_batch    // batch list for copy offloading
>   |             else:
>   |                 -> unmap_single / dst_single  // single lists for per-folio CPU copy
>   |
>   |--> try_to_unmap_flush()  // single batched TLB flush
>   |
>   |--> Batch copy (if unmap_batch not empty):
>   |      - Migrator is configurable at runtime via sysfs.
>   |
>   |      static_call(migrate_offload_copy)  // Pluggable Migrators
>   |          /        |        \
>   |         v         v         v
>   |   [ Default ] [ DMA Offload ] [ ... ]
>   |
>   |      On -EOPNOTSUPP or other error, the batch falls back to
>   |      per-folio CPU copy.
>   |
>   +--> migrate_folios_move()  // metadata, update PTEs, finalize
>        (batch list with already_copied=true, single list with false)
>
> Offload Registration:
>
> Driver fills struct migrator { .name, .offload_copy, .owner } and calls
> migrate_offload_register(). This:
> - Pins the module via try_module_get()
> - Patches the migrate_offload_copy() static_call target
> - Enables the migrate_offload_enabled static branch
>
> migrate_offload_unregister() disables the static branch and reverts
> the static_call, then synchronize_srcu() waits for in-flight migrations
> before module_put().
>
> PERFORMANCE RESULTS:
> --------------------
>
> Re-ran the V4 workload on v7.1-rc1 with this series; the relative
> speedups match V4 (~6x for 2MB folios at 16 DMA channels). No design
> change in V5 alters this picture; please refer to the V4 cover letter
> for the throughput tables [1].
>
>
> PLAN:
> -----
>
> Patches 1-4 (the batching infrastructure) don't depend on the migrator
> interface, so if it helps I can split them off and post them ahead of
> the migrator and DCBM bits, which still have a few open questions to
> work through.
>
> I would appreciate guidance on splitting the infrastructure portion
> ahead of the migrator interface if that matches the maintainers'
> preference.
>
> OPEN QUESTIONS:
> ---------------
>
> 1. Should the batch path run without a registered migrator? Patches 1-4
>    are self-contained and use folios_mc_copy() (CPU). Options include
>    making the batch path always-on for eligible folios, giving the
>    admin a knob to flip the static branch, or keeping the gate. I'm
>    leaning toward always-on.
>
> 2. Carrying already_copied via folio->migrate_info vs changing the
>    migrate_folio() callback signature (Huang, Ying). I went with the
>    field for now to avoid touching every fs callback before the design
>    settles. Happy to revisit.

Personally, I still prefer to change the migrate_folio() callbacks for
better readability.

> 3. Per-caller offload selection: Today eligibility is by migrate_reason
>    only. Some callers are latency-tolerant, others may not be. Is
>    reason the right granularity, or do we want a per-caller hint?
>
> 4. Cgroup integration: How should per-cgroup accounting work for
>    different migrators (e.g., any accounting for DMA-busy time)?
>
> 5. Tuning migrate_pages() callers for offloading. For instance, in
>    compaction, COMPACT_CLUSTER_MAX = 32 caps DMA's payoff (V4
>    experiment).
>
> 6. Where do batch-size thresholds live, and how are they tuned? Per
>    Huang Ying's split, that policy lives in the migrator. DCBM has no
>    threshold today. It is open whether this should later be a
>    per-migrator sysfs knob or hard-coded; it will probably be clearer
>    once a second migrator (SDXI, mtcopy) shows the trade-off.
>
>
> FOLLOW-UPS:
> -----------
>
> 1. dmaengine_prep_dma_memcpy_sg() in DCBM (Vinod Koul). The SG-prep
>    variant cuts per-batch prep/submit cost (= CPU savings), but ptdma
>    does not implement the SG hook yet [10]. The end-to-end migration
>    throughput delta is small because per-descriptor execute time
>    dominates. I'll post the ptdma SG hook + DCBM switch as a follow-up.
>
> 2. SDXI as a second migrator. The SDXI series [11] is in review. SDXI
>    is a generic memcpy engine without DMA_PRIVATE, so channel
>    acquisition goes through dma_find_channel() or async_tx rather than
>    dma_request_chan_by_mask(). I have a local DCBM variant working on
>    top of the SDXI driver. I'm planning to send it as a follow-up once
>    the SDXI series settles.
>
> 3. IOMMU SG merging in DCBM (Gregory). dma_map_sgtable() may merge
>    contiguous PFNs unevenly, so src.nents != dst.nents. DCBM falls
>    back to CPU for safety, though I haven't seen it on Zen3 + PTDMA.
>    I'll investigate and address it in a follow-up.
>
> 4. Revisit a multi-threaded CPU copy migrator once the infra is
>    settled.
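For readers following the registration/dispatch discussion above, here is
a minimal userspace sketch of the pattern. Plain function pointers stand
in for the kernel's static_call/static-branch machinery, FOLIO_SZ and all
names are illustrative rather than the actual kernel API, and the
fallback mirrors the -EOPNOTSUPP path in the flow diagram:

```c
/* Userspace sketch of the pluggable-migrator dispatch described in the
 * cover letter. Function pointers stand in for static_call; names and
 * the fixed "folio" size are illustrative, not the real kernel API. */
#include <errno.h>
#include <stddef.h>
#include <string.h>

#define FOLIO_SZ 4096   /* stand-in for a folio's byte size */

struct migrator {
	const char *name;
	/* Return 0 on success, -EOPNOTSUPP (or another error) to request
	 * the per-folio CPU fallback for the whole batch. */
	int (*offload_copy)(void **dst, void **src, size_t nr);
};

/* Default migrator: plain per-folio CPU copy. */
static int cpu_copy(void **dst, void **src, size_t nr)
{
	for (size_t i = 0; i < nr; i++)
		memcpy(dst[i], src[i], FOLIO_SZ);
	return 0;
}

static struct migrator default_migrator = { "default", cpu_copy };
static struct migrator *active_migrator = &default_migrator;

static void migrate_offload_register(struct migrator *m)
{
	active_migrator = m;	/* kernel side also pins the module */
}

static void migrate_offload_unregister(void)
{
	active_migrator = &default_migrator;	/* kernel: SRCU sync first */
}

/* Batch copy with fallback: any error from the offload path makes the
 * whole batch fall back to the per-folio CPU copy. */
static int batch_copy(void **dst, void **src, size_t nr)
{
	if (active_migrator->offload_copy(dst, src, nr) != 0)
		return cpu_copy(dst, src, nr);
	return 0;
}

/* Toy "DMA" migrator that rejects every batch, exercising the fallback. */
static int toy_dma_copy(void **dst, void **src, size_t nr)
{
	(void)dst; (void)src; (void)nr;
	return -EOPNOTSUPP;
}
```

The design point this illustrates: callers never see the fallback; they
call one dispatch function and the batch is copied one way or another.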
>
> EARLIER POSTINGS:
> -----------------
> [1] RFC V4: https://lore.kernel.org/all/20260309120725.308854-3-shivankg@amd.com
> [2] RFC V3: https://lore.kernel.org/all/20250923174752.35701-1-shivankg@amd.com
> [3] RFC V2: https://lore.kernel.org/all/20250319192211.10092-1-shivankg@amd.com
> [4] RFC V1: https://lore.kernel.org/all/20240614221525.19170-1-shivankg@amd.com
> [5] RFC from Zi Yan: https://lore.kernel.org/all/20250103172419.4148674-1-ziy@nvidia.com
>
> RELATED DISCUSSIONS:
> --------------------
> [6] MM-alignment Session [Nov 12, 2025]:
>     https://lore.kernel.org/linux-mm/bd6a3c75-b9f0-cbcf-f7c4-1ef5dff06d24@google.com
> [7] Linux Memory Hotness and Promotion call [Nov 6, 2025]:
>     https://lore.kernel.org/linux-mm/8ff2fd10-c9ac-4912-cf56-7ecd4afd2770@google.com
> [8] LSFMM 2025:
>     https://lore.kernel.org/all/cf6fc05d-c0b0-4de3-985e-5403977aa3aa@amd.com
> [9] OSS India:
>     https://ossindia2025.sched.com/event/23Jk1
> [10] DMA_MEMCPY_SG comparison:
>     https://lore.kernel.org/linux-mm/3e73addb-ac01-4a05-bc75-c6c1c56072df@amd.com
> [11] SDXI V1:
>     https://lore.kernel.org/all/20260410-sdxi-base-v1-0-1d184cb5c60a@amd.com
>
> Thanks to everyone who reviewed, tested, or participated in discussions
> around this series. Your feedback helped me throughout the development
> process.
>
> Best Regards,
> Shivank
>
>
> Shivank Garg (6):
>   mm/migrate: rename PAGE_ migration flags to FOLIO_
>   mm/migrate: use migrate_info field instead of private
>   mm/migrate: skip data copy for already-copied folios
>   mm/migrate: add batch-copy path in migrate_pages_batch
>   mm/migrate: add copy offload registration infrastructure
>   drivers/migrate_offload: add DMA batch copy driver (dcbm)
>
> Zi Yan (1):
>   mm/migrate: adjust NR_MAX_BATCHED_MIGRATION for testing
>
>  drivers/Kconfig                       |   2 +
>  drivers/Makefile                      |   2 +
>  drivers/migrate_offload/Kconfig       |   9 +
>  drivers/migrate_offload/Makefile      |   1 +
>  drivers/migrate_offload/dcbm/Makefile |   1 +
>  drivers/migrate_offload/dcbm/dcbm.c   | 440 ++++++++++++++++++++++++++
>  include/linux/migrate_copy_offload.h  |  44 +++
>  include/linux/mm.h                    |   2 +
>  include/linux/mm_types.h              |   1 +
>  mm/Kconfig                            |   6 +
>  mm/Makefile                           |   1 +
>  mm/migrate.c                          | 211 ++++++++----
>  mm/migrate_copy_offload.c             |  94 ++++++
>  mm/util.c                             |  30 ++
>  14 files changed, 784 insertions(+), 60 deletions(-)
>  create mode 100644 drivers/migrate_offload/Kconfig
>  create mode 100644 drivers/migrate_offload/Makefile
>  create mode 100644 drivers/migrate_offload/dcbm/Makefile
>  create mode 100644 drivers/migrate_offload/dcbm/dcbm.c
>  create mode 100644 include/linux/migrate_copy_offload.h
>  create mode 100644 mm/migrate_copy_offload.c
>
>
> base-commit: 254f49634ee16a731174d2ae34bc50bd5f45e731

---
Best Regards,
Huang, Ying