The Linux Kernel Mailing List
* Re: [RFC PATCH 1/1] mm: batch page copies in folio_copy() and folio_mc_copy()
       [not found] ` <20260427142036.111940-4-shivankg@amd.com>
@ 2026-05-12  9:31   ` David Hildenbrand (Arm)
  0 siblings, 0 replies; only message in thread
From: David Hildenbrand (Arm) @ 2026-05-12  9:31 UTC (permalink / raw)
  To: Shivank Garg, Andrew Morton, linux-mm, linux-kernel, x86
  Cc: Lorenzo Stoakes, Liam R . Howlett, Vlastimil Babka, Mike Rapoport,
	Suren Baghdasaryan, Michal Hocko, Thomas Gleixner, Ingo Molnar,
	Borislav Petkov, Dave Hansen, H . Peter Anvin, Ankur Arora,
	Bharata B Rao, Hrushikesh Salunke, David Rientjes

On 4/27/26 16:20, Shivank Garg wrote:
> Rewrite folio_copy() and folio_mc_copy() as thin wrappers around new
> batched helpers copy_highpages() and copy_mc_highpages().
> 
> The current implementations iterate copy_highpage() (or its #MC-aware
> variant) per 4 KB page. For a single 2 MB folio that loop runs 512
> times and pays, per page:
> 
>   - kmap_local_page() / kunmap_local()
>   - cond_resched()
>   - one invocation of the architecture copy_page()/memcpy() primitive
> 
> The new helpers issue a single copy_mc_to_kernel()/memcpy() over
> the whole contiguous range when CONFIG_HIGHMEM is off and no
> architecture overrides (__HAVE_ARCH_COPY_HIGHPAGE) copy_highpage().
> HIGHMEM and arch overrides keep the existing per-page path.
> 
> Tested on dual-socket AMD EPYC 9655 (Zen 5) with a CXL.mem node.
> In-kernel folio_mc_copy() microbenchmark on 2 MB folios, source
> evicted from cache before each iteration and measured throughput:
> 
>   direction         baseline GB/s   optimized GB/s   speedup
>   DRAM0 -> DRAM1     18.65 ± 1.37    38.03 ± 3.21     2.04x
>   DRAM0 -> CXL       25.46 ± 2.89    39.29 ± 1.17     1.54x
>   CXL   -> DRAM0     20.61 ± 3.95    35.07 ± 0.62     1.70x
> 
> End-to-end move_pages(2) throughput on anonymous 2 MB mTHP folios,
> 1 GB migrated per run:
> 
>   direction         baseline GB/s   optimized GB/s   speedup
>   DRAM0 -> DRAM1      7.20 ± 0.03     8.01 ± 0.02     1.11x
>   DRAM0 -> CXL       11.12 ± 0.15    13.07 ± 0.03     1.18x
>   DRAM1 -> DRAM0      7.21 ± 0.02     7.95 ± 0.02     1.10x
>   CXL   -> DRAM0      9.10 ± 0.05     9.49 ± 0.01     1.04x
> 
> On AMD EPYC 7713 (Zen 3 / Milan, REP_GOOD without FSRM/ERMS) the
> folio_copy() bulk path regresses because memcpy() falls through to
> memcpy_orig (an unrolled movq loop), which is slower than the
> per-page copy_page() (microcoded rep movsq) it replaces. 

Do you know the reason for that fallback? Could it be fixed (e.g., when
we detect page alignment or something like that)?

-- 
Cheers,

David
