From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Thu, 8 Feb 2024 08:41:56 +0200
From: Mike Rapoport <rppt@kernel.org>
To: David Hildenbrand
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org, Andrew Morton,
	Matthew Wilcox, Ryan Roberts, Russell King, Catalin Marinas,
	Will Deacon, Dinh Nguyen, Michael Ellerman, Nicholas Piggin,
	Christophe Leroy, "Aneesh Kumar K.V", "Naveen N. Rao",
	Paul Walmsley, Palmer Dabbelt, Albert Ou, Alexander Gordeev,
	Gerald Schaefer, Heiko Carstens, Vasily Gorbik,
	Christian Borntraeger, Sven Schnelle, "David S. Miller",
	linux-arm-kernel@lists.infradead.org, linuxppc-dev@lists.ozlabs.org,
	linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org,
	sparclinux@vger.kernel.org
Subject: Re: [PATCH v3 13/15] mm/memory: optimize fork() with PTE-mapped THP
Message-ID:
References: <20240129124649.189745-1-david@redhat.com>
	<20240129124649.189745-14-david@redhat.com>
In-Reply-To: <20240129124649.189745-14-david@redhat.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
On Mon, Jan 29, 2024 at 01:46:47PM +0100, David Hildenbrand wrote:
> Let's implement PTE batching when consecutive (present) PTEs map
> consecutive pages of the same large folio, and all other PTE bits besides
> the PFNs are equal.
> 
> We will optimize folio_pte_batch() separately, to ignore selected
> PTE bits. This patch is based on work by Ryan Roberts.
> 
> Use __always_inline for __copy_present_ptes() and keep the handling for
> single PTEs completely separate from the multi-PTE case: we really want
> the compiler to optimize for the single-PTE case with small folios, to
> not degrade performance.
> 
> Note that PTE batching will never exceed a single page table and will
> always stay within VMA boundaries.
> 
> Further, processing PTE-mapped THP that may be pinned and have
> PageAnonExclusive set on at least one subpage should work as expected,
> but there is room for improvement: We will repeatedly (1) detect a PTE
> batch (2) detect that we have to copy a page (3) fall back and allocate a
> single page to copy a single page. For now we won't care as pinned pages
> are a corner case, and we should rather look into maintaining only a
> single PageAnonExclusive bit for large folios.
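As a quick illustration of the batch detection described above, here is a userspace sketch (not the kernel code; `pte_batch_len`, the `pfns` array, and the folio bounds are simplified stand-ins for `ptep_get()`, `pte_next_pfn()`, and `folio_pfn()`/`folio_nr_pages()`; real PTEs also compare the non-PFN bits via `pte_same()`):

```c
#include <assert.h>

/*
 * Sketch of the folio_pte_batch() idea: count how many consecutive
 * entries hold consecutive PFNs without running past the end of the
 * folio. "pfns[0]" models the first (present) PTE of the batch.
 */
static int pte_batch_len(const unsigned long *pfns, int max_nr,
			 unsigned long folio_pfn,
			 unsigned long folio_nr_pages)
{
	unsigned long folio_end_pfn = folio_pfn + folio_nr_pages;
	unsigned long expected = pfns[0] + 1;
	int nr = 1;

	while (nr < max_nr) {
		/* The next entry must map the very next PFN ... */
		if (pfns[nr] != expected)
			break;
		/* ... and must not be the first PFN of the next folio. */
		if (pfns[nr] == folio_end_pfn)
			break;
		expected++;
		nr++;
	}
	return nr;
}
```

A run of consecutive PFNs can continue straight into a neighbouring folio, which is why the explicit `folio_end_pfn` check is needed; and since `max_nr` is derived from the remaining walk length in copy_pte_range(), the batch can never cross a page table.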
> 
> Reviewed-by: Ryan Roberts
> Signed-off-by: David Hildenbrand

Reviewed-by: Mike Rapoport (IBM)

> ---
>  include/linux/pgtable.h |  31 +++++++++++
>  mm/memory.c             | 112 +++++++++++++++++++++++++++++++++-------
>  2 files changed, 124 insertions(+), 19 deletions(-)
> 
> diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
> index 351cd9dc7194..aab227e12493 100644
> --- a/include/linux/pgtable.h
> +++ b/include/linux/pgtable.h
> @@ -650,6 +650,37 @@ static inline void ptep_set_wrprotect(struct mm_struct *mm, unsigned long addres
>  }
>  #endif
>  
> +#ifndef wrprotect_ptes
> +/**
> + * wrprotect_ptes - Write-protect PTEs that map consecutive pages of the same
> + *		    folio.
> + * @mm: Address space the pages are mapped into.
> + * @addr: Address the first page is mapped at.
> + * @ptep: Page table pointer for the first entry.
> + * @nr: Number of entries to write-protect.
> + *
> + * May be overridden by the architecture; otherwise, implemented as a simple
> + * loop over ptep_set_wrprotect().
> + *
> + * Note that PTE bits in the PTE range besides the PFN can differ. For example,
> + * some PTEs might be write-protected.
> + *
> + * Context: The caller holds the page table lock. The PTEs map consecutive
> + * pages that belong to the same folio. The PTEs are all in the same PMD.
> + */
> +static inline void wrprotect_ptes(struct mm_struct *mm, unsigned long addr,
> +		pte_t *ptep, unsigned int nr)
> +{
> +	for (;;) {
> +		ptep_set_wrprotect(mm, addr, ptep);
> +		if (--nr == 0)
> +			break;
> +		ptep++;
> +		addr += PAGE_SIZE;
> +	}
> +}
> +#endif
> +
>  /*
>   * On some architectures hardware does not set page access bit when accessing
>   * memory page, it is responsibility of software setting this bit. It brings
> diff --git a/mm/memory.c b/mm/memory.c
> index 41b24da5be38..86f8a0021c8e 100644
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -930,15 +930,15 @@ copy_present_page(struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma
>  	return 0;
>  }
>  
> -static inline void __copy_present_pte(struct vm_area_struct *dst_vma,
> +static __always_inline void __copy_present_ptes(struct vm_area_struct *dst_vma,
>  		struct vm_area_struct *src_vma, pte_t *dst_pte, pte_t *src_pte,
> -		pte_t pte, unsigned long addr)
> +		pte_t pte, unsigned long addr, int nr)
>  {
>  	struct mm_struct *src_mm = src_vma->vm_mm;
>  
>  	/* If it's a COW mapping, write protect it both processes. */
>  	if (is_cow_mapping(src_vma->vm_flags) && pte_write(pte)) {
> -		ptep_set_wrprotect(src_mm, addr, src_pte);
> +		wrprotect_ptes(src_mm, addr, src_pte, nr);
>  		pte = pte_wrprotect(pte);
>  	}
>  
> @@ -950,26 +950,93 @@ static inline void __copy_present_pte(struct vm_area_struct *dst_vma,
>  	if (!userfaultfd_wp(dst_vma))
>  		pte = pte_clear_uffd_wp(pte);
>  
> -	set_pte_at(dst_vma->vm_mm, addr, dst_pte, pte);
> +	set_ptes(dst_vma->vm_mm, addr, dst_pte, pte, nr);
> +}
> +
> +/*
> + * Detect a PTE batch: consecutive (present) PTEs that map consecutive
> + * pages of the same folio.
> + *
> + * All PTEs inside a PTE batch have the same PTE bits set, excluding the PFN.
> + */
> +static inline int folio_pte_batch(struct folio *folio, unsigned long addr,
> +		pte_t *start_ptep, pte_t pte, int max_nr)
> +{
> +	unsigned long folio_end_pfn = folio_pfn(folio) + folio_nr_pages(folio);
> +	const pte_t *end_ptep = start_ptep + max_nr;
> +	pte_t expected_pte = pte_next_pfn(pte);
> +	pte_t *ptep = start_ptep + 1;
> +
> +	VM_WARN_ON_FOLIO(!pte_present(pte), folio);
> +
> +	while (ptep != end_ptep) {
> +		pte = ptep_get(ptep);
> +
> +		if (!pte_same(pte, expected_pte))
> +			break;
> +
> +		/*
> +		 * Stop immediately once we reached the end of the folio. In
> +		 * corner cases the next PFN might fall into a different
> +		 * folio.
> +		 */
> +		if (pte_pfn(pte) == folio_end_pfn)
> +			break;
> +
> +		expected_pte = pte_next_pfn(expected_pte);
> +		ptep++;
> +	}
> +
> +	return ptep - start_ptep;
>  }
>  
>  /*
> - * Copy one pte. Returns 0 if succeeded, or -EAGAIN if one preallocated page
> - * is required to copy this pte.
> + * Copy one present PTE, trying to batch-process subsequent PTEs that map
> + * consecutive pages of the same folio by copying them as well.
> + *
> + * Returns -EAGAIN if one preallocated page is required to copy the next PTE.
> + * Otherwise, returns the number of copied PTEs (at least 1).
>   */
>  static inline int
> -copy_present_pte(struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma,
> +copy_present_ptes(struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma,
>  		 pte_t *dst_pte, pte_t *src_pte, pte_t pte, unsigned long addr,
> -		 int *rss, struct folio **prealloc)
> +		 int max_nr, int *rss, struct folio **prealloc)
>  {
>  	struct page *page;
>  	struct folio *folio;
> +	int err, nr;
>  
>  	page = vm_normal_page(src_vma, addr, pte);
>  	if (unlikely(!page))
>  		goto copy_pte;
>  
>  	folio = page_folio(page);
> +
> +	/*
> +	 * If we likely have to copy, just don't bother with batching. Make
> +	 * sure that the common "small folio" case is as fast as possible
> +	 * by keeping the batching logic separate.
> +	 */
> +	if (unlikely(!*prealloc && folio_test_large(folio) && max_nr != 1)) {
> +		nr = folio_pte_batch(folio, addr, src_pte, pte, max_nr);
> +		folio_ref_add(folio, nr);
> +		if (folio_test_anon(folio)) {
> +			if (unlikely(folio_try_dup_anon_rmap_ptes(folio, page,
> +								  nr, src_vma))) {
> +				folio_ref_sub(folio, nr);
> +				return -EAGAIN;
> +			}
> +			rss[MM_ANONPAGES] += nr;
> +			VM_WARN_ON_FOLIO(PageAnonExclusive(page), folio);
> +		} else {
> +			folio_dup_file_rmap_ptes(folio, page, nr);
> +			rss[mm_counter_file(folio)] += nr;
> +		}
> +		__copy_present_ptes(dst_vma, src_vma, dst_pte, src_pte, pte,
> +				    addr, nr);
> +		return nr;
> +	}
> +
>  	folio_get(folio);
>  	if (folio_test_anon(folio)) {
>  		/*
> @@ -981,8 +1048,9 @@ copy_present_pte(struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma,
>  		if (unlikely(folio_try_dup_anon_rmap_pte(folio, page, src_vma))) {
>  			/* Page may be pinned, we have to copy. */
>  			folio_put(folio);
> -			return copy_present_page(dst_vma, src_vma, dst_pte, src_pte,
> -						 addr, rss, prealloc, page);
> +			err = copy_present_page(dst_vma, src_vma, dst_pte, src_pte,
> +						addr, rss, prealloc, page);
> +			return err ? err : 1;
>  		}
>  		rss[MM_ANONPAGES]++;
>  		VM_WARN_ON_FOLIO(PageAnonExclusive(page), folio);
> @@ -992,8 +1060,8 @@ copy_present_pte(struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma,
>  	}
>  
>  copy_pte:
> -	__copy_present_pte(dst_vma, src_vma, dst_pte, src_pte, pte, addr);
> -	return 0;
> +	__copy_present_ptes(dst_vma, src_vma, dst_pte, src_pte, pte, addr, 1);
> +	return 1;
>  }
>  
>  static inline struct folio *folio_prealloc(struct mm_struct *src_mm,
> @@ -1030,10 +1098,11 @@ copy_pte_range(struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma,
>  	pte_t *src_pte, *dst_pte;
>  	pte_t ptent;
>  	spinlock_t *src_ptl, *dst_ptl;
> -	int progress, ret = 0;
> +	int progress, max_nr, ret = 0;
>  	int rss[NR_MM_COUNTERS];
>  	swp_entry_t entry = (swp_entry_t){0};
>  	struct folio *prealloc = NULL;
> +	int nr;
>  
>  again:
>  	progress = 0;
> @@ -1064,6 +1133,8 @@ copy_pte_range(struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma,
>  	arch_enter_lazy_mmu_mode();
>  
>  	do {
> +		nr = 1;
> +
>  		/*
>  		 * We are holding two locks at this point - either of them
>  		 * could generate latencies in another task on another CPU.
> @@ -1100,9 +1171,10 @@ copy_pte_range(struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma,
>  			 */
>  			WARN_ON_ONCE(ret != -ENOENT);
>  		}
> -		/* copy_present_pte() will clear `*prealloc' if consumed */
> -		ret = copy_present_pte(dst_vma, src_vma, dst_pte, src_pte,
> -				       ptent, addr, rss, &prealloc);
> +		/* copy_present_ptes() will clear `*prealloc' if consumed */
> +		max_nr = (end - addr) / PAGE_SIZE;
> +		ret = copy_present_ptes(dst_vma, src_vma, dst_pte, src_pte,
> +					ptent, addr, max_nr, rss, &prealloc);
>  		/*
>  		 * If we need a pre-allocated page for this pte, drop the
>  		 * locks, allocate, and try again.
> @@ -1119,8 +1191,10 @@ copy_pte_range(struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma,
>  			folio_put(prealloc);
>  			prealloc = NULL;
>  		}
> -		progress += 8;
> -	} while (dst_pte++, src_pte++, addr += PAGE_SIZE, addr != end);
> +		nr = ret;
> +		progress += 8 * nr;
> +	} while (dst_pte += nr, src_pte += nr, addr += PAGE_SIZE * nr,
> +		 addr != end);
>  
>  	arch_leave_lazy_mmu_mode();
>  	pte_unmap_unlock(orig_src_pte, src_ptl);
> @@ -1141,7 +1215,7 @@ copy_pte_range(struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma,
>  		prealloc = folio_prealloc(src_mm, src_vma, addr, false);
>  		if (!prealloc)
>  			return -ENOMEM;
> -	} else if (ret) {
> +	} else if (ret < 0) {
>  		VM_WARN_ON_ONCE(1);
>  	}
>  
> -- 
> 2.43.0
> 

-- 
Sincerely yours,
Mike.