From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
To: linux-mm@kvack.org, Andrew Morton <akpm@linux-foundation.org>
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>
Subject: [PATCH 4/5] mm: Use a folio in copy_pte_range()
Date: Mon, 16 Jan 2023 19:18:12 +0000
Message-Id: <20230116191813.2145215-5-willy@infradead.org>
X-Mailer: git-send-email 2.37.1
In-Reply-To: <20230116191813.2145215-1-willy@infradead.org>
References: <20230116191813.2145215-1-willy@infradead.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Allocate an order-0 folio instead of a page and pass it all the way
down the call chain.  Removes dozens of calls to compound_head().

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 mm/memory.c | 51 +++++++++++++++++++++++++--------------------------
 1 file changed, 25 insertions(+), 26 deletions(-)

diff --git a/mm/memory.c b/mm/memory.c
index dc8a6fd45958..7aa741a3cd9f 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -863,13 +863,13 @@ copy_nonpresent_pte(struct mm_struct *dst_mm, struct mm_struct *src_mm,
 static inline int
 copy_present_page(struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma,
 		  pte_t *dst_pte, pte_t *src_pte, unsigned long addr, int *rss,
-		  struct page **prealloc, struct page *page)
+		  struct folio **prealloc, struct page *page)
 {
-	struct page *new_page;
+	struct folio *new_folio;
 	pte_t pte;
 
-	new_page = *prealloc;
-	if (!new_page)
+	new_folio = *prealloc;
+	if (!new_folio)
 		return -EAGAIN;
 
 	/*
@@ -877,14 +877,14 @@ copy_present_page(struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma
 	 * over and copy the page & arm it.
 	 */
 	*prealloc = NULL;
-	copy_user_highpage(new_page, page, addr, src_vma);
-	__SetPageUptodate(new_page);
-	page_add_new_anon_rmap(new_page, dst_vma, addr);
-	lru_cache_add_inactive_or_unevictable(new_page, dst_vma);
-	rss[mm_counter(new_page)]++;
+	copy_user_highpage(&new_folio->page, page, addr, src_vma);
+	__folio_mark_uptodate(new_folio);
+	folio_add_new_anon_rmap(new_folio, dst_vma, addr);
+	folio_add_lru_vma(new_folio, dst_vma);
+	rss[MM_ANONPAGES]++;
 
 	/* All done, just insert the new page copy in the child */
-	pte = mk_pte(new_page, dst_vma->vm_page_prot);
+	pte = mk_pte(&new_folio->page, dst_vma->vm_page_prot);
 	pte = maybe_mkwrite(pte_mkdirty(pte), dst_vma);
 	if (userfaultfd_pte_wp(dst_vma, *src_pte))
 		/* Uffd-wp needs to be delivered to dest pte as well */
@@ -900,7 +900,7 @@ copy_present_page(struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma
 static inline int
 copy_present_pte(struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma,
 		 pte_t *dst_pte, pte_t *src_pte, unsigned long addr, int *rss,
-		 struct page **prealloc)
+		 struct folio **prealloc)
 {
 	struct mm_struct *src_mm = src_vma->vm_mm;
 	unsigned long vm_flags = src_vma->vm_flags;
@@ -922,11 +922,11 @@ copy_present_pte(struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma,
 			return copy_present_page(dst_vma, src_vma, dst_pte, src_pte,
 						 addr, rss, prealloc, page);
 		}
-		rss[mm_counter(page)]++;
+		rss[MM_ANONPAGES]++;
 	} else if (page) {
 		get_page(page);
 		page_dup_file_rmap(page, false);
-		rss[mm_counter(page)]++;
+		rss[mm_counter_file(page)]++;
 	}
 
 	/*
@@ -954,23 +954,22 @@ copy_present_pte(struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma,
 	return 0;
 }
 
-static inline struct page *
-page_copy_prealloc(struct mm_struct *src_mm, struct vm_area_struct *vma,
-		   unsigned long addr)
+static inline struct folio *page_copy_prealloc(struct mm_struct *src_mm,
+		struct vm_area_struct *vma, unsigned long addr)
 {
-	struct page *new_page;
+	struct folio *new_folio;
 
-	new_page = alloc_page_vma(GFP_HIGHUSER_MOVABLE, vma, addr);
-	if (!new_page)
+	new_folio = vma_alloc_folio(GFP_HIGHUSER_MOVABLE, 0, vma, addr, false);
+	if (!new_folio)
 		return NULL;
 
-	if (mem_cgroup_charge(page_folio(new_page), src_mm, GFP_KERNEL)) {
-		put_page(new_page);
+	if (mem_cgroup_charge(new_folio, src_mm, GFP_KERNEL)) {
+		folio_put(new_folio);
 		return NULL;
 	}
-	cgroup_throttle_swaprate(new_page, GFP_KERNEL);
+	cgroup_throttle_swaprate(&new_folio->page, GFP_KERNEL);
 
-	return new_page;
+	return new_folio;
 }
 
 static int
@@ -986,7 +985,7 @@ copy_pte_range(struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma,
 	int progress, ret = 0;
 	int rss[NR_MM_COUNTERS];
 	swp_entry_t entry = (swp_entry_t){0};
-	struct page *prealloc = NULL;
+	struct folio *prealloc = NULL;
 
 again:
 	progress = 0;
@@ -1056,7 +1055,7 @@ copy_pte_range(struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma,
 			 * will allocate page according to address).  This
 			 * could only happen if one pinned pte changed.
 			 */
-			put_page(prealloc);
+			folio_put(prealloc);
 			prealloc = NULL;
 		}
 		progress += 8;
@@ -1093,7 +1092,7 @@ copy_pte_range(struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma,
 		goto again;
 out:
 	if (unlikely(prealloc))
-		put_page(prealloc);
+		folio_put(prealloc);
 	return ret;
 }
 
-- 
2.35.1
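
[Editorial note, not part of the patch] The commit message says the conversion
"removes dozens of calls to compound_head()".  The reason is that a struct
folio pointer is guaranteed to reference the head page of a compound page,
while a struct page pointer may reference a tail page and so every page-based
helper must normalize it first.  The user-space sketch below models that
contrast; struct page, struct folio, compound_head(), SetPageUptodate() and
folio_mark_uptodate() here are drastically simplified toy stand-ins for the
kernel's real definitions, not the actual implementation.

/*
 * Toy model: why folio helpers can skip compound_head().
 */
#include <stdio.h>

struct page {
	struct page *head;	/* head page of the compound page */
	int uptodate;
};

/* A folio refers to a head page by construction. */
struct folio {
	struct page page;
};

static struct page *compound_head(struct page *page)
{
	return page->head;
}

/* Page API: must look up the head page on every call. */
static void SetPageUptodate(struct page *page)
{
	compound_head(page)->uptodate = 1;
}

/* Folio API: the pointer is already the head, so no lookup is needed. */
static void folio_mark_uptodate(struct folio *folio)
{
	folio->page.uptodate = 1;
}

int main(void)
{
	struct folio f = { .page = { .head = &f.page } };

	SetPageUptodate(&f.page);	/* pays for a compound_head() call */
	folio_mark_uptodate(&f);	/* direct store, no normalization */
	printf("uptodate=%d\n", f.page.uptodate);
	return 0;
}

The patch applies the same idea at each call site: once page_copy_prealloc()
hands back a folio, the consumers can use the folio_*() variants directly and
the repeated head-page lookups disappear.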