From mboxrd@z Thu Jan  1 00:00:00 1970
Date: Mon, 20 Dec 2021 18:52:59 +0000
From: Matthew Wilcox <willy@infradead.org>
To: David Hildenbrand
Cc: Linus Torvalds, Nadav Amit, Jason Gunthorpe,
 Linux Kernel Mailing List, Andrew Morton, Hugh Dickins, David Rientjes,
 Shakeel Butt, John Hubbard, Mike Kravetz, Mike Rapoport, Yang Shi,
 "Kirill A. Shutemov", Vlastimil Babka, Jann Horn, Michal Hocko,
 Rik van Riel, Roman Gushchin, Andrea Arcangeli, Peter Xu,
 Donald Dutile, Christoph Hellwig, Oleg Nesterov, Jan Kara, Linux-MM,
 "open list:KERNEL SELFTEST FRAMEWORK", "open list:DOCUMENTATION"
Subject: Re: [PATCH v1 06/11] mm: support GUP-triggered unsharing via FAULT_FLAG_UNSHARE (!hugetlb)
References: <5CA1D89F-9DDB-4F91-8929-FE29BB79A653@vmware.com>
 <4D97206A-3B32-4818-9980-8F24BC57E289@vmware.com>
 <5A7D771C-FF95-465E-95F6-CD249FE28381@vmware.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline

On Mon, Dec 20, 2021 at 06:37:30PM +0000, Matthew Wilcox wrote:
> +++ b/mm/memory.c
> @@ -3626,7 +3626,7 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
>  	inc_mm_counter_fast(vma->vm_mm, MM_ANONPAGES);
>  	dec_mm_counter_fast(vma->vm_mm, MM_SWAPENTS);
>  	pte = mk_pte(page, vma->vm_page_prot);
> -	if ((vmf->flags & FAULT_FLAG_WRITE) && reuse_swap_page(page, NULL)) {
> +	if ((vmf->flags & FAULT_FLAG_WRITE) && reuse_swap_page(page)) {
>  		pte = maybe_mkwrite(pte_mkdirty(pte), vma);
>  		vmf->flags &= ~FAULT_FLAG_WRITE;
>  		ret |= VM_FAULT_WRITE;

[...]

> @@ -1673,17 +1665,14 @@ static int page_trans_huge_map_swapcount(struct page *page, int *total_mapcount,
>   * reuse_swap_page() returns false, but it may be always overwritten
>   * (see the other implementation for CONFIG_SWAP=n).
>   */
> -bool reuse_swap_page(struct page *page, int *total_map_swapcount)
> +bool reuse_swap_page(struct page *page)
>  {
> -	int count, total_mapcount, total_swapcount;
> +	int count, total_swapcount;
>  
>  	VM_BUG_ON_PAGE(!PageLocked(page), page);
>  	if (unlikely(PageKsm(page)))
>  		return false;
> -	count = page_trans_huge_map_swapcount(page, &total_mapcount,
> -					      &total_swapcount);
> -	if (total_map_swapcount)
> -		*total_map_swapcount = total_mapcount + total_swapcount;
> +	count = page_trans_huge_map_swapcount(page, &total_swapcount);
>  	if (count == 1 && PageSwapCache(page) &&
>  	    (likely(!PageTransCompound(page)) ||
>  	     /* The remaining swap count will be freed soon */

It makes me wonder if reuse_swap_page() can also be based on refcount
instead of mapcount?