From: Ryan Roberts
Date: Mon, 17 Jul 2023 14:36:03 +0100
Subject: Re: [PATCH v3 3/4] mm: FLEXIBLE_THP for improved performance
Message-ID: <5df787a0-8e69-2472-cdd6-f96a3f7dfaaf@arm.com>
References: <20230714160407.4142030-1-ryan.roberts@arm.com> <20230714161733.4144503-3-ryan.roberts@arm.com> <432490d1-8d1e-1742-295a-d6e60a054ab6@arm.com>
To: Yu Zhao, Hugh Dickins, Matthew Wilcox, Andrew Morton
Cc: "Kirill A. Shutemov", Yin Fengwei, David Hildenbrand, Catalin Marinas, Will Deacon, Anshuman Khandual, Yang Shi, "Huang, Ying", Zi Yan, Luis Chamberlain, linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org

>>>> +static int alloc_anon_folio(struct vm_fault *vmf, struct folio **folio)
>>>> +{
>>>> +	int i;
>>>> +	gfp_t gfp;
>>>> +	pte_t *pte;
>>>> +	unsigned long addr;
>>>> +	struct vm_area_struct *vma = vmf->vma;
>>>> +	int prefer = anon_folio_order(vma);
>>>> +	int orders[] = {
>>>> +		prefer,
>>>> +		prefer > PAGE_ALLOC_COSTLY_ORDER ?
>>>> +			PAGE_ALLOC_COSTLY_ORDER : 0,
>>>> +		0,
>>>> +	};
>>>> +
>>>> +	*folio = NULL;
>>>> +
>>>> +	if (vmf_orig_pte_uffd_wp(vmf))
>>>> +		goto fallback;
>>>> +
>>>> +	for (i = 0; orders[i]; i++) {
>>>> +		addr = ALIGN_DOWN(vmf->address, PAGE_SIZE << orders[i]);
>>>> +		if (addr >= vma->vm_start &&
>>>> +		    addr + (PAGE_SIZE << orders[i]) <= vma->vm_end)
>>>> +			break;
>>>> +	}
>>>> +
>>>> +	if (!orders[i])
>>>> +		goto fallback;
>>>> +
>>>> +	pte = pte_offset_map(vmf->pmd, vmf->address & PMD_MASK);
>>>> +	if (!pte)
>>>> +		return -EAGAIN;
>>>
>>> It would be a bug if this happens. So probably -EINVAL?
>>
>> Not sure what you mean? Hugh Dickins' series that went into v6.5-rc1 makes it
>> possible for pte_offset_map() to fail (if I understood correctly) and we have
>> to handle this. The intent is that we will return from the fault without
>> making any change, then we will refault and try again.
>
> Thanks for checking that -- it's very relevant. One detail is that
> that series doesn't affect anon. IOW, collapsing PTEs into a PMD can't
> happen while we are holding mmap_lock for read here, and therefore, the
> race that could cause pte_offset_map() on shmem/file PTEs to fail
> doesn't apply here.

But Hugh's patches have changed do_anonymous_page() to handle failure from
pte_offset_map_lock(), so I was just following that pattern. If this really
can't happen, then I'd rather WARN/BUG on it, and simplify alloc_anon_folio()'s
prototype to just return a `struct folio *` (with NULL meaning ENOMEM). Hugh,
perhaps you can comment?

As an aside, it was my understanding from LWN that we are now using a per-VMA
lock, so presumably we don't hold mmap_lock for read here? Or perhaps that only
applies to file-backed memory?

>
> +Hugh Dickins for further consultation if you need it.
>>>> +
>>>> +	for (; orders[i]; i++) {
>>>> +		addr = ALIGN_DOWN(vmf->address, PAGE_SIZE << orders[i]);
>>>> +		vmf->pte = pte + pte_index(addr);
>>>> +		if (!vmf_pte_range_changed(vmf, 1 << orders[i]))
>>>> +			break;
>>>> +	}
>>>> +
>>>> +	vmf->pte = NULL;
>>>> +	pte_unmap(pte);
>>>> +
>>>> +	gfp = vma_thp_gfp_mask(vma);
>>>> +
>>>> +	for (; orders[i]; i++) {
>>>> +		addr = ALIGN_DOWN(vmf->address, PAGE_SIZE << orders[i]);
>>>> +		*folio = vma_alloc_folio(gfp, orders[i], vma, addr, true);
>>>> +		if (*folio) {
>>>> +			clear_huge_page(&(*folio)->page, addr, 1 << orders[i]);
>>>> +			return 0;
>>>> +		}
>>>> +	}
>>>> +
>>>> +fallback:
>>>> +	*folio = vma_alloc_zeroed_movable_folio(vma, vmf->address);
>>>> +	return *folio ? 0 : -ENOMEM;
>>>> +}
>>>> +#else
>>>> +static inline int alloc_anon_folio(struct vm_fault *vmf, struct folio **folio)
>>>
>>> Drop "inline" (it doesn't do anything in .c).
>>
>> There are 38 instances of inline in memory.c alone, so it looks like a
>> well-used convention, even if the compiler may choose to ignore it. Perhaps
>> you can educate me: what's the benefit of dropping it?
>
> I'll let Willy and Andrew educate both of us :)
>
> +Matthew Wilcox +Andrew Morton please. Thank you.
>
>>> The rest looks good to me.
>>
>> Great - just in case it wasn't obvious, I decided not to overwrite
>> vmf->address with the aligned version, as you suggested
>
> Yes, I've noticed. Not overwriting has its own merits for sure.
>
>> for 2 reasons: 1) address is const in the struct, so I would have had to
>> change that; 2) there is a uffd path that can be taken after the
>> vmf->address fixup would have occurred, and that path consumes the member,
>> so it would have had to be un-fixed-up, making it messier than the way I
>> opted for.
>>
>> Thanks for the quick review as always!

_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel