Subject: Re: [PATCH v2 12/14] arm64/mm: Wire up PTE_CONT for user mappings
From: Ryan Roberts
Date: Tue, 21 Nov 2023 15:14:55 +0000
To: Alistair Popple
Cc: Catalin Marinas, Will Deacon, Ard Biesheuvel, Marc Zyngier, Oliver Upton,
 James Morse, Suzuki K Poulose, Zenghui Yu, Andrey Ryabinin,
 Alexander Potapenko, Andrey Konovalov, Dmitry Vyukov, Vincenzo Frascino,
 Andrew Morton, Anshuman Khandual, Matthew Wilcox, Yu Zhao, Mark Rutland,
 David Hildenbrand, Kefeng Wang, John Hubbard, Zi Yan,
 linux-arm-kernel@lists.infradead.org, linux-mm@kvack.org,
 linux-kernel@vger.kernel.org
References: <20231115163018.1303287-1-ryan.roberts@arm.com>
 <20231115163018.1303287-13-ryan.roberts@arm.com>
 <87v89vmjus.fsf@nvdebian.thelocal>
In-Reply-To: <87v89vmjus.fsf@nvdebian.thelocal>

On 21/11/2023 11:22, Alistair Popple wrote:
>
> Ryan Roberts writes:
>
> [...]
>
>> +static void contpte_fold(struct mm_struct *mm, unsigned long addr,
>> +			 pte_t *ptep, pte_t pte, bool fold)
>> +{
>> +	struct vm_area_struct vma = TLB_FLUSH_VMA(mm, 0);
>> +	unsigned long start_addr;
>> +	pte_t *start_ptep;
>> +	int i;
>> +
>> +	start_ptep = ptep = contpte_align_down(ptep);
>> +	start_addr = addr = ALIGN_DOWN(addr, CONT_PTE_SIZE);
>> +	pte = pfn_pte(ALIGN_DOWN(pte_pfn(pte), CONT_PTES), pte_pgprot(pte));
>> +	pte = fold ?
>> +		pte_mkcont(pte) : pte_mknoncont(pte);
>> +
>> +	for (i = 0; i < CONT_PTES; i++, ptep++, addr += PAGE_SIZE) {
>> +		pte_t ptent = __ptep_get_and_clear(mm, addr, ptep);
>> +
>> +		if (pte_dirty(ptent))
>> +			pte = pte_mkdirty(pte);
>> +
>> +		if (pte_young(ptent))
>> +			pte = pte_mkyoung(pte);
>> +	}
>> +
>> +	__flush_tlb_range(&vma, start_addr, addr, PAGE_SIZE, true, 3);
>> +
>> +	__set_ptes(mm, start_addr, start_ptep, pte, CONT_PTES);
>> +}
>> +
>> +void __contpte_try_fold(struct mm_struct *mm, unsigned long addr,
>> +			pte_t *ptep, pte_t pte)
>> +{
>> +	/*
>> +	 * We have already checked that the virtual and physical addresses are
>> +	 * correctly aligned for a contpte mapping in contpte_try_fold() so the
>> +	 * remaining checks are to ensure that the contpte range is fully
>> +	 * covered by a single folio, and ensure that all the ptes are valid
>> +	 * with contiguous PFNs and matching prots. We ignore the state of the
>> +	 * access and dirty bits for the purpose of deciding if it's a
>> +	 * contiguous range; the folding process will generate a single contpte
>> +	 * entry which has a single access and dirty bit. Those 2 bits are the
>> +	 * logical OR of their respective bits in the constituent pte entries.
>> +	 * In order to ensure the contpte range is covered by a single folio,
>> +	 * we must recover the folio from the pfn, but special mappings don't
>> +	 * have a folio backing them. Fortunately contpte_try_fold() already
>> +	 * checked that the pte is not special - we never try to fold special
>> +	 * mappings. Note we can't use vm_normal_page() for this since we
>> +	 * don't have the
>> +	 * vma.
>> +	 */
>> +
>> +	struct page *page = pte_page(pte);
>> +	struct folio *folio = page_folio(page);
>> +	unsigned long folio_saddr = addr - (page - &folio->page) * PAGE_SIZE;
>> +	unsigned long folio_eaddr = folio_saddr + folio_nr_pages(folio) * PAGE_SIZE;
>> +	unsigned long cont_saddr = ALIGN_DOWN(addr, CONT_PTE_SIZE);
>> +	unsigned long cont_eaddr = cont_saddr + CONT_PTE_SIZE;
>> +	unsigned long pfn;
>> +	pgprot_t prot;
>> +	pte_t subpte;
>> +	pte_t *orig_ptep;
>> +	int i;
>> +
>> +	if (folio_saddr > cont_saddr || folio_eaddr < cont_eaddr)
>> +		return;
>> +
>> +	pfn = pte_pfn(pte) - ((addr - cont_saddr) >> PAGE_SHIFT);
>> +	prot = pte_pgprot(pte_mkold(pte_mkclean(pte)));
>> +	orig_ptep = ptep;
>> +	ptep = contpte_align_down(ptep);
>> +
>> +	for (i = 0; i < CONT_PTES; i++, ptep++, pfn++) {
>> +		subpte = __ptep_get(ptep);
>> +		subpte = pte_mkold(pte_mkclean(subpte));
>> +
>> +		if (!pte_valid(subpte) ||
>> +		    pte_pfn(subpte) != pfn ||
>> +		    pgprot_val(pte_pgprot(subpte)) != pgprot_val(prot))
>> +			return;
>> +	}
>> +
>> +	contpte_fold(mm, addr, orig_ptep, pte, true);
>> +}
>> +EXPORT_SYMBOL(__contpte_try_fold);
>> +
>> +void __contpte_try_unfold(struct mm_struct *mm, unsigned long addr,
>> +			  pte_t *ptep, pte_t pte)
>> +{
>> +	/*
>> +	 * We have already checked that the ptes are contiguous in
>> +	 * contpte_try_unfold(), so we can unfold unconditionally here.
>> +	 */
>> +
>> +	contpte_fold(mm, addr, ptep, pte, false);
>
> I'm still working my way through the series but

Thanks for taking the time to review!

> calling a fold during an
> unfold stood out as it seemed wrong. Obviously further reading revealed
> the boolean flag that changes the function's meaning but I think it would
> be better to refactor that.

Yes, that sounds reasonable.

>
> We could easily rename contpte_fold() to eg. set_cont_ptes() and factor
> the pte calculation loop into a separate helper
> (eg. calculate_contpte_dirty_young() or some hopefully better name)
> called further up the stack.
> That has an added benefit of providing a spot to add the nice comment
> for young/dirty rules you provided in the patch description ;-)
>
> In other words we'd have something like:
>
> void __contpte_try_unfold() {
>     pte = calculate_contpte_dirty_young(mm, addr, ptep, pte);
>     pte = pte_mknoncont(pte);
>     set_cont_ptes(mm, addr, ptep, pte);
> }

My concern with this approach is that calculate_contpte_dirty_young() has
side effects: it has to clear each pte as it loops through, to prevent a
race between our reading of the access/dirty bits and another thread
causing them to be set. So it's not just a "calculation"; it's the
teardown portion of the process too. I guess it's a matter of taste, so
I'm happy for it to be argued the other way, but I would prefer to keep
it all together in one function.

How about renaming contpte_fold() to contpte_convert() or
contpte_repaint() (other suggestions welcome), and extracting the
pte_mkcont()/pte_mknoncont() part (so we can remove the bool param):

void __contpte_try_unfold() {
    pte = pte_mknoncont(pte);
    contpte_convert(mm, addr, ptep, pte);
}

Thanks,
Ryan

>
> Which IMHO is more immediately understandable.
>
> - Alistair