From: Dev Jain
Date: Thu, 5 Sep 2024 15:23:24 +0530
Subject: Re: [PATCH v2 2/2] mm: Allocate THP on hugezeropage wp-fault
To: Kefeng Wang, "Kirill A. Shutemov"
Cc: akpm@linux-foundation.org, david@redhat.com, willy@infradead.org, ryan.roberts@arm.com, anshuman.khandual@arm.com, catalin.marinas@arm.com, cl@gentwo.org, vbabka@suse.cz, mhocko@suse.com, apopple@nvidia.com, dave.hansen@linux.intel.com, will@kernel.org, baohua@kernel.org, jack@suse.cz, mark.rutland@arm.com, hughd@google.com, aneesh.kumar@kernel.org, yang@os.amperecomputing.com, peterx@redhat.com, ioworker0@gmail.com, jglisse@google.com, linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org
References: <20240904100923.290042-1-dev.jain@arm.com> <20240904100923.290042-3-dev.jain@arm.com>
On 9/5/24 15:11, Kefeng Wang wrote:
>
> On 2024/9/5 16:26, Kirill A. Shutemov wrote:
>> On Wed, Sep 04, 2024 at 03:39:23PM +0530, Dev Jain wrote:
>>> Introduce do_huge_zero_wp_pmd() to handle a wp-fault on a hugezeropage
>>> and replace it with a PMD-mapped THP. Change the helpers introduced in
>>> the previous patch to flush the TLB entry corresponding to the
>>> hugezeropage, and preserve the PMD uffd-wp marker. In case of failure,
>>> fall back to splitting the PMD.
>
> We hit the same issue, and have a very similar function in our kernel.
>
>>> Signed-off-by: Dev Jain
>>> ---
>>>   include/linux/huge_mm.h |  6 ++++
>>>   mm/huge_memory.c        | 79 +++++++++++++++++++++++++++++++++++------
>>>   mm/memory.c             |  5 +--
>>>   3 files changed, 78 insertions(+), 12 deletions(-)
>>>
>>> diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
>>> index e25d9ebfdf89..fdd2cf473a3c 100644
>>> --- a/include/linux/huge_mm.h
>>> +++ b/include/linux/huge_mm.h
>>> @@ -9,6 +9,12 @@
>>>   #include
>>>
>>>   vm_fault_t do_huge_pmd_anonymous_page(struct vm_fault *vmf);
>>> +vm_fault_t thp_fault_alloc(gfp_t gfp, int order, struct vm_area_struct *vma,
>>> +               unsigned long haddr, struct folio **foliop,
>>> +               unsigned long addr);
>>> +void map_pmd_thp(struct folio *folio, struct vm_fault *vmf,
>>> +         struct vm_area_struct *vma, unsigned long haddr,
>>> +         pgtable_t pgtable);
>>
>> Why? I don't see users outside huge_memory.c.
>>
>>>   int copy_huge_pmd(struct mm_struct *dst_mm, struct mm_struct *src_mm,
>>>             pmd_t *dst_pmd, pmd_t *src_pmd, unsigned long addr,
>>>             struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma);
>>> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
>>> index 58125fbcc532..150163ad77d3 100644
>>> --- a/mm/huge_memory.c
>>> +++ b/mm/huge_memory.c
>>> @@ -943,9 +943,9 @@ unsigned long thp_get_unmapped_area(struct file *filp, unsigned long addr,
>>>   }
>>>   EXPORT_SYMBOL_GPL(thp_get_unmapped_area);
>>>
>>> -static vm_fault_t thp_fault_alloc(gfp_t gfp, int order, struct vm_area_struct *vma,
>>> -                  unsigned long haddr, struct folio **foliop,
>>> -                  unsigned long addr)
>>> +vm_fault_t thp_fault_alloc(gfp_t gfp, int order, struct vm_area_struct *vma,
>>> +               unsigned long haddr, struct folio **foliop,
>>> +               unsigned long addr)
>>>   {
>>>       struct folio *folio = vma_alloc_folio(gfp, order, vma, haddr, true);
>>>
>>> @@ -984,21 +984,29 @@ static void __thp_fault_success_stats(struct vm_area_struct *vma, int order)
>>>       count_memcg_event_mm(vma->vm_mm, THP_FAULT_ALLOC);
>>>   }
>>>
>>> -static void map_pmd_thp(struct folio *folio, struct vm_fault *vmf,
>>> -            struct vm_area_struct *vma, unsigned long haddr,
>>> -            pgtable_t pgtable)
>>> +void map_pmd_thp(struct folio *folio, struct vm_fault *vmf,
>>> +         struct vm_area_struct *vma, unsigned long haddr,
>>> +         pgtable_t pgtable)
>>>   {
>>> -    pmd_t entry;
>>> +    pmd_t entry, old_pmd;
>>> +    bool is_pmd_none = pmd_none(*vmf->pmd);
>>>
>>>       entry = mk_huge_pmd(&folio->page, vma->vm_page_prot);
>>>       entry = maybe_pmd_mkwrite(pmd_mkdirty(entry), vma);
>>>       folio_add_new_anon_rmap(folio, vma, haddr, RMAP_EXCLUSIVE);
>>>       folio_add_lru_vma(folio, vma);
>>> -    pgtable_trans_huge_deposit(vma->vm_mm, vmf->pmd, pgtable);
>>> +    if (!is_pmd_none) {
>>> +        old_pmd = pmdp_huge_clear_flush(vma, haddr, vmf->pmd);
>>> +        if (pmd_uffd_wp(old_pmd))
>>> +            entry = pmd_mkuffd_wp(entry);
>>> +    }
>>> +    if (pgtable)
>>> +        pgtable_trans_huge_deposit(vma->vm_mm, vmf->pmd, pgtable);
>>>       set_pmd_at(vma->vm_mm, haddr, vmf->pmd, entry);
>>>       update_mmu_cache_pmd(vma, vmf->address, vmf->pmd);
>>>       add_mm_counter(vma->vm_mm, MM_ANONPAGES, HPAGE_PMD_NR);
>>> -    mm_inc_nr_ptes(vma->vm_mm);
>>> +    if (is_pmd_none)
>>> +        mm_inc_nr_ptes(vma->vm_mm);
>>>   }
>>>
>>>   static vm_fault_t __do_huge_pmd_anonymous_page(struct vm_fault *vmf)
>>> @@ -1576,6 +1584,50 @@ void huge_pmd_set_accessed(struct vm_fault *vmf)
>>>       spin_unlock(vmf->ptl);
>>>   }
>>>
>>> +static vm_fault_t do_huge_zero_wp_pmd_locked(struct vm_fault *vmf,
>>> +                         unsigned long haddr,
>>> +                         struct folio *folio)
>>
>> Why is the helper needed? Can't it just be open-coded in
>> do_huge_zero_wp_pmd()?
>>
>>> +{
>>> +    struct vm_area_struct *vma = vmf->vma;
>>> +    vm_fault_t ret = 0;
>>> +
>>> +    ret = check_stable_address_space(vma->vm_mm);
>>> +    if (ret)
>>> +        goto out;
>>> +    map_pmd_thp(folio, vmf, vma, haddr, NULL);
>>> +out:
>>> +    return ret;
>>> +}
>>> +
>>> +static vm_fault_t do_huge_zero_wp_pmd(struct vm_fault *vmf, unsigned long haddr)
>>> +{
>>> +    struct vm_area_struct *vma = vmf->vma;
>>> +    gfp_t gfp = vma_thp_gfp_mask(vma);
>>> +    struct mmu_notifier_range range;
>>> +    struct folio *folio = NULL;
>>> +    vm_fault_t ret = 0;
>>> +
>>> +    ret = thp_fault_alloc(gfp, HPAGE_PMD_ORDER, vma, haddr, &folio,
>>> +                  vmf->address);
>>> +    if (ret)
>>> +        goto out;
>>> +
>>> +    mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, vma->vm_mm, haddr,
>>> +                haddr + HPAGE_PMD_SIZE);
>>> +    mmu_notifier_invalidate_range_start(&range);
>>> +    vmf->ptl = pmd_lock(vma->vm_mm, vmf->pmd);
>>> +    if (unlikely(!pmd_same(pmdp_get(vmf->pmd), vmf->orig_pmd)))
>>> +        goto unlock;
>>> +    ret = do_huge_zero_wp_pmd_locked(vmf, haddr, folio);
>>> +    if (!ret)
>>> +        __thp_fault_success_stats(vma, HPAGE_PMD_ORDER);
>>> +unlock:
>>> +    spin_unlock(vmf->ptl);
>>> +    mmu_notifier_invalidate_range_end(&range);
>
> The folio needs to be released when !pmd_same(), and when
> check_stable_address_space() returns VM_FAULT_SIGBUS.

Yes, thanks.

>>> +out:
>>> +    return ret;
>>> +}
>>> +
>>