Subject: Re: [PATCH v2 07/14] mm: khugepaged: collapse_pte_mapped_thp() use pte_offset_map_rw_nolock()
From: Muchun Song
Date: Thu, 5 Sep 2024 15:18:05 +0800
To: Qi Zheng
Cc: David Hildenbrand, Hugh Dickins, Matthew Wilcox, "Vlastimil Babka (SUSE)",
 Andrew Morton, Mike Rapoport, Vishal Moola, Peter Xu, Ryan Roberts,
 christophe.leroy2@cs-soprasteria.com, LKML, Linux Memory Management List,
 linux-arm-kernel@lists.infradead.org, linuxppc-dev@lists.ozlabs.org
In-Reply-To: <7f22c46c-2119-4de6-9d58-efcab05b5751@bytedance.com>
Message-Id: <69F12D50-BCA9-4874-B558-71008EF82674@linux.dev>
References: <24be821f-a95f-47f1-879a-c392a79072cc@linux.dev>
 <05955456-8743-448A-B7A4-BC45FABEA628@linux.dev>
 <7f22c46c-2119-4de6-9d58-efcab05b5751@bytedance.com>

> On Sep 5, 2024, at 14:41, Qi Zheng wrote:
>
> On 2024/9/5 14:32, Muchun Song wrote:
>>> On Aug 30, 2024, at 14:54, Qi Zheng wrote:
>>>
>>> On 2024/8/29 16:10, Muchun Song wrote:
>>>> On 2024/8/22 15:13, Qi Zheng wrote:
>>>>> In collapse_pte_mapped_thp(), we may modify the pte and pmd entry
>>>>> after acquiring the ptl, so convert it to using
>>>>> pte_offset_map_rw_nolock(). At this time, the write lock of
>>>>> mmap_lock is not held, and no pte_same() check is performed after
>>>>> the ptl is held. So we should record pgt_pmd and do a pmd_same()
>>>>> check after the ptl is held.
>>>>>
>>>>> For the case where the ptl is released first and then the pml is
>>>>> acquired, the PTE page may have been freed, so we must do a
>>>>> pmd_same() check before reacquiring the ptl.
>>>>>
>>>>> Signed-off-by: Qi Zheng
>>>>> ---
>>>>>  mm/khugepaged.c | 16 +++++++++++++++-
>>>>>  1 file changed, 15 insertions(+), 1 deletion(-)
>>>>>
>>>>> diff --git a/mm/khugepaged.c b/mm/khugepaged.c
>>>>> index 53bfa7f4b7f82..15d3f7f3c65f2 100644
>>>>> --- a/mm/khugepaged.c
>>>>> +++ b/mm/khugepaged.c
>>>>> @@ -1604,7 +1604,7 @@ int collapse_pte_mapped_thp(struct mm_struct *mm, unsigned long addr,
>>>>>  	if (userfaultfd_armed(vma) && !(vma->vm_flags & VM_SHARED))
>>>>>  		pml = pmd_lock(mm, pmd);
>>>>> -	start_pte = pte_offset_map_nolock(mm, pmd, haddr, &ptl);
>>>>> +	start_pte = pte_offset_map_rw_nolock(mm, pmd, haddr, &pgt_pmd, &ptl);
>>>>>  	if (!start_pte)		/* mmap_lock + page lock should prevent this */
>>>>>  		goto abort;
>>>>>  	if (!pml)
>>>>> @@ -1612,6 +1612,9 @@ int collapse_pte_mapped_thp(struct mm_struct *mm, unsigned long addr,
>>>>>  	else if (ptl != pml)
>>>>>  		spin_lock_nested(ptl, SINGLE_DEPTH_NESTING);
>>>>> +	if (unlikely(!pmd_same(pgt_pmd, pmdp_get_lockless(pmd))))
>>>>> +		goto abort;
>>>>> +
>>>>>  	/* step 2: clear page table and adjust rmap */
>>>>>  	for (i = 0, addr = haddr, pte = start_pte;
>>>>>  	     i < HPAGE_PMD_NR; i++, addr += PAGE_SIZE, pte++) {
>>>>> @@ -1657,6 +1660,16 @@ int collapse_pte_mapped_thp(struct mm_struct *mm, unsigned long addr,
>>>>>  	/* step 4: remove empty page table */
>>>>>  	if (!pml) {
>>>>>  		pml = pmd_lock(mm, pmd);
>>>>> +		/*
>>>>> +		 * We called pte_unmap() and released the ptl before acquiring
>>>>> +		 * the pml, which means we left the RCU critical section, so
>>>>> +		 * the PTE page may have been freed and we must do a pmd_same()
>>>>> +		 * check before reacquiring the ptl.
>>>>> +		 */
>>>>> +		if (unlikely(!pmd_same(pgt_pmd, pmdp_get_lockless(pmd)))) {
>>>>> +			spin_unlock(pml);
>>>>> +			goto pmd_change;
>>>>
>>>> Seems we forgot to flush the TLB since we've cleared some pte entries?
>>>
>>> See the comment above the ptep_clear():
>>>
>>> /*
>>>  * Must clear entry, or a racing truncate may re-remove it.
>>>  * TLB flush can be left until pmdp_collapse_flush() does it.
>>>  * PTE dirty? Shmem page is already dirty; file is read-only.
>>>  */
>>>
>>> The TLB flush was handed over to pmdp_collapse_flush(). If a
>>
>> But you skipped pmdp_collapse_flush().
>
> I skip it only in the !pmd_same() case, at which point the pmd entry
> must have been cleared by another thread, which is then responsible
> for flushing the TLB:

WOW! AMAZING! You are right.

>
> CPU 0                        CPU 1
>                              pmd_clear
>                              spin_unlock
>                              flushing tlb
> spin_lock
> if (!pmd_same)
>     goto pmd_change;
> pmdp_collapse_flush
>
> Did I miss something?
>
>>> concurrent thread frees the PTE page at this time, the TLB will
>>> also be flushed after pmd_clear().
>>>
>>>>> +		}
>>>>>  		if (ptl != pml)
>>>>>  			spin_lock_nested(ptl, SINGLE_DEPTH_NESTING);
>>>>>  	}
>>>>> @@ -1688,6 +1701,7 @@ int collapse_pte_mapped_thp(struct mm_struct *mm, unsigned long addr,
>>>>>  	pte_unmap_unlock(start_pte, ptl);
>>>>>  	if (pml && pml != ptl)
>>>>>  		spin_unlock(pml);
>>>>> +pmd_change:
>>>>>  	if (notified)
>>>>>  		mmu_notifier_invalidate_range_end(&range);
>>>>>  drop_folio:
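
To make the pattern under discussion concrete, below is a minimal
userspace sketch (not kernel code) of the same snapshot-and-revalidate
idea: take a snapshot under one lock, drop it, retake another lock, and
recheck the snapshot before touching what it guards, leaving the cleanup
to whichever thread invalidated the entry. All names here (entry,
collapse_path, racer) are hypothetical stand-ins; pthread mutexes stand
in for the ptl and pml spinlocks, and freeing the old string stands in
for freeing the PTE page and flushing the TLB.

/*
 * Userspace analogue of the ptl -> pml revalidation pattern.
 * Build with: cc -pthread sketch.c
 */
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

static pthread_mutex_t ptl = PTHREAD_MUTEX_INITIALIZER; /* inner lock */
static pthread_mutex_t pml = PTHREAD_MUTEX_INITIALIZER; /* outer lock */
static _Atomic(char *) entry;                           /* the "pmd" */

/*
 * The "collapse" side: snapshot under ptl, drop it, retake pml, then
 * revalidate before touching anything the entry points to.
 */
static void collapse_path(void)
{
	pthread_mutex_lock(&ptl);
	/* atomic_load() plays the role of pmdp_get_lockless() */
	char *snapshot = atomic_load(&entry);
	pthread_mutex_unlock(&ptl);	/* entry may be freed from here on */

	pthread_mutex_lock(&pml);
	if (atomic_load(&entry) != snapshot) {	/* like !pmd_same() */
		/*
		 * A racer replaced the entry; it already did the cleanup
		 * (the TLB flush, in the kernel case), so just back out.
		 */
		pthread_mutex_unlock(&pml);
		return;
	}
	printf("entry unchanged (%s), safe to proceed\n", snapshot);
	pthread_mutex_unlock(&pml);
}

/*
 * The racing side: replaces the entry under pml and owns the cleanup of
 * the old one, mirroring pmd_clear() + TLB flush + PTE page free.
 */
static void *racer(void *arg)
{
	(void)arg;
	pthread_mutex_lock(&pml);
	char *old = atomic_exchange(&entry, strdup("new"));
	pthread_mutex_unlock(&pml);
	free(old);	/* the racer, not the collapser, frees it */
	return NULL;
}

int main(void)
{
	pthread_t t;

	atomic_store(&entry, strdup("old"));
	pthread_create(&t, NULL, racer, NULL);
	collapse_path();
	pthread_join(t, NULL);
	free(atomic_load(&entry));
	return 0;
}

The key property mirrors Qi's diagram above: whichever side observes a
changed entry backs out without doing the cleanup, and the side that
changed the entry owns the flush/free.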