Subject: Re: [PATCH v3 10/14] mm: page_vma_mapped_walk: map_pte() use pte_offset_map_rw_nolock()
Date: Thu, 5 Sep 2024 20:07:00 +0800
To: Qi Zheng
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org, linux-arm-kernel@lists.infradead.org, linuxppc-dev@lists.ozlabs.org, david@redhat.com, hughd@google.com, willy@infradead.org, vbabka@kernel.org, akpm@linux-foundation.org, rppt@kernel.org, vishal.moola@gmail.com, peterx@redhat.com, ryan.roberts@arm.com, christophe.leroy2@cs-soprasteria.com
References: <20240904084022.32728-1-zhengqi.arch@bytedance.com> <20240904084022.32728-11-zhengqi.arch@bytedance.com>
From: Muchun Song
In-Reply-To: <20240904084022.32728-11-zhengqi.arch@bytedance.com>

On 2024/9/4 16:40, Qi Zheng wrote:
> In the caller of map_pte(), we may modify pvmw->pte after acquiring
> pvmw->ptl, so convert it to using pte_offset_map_rw_nolock(). At this
> time, the pte_same() check is not performed after pvmw->ptl is held,
> so we should get pmdval and do a pmd_same() check to ensure the
> stability of pvmw->pmd.
>
> Signed-off-by: Qi Zheng
> ---
>  mm/page_vma_mapped.c | 24 ++++++++++++++++++++----
>  1 file changed, 20 insertions(+), 4 deletions(-)
>
> diff --git a/mm/page_vma_mapped.c b/mm/page_vma_mapped.c
> index ae5cc42aa2087..f1d73fd448708 100644
> --- a/mm/page_vma_mapped.c
> +++ b/mm/page_vma_mapped.c
> @@ -13,9 +13,11 @@ static inline bool not_found(struct page_vma_mapped_walk *pvmw)
>  	return false;
>  }
>
> -static bool map_pte(struct page_vma_mapped_walk *pvmw, spinlock_t **ptlp)
> +static bool map_pte(struct page_vma_mapped_walk *pvmw, pmd_t *pmdvalp,
> +		    spinlock_t **ptlp)
>  {
>  	pte_t ptent;
> +	pmd_t pmdval;
>
>  	if (pvmw->flags & PVMW_SYNC) {
>  		/* Use the stricter lookup */
> @@ -25,6 +27,7 @@ static bool map_pte(struct page_vma_mapped_walk *pvmw, spinlock_t **ptlp)
>  		return !!pvmw->pte;
>  	}
>
> +again:
>  	/*
>  	 * It is important to return the ptl corresponding to pte,
>  	 * in case *pvmw->pmd changes underneath us; so we need to
> @@ -32,10 +35,11 @@ static bool map_pte(struct page_vma_mapped_walk *pvmw, spinlock_t **ptlp)
>  	 * proceeds to loop over next ptes, and finds a match later.
>  	 * Though, in most cases, page lock already protects this.
>  	 */
> -	pvmw->pte = pte_offset_map_nolock(pvmw->vma->vm_mm, pvmw->pmd,
> -					  pvmw->address, ptlp);
> +	pvmw->pte = pte_offset_map_rw_nolock(pvmw->vma->vm_mm, pvmw->pmd,
> +					     pvmw->address, &pmdval, ptlp);
>  	if (!pvmw->pte)
>  		return false;
> +	*pmdvalp = pmdval;
>
>  	ptent = ptep_get(pvmw->pte);
>
> @@ -69,6 +73,12 @@ static bool map_pte(struct page_vma_mapped_walk *pvmw, spinlock_t **ptlp)
>  	}
>  	pvmw->ptl = *ptlp;
>  	spin_lock(pvmw->ptl);
> +
> +	if (unlikely(!pmd_same(pmdval, pmdp_get_lockless(pvmw->pmd)))) {
> +		spin_unlock(pvmw->ptl);

Did you forget to clear pvmw->ptl here? Or how about moving the
assignment to the point where the pmd_same() check has succeeded?

> +		goto again;
> +	}
> +

Maybe this is the right place to assign pvmw->ptl.
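Something like the following is roughly what I have in mind for the
tail of map_pte() (a completely untested sketch, just to illustrate the
ordering): take the lock through *ptlp first, retry if the pmd changed
underneath us, and only assign pvmw->ptl once the pmd is known to be
stable, so the retry path never leaves a stale pvmw->ptl behind:

	spin_lock(*ptlp);
	if (unlikely(!pmd_same(pmdval, pmdp_get_lockless(pvmw->pmd)))) {
		/* Drop the pte mapping and the lock before retrying. */
		pte_unmap_unlock(pvmw->pte, *ptlp);
		goto again;
	}
	/* The pmd is stable, so it is safe to publish the ptl now. */
	pvmw->ptl = *ptlp;

	return true;

This way the retry path also drops the old pte mapping before
pte_offset_map_rw_nolock() remaps it, similar to what your second hunk
in page_vma_mapped_walk() does.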
>  	return true;
>  }
>
> @@ -278,7 +288,7 @@ bool page_vma_mapped_walk(struct page_vma_mapped_walk *pvmw)
>  			step_forward(pvmw, PMD_SIZE);
>  			continue;
>  		}
> -		if (!map_pte(pvmw, &ptl)) {
> +		if (!map_pte(pvmw, &pmde, &ptl)) {
>  			if (!pvmw->pte)
>  				goto restart;
>  			goto next_pte;
> @@ -307,6 +317,12 @@ bool page_vma_mapped_walk(struct page_vma_mapped_walk *pvmw)
>  		if (!pvmw->ptl) {
>  			pvmw->ptl = ptl;
>  			spin_lock(pvmw->ptl);
> +			if (unlikely(!pmd_same(pmde, pmdp_get_lockless(pvmw->pmd)))) {
> +				pte_unmap_unlock(pvmw->pte, pvmw->ptl);
> +				pvmw->ptl = NULL;
> +				pvmw->pte = NULL;
> +				goto restart;
> +			}
>  		}
>  		goto this_pte;
>  	} while (pvmw->address < end);

Muchun,
Thanks.