Date: Fri, 20 Nov 2020 13:22:53 -0700
From: Yu Zhao
To: Will Deacon
Cc: kernel-team@android.com, Anshuman Khandual, Peter Zijlstra, Catalin Marinas, linux-kernel@vger.kernel.org, linux-mm@kvack.org, Minchan Kim, Linus Torvalds, linux-arm-kernel@lists.infradead.org
Subject: Re: [PATCH 4/6] mm: proc: Invalidate TLB after clearing soft-dirty page state
Message-ID: <20201120202253.GB1303870@google.com>
References: <20201120143557.6715-1-will@kernel.org> <20201120143557.6715-5-will@kernel.org>
In-Reply-To: <20201120143557.6715-5-will@kernel.org>

On Fri, Nov 20, 2020 at 02:35:55PM +0000, Will Deacon wrote:
> Since commit 0758cd830494 ("asm-generic/tlb: avoid potential double flush"),
> TLB invalidation is elided in tlb_finish_mmu() if no entries were batched
> via the tlb_remove_*() functions. Consequently, the page-table modifications
> performed by clear_refs_write() in response to a write to
> /proc/<pid>/clear_refs do not perform TLB invalidation. Although this is
> fine when simply aging the ptes, in the case of clearing the "soft-dirty"
> state we can end up with entries where pte_write() is false, yet a
> writable mapping remains in the TLB.

I don't think we need a TLB flush in this context, for the same reason we
don't have one in copy_present_pte(), which uses ptep_set_wrprotect() to
write-protect a src PTE.

ptep_modify_prot_start/commit() and ptep_set_wrprotect() guarantee that
either the dirty bit is set (when a PTE is still writable) or a page fault
happens (when a PTE has become r/o) if the h/w page table walker races
with the kernel modifying a PTE through these two APIs.

> Fix this by calling tlb_remove_tlb_entry() for each entry being
> write-protected when clearing soft-dirty.
> 
> Signed-off-by: Will Deacon
> ---
>  fs/proc/task_mmu.c | 18 +++++++++++-------
>  1 file changed, 11 insertions(+), 7 deletions(-)
> 
> diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c
> index cd03ab9087b0..3308292ee5c5 100644
> --- a/fs/proc/task_mmu.c
> +++ b/fs/proc/task_mmu.c
> @@ -1032,11 +1032,12 @@ enum clear_refs_types {
>  
>  struct clear_refs_private {
>  	enum clear_refs_types type;
> +	struct mmu_gather *tlb;
>  };
>  
>  #ifdef CONFIG_MEM_SOFT_DIRTY
>  static inline void clear_soft_dirty(struct vm_area_struct *vma,
> -		unsigned long addr, pte_t *pte)
> +		unsigned long addr, pte_t *pte, struct mmu_gather *tlb)
>  {
>  	/*
>  	 * The soft-dirty tracker uses #PF-s to catch writes
> @@ -1053,6 +1054,7 @@ static inline void clear_soft_dirty(struct vm_area_struct *vma,
>  		ptent = pte_wrprotect(old_pte);
>  		ptent = pte_clear_soft_dirty(ptent);
>  		ptep_modify_prot_commit(vma, addr, pte, old_pte, ptent);
> +		tlb_remove_tlb_entry(tlb, pte, addr);
>  	} else if (is_swap_pte(ptent)) {
>  		ptent = pte_swp_clear_soft_dirty(ptent);
>  		set_pte_at(vma->vm_mm, addr, pte, ptent);
> @@ -1060,14 +1062,14 @@ static inline void clear_soft_dirty(struct vm_area_struct *vma,
>  }
>  #else
>  static inline void clear_soft_dirty(struct vm_area_struct *vma,
> -		unsigned long addr, pte_t *pte)
> +		unsigned long addr, pte_t *pte, struct mmu_gather *tlb)
>  {
>  }
>  #endif
>  
>  #if defined(CONFIG_MEM_SOFT_DIRTY) && defined(CONFIG_TRANSPARENT_HUGEPAGE)
>  static inline void clear_soft_dirty_pmd(struct vm_area_struct *vma,
> -		unsigned long addr, pmd_t *pmdp)
> +		unsigned long addr, pmd_t *pmdp, struct mmu_gather *tlb)
>  {
>  	pmd_t old, pmd = *pmdp;
>  
> @@ -1081,6 +1083,7 @@ static inline void clear_soft_dirty_pmd(struct vm_area_struct *vma,
>  
>  		pmd = pmd_wrprotect(pmd);
>  		pmd = pmd_clear_soft_dirty(pmd);
> +		tlb_remove_pmd_tlb_entry(tlb, pmdp, addr);
>  
>  		set_pmd_at(vma->vm_mm, addr, pmdp, pmd);
>  	} else if (is_migration_entry(pmd_to_swp_entry(pmd))) {
> @@ -1090,7 +1093,7 @@ static inline void clear_soft_dirty_pmd(struct vm_area_struct *vma,
>  }
>  #else
>  static inline void clear_soft_dirty_pmd(struct vm_area_struct *vma,
> -		unsigned long addr, pmd_t *pmdp)
> +		unsigned long addr, pmd_t *pmdp, struct mmu_gather *tlb)
>  {
>  }
>  #endif
> @@ -1107,7 +1110,7 @@ static int clear_refs_pte_range(pmd_t *pmd, unsigned long addr,
>  	ptl = pmd_trans_huge_lock(pmd, vma);
>  	if (ptl) {
>  		if (cp->type == CLEAR_REFS_SOFT_DIRTY) {
> -			clear_soft_dirty_pmd(vma, addr, pmd);
> +			clear_soft_dirty_pmd(vma, addr, pmd, cp->tlb);
>  			goto out;
>  		}
>  
> @@ -1133,7 +1136,7 @@ static int clear_refs_pte_range(pmd_t *pmd, unsigned long addr,
>  		ptent = *pte;
>  
>  		if (cp->type == CLEAR_REFS_SOFT_DIRTY) {
> -			clear_soft_dirty(vma, addr, pte);
> +			clear_soft_dirty(vma, addr, pte, cp->tlb);
>  			continue;
>  		}
>  
> @@ -1212,7 +1215,8 @@ static ssize_t clear_refs_write(struct file *file, const char __user *buf,
>  	if (mm) {
>  		struct mmu_notifier_range range;
>  		struct clear_refs_private cp = {
> -			.type = type,
> +			.type = type,
> +			.tlb = &tlb,
>  		};
>  
>  		if (type == CLEAR_REFS_MM_HIWATER_RSS) {
> -- 
> 2.29.2.454.gaff20da3a2-goog

_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel