From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Linus Torvalds, Xu Yu, Johannes Weiner, Catalin Marinas, Will Deacon, Yang Shi
Subject: [PATCH 5.8 035/148] mm/memory.c: skip spurious TLB flush for retried page fault
Date: Mon, 24 Aug 2020 10:28:53 +0200
Message-Id: <20200824082415.728055711@linuxfoundation.org>
In-Reply-To: <20200824082413.900489417@linuxfoundation.org>
References: <20200824082413.900489417@linuxfoundation.org>

From: Yang Shi

commit b7333b58f358f38d90d78e00c1ee5dec82df10ad upstream.

Recently we found a regression when running the will_it_scale/page_fault3
test on ARM64: over 70% down for the multi-process cases and over 20%
down for the multi-thread cases.

It turns out the regression is caused by commit 89b15332af7c ("mm: drop
mmap_sem before calling balance_dirty_pages() in write fault").

The test mmaps a memory-sized file and then writes to the mapping; this
dirties all memory and triggers dirty-page throttling, at which point
that commit releases mmap_sem and retries the page fault.  The retried
page fault sees correct PTEs already installed and just falls through
to the spurious TLB flush.  The regression is caused by this excessive
spurious TLB flushing.  It is fine on x86 since x86's spurious TLB
flush is a no-op.

We could just skip the spurious TLB flush to mitigate the regression.
Suggested-by: Linus Torvalds
Reported-by: Xu Yu
Debugged-by: Xu Yu
Tested-by: Xu Yu
Cc: Johannes Weiner
Cc: Catalin Marinas
Cc: Will Deacon
Cc:
Signed-off-by: Yang Shi
Signed-off-by: Linus Torvalds
Signed-off-by: Greg Kroah-Hartman
---
 mm/memory.c |    3 +++
 1 file changed, 3 insertions(+)

--- a/mm/memory.c
+++ b/mm/memory.c
@@ -4248,6 +4248,9 @@ static vm_fault_t handle_pte_fault(struc
 				vmf->flags & FAULT_FLAG_WRITE)) {
 		update_mmu_cache(vmf->vma, vmf->address, vmf->pte);
 	} else {
+		/* Skip spurious TLB flush for retried page fault */
+		if (vmf->flags & FAULT_FLAG_TRIED)
+			goto unlock;
 		/*
 		 * This is needed only for protection faults but the arch code
 		 * is not yet telling us if this is a protection fault or not.