Date: Wed, 3 Apr 2024 19:32:25 +0100
From: Catalin Marinas
To: Kefeng Wang
Cc: akpm@linux-foundation.org, Russell King, Will Deacon, Michael Ellerman,
 Nicholas Piggin, Christophe Leroy, Paul Walmsley, Palmer Dabbelt,
 Albert Ou, Alexander Gordeev, Gerald Schaefer, Dave Hansen,
 Andy Lutomirski, Peter Zijlstra, x86@kernel.org,
 linux-arm-kernel@lists.infradead.org, linuxppc-dev@lists.ozlabs.org,
 linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org,
 surenb@google.com
Subject: Re: [PATCH 2/7] arm64: mm: accelerate pagefault when VM_FAULT_BADACCESS
References: <20240402075142.196265-1-wangkefeng.wang@huawei.com>
 <20240402075142.196265-3-wangkefeng.wang@huawei.com>
In-Reply-To: <20240402075142.196265-3-wangkefeng.wang@huawei.com>

On Tue, Apr 02, 2024 at 03:51:37PM +0800, Kefeng Wang wrote:
> The vm_flags of the vma are already checked under the per-VMA lock; if
> the access is bad, set fault to VM_FAULT_BADACCESS and handle the error
> directly, with no need to fall back to lock_mm_and_find_vma() and check
> vm_flags again. This reduces latency by 34% in lmbench
> 'lat_sig -P 1 prot lat_sig'.
>
> Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
> ---
>  arch/arm64/mm/fault.c | 4 +++-
>  1 file changed, 3 insertions(+), 1 deletion(-)
>
> diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c
> index 9bb9f395351a..405f9aa831bd 100644
> --- a/arch/arm64/mm/fault.c
> +++ b/arch/arm64/mm/fault.c
> @@ -572,7 +572,9 @@ static int __kprobes do_page_fault(unsigned long far, unsigned long esr,
>
>  	if (!(vma->vm_flags & vm_flags)) {
>  		vma_end_read(vma);
> -		goto lock_mmap;
> +		fault = VM_FAULT_BADACCESS;
> +		count_vm_vma_lock_event(VMA_LOCK_SUCCESS);
> +		goto done;
>  	}
>  	fault = handle_mm_fault(vma, addr, mm_flags | FAULT_FLAG_VMA_LOCK, regs);
>  	if (!(fault & (VM_FAULT_RETRY | VM_FAULT_COMPLETED)))
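For readers without the tree open, this is roughly how the per-VMA fast
path in do_page_fault() reads with the change applied (an abbreviated
sketch of arch/arm64/mm/fault.c, not the literal code; the retry
accounting and the mmap-lock fallback path are elided):

	vma = lock_vma_under_rcu(mm, addr);
	if (!vma)
		goto lock_mmap;

	if (!(vma->vm_flags & vm_flags)) {
		/*
		 * The access is not permitted by this VMA.  vm_flags was
		 * read under the per-VMA read lock, so the check is
		 * stable: report the bad access directly instead of
		 * retrying the lookup under the mmap lock.
		 */
		vma_end_read(vma);
		fault = VM_FAULT_BADACCESS;
		count_vm_vma_lock_event(VMA_LOCK_SUCCESS);
		goto done;
	}

	fault = handle_mm_fault(vma, addr, mm_flags | FAULT_FLAG_VMA_LOCK, regs);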
I think this makes sense. A concurrent modification of vma->vm_flags
(e.g. mprotect()) would do a vma_start_write(), which cannot complete
while the per-VMA read lock is held, so the flags checked above cannot
change under us and there is no need to recheck with the mmap lock held.

Reviewed-by: Catalin Marinas
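As an aside, the fault this patch targets is easy to exercise from
userspace. The program below is my own minimal stand-in for what
lmbench's 'prot' case measures, not lmbench's code: it stores to a
read-only anonymous mapping, so every store takes the
!(vma->vm_flags & vm_flags) branch above and comes back as SIGSEGV.

#include <signal.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

static volatile sig_atomic_t faults;
static char *page;
static long pagesz;

/*
 * Make the page writable so the faulting store can complete and the
 * loop in main() can re-arm the next fault.  mprotect() in a signal
 * handler is not async-signal-safe per POSIX, but it is the usual
 * lat_sig-style trick and works on Linux.
 */
static void handler(int sig, siginfo_t *si, void *uc)
{
	(void)sig; (void)si; (void)uc;
	faults++;
	if (mprotect(page, pagesz, PROT_READ | PROT_WRITE))
		_exit(1);
}

int main(void)
{
	struct sigaction sa = { .sa_flags = SA_SIGINFO };
	int i;

	pagesz = sysconf(_SC_PAGESIZE);
	sa.sa_sigaction = handler;
	if (sigaction(SIGSEGV, &sa, NULL))
		return 1;

	page = mmap(NULL, pagesz, PROT_READ,
		    MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (page == MAP_FAILED)
		return 1;

	for (i = 0; i < 10; i++) {
		page[0] = 1;			/* write to r/o page -> SIGSEGV */
		mprotect(page, pagesz, PROT_READ);	/* read-only again */
	}
	printf("%d protection faults taken\n", (int)faults);
	return 0;
}

With the patch, each of those faults is resolved entirely under the
per-VMA lock, which is where the quoted 34% lat_sig improvement comes
from.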