Subject: Re: [PATCH 1/2] arm64: mm: drop VM_FAULT_BADMAP/VM_FAULT_BADACCESS
From: Kefeng Wang
To: Catalin Marinas
CC: Andrew Morton, Russell King, Will Deacon
Date: Wed, 10 Apr 2024 09:30:12 +0800
References: <20240407081211.2292362-1-wangkefeng.wang@huawei.com>
 <20240407081211.2292362-2-wangkefeng.wang@huawei.com>

On 2024/4/9 22:28, Catalin Marinas wrote:
> Hi Kefeng,
>
> On Sun, Apr 07, 2024 at 04:12:10PM +0800, Kefeng Wang wrote:
>> diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c
>> index 405f9aa831bd..61a2acae0dca 100644
>> --- a/arch/arm64/mm/fault.c
>> +++ b/arch/arm64/mm/fault.c
>> @@ -500,9 +500,6 @@ static bool is_write_abort(unsigned long esr)
>>  	return (esr & ESR_ELx_WNR) && !(esr & ESR_ELx_CM);
>>  }
>>  
>> -#define VM_FAULT_BADMAP		((__force vm_fault_t)0x010000)
>> -#define VM_FAULT_BADACCESS	((__force vm_fault_t)0x020000)
>> -
>>  static int __kprobes do_page_fault(unsigned long far, unsigned
long esr,
>> 					   struct pt_regs *regs)
>>  {
>> @@ -513,6 +510,7 @@ static int __kprobes do_page_fault(unsigned long far, unsigned long esr,
>>  	unsigned int mm_flags = FAULT_FLAG_DEFAULT;
>>  	unsigned long addr = untagged_addr(far);
>>  	struct vm_area_struct *vma;
>> +	int si_code;
>
> I think we should initialise this to 0. Currently all paths seem to set
> si_code to something meaningful but I'm not sure the last 'else' clause
> in this patch is guaranteed to always cover exactly those earlier code
> paths updating si_code. I'm not talking about the 'goto bad_area' paths
> since they set 'fault' to 0 but the fall through after the second (under
> the mm lock) handle_mm_fault().

I rechecked: without this patch, the second handle_mm_fault() never
returns VM_FAULT_BADACCESS, but it could return VM_FAULT_SIGSEGV (and
maybe other errors), which is not handled in the other error path:

  handle_mm_fault()
    ret = sanitize_fault_flags(vma, &flags);
    if (!arch_vma_access_permitted())
      ret = VM_FAULT_SIGSEGV;

so the original logic would set si_code to SEGV_MAPERR via

  fault == VM_FAULT_BADACCESS ? SEGV_ACCERR : SEGV_MAPERR,

therefore I think we should set the default si_code to SEGV_MAPERR.
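To illustrate the argument (a userspace sketch, not the kernel code: the VM_FAULT_SIGSEGV value and the helper name `pick_si_code` are mine), the explicit bad-area paths override the si_code, and every remaining fall-through error path wants SEGV_MAPERR anyway, so SEGV_MAPERR is the right default:

```c
#include <signal.h>

/* Fallbacks in case <signal.h> does not expose these in this mode. */
#ifndef SEGV_MAPERR
#define SEGV_MAPERR 1
#endif
#ifndef SEGV_ACCERR
#define SEGV_ACCERR 2
#endif

/* Illustrative stand-in for the kernel's vm_fault_t bit. */
#define VM_FAULT_SIGSEGV 0x0002

/*
 * Simplified model of the patched dispatch: the explicit bad-area
 * paths pick their own si_code; any other error path (such as
 * handle_mm_fault() returning VM_FAULT_SIGSEGV) falls through, so
 * the default must already be SEGV_MAPERR.
 */
static int pick_si_code(unsigned int fault, int vma_access_ok)
{
	int si_code = SEGV_MAPERR;	/* proposed default */

	if (!vma_access_ok)
		si_code = SEGV_ACCERR;	/* explicit bad-area path */

	/* fall-through error paths (e.g. VM_FAULT_SIGSEGV) keep the default */
	(void)fault;
	return si_code;
}
```

The point is that no error path needs to be enumerated for the default to be meaningful, which is what makes SEGV_MAPERR safer than 0 here.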
>
>>  	if (kprobe_page_fault(regs, esr))
>>  		return 0;
>> @@ -572,9 +570,10 @@ static int __kprobes do_page_fault(unsigned long far, unsigned long esr,
>>  
>>  	if (!(vma->vm_flags & vm_flags)) {
>>  		vma_end_read(vma);
>> -		fault = VM_FAULT_BADACCESS;
>> +		fault = 0;
>> +		si_code = SEGV_ACCERR;
>>  		count_vm_vma_lock_event(VMA_LOCK_SUCCESS);
>> -		goto done;
>> +		goto bad_area;
>>  	}
>>  	fault = handle_mm_fault(vma, addr, mm_flags | FAULT_FLAG_VMA_LOCK, regs);
>>  	if (!(fault & (VM_FAULT_RETRY | VM_FAULT_COMPLETED)))
>> @@ -599,15 +598,18 @@ static int __kprobes do_page_fault(unsigned long far, unsigned long esr,
>>  retry:
>>  	vma = lock_mm_and_find_vma(mm, addr, regs);
>>  	if (unlikely(!vma)) {
>> -		fault = VM_FAULT_BADMAP;
>> -		goto done;
>> +		fault = 0;
>> +		si_code = SEGV_MAPERR;
>> +		goto bad_area;
>>  	}
>>  
>> -	if (!(vma->vm_flags & vm_flags))
>> -		fault = VM_FAULT_BADACCESS;
>> -	else
>> -		fault = handle_mm_fault(vma, addr, mm_flags, regs);
>> +	if (!(vma->vm_flags & vm_flags)) {
>> +		fault = 0;
>> +		si_code = SEGV_ACCERR;
>> +		goto bad_area;
>> +	}
>
> What's releasing the mm lock here? Prior to this change, it was falling
> through to mmap_read_unlock() below, or handle_mm_fault() was releasing
> the lock (VM_FAULT_RETRY, VM_FAULT_COMPLETED).

Indeed, will fix.

>
>>  
>> +	fault = handle_mm_fault(vma, addr, mm_flags, regs);
>>  	/* Quick path to respond to signals */
>>  	if (fault_signal_pending(fault, regs)) {
>>  		if (!user_mode(regs))
>> @@ -626,13 +628,11 @@ static int __kprobes do_page_fault(unsigned long far, unsigned long esr,
>>  	mmap_read_unlock(mm);
>>  
>>  done:
>> -	/*
>> -	 * Handle the "normal" (no error) case first.
>> -	 */
>> -	if (likely(!(fault & (VM_FAULT_ERROR | VM_FAULT_BADMAP |
>> -			      VM_FAULT_BADACCESS))))
>> +	/* Handle the "normal" (no error) case first. */
>> +	if (likely(!(fault & VM_FAULT_ERROR)))
>>  		return 0;
>>  
>> +bad_area:
>>  	/*
>>  	 * If we are in kernel mode at this point, we have no context to
>>  	 * handle this fault with.
>> @@ -667,13 +667,8 @@ static int __kprobes do_page_fault(unsigned long far, unsigned long esr,
>>  
>>  		arm64_force_sig_mceerr(BUS_MCEERR_AR, far, lsb, inf->name);
>>  	} else {
>> -		/*
>> -		 * Something tried to access memory that isn't in our memory
>> -		 * map.
>> -		 */
>> -		arm64_force_sig_fault(SIGSEGV,
>> -				      fault == VM_FAULT_BADACCESS ? SEGV_ACCERR : SEGV_MAPERR,
>> -				      far, inf->name);
>> +		/* Something tried to access memory outside the memory map */
>> +		arm64_force_sig_fault(SIGSEGV, si_code, far, inf->name);
>>  	}
>
> We can get to the 'else' clause after the second handle_mm_fault(). Do we
> guarantee that 'fault == 0' in this last block? If not, maybe a warning
> and some safe initialisation for 'si_code' to avoid leaking stack data.

As analyzed above, it should be sufficient to make si_code default to
SEGV_MAPERR, right?

Thanks.
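As a sketch of the more defensive option suggested above (a plain userspace C model; the helper name `resolve_si_code` and the stub constant are hypothetical): warn when 'fault' still carries error bits nobody translated into a si_code, but always hand the signal path a safe value rather than uninitialised stack contents:

```c
#include <stdio.h>

#define SEGV_MAPERR_STUB 1	/* stand-in for the kernel's SEGV_MAPERR */

/*
 * Model of the final 'else' branch: if no path set si_code but
 * 'fault' still holds unexpected error bits, note the anomaly and
 * fall back to SEGV_MAPERR instead of leaking whatever happened to
 * be on the stack.
 */
static int resolve_si_code(unsigned int fault, int si_code, int si_code_set)
{
	if (!si_code_set) {
		if (fault != 0)
			fprintf(stderr, "unexpected fault bits: %#x\n", fault);
		return SEGV_MAPERR_STUB;	/* safe default, no stack leak */
	}
	return si_code;
}
```

Initialising si_code to SEGV_MAPERR at declaration, as proposed in the reply, collapses this into the default value with no runtime check.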