Date: Fri, 1 Oct 2021 13:27:06 +0100
From: Mark Rutland
To: Josh Poimboeuf
Cc: Peter Zijlstra, Dmitry Vyukov, syzbot, Linux ARM,
 linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org,
 syzkaller-bugs@googlegroups.com, viro@zeniv.linux.org.uk, will@kernel.org,
 x86@kernel.org, live-patching@vger.kernel.org, Thomas Gleixner
Subject: Re: [syzbot] upstream test error: KASAN: invalid-access Read in __entry_tramp_text_end
Message-ID: <20211001122706.GA66786@C02TD0UTHF1T.local>
References: <20210927171812.GB9201@C02TD0UTHF1T.local>
 <20210928103543.GF1924@C02TD0UTHF1T.local>
 <20210929013637.bcarm56e4mqo3ndt@treble>
 <20210929085035.GA33284@C02TD0UTHF1T.local>
 <20210929103730.GC33284@C02TD0UTHF1T.local>
 <20210930192638.xwemcsohivoynwx3@treble>
In-Reply-To: <20210930192638.xwemcsohivoynwx3@treble>

On Thu, Sep 30, 2021 at 12:26:38PM -0700, Josh Poimboeuf wrote:
> On Wed, Sep 29, 2021 at 01:43:23PM +0200, Peter Zijlstra wrote:
> > On Wed, Sep 29, 2021 at 11:37:30AM +0100,
> > Mark Rutland wrote:
> > > > > > This is because _ASM_EXTABLE only generates data for another section.
> > > > There doesn't need to be code continuity between these two asm
> > > > statements.
> > >
> > > I think you've missed my point. It doesn't matter that the
> > > asm_volatile_goto() doesn't contain code, and this is solely about the
> > > *state* expected at entry/exit from each asm block being different.
> >
> > Urgh.. indeed :/
>
> So much for that idea :-/
>
> To fix the issue of the wrong .fixup code symbol names getting printed,
> we could (as Mark suggested) add a '__fixup_text_start' symbol at the
> start of the .fixup section. And then remove all other symbols in the
> .fixup section.

Just to be clear, that was just as a "make debugging slightly less
painful" aid, not as a fix for reliable stack traces and all that.

> For x86, that means removing the kvm_fastop_exception symbol and a few
> others. That way it's all anonymous code, displayed by the kernel as
> "__fixup_text_start+0x1234". Which isn't all that useful, but still
> better than printing the wrong symbol.
>
> But there's still a bigger problem: the function with the faulting
> instruction doesn't get reported in the stack trace.
>
> For example, in the up-thread bug report, __d_lookup() doesn't get
> printed, even though its anonymous .fixup code is running in the
> context of the function and will be branching back to it shortly.
>
> Even worse, this means livepatch is broken, because if for example
> __d_lookup()'s .fixup code gets preempted, __d_lookup() can get skipped
> by a reliable stack trace.
>
> So we may need to get rid of .fixup altogether. Especially for arches
> which support livepatch.
>
> We can replace some of the custom .fixup handlers with generic handlers
> like x86 does, which do the fixup work in exception context.
> This generally works better for more generic work like putting an error
> code in a certain register and resuming execution at the subsequent
> instruction.

I reckon even ignoring the unwind problems this'd be a good thing, since
it'd save on redundant copies of the fixup logic that happen to be
identical, and the common cases like uaccess all fall into this shape.

As for how to do that, in the past Peter and I had come up with some
assembler trickery to get the name of the error code register encoded
into the extable info:

https://lore.kernel.org/lkml/20170207111011.GB28790@leverpostej/
https://lore.kernel.org/lkml/20170207160300.GB26173@leverpostej/
https://lore.kernel.org/lkml/20170208091250.GT6515@twins.programming.kicks-ass.net/

... but maybe that's already solved on x86 in a different way?

> However a lot of the .fixup code is rather custom and doesn't
> necessarily work well with that model.

Looking at arm64, even where we'd need custom handlers it does appear we
could mostly do that out-of-line in the exception handler. The more
exotic cases are largely in out-of-line asm functions, where we can move
the fixups within the function, after the usual return.

I reckon we can handle the fixups for load_unaligned_zeropad() in the
exception handler.

Is there anything specific that you think is painful in the exception
handler?

> In such cases we could just move the .fixup code into the function
> (inline for older compilers; out-of-line for compilers that support
> CC_HAS_ASM_GOTO_OUTPUT).
>
> Alternatively we could convert each .fixup code fragment into a proper
> function which returns to a specified resume point in the function, and
> then have the exception handler emulate a call to it like we do with
> int3_emulate_call().

For arm64 this would be somewhat unfortunate for inline asm due to our
calling convention -- we'd have to clobber the LR, and we'd need to
force the creation of a frame record in the caller which would otherwise
not be necessary.
Thanks,
Mark.

_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel