Subject: Re: [BUG] arm64: an infinite loop in generic_perform_write()
From: Robin Murphy
To: Al Viro
Cc: Matthew Wilcox, Christoph Hellwig, Chen Huang, Mark Rutland, Andrew Morton, Stephen Rothwell, Randy Dunlap, Catalin Marinas, Will Deacon, Linux ARM, linux-mm, open list
Date: Thu, 24 Jun 2021 18:24:52 +0100
Message-ID: <1aa40be9-2a47-007a-990f-a7eea6721a23@arm.com>
References: <20210623132223.GA96264@C02TD0UTHF1T.local> <1c635945-fb25-8871-7b34-f475f75b2caf@huawei.com> <27fbb8c1-2a65-738f-6bec-13f450395ab7@arm.com> <7896a3c7-2e14-d0f4-dbb9-286b6f7181b5@arm.com>
On 2021-06-24 17:39, Al Viro wrote:
> On Thu, Jun 24, 2021 at 05:38:35PM +0100, Robin Murphy wrote:
>> On 2021-06-24 17:27, Al Viro wrote:
>>> On Thu, Jun 24, 2021 at 02:22:27PM +0100, Robin Murphy wrote:
>>>
>>>> FWIW I think the only way to make the kernel behaviour any more robust
>>>> here would be to make the whole uaccess API more expressive, such that
>>>> rather than simply saying "I only got this far" it could actually
>>>> differentiate between stopping due to a fault which may be recoverable
>>>> and worth retrying, and one which definitely isn't.
>>>
>>> ... and propagate that "more expressive" information through what, 3 or 4
>>> levels in the call chain?
>>>
>>> From include/linux/uaccess.h:
>>>
>>> * If raw_copy_{to,from}_user(to, from, size) returns N, size - N bytes
>>> * starting at to must become equal to the bytes fetched from the
>>> * corresponding area starting at from. All data past to + size - N must
>>> * be left unmodified.
>>> *
>>> * If copying succeeds, the return value must be 0. If some data cannot be
>>> * fetched, it is permitted to copy less than had been fetched; the only
>>> * hard requirement is that not storing anything at all (i.e. returning
>>> * size) should happen only when nothing could be copied. In other words,
>>> * you don't have to squeeze as much as possible - it is allowed, but not
>>> * necessary.
>>>
>>> arm64 instances violate the aforementioned hard requirement. Please, fix
>>> it there; it's not hard. All you need is an exception handler in .Ltiny15
>>> that would fall back to (short) byte-by-byte copy if the faulting address
>>> happened to be unaligned. Or just do one-byte copy, not that it had been
>>> considerably cheaper than a loop.
>>> Will be cheaper than propagating that extra
>>> information up the call chain, let alone paying for extra ->write_begin()
>>> and ->write_end() for a single byte in generic_perform_write().
>>
>> And what do we do if we then continue to fault with an external abort
>> because whatever it is that warranted being mapped as Device-type memory
>> in the first place doesn't support byte accesses?
>
> If it does not support byte access, it would've failed on fault-in.

OK, if I'm understanding the code correctly and fault-in touches the exact
byte that copy_to_user() is going to start on, and faulting anywhere *after*
that byte is still OK, then that seems mostly workable, although there are
still potential corner cases, like a device register accepting byte reads
but not byte writes.

Basically, if privileged userspace is going to do dumb things with mmap()ed
MMIO, the kernel can't *guarantee* to save it from itself without a hell of
a lot of invasive work for no other gain. Sure, we can add some extra
fallback paths in our arch code as a best-effort attempt to mitigate
alignment faults - revamping the usercopy routines is on my to-do list, so
I'll bear this in mind, and I think it's basically the same idea we mooted
some time ago for tag faults anyway - but I'm sure someone will inevitably
still find some new way to trip it up. Fortunately, on modern systems many
of the aforementioned dumb things won't actually fault synchronously, so
even if triggered by a usercopy access the payback will come slightly later
via an asynchronous SError and be considerably more terminal.

Thanks,
Robin.

_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel