From: Robin Murphy <robin.murphy@arm.com>
Subject: Re: [BUG] arm64: an infinite loop in generic_perform_write()
Date: Tue, 29 Jun 2021 11:01:31 +0100
To: Catalin Marinas
Cc: Chen Huang, Al Viro, Matthew Wilcox, Christoph Hellwig, Mark Rutland,
 Andrew Morton, Stephen Rothwell, Randy Dunlap, Will Deacon, Linux ARM,
 linux-mm, open list
In-Reply-To: <20210629083052.GA10900@arm.com>
References: <1c635945-fb25-8871-7b34-f475f75b2caf@huawei.com>
 <27fbb8c1-2a65-738f-6bec-13f450395ab7@arm.com>
 <20210624185554.GC25097@arm.com> <20210625103905.GA20835@arm.com>
 <7f14271a-9b2f-1afc-3caf-c4e5b36efa73@arm.com>
 <20210629083052.GA10900@arm.com>

On 2021-06-29 09:30, Catalin Marinas wrote:
> On Mon, Jun 28, 2021 at 05:22:30PM +0100, Robin Murphy wrote:
>> From: Robin Murphy
>> Subject: [PATCH] arm64: Avoid premature usercopy failure
>>
>> Al reminds us that the usercopy API must only return complete failure
>> if absolutely nothing could be copied. Currently, if userspace does
>> something silly like giving us an unaligned pointer to Device memory,
>> or a size which overruns MTE tag bounds, we may fail to honour that
>> requirement when faulting on a multi-byte access even though a smaller
>> access could have succeeded.
>>
>> Add a mitigation to the fixup routines to fall back to a single-byte
>> copy if we faulted on a larger access before anything has been written
>> to the destination, to guarantee making *some* forward progress. We
>> needn't be too concerned about the overall performance since this
>> should only occur when callers are doing something a bit dodgy in the
>> first place. Particularly broken userspace might still be able to trick
>> generic_perform_write() into an infinite loop by targeting write() at
>> an mmap() of some read-only device register where the fault-in load
>> succeeds but any store synchronously aborts such that copy_to_user()
>> is genuinely unable to make progress, but, well, don't do that...
>>
>> Reported-by: Chen Huang
>> Suggested-by: Al Viro
>> Signed-off-by: Robin Murphy
>
> Thanks Robin for putting this together. I'll write some MTE kselftests
> to check for regressions in the future.
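As an aside, the termination hazard the commit message describes can be modelled in a few lines of C. This is a simplified sketch of the generic_perform_write() retry behaviour, not the kernel source; `faulty_bulk_copy` and `fallback_copy` are made-up stand-ins for a usercopy routine before and after the patch, and the iteration cap exists only so the demonstration terminates:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Copy routines return the number of bytes actually copied. */
typedef size_t (*copy_fn)(char *dst, const char *src, size_t len);

/* Model of the write loop: fault in the source, copy, advance by the
 * bytes copied, retry the remainder. If the copy step reports zero
 * bytes while fault-in keeps succeeding, no progress is ever made. */
static size_t perform_write_model(char *dst, const char *src, size_t len,
                                  copy_fn copy, int *iterations)
{
	size_t written = 0;

	*iterations = 0;
	while (written < len) {
		if (++*iterations > 1000)
			break;	/* demonstration cap: the real loop has none */
		written += copy(dst + written, src + written, len - written);
	}
	return written;
}

/* Stand-in for the pre-patch behaviour: any multi-byte access "faults"
 * and the routine reports complete failure. */
static size_t faulty_bulk_copy(char *dst, const char *src, size_t len)
{
	if (len >= 2)
		return 0;	/* faulted, nothing copied */
	if (len == 1)
		*dst = *src;
	return len;
}

/* Stand-in for the patched behaviour: fall back to a single byte, so
 * every call makes at least one byte of forward progress. */
static size_t fallback_copy(char *dst, const char *src, size_t len)
{
	if (len == 0)
		return 0;
	*dst = *src;	/* single-byte fallback */
	return 1;
}
```

With `faulty_bulk_copy` the loop spins until the artificial cap with nothing written; with `fallback_copy` it completes in one iteration per byte, which is exactly the forward-progress guarantee the fixup provides.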
>
>> diff --git a/arch/arm64/lib/copy_from_user.S b/arch/arm64/lib/copy_from_user.S
>> index 95cd62d67371..5b720a29a242 100644
>> --- a/arch/arm64/lib/copy_from_user.S
>> +++ b/arch/arm64/lib/copy_from_user.S
>> @@ -29,7 +29,7 @@
>>  	.endm
>>
>>  	.macro ldrh1 reg, ptr, val
>> -	user_ldst 9998f, ldtrh, \reg, \ptr, \val
>> +	user_ldst 9997f, ldtrh, \reg, \ptr, \val
>>  	.endm
>>
>>  	.macro strh1 reg, ptr, val
>> @@ -37,7 +37,7 @@
>>  	.endm
>>
>>  	.macro ldr1 reg, ptr, val
>> -	user_ldst 9998f, ldtr, \reg, \ptr, \val
>> +	user_ldst 9997f, ldtr, \reg, \ptr, \val
>>  	.endm
>>
>>  	.macro str1 reg, ptr, val
>> @@ -45,7 +45,7 @@
>>  	.endm
>>
>>  	.macro ldp1 reg1, reg2, ptr, val
>> -	user_ldp 9998f, \reg1, \reg2, \ptr, \val
>> +	user_ldp 9997f, \reg1, \reg2, \ptr, \val
>>  	.endm
>>
>>  	.macro stp1 reg1, reg2, ptr, val
>> @@ -53,8 +53,10 @@
>>  	.endm
>>
>>  end	.req	x5
>> +srcin	.req	x15
>>
>>  SYM_FUNC_START(__arch_copy_from_user)
>>  	add	end, x0, x2
>> +	mov	srcin, x1
>>  #include "copy_template.S"
>>  	mov	x0, #0			// Nothing to copy
>>  	ret
>> @@ -63,6 +65,12 @@ EXPORT_SYMBOL(__arch_copy_from_user)
>>
>>  	.section .fixup,"ax"
>>  	.align	2
>> +9997:	cmp	dst, dstin
>> +	b.ne	9998f
>> +	// Before being absolutely sure we couldn't copy anything, try harder
>> +USER(9998f, ldtrb tmp1w, [srcin])
>> +	strb	tmp1w, [dstin]
>> +	add	dst, dstin, #1
>
> Nitpick: can we do just strb tmp1w, [dst], #1? It matches the strb1
> macro in this file.

Oh, of course; I think I befuddled myself there and failed to consider
that by this point we've already mandated that dstin == dst (so that we
can use srcin because src may have advanced already), so in fact we
*can* use dst as the base register to avoid the shuffling, and
post-index this one. I'll clean that up.

> Either way, it looks fine to me.
>
> Reviewed-by: Catalin Marinas

Thanks!
Robin.
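For anyone following along, here is a rough C rendering of what the fixup path above does. It is an illustrative sketch, not the actual kernel code: `copy_from_user_model` and the `accessible` fault limit are inventions for the example, with variable names echoing the `dst`/`dstin`/`srcin` registers in the patch, and the return convention (bytes *not* copied) matching the usercopy API:

```c
#include <assert.h>
#include <stddef.h>

/* Model of __arch_copy_from_user after the patch. A "fault" is
 * simulated by an accessible-bytes limit on the source. Returns the
 * number of bytes NOT copied, per the usercopy convention. */
static size_t copy_from_user_model(char *dstin, const char *srcin,
				   size_t n, size_t accessible)
{
	char *dst = dstin;
	const char *src = srcin;

	/* Bulk path: word-at-a-time; the whole access faults if any
	 * byte of the word lies beyond the accessible region. */
	while (n - (size_t)(dst - dstin) >= 8) {
		if ((size_t)(src - srcin) + 8 > accessible)
			goto fixup;	/* multi-byte load faulted */
		for (int i = 0; i < 8; i++)
			*dst++ = *src++;
	}
	/* Byte-at-a-time tail. */
	while ((size_t)(dst - dstin) < n) {
		if ((size_t)(src - srcin) + 1 > accessible)
			goto fixup;
		*dst++ = *src++;
	}
	return 0;			/* everything copied */

fixup:
	/* The patch's mitigation: before reporting that *nothing* was
	 * copied, retry a single byte from the original source pointer
	 * (srcin) -- but only if the destination never advanced, i.e.
	 * dst == dstin, mirroring the "cmp dst, dstin" in the fixup. */
	if (dst == dstin && accessible >= 1) {
		*dst = *srcin;
		dst = dstin + 1;
	}
	return n - (size_t)(dst - dstin);	/* bytes not copied */
}
```

With, say, 16 bytes requested but only 3 accessible, the first 8-byte access faults with nothing written; the fallback salvages one byte and reports 15 bytes uncopied instead of 16, which is what lets generic_perform_write() make forward progress.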