Date: Thu, 2 Dec 2021 16:09:08 +0000
From: Catalin Marinas
To: Mark Rutland
Cc: Linus Torvalds, Andreas Gruenbacher, Josef Bacik, David Sterba,
 Al Viro, Andrew Morton, Will Deacon, Matthew Wilcox,
 linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org,
 linux-arm-kernel@lists.infradead.org, linux-btrfs@vger.kernel.org
Subject: Re: [PATCH v2 3/4] arm64: Add support for user sub-page fault probing
References: <20211201193750.2097885-1-catalin.marinas@arm.com>
 <20211201193750.2097885-4-catalin.marinas@arm.com>

Hi Mark,

On Wed, Dec 01, 2021 at 08:29:06PM +0000, Mark Rutland wrote:
> On Wed, Dec 01, 2021 at 07:37:49PM +0000, Catalin Marinas wrote:
> > +/*
> > + * Return 0 on success, the number of bytes not accessed otherwise.
> > + */
> > +static inline size_t __mte_probe_user_range(const char __user *uaddr,
> > +					    size_t size, bool skip_first)
> > +{
> > +	const char __user *end = uaddr + size;
> > +	int err = 0;
> > +	char val;
> > +
> > +	uaddr = PTR_ALIGN_DOWN(uaddr, MTE_GRANULE_SIZE);
> > +	if (skip_first)
> > +		uaddr += MTE_GRANULE_SIZE;
> 
> Do we need the skipping for a functional reason, or is that an optimization?

An optimisation and very likely not noticeable. Given that we'd do a
read following a put_user() or get_user() earlier, the cacheline was
allocated and another load may be nearly as fast as the uaddr
increment.
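(To make that concrete, a quick user-space model of the skip
arithmetic; GRANULE and the address are made-up stand-ins for
MTE_GRANULE_SIZE and a real user pointer, not kernel API:)

#include <stdint.h>
#include <stdio.h>

#define GRANULE 16	/* stand-in for MTE_GRANULE_SIZE */

int main(void)
{
	uintptr_t uaddr = 0x1007;	/* caller's pointer, mid-granule */
	/* PTR_ALIGN_DOWN(uaddr, GRANULE): start of the granule that the
	 * caller's earlier put_user()/get_user() already exercised */
	uintptr_t first = uaddr & ~(uintptr_t)(GRANULE - 1);

	/* skip_first: a tag mismatch in [first, first + GRANULE) would
	 * have faulted on that earlier access, so the probe can resume
	 * one granule later */
	printf("already checked: %#lx..%#lx\n",
	       (unsigned long)first, (unsigned long)(first + GRANULE - 1));
	printf("probe resumes at: %#lx\n", (unsigned long)(first + GRANULE));
	return 0;
}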
> From the comments in probe_subpage_writeable() and
> probe_subpage_safe_writeable() I wasn't sure if the skipping was because we
> *don't need to* check the first granule, or because we *must not* check the
> first granule.

The "don't need to" part. But thinking about this, I'll just drop it as
it's confusing.

> > +	while (uaddr < end) {
> > +		/*
> > +		 * A read is sufficient for MTE, the caller should have probed
> > +		 * for the pte write permission if required.
> > +		 */
> > +		__raw_get_user(val, uaddr, err);
> > +		if (err)
> > +			return end - uaddr;
> > +		uaddr += MTE_GRANULE_SIZE;
> > +	}
> 
> I think we may need to account for the residue from PTR_ALIGN_DOWN(), or we
> can report more bytes not copied than was passed in `size` in the first
> place, which I think might confuse some callers.
> 
> Consider MTE_GRANULE_SIZE is 16, uaddr is 31, and size is 1 (so end is 32).
> We align uaddr down to 16, and if we fail the first access we return
> (32 - 16), i.e. 16.

Good point. This is fine if we skip the first byte but not otherwise.
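(The same arithmetic as a compilable user-space check; GRANULE stands
in for MTE_GRANULE_SIZE and old_report() is a made-up name modelling
the pre-fix return value:)

#include <stdint.h>
#include <stdio.h>

#define GRANULE 16	/* stand-in for MTE_GRANULE_SIZE */

/* What the current loop reports when the very first probe faults:
 * end - PTR_ALIGN_DOWN(uaddr, GRANULE). */
static uintptr_t old_report(uintptr_t uaddr, uintptr_t size)
{
	uintptr_t end = uaddr + size;
	uintptr_t aligned = uaddr & ~(uintptr_t)(GRANULE - 1);

	return end - aligned;
}

int main(void)
{
	/* Mark's example: uaddr == 31, size == 1, end == 32 */
	printf("reported %lu bytes not accessed, size was 1\n",
	       (unsigned long)old_report(31, 1));	/* prints 16 */
	return 0;
}

With the diff below, the unaligned head is probed at uaddr itself (a
fault there returns exactly size) and the loop restarts at
PTR_ALIGN(uaddr, MTE_GRANULE_SIZE), so a later fault reports
end - uaddr with uaddr at or above the original pointer, i.e. never
more than size.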
Planning to fold in this diff:

diff --git a/arch/arm64/include/asm/uaccess.h b/arch/arm64/include/asm/uaccess.h
index bcbd24b97917..213b30841beb 100644
--- a/arch/arm64/include/asm/uaccess.h
+++ b/arch/arm64/include/asm/uaccess.h
@@ -451,15 +451,17 @@ static inline int __copy_from_user_flushcache(void *dst, const void __user *src,
  * Return 0 on success, the number of bytes not accessed otherwise.
  */
 static inline size_t __mte_probe_user_range(const char __user *uaddr,
-					    size_t size, bool skip_first)
+					    size_t size)
 {
 	const char __user *end = uaddr + size;
 	int err = 0;
 	char val;
 
-	uaddr = PTR_ALIGN_DOWN(uaddr, MTE_GRANULE_SIZE);
-	if (skip_first)
-		uaddr += MTE_GRANULE_SIZE;
+	__raw_get_user(val, uaddr, err);
+	if (err)
+		return size;
+
+	uaddr = PTR_ALIGN(uaddr, MTE_GRANULE_SIZE);
 	while (uaddr < end) {
 		/*
 		 * A read is sufficient for MTE, the caller should have probed
@@ -480,8 +482,7 @@ static inline size_t probe_subpage_writeable(const void __user *uaddr,
 {
 	if (!system_supports_mte())
 		return 0;
-	/* first put_user() done in the caller */
-	return __mte_probe_user_range(uaddr, size, true);
+	return __mte_probe_user_range(uaddr, size);
 }
 
 static inline size_t probe_subpage_safe_writeable(const void __user *uaddr,
@@ -489,8 +490,7 @@ static inline size_t probe_subpage_safe_writeable(const void __user *uaddr,
 {
 	if (!system_supports_mte())
 		return 0;
-	/* the caller used GUP, don't skip the first granule */
-	return __mte_probe_user_range(uaddr, size, false);
+	return __mte_probe_user_range(uaddr, size);
 }
 
 static inline size_t probe_subpage_readable(const void __user *uaddr,
@@ -498,8 +498,7 @@ static inline size_t probe_subpage_readable(const void __user *uaddr,
 {
 	if (!system_supports_mte())
 		return 0;
-	/* first get_user() done in the caller */
-	return __mte_probe_user_range(uaddr, size, true);
+	return __mte_probe_user_range(uaddr, size);
 }
 
 #endif /* CONFIG_ARCH_HAS_SUBPAGE_FAULTS */

-- 
Catalin