Date: Tue, 8 Oct 2019 20:58:58 +0100
From: Al Viro
To: Linus Torvalds
Cc: Guenter Roeck, Linux Kernel Mailing List, linux-fsdevel
Subject: Re: [PATCH] Convert filldir[64]() from __put_user() to unsafe_put_user()
Message-ID: <20191008195858.GV26530@ZenIV.linux.org.uk>
References: <20191006222046.GA18027@roeck-us.net> <5f06c138-d59a-d811-c886-9e73ce51924c@roeck-us.net> <20191007012437.GK26530@ZenIV.linux.org.uk> <20191007025046.GL26530@ZenIV.linux.org.uk>
List-ID: linux-fsdevel@vger.kernel.org

On Mon, Oct 07, 2019 at 11:26:35AM -0700, Linus Torvalds wrote:
> The good news is that right now x86 is the only architecture
> that does that user_access_begin(), so we don't need to worry about
> anything else. Apparently the ARM people haven't had enough
> performance problems with the PAN bit for them to care.

Take a look at this:

static inline unsigned long raw_copy_from_user(void *to,
		const void __user *from, unsigned long n)
{
	unsigned long ret;

	if (__builtin_constant_p(n) && (n <= 8)) {
		ret = 1;

		switch (n) {
		case 1:
			barrier_nospec();
			__get_user_size(*(u8 *)to, from, 1, ret);
			break;
		case 2:
			barrier_nospec();
			__get_user_size(*(u16 *)to, from, 2, ret);
			break;
		case 4:
			barrier_nospec();
			__get_user_size(*(u32 *)to, from, 4, ret);
			break;
		case 8:
			barrier_nospec();
			__get_user_size(*(u64 *)to, from, 8, ret);
			break;
		}
		if (ret == 0)
			return 0;
	}

	barrier_nospec();
	allow_read_from_user(from, n);
	ret = __copy_tofrom_user((__force void __user *)to, from, n);
	prevent_read_from_user(from, n);
	return ret;
}

That's powerpc.  And while the constant-sized bits are probably pretty
useless there as well, note the allow_read_from_user() /
prevent_read_from_user() part.  Looks suspiciously similar to
user_access_begin() / user_access_end()...

The difference is that they have separate "for read" and "for write"
primitives, and they want the range in their user_access_end() analogue.
Separating the read and write cases isn't a problem for callers (we want
them close to the actual memory accesses).  Passing the range to
user_access_end() just might be tolerable, unless it makes you throw up...