From mboxrd@z Thu Jan 1 00:00:00 1970
From: Arnd Bergmann
To: linuxppc-dev@lists.ozlabs.org
Cc: Kees Cook, linux-kernel@vger.kernel.org, Jan Kara,
	kernel-hardening@lists.openwall.com, Catalin Marinas, Will Deacon,
	linux-mm@kvack.org, sparclinux@vger.kernel.org,
	linux-ia64@vger.kernel.org, Christoph Lameter, Andrea Arcangeli,
	linux-arch@vger.kernel.org, x86@kernel.org, Russell King,
	linux-arm-kernel@lists.infradead.org, PaX Team, Borislav Petkov,
	Mathias Krause, Fenghua Yu, Rik van Riel, David Rientjes, Tony Luck,
	Andy Lutomirski, Joonsoo Kim, Dmitry Vyukov, Laura Abbott,
	Brad Spengler, Ard Biesheuvel, Pekka Enberg, Casey Schaufler,
	Andrew Morton, "David S. Miller"
Subject: Re: [PATCH 1/9] mm: Hardened usercopy
Date: Thu, 07 Jul 2016 10:01:36 +0200
Message-ID: <3418914.byvl8Wuxlf@wuerfel>
User-Agent: KMail/5.1.3 (Linux/4.4.0-22-generic; KDE/5.18.0; x86_64; ; )
In-Reply-To: <1467843928-29351-2-git-send-email-keescook@chromium.org>
References: <1467843928-29351-1-git-send-email-keescook@chromium.org>
	<1467843928-29351-2-git-send-email-keescook@chromium.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 7Bit
Content-Type: text/plain; charset="us-ascii"
Sender: linux-kernel-owner@vger.kernel.org
X-Mailing-List: linux-kernel@vger.kernel.org

On Wednesday, July 6, 2016 3:25:20 PM CEST Kees Cook wrote:
> This is the start of porting PAX_USERCOPY into the mainline kernel. This
> is the first set of features, controlled by CONFIG_HARDENED_USERCOPY. The
> work is based on code by PaX Team and Brad Spengler, and an earlier port
> from Casey Schaufler.
> Additional non-slab page tests are from Rik van Riel.
>
> This patch contains the logic for validating several conditions when
> performing copy_to_user() and copy_from_user() on the kernel object
> being copied to/from:
> - address range doesn't wrap around
> - address range isn't NULL or zero-allocated (with a non-zero copy size)
> - if on the slab allocator:
>   - object size must be less than or equal to copy size (when check is
>     implemented in the allocator, which appear in subsequent patches)
> - otherwise, object must not span page allocations
> - if on the stack
>   - object must not extend before/after the current process task
>   - object must be contained by the current stack frame (when there is
>     arch/build support for identifying stack frames)
> - object must not overlap with kernel text
>
> Signed-off-by: Kees Cook

Nice! I have a few further thoughts, most of which have probably been
considered before:

> +static inline const char *check_bogus_address(const void *ptr, unsigned long n)
> +{
> +	/* Reject if object wraps past end of memory. */
> +	if (ptr + n < ptr)
> +		return "";
> +
> +	/* Reject if NULL or ZERO-allocation. */
> +	if (ZERO_OR_NULL_PTR(ptr))
> +		return "";
> +
> +	return NULL;
> +}

This checks against address (void *)16, but I guess on most architectures
the lowest possible kernel address is much higher than that. While there
may not be much to exploit if the expected kernel address points to
userland, forbidding any obviously incorrect address that is outside of
the kernel may be easier. Even on architectures like s390 that start
kernel memory at (void *)0x0, the lowest address to which we may want to
do a copy_to_user would be much higher than (void *)16.

> +
> +	/* Allow kernel rodata region (if not marked as Reserved). */
> +	if (ptr >= (const void *)__start_rodata &&
> +	    end <= (const void *)__end_rodata)
> +		return NULL;

Should we explicitly forbid writing to rodata, or is it enough to rely
on page protection here?
> +	/* Allow kernel bss region (if not marked as Reserved). */
> +	if (ptr >= (const void *)__bss_start &&
> +	    end <= (const void *)__bss_stop)
> +		return NULL;

Accesses to .data/.rodata/.bss are probably not performance critical, so
we could go further here and check the kallsyms table to ensure that we
are not spanning multiple symbols.

For stuff that is performance critical, should there be a way to opt out
of the checks, or do we assume it already uses functions that avoid
them? I looked at the file and network I/O paths briefly, and they seem
to use kmap_atomic() to get to the user pages, at least in some of the
common cases (but I may well be missing important ones).

	Arnd