Date: Mon, 5 Sep 2016 10:47:22 +0200
From: Jiri Olsa
To: Andi Kleen
Cc: Jiri Olsa, lkml, Kees Cook, Ingo Molnar, Adrian Hunter,
 KAMEZAWA Hiroyuki, Linus Torvalds
Subject: Re: [PATCH] fs/proc/kcore.c: Omit kernel text area for hardened usercopy feature
Message-ID: <20160905084722.GA3134@krava>
References: <1472819145-27260-1-git-send-email-jolsa@kernel.org>
 <20160902151713.GM5871@two.firstfloor.org>
In-Reply-To: <20160902151713.GM5871@two.firstfloor.org>

On Fri, Sep 02, 2016 at 08:17:13AM -0700, Andi Kleen wrote:
> On Fri, Sep 02, 2016 at 02:25:45PM +0200, Jiri Olsa wrote:
> > One of the bullets for the hardened usercopy feature is:
> >   - object must not overlap with kernel text
> >
> > which is what we expose via /proc/kcore. We can hit
> > this check and crash the system very easily just by
> > reading the text area in the kcore file:
> >
> >   usercopy: kernel memory exposure attempt detected from ffffffff8179a01f () (4065 bytes)
> >   kernel BUG at mm/usercopy.c:75!
> >
> > Omit the kernel text area from kcore when the
> > hardened usercopy feature is enabled.
>
> That will completely break PT decoding, which relies on looking
> at the kernel text in /proc/kcore.
>
> Need a different fix here, perhaps some special copy function
> that is not hardened.

how about something like this

jirka


---
diff --git a/arch/x86/include/asm/uaccess.h b/arch/x86/include/asm/uaccess.h
index c3f291195294..43f5404f0e61 100644
--- a/arch/x86/include/asm/uaccess.h
+++ b/arch/x86/include/asm/uaccess.h
@@ -726,7 +726,8 @@ copy_from_user(void *to, const void __user *from, unsigned long n)
 }
 
 static inline unsigned long __must_check
-copy_to_user(void __user *to, const void *from, unsigned long n)
+copy_to_user_check(void __user *to, const void *from,
+		   unsigned long n, bool check)
 {
 	int sz = __compiletime_object_size(from);
 
@@ -735,7 +736,8 @@ copy_to_user(void __user *to, const void *from, unsigned long n)
 	might_fault();
 
 	if (likely(sz < 0 || sz >= n)) {
-		check_object_size(from, n, true);
+		if (check)
+			check_object_size(from, n, true);
 		n = _copy_to_user(to, from, n);
 	} else if (!__builtin_constant_p(n))
 		copy_user_overflow(sz, n);
@@ -745,6 +747,19 @@ copy_to_user(void __user *to, const void *from, unsigned long n)
 	return n;
 }
 
+static inline unsigned long __must_check
+copy_to_user(void __user *to, const void *from, unsigned long n)
+{
+	return copy_to_user_check(to, from, n, true);
+}
+
+static inline unsigned long __must_check
+copy_to_user_nocheck(void __user *to, const void *from, unsigned long n)
+{
+	return copy_to_user_check(to, from, n, false);
+}
+
+
 /*
  * We rely on the nested NMI work to allow atomic faults from the NMI path; the
  * nested NMI paths are careful to preserve CR2.
diff --git a/arch/x86/include/asm/uaccess_64.h b/arch/x86/include/asm/uaccess_64.h
index 673059a109fe..e80e4a146b7d 100644
--- a/arch/x86/include/asm/uaccess_64.h
+++ b/arch/x86/include/asm/uaccess_64.h
@@ -50,7 +50,7 @@ __must_check unsigned long
 copy_in_user(void __user *to, const void __user *from, unsigned len);
 
 static __always_inline __must_check
-int __copy_from_user_nocheck(void *dst, const void __user *src, unsigned size)
+int __copy_from_user_nofaultcheck(void *dst, const void __user *src, unsigned size)
 {
 	int ret = 0;
 
@@ -112,7 +112,7 @@ int __copy_from_user(void *dst, const void __user *src, unsigned size)
 {
 	might_fault();
 	kasan_check_write(dst, size);
-	return __copy_from_user_nocheck(dst, src, size);
+	return __copy_from_user_nofaultcheck(dst, src, size);
 }
 
 static __always_inline __must_check
@@ -248,7 +248,7 @@ static __must_check __always_inline int
 __copy_from_user_inatomic(void *dst, const void __user *src, unsigned size)
 {
 	kasan_check_write(dst, size);
-	return __copy_from_user_nocheck(dst, src, size);
+	return __copy_from_user_nofaultcheck(dst, src, size);
 }
 
 static __must_check __always_inline int
diff --git a/fs/proc/kcore.c b/fs/proc/kcore.c
index a939f5ed7f89..c7a22a8a157e 100644
--- a/fs/proc/kcore.c
+++ b/fs/proc/kcore.c
@@ -516,7 +516,7 @@ read_kcore(struct file *file, char __user *buffer, size_t buflen, loff_t *fpos)
 		if (kern_addr_valid(start)) {
 			unsigned long n;
 
-			n = copy_to_user(buffer, (char *)start, tsz);
+			n = copy_to_user_nocheck(buffer, (char *)start, tsz);
 			/*
 			 * We cannot distinguish between fault on source
 			 * and fault on destination. When this happens
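
FWIW, the crash quoted above can be reproduced from userspace with
something along these lines. This is only a rough, untested sketch:
KTEXT_START is an assumed x86-64 kernel text base, the program needs
root, and on a CONFIG_HARDENED_USERCOPY=y kernel without the patch a
successful run BUGs the machine:

#include <elf.h>
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

/* assumed start of the x86-64 kernel text mapping */
#define KTEXT_START	0xffffffff80000000UL

int main(void)
{
	Elf64_Ehdr eh;
	Elf64_Phdr ph;
	char buf[4096];
	int fd, i;

	fd = open("/proc/kcore", O_RDONLY);
	if (fd < 0) {
		perror("open /proc/kcore");
		return 1;
	}

	/* /proc/kcore is an ELF core file; read its ELF header ... */
	if (pread(fd, &eh, sizeof(eh), 0) != (ssize_t) sizeof(eh))
		return 1;

	/* ... then walk the program headers looking for kernel text */
	for (i = 0; i < eh.e_phnum; i++) {
		off_t off = eh.e_phoff + (off_t) i * eh.e_phentsize;

		if (pread(fd, &ph, sizeof(ph), off) != (ssize_t) sizeof(ph))
			return 1;
		if (ph.p_type != PT_LOAD || ph.p_vaddr < KTEXT_START)
			continue;

		/*
		 * Reading the text area goes through read_kcore() ->
		 * copy_to_user(), which trips check_object_size().
		 */
		if (pread(fd, buf, sizeof(buf), ph.p_offset) < 0)
			perror("pread text");
		break;
	}

	close(fd);
	return 0;
}

With the patch applied the same read should go through
copy_to_user_nocheck() and succeed, so PT decoding against
/proc/kcore keeps working.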