From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path:
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1753366AbdAZJkw (ORCPT );
	Thu, 26 Jan 2017 04:40:52 -0500
Received: from mail-wm0-f66.google.com ([74.125.82.66]:34111 "EHLO
	mail-wm0-f66.google.com" rhost-flags-OK-OK-OK-OK)
	by vger.kernel.org with ESMTP id S1753110AbdAZJks (ORCPT );
	Thu, 26 Jan 2017 04:40:48 -0500
Date: Thu, 26 Jan 2017 10:40:44 +0100
From: Ingo Molnar
To: riel@redhat.com
Cc: linux-kernel@vger.kernel.org, luto@kernel.org, yu-cheng.yu@intel.com,
	dave.hansen@linux.intel.com, bp@suse.de
Subject: Re: [PATCH 1/2] x86/fpu: move copyout_from_xsaves bounds check before the copy
Message-ID: <20170126094044.GA24499@gmail.com>
References: <20170126015759.25871-1-riel@redhat.com>
	<20170126015759.25871-2-riel@redhat.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20170126015759.25871-2-riel@redhat.com>
User-Agent: Mutt/1.5.24 (2015-08-30)
Sender: linux-kernel-owner@vger.kernel.org
List-ID:
X-Mailing-List: linux-kernel@vger.kernel.org

* riel@redhat.com wrote:

> From: Rik van Riel
> 
> Userspace may have programs, especially debuggers, that do not know
> how large the full XSAVE area space is. They pass in a size argument,
> and expect to not get more data than that.
> 
> Unfortunately, the current copyout_from_xsaves does the bounds check
> after the copy out to userspace. This could theoretically result
> in the kernel scribbling over userspace memory outside of the buffer,
> before bailing out of the copy.
> 
> In practice, this is not likely to be an issue, since debuggers are
> likely to specify the size they know about, and that size is likely
> to exactly match the XSAVE fields they know about.
> 
> However, we could be a little more careful and do the bounds check
> before committing ourselves with a copy to userspace.
> 
> Signed-off-by: Rik van Riel
> ---
>  arch/x86/kernel/fpu/xstate.c | 6 +++---
>  1 file changed, 3 insertions(+), 3 deletions(-)
> 
> diff --git a/arch/x86/kernel/fpu/xstate.c b/arch/x86/kernel/fpu/xstate.c
> index c24ac1efb12d..c1508d56ecfb 100644
> --- a/arch/x86/kernel/fpu/xstate.c
> +++ b/arch/x86/kernel/fpu/xstate.c
> @@ -992,13 +992,13 @@ int copyout_from_xsaves(unsigned int pos, unsigned int count, void *kbuf,
>  		offset = xstate_offsets[i];
>  		size = xstate_sizes[i];
>  
> +		if (offset + size > count)
> +			break;
> +
>  		ret = xstate_copyout(offset, size, kbuf, ubuf, src, 0, count);
>  
>  		if (ret)
>  			return ret;
> -
> -		if (offset + size >= count)
> -			break;

That's not a robust way to do a bounds check either - what if 'offset' is so 
large that it overflows and offset + size falls within the 'sane' 0..count 
range?

Also, what about partial copies?
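I.e. the bounds check would have to be done in an overflow-safe way. A 
minimal (untested) sketch, using the same 'offset', 'size' and 'count' 
variables the patch uses:

	/*
	 * Untested sketch: compare each component against the limit
	 * separately, so that a huge 'offset' cannot wrap around and
	 * make 'offset + size' look sane:
	 */
	if (offset > count || size > count - offset)
		break;

Because 'offset <= count' is established first, 'count - offset' cannot 
underflow, so neither comparison can wrap.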
Plus, to add insult to injury, xstate_copyout() is a totally unnecessary 
obfuscation to begin with:

 - 'start_pos' is always 0
 - 'end_pos' is always 'count'
 - both are const for no good reason: they are not pointers
 - both are signed for no good reason: they are derived from unsigned types
   and I don't see how negative values can ever be valid here.

So this code:

  static inline int xstate_copyout(unsigned int pos, unsigned int count,
				   void *kbuf, void __user *ubuf,
				   const void *data, const int start_pos,
				   const int end_pos)
  {
	if ((count == 0) || (pos < start_pos))
		return 0;

	if (end_pos < 0 || pos < end_pos) {
		unsigned int copy = (end_pos < 0 ? count : min(count, end_pos - pos));

		if (kbuf) {
			memcpy(kbuf + pos, data, copy);
		} else {
			if (__copy_to_user(ubuf + pos, data, copy))
				return -EFAULT;
		}
	}
	return 0;
  }

is, after all the cleanups and fixes, in reality equivalent to:

  static inline int __copy_xstate_to_kernel(void *kbuf, const void *data,
					    unsigned int offset, unsigned int size)
  {
	memcpy(kbuf + offset, data, size);
	return 0;
  }

!!!

So the real fix is to get rid of xstate_copyout() altogether and just do the 
memcpy directly - the regset obfuscation actively hid a real bug...

Thanks,

	Ingo
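PS: for illustration only, an (untested) sketch of how the loop body could 
look with xstate_copyout() gone - only the 'offset', 'size', 'kbuf', 'ubuf', 
'src' and 'count' names from the patch context are assumed, the rest of the 
loop is left out:

		offset = xstate_offsets[i];
		size = xstate_sizes[i];

		/* Overflow-safe bounds check, done before any copying: */
		if (offset > count || size > count - offset)
			break;

		/* Open-coded copy instead of the xstate_copyout() indirection: */
		if (kbuf) {
			memcpy(kbuf + offset, src, size);
		} else {
			if (__copy_to_user(ubuf + offset, src, size))
				return -EFAULT;
		}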