From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Thu, 26 Jan 2017 10:47:33 +0100
From: Ingo Molnar
To: riel@redhat.com
Cc: linux-kernel@vger.kernel.org, luto@kernel.org, yu-cheng.yu@intel.com,
	dave.hansen@linux.intel.com, bp@suse.de
Subject: Re: [PATCH 1/2] x86/fpu: move copyout_from_xsaves bounds check before the copy
Message-ID: <20170126094733.GA22486@gmail.com>
References: <20170126015759.25871-1-riel@redhat.com>
	<20170126015759.25871-2-riel@redhat.com>
	<20170126094044.GA24499@gmail.com>
In-Reply-To: <20170126094044.GA24499@gmail.com>

* Ingo Molnar wrote:

> So this code:
>
>  static inline int xstate_copyout(unsigned int pos, unsigned int count,
>                                   void *kbuf, void __user *ubuf,
>                                   const void *data, const int start_pos,
>                                   const int end_pos)
>  {
>          if ((count == 0) || (pos < start_pos))
>                  return 0;
>
>          if (end_pos < 0 || pos < end_pos) {
>                  unsigned int copy = (end_pos < 0 ? count : min(count, end_pos - pos));
>
>                  if (kbuf) {
>                          memcpy(kbuf + pos, data, copy);
>                  } else {
>                          if (__copy_to_user(ubuf + pos, data, copy))
>                                  return -EFAULT;
>                  }
>          }
>          return 0;
>  }
>
> is, after all the cleanups and fixes, in reality equivalent to:
>
>  static inline int
>  __copy_xstate_to_kernel(void *kbuf, const void *data,
>                          unsigned int offset, unsigned int size)
>  {
>          memcpy(kbuf + offset, data, size);
>
>          return 0;
>  }
>
> !!!
Note that this is not entirely true - for the degenerate case of ptrace() requesting a very small, partial buffer that cannot even fit the headers, this check is still required - so we end up with something like:

  static inline int
  __copy_xstate_to_kernel(void *kbuf, const void *data,
			  unsigned int offset, unsigned int size,
			  unsigned int size_total)
  {
	if (offset < size_total) {
		unsigned int copy = min(size, size_total - offset);

		memcpy(kbuf + offset, data, copy);
	}
	return 0;
  }

But it's still an inconsistent mess: we'll do a partial copy for the headers, but not for the xstate components?

I believe the right solution is to allow partial copies only if they fall on precise xstate (and legacy) component boundaries, and to apply that rule to the header portion as well. This allows user-space to request only the FPU bits, for example - but doesn't force the kernel to handle really weird partial copy cases that very few people are testing ...

(Unless there's some ABI pattern from debugging applications that I missed?)

Thanks,

	Ingo