From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Fri, 15 Jul 2016 11:41:51 +1000
From: Balbir Singh
To: Rik van Riel
Cc: bsingharora@gmail.com, Kees Cook, linux-kernel@vger.kernel.org, Casey Schaufler, PaX Team, Brad Spengler, Russell King, Catalin Marinas, Will Deacon, Ard Biesheuvel, Benjamin Herrenschmidt, Michael Ellerman, Tony Luck, Fenghua Yu, "David S. Miller", x86@kernel.org, Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim, Andrew Morton, Andy Lutomirski, Borislav Petkov, Mathias Krause, Jan Kara, Vitaly Wool, Andrea Arcangeli, Dmitry Vyukov, Laura Abbott, linux-arm-kernel@lists.infradead.org, linux-ia64@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, sparclinux@vger.kernel.org, linux-arch@vger.kernel.org, linux-mm@kvack.org, kernel-hardening@lists.openwall.com
Subject: Re: [PATCH v2 02/11] mm: Hardened usercopy
Message-ID: <20160715014151.GA13944@balbir.ozlabs.ibm.com>
Reply-To: bsingharora@gmail.com
References: <1468446964-22213-1-git-send-email-keescook@chromium.org> <1468446964-22213-3-git-send-email-keescook@chromium.org> <20160714232019.GA28254@350D> <1468544658.30053.26.camel@redhat.com>
In-Reply-To: <1468544658.30053.26.camel@redhat.com>
List-Id: Linux on PowerPC Developers Mail List

On Thu, Jul 14, 2016 at 09:04:18PM -0400, Rik van Riel wrote:
> On Fri, 2016-07-15 at 09:20 +1000, Balbir Singh wrote:
> > > +	if (likely(((unsigned long)ptr & (unsigned long)PAGE_MASK) ==
> > > +		   ((unsigned long)end & (unsigned long)PAGE_MASK)))
> > > +		return NULL;
> > > +
> > > +	/* Allow if start and end are inside the same compound page. */
> > > +	endpage = virt_to_head_page(end);
> > > +	if (likely(endpage == page))
> > > +		return NULL;
> > > +
> > > +	/* Allow special areas, device memory, and sometimes kernel data. */
> > > +	if (PageReserved(page) && PageReserved(endpage))
> > > +		return NULL;
> >
> > If we came here, it's likely that endpage > page. Do we need to check
> > that only the first and last pages are reserved? What about the ones
> > in the middle?
>
> I think this will be so rare, we can get away with just
> checking the beginning and the end.
>

But do we want to leave a hole where aware user space can attempt a
longer copy_* to get around this check? If that case is so unlikely,
should we just bite the bullet and check the entire range?

Balbir Singh.