Subject: Re: [PATCH v3 02/11] mm: Hardened usercopy
From: Laura Abbott
Date: Wed, 20 Jul 2016 08:36:38 -0700
Message-ID: <2aabe10d-2ccb-2ba6-18bb-b7f52d70d36c@redhat.com>
In-Reply-To: <1469010283.2800.5.camel@gmail.com>
References: <1468619065-3222-1-git-send-email-keescook@chromium.org>
 <1468619065-3222-3-git-send-email-keescook@chromium.org>
 <1469010283.2800.5.camel@gmail.com>
To: Balbir Singh, Kees Cook
Cc: LKML, Daniel Micay, Josh Poimboeuf, Rik van Riel, Casey Schaufler,
 PaX Team, Brad Spengler, Russell King, Catalin Marinas, Will Deacon,
 Ard Biesheuvel, Benjamin Herrenschmidt, Michael Ellerman, Tony Luck,
 Fenghua Yu, "David S. Miller", x86@kernel.org, Christoph Lameter,
 Pekka Enberg, David Rientjes, Joonsoo Kim, Andrew Morton,
 Andy Lutomirski, Borislav Petkov, Mathias Krause, Jan Kara,
 Vitaly Wool, Andrea Arcangeli, Dmitry Vyukov,
 linux-arm-kernel@lists.infradead.org, linux-ia64@vger.kernel.org,
 linuxppc-dev@lists.ozlabs.org, sparclinux, linux-arch, Linux-MM,
 kernel-hardening@lists.openwall.com

On 07/20/2016 03:24 AM, Balbir Singh wrote:
> On Tue, 2016-07-19 at 11:48 -0700, Kees Cook wrote:
>> On Mon, Jul 18, 2016 at 6:06 PM, Laura Abbott wrote:
>>>
>>> On 07/15/2016 02:44 PM, Kees Cook wrote:
>>>
>>> This doesn't work when copying CMA-allocated memory, since CMA
>>> purposely allocates blocks larger than a page without setting head
>>> pages. Given that CMA may be used by drivers doing zero-copy
>>> buffers, I think it should be permitted.
>>>
>>> Something like the following lets it pass (I can clean up and submit
>>> the is_migrate_cma_page APIs as a separate patch for review):
>>
>> Yeah, this would be great. I'd rather use an accessor to check this
>> than a direct check for MIGRATE_CMA.
>>
>>>  	 */
>>>  	for (; ptr <= end ; ptr += PAGE_SIZE, page = virt_to_head_page(ptr)) {
>>> -		if (!PageReserved(page))
>>> +		if (!PageReserved(page) && !is_migrate_cma_page(page))
>>>  			return "";
>>>  	}
>>
>> Yeah, I'll modify this a bit so that whichever type it starts as is
>> maintained for all pages (rather than allowing it to flip back and
>> forth, even though that is likely impossible).
>>
> Sorry, I completely missed the MIGRATE_CMA bits. Could you clarify
> whether you caught this in testing or in review?
>
> Balbir Singh.

I caught it while looking at the code, and then wrote a test case to
confirm I was correct, because I wasn't sure how to easily find an
in-tree user.

Thanks,
Laura

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in the body to
majordomo@kvack.org. For more info on Linux MM, see:
http://www.linux-mm.org/ . Don't email: email@kvack.org