Date: Mon, 20 Jun 2022 10:46:35 +0530
Subject: Re: [PATCH V3 1/2] mm/mmap: Restrict generic protection_map[] array visibility
From: Anshuman Khandual <anshuman.khandual@arm.com>
To: Christophe Leroy, linux-mm@kvack.org
Cc: hch@infradead.org, Andrew Morton, linux-kernel@vger.kernel.org, Christoph Hellwig
References: <20220616040924.1022607-1-anshuman.khandual@arm.com> <20220616040924.1022607-2-anshuman.khandual@arm.com> <4830e415-cdbb-7050-ebd6-7480493655ef@csgroup.eu>
In-Reply-To: <4830e415-cdbb-7050-ebd6-7480493655ef@csgroup.eu>
Content-Type: text/plain; charset=UTF-8
On 6/16/22 11:05, Christophe Leroy wrote:
>
> On 16/06/2022 at 06:09, Anshuman Khandual wrote:
>> Restrict generic protection_map[] array visibility only for platforms which
>> do not enable ARCH_HAS_VM_GET_PAGE_PROT. For other platforms that do define
>> their own vm_get_page_prot() enabling ARCH_HAS_VM_GET_PAGE_PROT, could have
>> their private static protection_map[] still implementing an array look up.
>> These private protection_map[] array could do without __PXXX/__SXXX macros,
>> making them redundant and dropping them off as well.
>>
>> But platforms which do not define their custom vm_get_page_prot() enabling
>> ARCH_HAS_VM_GET_PAGE_PROT, will still have to provide __PXXX/__SXXX macros.
>>
>> Cc: Andrew Morton
>> Cc: linux-mm@kvack.org
>> Cc: linux-kernel@vger.kernel.org
>> Acked-by: Christoph Hellwig
>> Signed-off-by: Anshuman Khandual
>> ---
>>  arch/arm64/include/asm/pgtable-prot.h | 18 ------------------
>>  arch/arm64/mm/mmap.c                  | 21 +++++++++++++++++++++
>>  arch/powerpc/include/asm/pgtable.h    |  2 ++
>>  arch/powerpc/mm/book3s64/pgtable.c    | 20 ++++++++++++++++++++
>>  arch/sparc/include/asm/pgtable_64.h   | 19 -------------------
>>  arch/sparc/mm/init_64.c               |  3 +++
>>  arch/x86/include/asm/pgtable_types.h  | 19 -------------------
>>  arch/x86/mm/pgprot.c                  | 19 +++++++++++++++++++
>>  include/linux/mm.h                    |  2 ++
>>  mm/mmap.c                             |  2 +-
>>  10 files changed, 68 insertions(+), 57 deletions(-)
>>
>> diff --git a/arch/powerpc/include/asm/pgtable.h b/arch/powerpc/include/asm/pgtable.h
>> index d564d0ecd4cd..8ed2a80c896e 100644
>> --- a/arch/powerpc/include/asm/pgtable.h
>> +++ b/arch/powerpc/include/asm/pgtable.h
>> @@ -21,6 +21,7 @@ struct mm_struct;
>>  #endif /* !CONFIG_PPC_BOOK3S */
>>
>>  /* Note due to the way vm flags are laid out, the bits are XWR */
>> +#ifndef CONFIG_ARCH_HAS_VM_GET_PAGE_PROT
>
> This ifdef is not necessary for now, it doesn't matter if __P000 etc
> still exist though
not used.
>
>>  #define __P000	PAGE_NONE
>>  #define __P001	PAGE_READONLY
>>  #define __P010	PAGE_COPY
>> @@ -38,6 +39,7 @@ struct mm_struct;
>>  #define __S101	PAGE_READONLY_X
>>  #define __S110	PAGE_SHARED_X
>>  #define __S111	PAGE_SHARED_X
>> +#endif
>>
>>  #ifndef __ASSEMBLY__
>>
>> diff --git a/arch/powerpc/mm/book3s64/pgtable.c b/arch/powerpc/mm/book3s64/pgtable.c
>> index 7b9966402b25..d3b019b95c1d 100644
>> --- a/arch/powerpc/mm/book3s64/pgtable.c
>> +++ b/arch/powerpc/mm/book3s64/pgtable.c
>> @@ -551,6 +551,26 @@ unsigned long memremap_compat_align(void)
>>  EXPORT_SYMBOL_GPL(memremap_compat_align);
>>  #endif
>>
>> +/* Note due to the way vm flags are laid out, the bits are XWR */
>> +static const pgprot_t protection_map[16] = {
>> +	[VM_NONE]					= PAGE_NONE,
>> +	[VM_READ]					= PAGE_READONLY,
>> +	[VM_WRITE]					= PAGE_COPY,
>> +	[VM_WRITE | VM_READ]				= PAGE_COPY,
>> +	[VM_EXEC]					= PAGE_READONLY_X,
>> +	[VM_EXEC | VM_READ]				= PAGE_READONLY_X,
>> +	[VM_EXEC | VM_WRITE]				= PAGE_COPY_X,
>> +	[VM_EXEC | VM_WRITE | VM_READ]			= PAGE_COPY_X,
>> +	[VM_SHARED]					= PAGE_NONE,
>> +	[VM_SHARED | VM_READ]				= PAGE_READONLY,
>> +	[VM_SHARED | VM_WRITE]				= PAGE_SHARED,
>> +	[VM_SHARED | VM_WRITE | VM_READ]		= PAGE_SHARED,
>> +	[VM_SHARED | VM_EXEC]				= PAGE_READONLY_X,
>> +	[VM_SHARED | VM_EXEC | VM_READ]			= PAGE_READONLY_X,
>> +	[VM_SHARED | VM_EXEC | VM_WRITE]		= PAGE_SHARED_X,
>> +	[VM_SHARED | VM_EXEC | VM_WRITE | VM_READ]	= PAGE_SHARED_X
>> +};
>> +
>
> There is not much point in first adding that here and then moving it
> elsewhere in the second patch.
>
> I think with my suggestion to use #ifdef __P000 as a guard, the powerpc
> changes could go in a single patch.
>
>>  pgprot_t vm_get_page_prot(unsigned long vm_flags)
>>  {
>>  	unsigned long prot = pgprot_val(protection_map[vm_flags &
>> diff --git a/mm/mmap.c b/mm/mmap.c
>> index 61e6135c54ef..e66920414945 100644
>> --- a/mm/mmap.c
>> +++ b/mm/mmap.c
>> @@ -101,6 +101,7 @@ static void unmap_region(struct mm_struct *mm,
>>   * w: (no) no
>>   * x: (yes) yes
>>   */
>> +#ifndef CONFIG_ARCH_HAS_VM_GET_PAGE_PROT
>
> You should use #ifdef __P000 instead, that way you could migrate
> architectures one by one.

If vm_get_page_prot() gets moved into all platforms, wondering what would be
the preferred method to organize this patch series?

1. Move protection_map[] inside platforms with ARCH_HAS_VM_GET_PAGE_PROT
   (current patch 1)

2. Convert remaining platforms to use ARCH_HAS_VM_GET_PAGE_PROT one after
   the other

3. Drop ARCH_HAS_VM_GET_PAGE_PROT completely

Using "#ifdef __P000" to wrap protection_map[] will leave two different
#ifdefs in flight, i.e. __P000 and ARCH_HAS_VM_GET_PAGE_PROT, in the generic
mmap code until both eventually get dropped. But using "#ifdef __P000" does
enable splitting the first patch into multiple changes, one per platform.