Date: Tue, 27 Oct 2020 12:00:40 +0200
From: Mike Rapoport
To: Atish Patra
Cc: linux-kernel@vger.kernel.org, Albert Ou, Andrew Morton, Anup Patel,
	Ard Biesheuvel, Borislav Petkov, Greentime Hu, Kees Cook,
	linux-riscv@lists.infradead.org, Michel Lespinasse, Palmer Dabbelt,
	Paul Walmsley, Zong Li
Subject: Re: [PATCH v2 3/6] RISC-V: Enforce protections for kernel sections early
Message-ID: <20201027100040.GK1154158@kernel.org>
References: <20201026230254.911912-1-atish.patra@wdc.com>
 <20201026230254.911912-4-atish.patra@wdc.com>
In-Reply-To: <20201026230254.911912-4-atish.patra@wdc.com>

On Mon, Oct 26, 2020 at 04:02:51PM -0700, Atish Patra wrote:
> Currently, all memblocks are mapped with PAGE_KERNEL_EXEC and the strict
> permissions are only enforced after /init starts. This leaves the kernel
> vulnerable to possibly buggy built-in modules.
>
> Apply permissions to individual sections as early as possible.
>
> Signed-off-by: Atish Patra
> ---
>  arch/riscv/include/asm/set_memory.h |  2 ++
>  arch/riscv/kernel/setup.c           |  2 ++
>  arch/riscv/mm/init.c                | 11 +++++++++--
>  3 files changed, 13 insertions(+), 2 deletions(-)
>
> diff --git a/arch/riscv/include/asm/set_memory.h b/arch/riscv/include/asm/set_memory.h
> index 4c5bae7ca01c..4cc3a4e2afd3 100644
> --- a/arch/riscv/include/asm/set_memory.h
> +++ b/arch/riscv/include/asm/set_memory.h
> @@ -15,11 +15,13 @@ int set_memory_ro(unsigned long addr, int numpages);
>  int set_memory_rw(unsigned long addr, int numpages);
>  int set_memory_x(unsigned long addr, int numpages);
>  int set_memory_nx(unsigned long addr, int numpages);
> +void protect_kernel_text_data(void);
>  #else
>  static inline int set_memory_ro(unsigned long addr, int numpages) { return 0; }
>  static inline int set_memory_rw(unsigned long addr, int numpages) { return 0; }
>  static inline int set_memory_x(unsigned long addr, int numpages) { return 0; }
>  static inline int set_memory_nx(unsigned long addr, int numpages) { return 0; }
> +static inline void protect_kernel_text_data(void) {};
>  #endif
>
>  int set_direct_map_invalid_noflush(struct page *page);
> diff --git a/arch/riscv/kernel/setup.c b/arch/riscv/kernel/setup.c
> index 7d6a04ae3929..b722c5bf892c 100644
> --- a/arch/riscv/kernel/setup.c
> +++ b/arch/riscv/kernel/setup.c
> @@ -22,6 +22,7 @@
>  #include
>  #include
>  #include
> +#include
>  #include
>  #include
>  #include
> @@ -92,6 +93,7 @@ void __init setup_arch(char **cmdline_p)
>  #if IS_ENABLED(CONFIG_RISCV_SBI)
>  	sbi_init();
>  #endif
> +	protect_kernel_text_data();
>  #ifdef CONFIG_SWIOTLB
>  	swiotlb_init(1);
>  #endif
> diff --git a/arch/riscv/mm/init.c b/arch/riscv/mm/init.c
> index ea933b789a88..5f196f8158d4 100644
> --- a/arch/riscv/mm/init.c
> +++ b/arch/riscv/mm/init.c
> @@ -608,7 +608,7 @@ static inline void setup_vm_final(void)
>  #endif /* CONFIG_MMU */
>
>  #ifdef CONFIG_STRICT_KERNEL_RWX
> -void mark_rodata_ro(void)
> +void protect_kernel_text_data(void)
>  {
>  	unsigned long text_start = (unsigned long)_text;
>  	unsigned long text_end = (unsigned long)_etext;
> @@ -617,9 +617,16 @@ void mark_rodata_ro(void)
>  	unsigned long max_low = (unsigned long)(__va(PFN_PHYS(max_low_pfn)));
>

A comment noting that the rodata permissions are set later would be nice here.

>  	set_memory_ro(text_start, (text_end - text_start) >> PAGE_SHIFT);
> -	set_memory_ro(rodata_start, (data_start - rodata_start) >> PAGE_SHIFT);
>  	set_memory_nx(rodata_start, (data_start - rodata_start) >> PAGE_SHIFT);
>  	set_memory_nx(data_start, (max_low - data_start) >> PAGE_SHIFT);
> +}
> +
> +void mark_rodata_ro(void)
> +{
> +	unsigned long rodata_start = (unsigned long)__start_rodata;
> +	unsigned long data_start = (unsigned long)_data;
> +
> +	set_memory_ro(rodata_start, (data_start - rodata_start) >> PAGE_SHIFT);
>
>  	debug_checkwx();
> }
> --
> 2.25.1

-- 
Sincerely yours,
Mike.
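P.S. Regarding the comment I asked for above set_memory_ro(): something
along these lines would do (wording is only a suggestion, based on my
reading of the patch):

```c
	/*
	 * rodata is only made non-executable at this point; it stays
	 * writable until mark_rodata_ro() switches it to read-only
	 * later in boot, after the kernel is done writing to it.
	 */
```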