From mboxrd@z Thu Jan  1 00:00:00 1970
From: Heiko Carstens
Subject: Re: [RFC][PATCH] s390, postinit-readonly: implement post-init RO
Date: Tue, 8 Mar 2016 12:43:15 +0100
Message-ID: <20160308114315.GA3869@osiris>
References: <20160308002035.GA13606@www.outflux.net> <56DE9279.6040805@de.ibm.com>
In-Reply-To: <56DE9279.6040805@de.ibm.com>
Reply-To: kernel-hardening@lists.openwall.com
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
To: Christian Borntraeger
Cc: Kees Cook, Martin Schwidefsky, Ingo Molnar, David Brown,
 Andy Lutomirski, "H. Peter Anvin", Michael Ellerman, Mathias Krause,
 Thomas Gleixner, "x86@kernel.org", Arnd Bergmann, PaX Team, Emese Revfy,
 "kernel-hardening@lists.openwall.com", LKML, linux-arch, linux-s390
List-Id: linux-arch.vger.kernel.org

On Tue, Mar 08, 2016 at 09:51:05AM +0100, Christian Borntraeger wrote:
> On 03/08/2016 01:41 AM, Kees Cook wrote:
>
> >> --- a/arch/s390/kernel/vmlinux.lds.S
> >> +++ b/arch/s390/kernel/vmlinux.lds.S
> >> @@ -52,6 +52,12 @@ SECTIONS
> >>
> >>  	RW_DATA_SECTION(0x100, PAGE_SIZE, THREAD_SIZE)
> >>
> >> +	. = ALIGN(PAGE_SIZE)
>
> missing ";" ?
>
> With that and your fixes, this function claims to mark 0kB and lkdtm can
> still write. The reason is that _edata is 0xc11008 and start is 0x0c11000.
>
> Making _edata page aligned as well now tries to mark one page, but then
> we run into the next issue:
>
> static void change_page_attr(unsigned long addr, int numpages,
>			      pte_t (*set)(pte_t))
> {
>	pte_t *ptep;
>	int i;
>
>	for (i = 0; i < numpages; i++) {
>		ptep = walk_page_table(addr);
>
> triggers this
>
>		if (WARN_ON_ONCE(!ptep))
>			break;
>
> because the kernel decided to map this with a large page. So we need to
> fix this function to break the large page into smaller chunks...

Yes... however that's a rather large change. I'll try to come up with a
patch that has less impact and implement the code that splits the kernel
mapping later.

Looking at our vmemmap code makes me realize that this code also needs to
be improved.
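
For reference, a minimal sketch of the kind of marking function discussed
above, showing why a non-page-aligned _edata ends up protecting "0kB". The
section symbol names (__start_ro_after_init/__end_ro_after_init) are
assumptions for illustration only, not necessarily the names used in the
actual patch:

	/*
	 * Illustration only: hypothetical section markers that would be
	 * defined in vmlinux.lds.S around the ro-after-init data.
	 */
	extern char __start_ro_after_init[], __end_ro_after_init[];

	void mark_rodata_ro(void)
	{
		unsigned long start = (unsigned long)&__start_ro_after_init;
		unsigned long end = (unsigned long)&__end_ro_after_init;

		/*
		 * With end = 0xc11008 and start = 0xc11000 the shift below
		 * rounds the size down to zero pages, so nothing is write
		 * protected.  Page aligning _edata in the linker script (or
		 * rounding "end" up with PAGE_ALIGN()) makes this cover the
		 * whole page.
		 */
		set_memory_ro(start, (end - start) >> PAGE_SHIFT);
		pr_info("Write protected read-only-after-init data: %luk\n",
			(end - start) >> 10);
	}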
Peter Anvin" , Michael Ellerman , Mathias Krause , Thomas Gleixner , "x86@kernel.org" , Arnd Bergmann , PaX Team , Emese Revfy , "kernel-hardening@lists.openwall.com" , LKML , linux-arch , linux-s390 Message-ID: <20160308114315.Mtwf10-tlQEtJc3RNMDtRvnyf_LkUR4PNthJnewagF8@z> On Tue, Mar 08, 2016 at 09:51:05AM +0100, Christian Borntraeger wrote: > On 03/08/2016 01:41 AM, Kees Cook wrote: > > >> --- a/arch/s390/kernel/vmlinux.lds.S > >> +++ b/arch/s390/kernel/vmlinux.lds.S > >> @@ -52,6 +52,12 @@ SECTIONS > >> > >> RW_DATA_SECTION(0x100, PAGE_SIZE, THREAD_SIZE) > >> > >> + . = ALIGN(PAGE_SIZE) > > > missing ";" ? > > > With that and your fixes, this function claims to mark 0kB and > lkdtm can still write. Reason is that _edata is 0xc11008 and start is > 0x0c11000. > > making _edata page aligned as well, does now try to mark one page, but then > we run into the next issue, that > > static void change_page_attr(unsigned long addr, int numpages, > pte_t (*set) (pte_t)) > { > pte_t *ptep; > int i; > > for (i = 0; i < numpages; i++) { > ptep = walk_page_table(addr); > > triggers this > if (WARN_ON_ONCE(!ptep)) > break; > > because the kernel decided to map this with a large page. So we need > to fix this function to then break the large page into a smaller chunk.... Yes... however that's a rather large change. I'll try to come up with a patch that has less impact and implement the code that splits the kernel mapping later. Looking at our vmemmap code makes me realize that this code needs also to be improved.