* [RFC 6/7] arm64: add __map_kernel_segment to accept additional vm flags
  From: Hoeun Ryu <hoeun.ryu@gmail.com>
  Date: 2017-02-19 10:04 UTC
  To: linux-arm-kernel

The memory attributes of the `__ro_mostly_after_init` section should be
changed via set_memory_rw/set_memory_ro, which do not work on vm areas
that lack the VM_ALLOC flag.

Add __map_kernel_segment(), which takes an explicit vm flags argument,
so that map_kernel() can map the `__ro_mostly_after_init` section with
VM_ALLOC set.

Signed-off-by: Hoeun Ryu <hoeun.ryu@gmail.com>
---
 arch/arm64/mm/mmu.c | 14 +++++++++++---
 1 file changed, 11 insertions(+), 3 deletions(-)

diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index b805c01..91271b1 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -444,8 +444,10 @@ void mark_rodata_ro(void)
         debug_checkwx();
 }
 
-static void __init map_kernel_segment(pgd_t *pgd, void *va_start, void *va_end,
-                                      pgprot_t prot, struct vm_struct *vma)
+static void __init __map_kernel_segment(pgd_t *pgd,
+                                        void *va_start, void *va_end,
+                                        pgprot_t prot, struct vm_struct *vma,
+                                        unsigned long flags)
 {
         phys_addr_t pa_start = __pa_symbol(va_start);
         unsigned long size = va_end - va_start;
@@ -459,12 +461,18 @@ static void __init map_kernel_segment(pgd_t *pgd, void *va_start, void *va_end,
         vma->addr = va_start;
         vma->phys_addr = pa_start;
         vma->size = size;
-        vma->flags = VM_MAP;
+        vma->flags = flags;
         vma->caller = __builtin_return_address(0);
 
         vm_area_add_early(vma);
 }
 
+static void __init map_kernel_segment(pgd_t *pgd, void *va_start, void *va_end,
+                                      pgprot_t prot, struct vm_struct *vma)
+{
+        return __map_kernel_segment(pgd, va_start, va_end, prot, vma, VM_MAP);
+}
+
 /*
  * Create fine-grained mappings for the kernel.
  */
-- 
2.7.4
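For context on the motivation above: the point of mapping the section with VM_ALLOC is that its permissions can later be flipped at runtime through the set_memory_* API. A minimal sketch of what such a toggle could look like follows; the helper names are hypothetical and not part of the posted series, while the section symbols are the ones this series introduces (see patch 7/7 below).

/*
 * Sketch only: hypothetical helpers that flip the __ro_mostly_after_init
 * section between read-write and read-only. They depend on the section
 * having been mapped with VM_ALLOC (the point of the patch above), since
 * the arm64 set_memory_* implementation refuses to touch other vm areas.
 */
#include <linux/kernel.h>
#include <linux/mm.h>
#include <asm/cacheflush.h>     /* set_memory_ro()/set_memory_rw() on arm64 of this era */

extern char __start_data_ro_mostly_after_init[];
extern char __end_data_ro_mostly_after_init[];

static unsigned long ro_mostly_after_init_pages(void)
{
        return (__end_data_ro_mostly_after_init -
                __start_data_ro_mostly_after_init) >> PAGE_SHIFT;
}

int ro_mostly_after_init_set_rw(void)
{
        return set_memory_rw((unsigned long)__start_data_ro_mostly_after_init,
                             ro_mostly_after_init_pages());
}

int ro_mostly_after_init_set_ro(void)
{
        return set_memory_ro((unsigned long)__start_data_ro_mostly_after_init,
                             ro_mostly_after_init_pages());
}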
* [RFC 6/7] arm64: add __map_kernel_segment to accept additional vm flags
  From: Ard Biesheuvel
  Date: 2017-02-19 11:21 UTC
  To: linux-arm-kernel

On 19 February 2017 at 10:04, Hoeun Ryu <hoeun.ryu@gmail.com> wrote:
> The memory attributes of the `__ro_mostly_after_init` section should be
> changed via set_memory_rw/set_memory_ro, which do not work on vm areas
> that lack the VM_ALLOC flag.

This is for a good reason: VMALLOC regions are guaranteed to be mapped
down to pages, which is the only thing set_memory_rX() supports. On
arm64, kernel segments may be mapped using contiguous page mappings or
block mappings, which cannot be modified like this.

> Add __map_kernel_segment(), which takes an explicit vm flags argument,
> so that map_kernel() can map the `__ro_mostly_after_init` section with
> VM_ALLOC set.
>

If we add this functionality, I'd rather we have a special handler for
the __ro_mostly_after_init section and not use set_memory_rX().

> Signed-off-by: Hoeun Ryu <hoeun.ryu@gmail.com>
> ---
>  arch/arm64/mm/mmu.c | 14 +++++++++++---
>  1 file changed, 11 insertions(+), 3 deletions(-)
>
> diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
> index b805c01..91271b1 100644
> --- a/arch/arm64/mm/mmu.c
> +++ b/arch/arm64/mm/mmu.c
> @@ -444,8 +444,10 @@ void mark_rodata_ro(void)
>          debug_checkwx();
>  }
>
> -static void __init map_kernel_segment(pgd_t *pgd, void *va_start, void *va_end,
> -                                      pgprot_t prot, struct vm_struct *vma)
> +static void __init __map_kernel_segment(pgd_t *pgd,
> +                                        void *va_start, void *va_end,
> +                                        pgprot_t prot, struct vm_struct *vma,
> +                                        unsigned long flags)
>  {
>          phys_addr_t pa_start = __pa_symbol(va_start);
>          unsigned long size = va_end - va_start;
> @@ -459,12 +461,18 @@ static void __init map_kernel_segment(pgd_t *pgd, void *va_start, void *va_end,
>          vma->addr = va_start;
>          vma->phys_addr = pa_start;
>          vma->size = size;
> -        vma->flags = VM_MAP;
> +        vma->flags = flags;
>          vma->caller = __builtin_return_address(0);
>
>          vm_area_add_early(vma);
>  }
>
> +static void __init map_kernel_segment(pgd_t *pgd, void *va_start, void *va_end,
> +                                      pgprot_t prot, struct vm_struct *vma)
> +{
> +        return __map_kernel_segment(pgd, va_start, va_end, prot, vma, VM_MAP);
> +}
> +
>  /*
>   * Create fine-grained mappings for the kernel.
>   */
> --
> 2.7.4
>
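The restriction Ard describes comes from the arm64 set_memory_* implementation. Roughly paraphrased (this is a simplified illustration, not the verbatim kernel source of the time), the check looks like this:

/*
 * Rough paraphrase of the permission-change path behind set_memory_rw()/
 * set_memory_ro() on arm64 (arch/arm64/mm/pageattr.c), simplified for
 * illustration: changes are only accepted on a range fully covered by a
 * vmalloc area carrying VM_ALLOC, because such areas are guaranteed to
 * be mapped with base pages rather than contiguous or block mappings.
 */
#include <linux/errno.h>
#include <linux/mm.h>
#include <linux/vmalloc.h>

static int change_memory_common_sketch(unsigned long addr, int numpages)
{
        unsigned long size = PAGE_SIZE * numpages;
        unsigned long end = addr + size;
        struct vm_struct *area;

        area = find_vm_area((void *)addr);
        if (!area ||
            end > (unsigned long)area->addr + area->size ||
            !(area->flags & VM_ALLOC))
                return -EINVAL; /* not a page-mapped vmalloc region */

        /* ... the real code walks the range and rewrites PTE attributes ... */
        return 0;
}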
* [RFC 7/7] arm64: map rodata sections separately for __ro_mostly_after_init section
  From: Hoeun Ryu <hoeun.ryu@gmail.com>
  Date: 2017-02-19 10:04 UTC
  To: linux-arm-kernel

Map the rodata sections separately to make room for the new
__ro_mostly_after_init section. The memory attributes of the
__ro_mostly_after_init section can be changed later, so it needs a
dedicated vmalloc region for the set_memory_rw/set_memory_ro API.

Signed-off-by: Hoeun Ryu <hoeun.ryu@gmail.com>
---
 arch/arm64/mm/mmu.c | 30 ++++++++++++++++++++++++++----
 1 file changed, 26 insertions(+), 4 deletions(-)

diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index 91271b1..4a89a2e 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -434,8 +434,22 @@ void mark_rodata_ro(void)
          * mark .rodata as read only. Use __init_begin rather than __end_rodata
          * to cover NOTES and EXCEPTION_TABLE.
          */
-        section_size = (unsigned long)__init_begin - (unsigned long)__start_rodata;
-        create_mapping_late(__pa_symbol(__start_rodata), (unsigned long)__start_rodata,
+        section_size = (unsigned long)__start_data_ro_mostly_after_init -
+                (unsigned long)__start_rodata;
+        create_mapping_late(__pa_symbol(__start_rodata),
+                            (unsigned long)__start_rodata,
+                            section_size, PAGE_KERNEL_RO);
+
+        section_size = (unsigned long)__end_data_ro_mostly_after_init -
+                (unsigned long)__start_data_ro_mostly_after_init;
+        create_mapping_late(__pa_symbol(__start_data_ro_mostly_after_init),
+                            (unsigned long)__start_data_ro_mostly_after_init,
+                            section_size, PAGE_KERNEL_RO);
+
+        section_size = (unsigned long)__init_begin -
+                (unsigned long)__end_data_ro_mostly_after_init;
+        create_mapping_late(__pa_symbol(__end_data_ro_mostly_after_init),
+                            (unsigned long)__end_data_ro_mostly_after_init,
                             section_size, PAGE_KERNEL_RO);
 
         /* flush the TLBs after updating live kernel mappings */
@@ -478,10 +492,18 @@ static void __init map_kernel_segment(pgd_t *pgd, void *va_start, void *va_end,
  */
 static void __init map_kernel(pgd_t *pgd)
 {
-        static struct vm_struct vmlinux_text, vmlinux_rodata, vmlinux_init, vmlinux_data;
+        static struct vm_struct vmlinux_text, vmlinux_rodata1, vmlinux_rodata2, vmlinux_ro_mostly_after_init, vmlinux_init, vmlinux_data;
 
         map_kernel_segment(pgd, _text, _etext, PAGE_KERNEL_EXEC, &vmlinux_text);
-        map_kernel_segment(pgd, __start_rodata, __init_begin, PAGE_KERNEL, &vmlinux_rodata);
+        map_kernel_segment(pgd, __start_rodata, __start_data_ro_mostly_after_init, PAGE_KERNEL, &vmlinux_rodata1);
+        __map_kernel_segment(pgd,
+                             __start_data_ro_mostly_after_init,
+                             __end_data_ro_mostly_after_init,
+                             PAGE_KERNEL,
+                             &vmlinux_ro_mostly_after_init,
+                             VM_MAP | VM_ALLOC);
+        map_kernel_segment(pgd, __end_data_ro_mostly_after_init, __init_begin, PAGE_KERNEL, &vmlinux_rodata2);
+
         map_kernel_segment(pgd, __init_begin, __init_end, PAGE_KERNEL_EXEC,
                            &vmlinux_init);
         map_kernel_segment(pgd, _data, _end, PAGE_KERNEL, &vmlinux_data);
-- 
2.7.4
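For readers who don't have the earlier patches of this series at hand: the section carved out here is meant to hold data that is only legitimately written during well-defined phases such as subsystem registration or module init. A purely hypothetical sketch of how such a section could be declared and used follows; the macro, the section name, and the example structure are assumptions, since the corresponding patches are not part of this excerpt.

/*
 * Hypothetical illustration only: the real attribute macro and the
 * linker-script changes live in earlier patches of this series that are
 * not shown here, so every name below is an assumption.
 */
#define __ro_mostly_after_init \
        __attribute__((__section__(".data..ro_mostly_after_init")))

struct example_ops {
        int (*do_thing)(void);
};

static int do_thing_impl(void)
{
        return 0;
}

/*
 * Writable during early boot and registration, read-only the rest of the
 * time except through a deliberate set_memory_rw()/set_memory_ro() window.
 */
static struct example_ops example_ops __ro_mostly_after_init = {
        .do_thing = do_thing_impl,
};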
* [RFC 7/7] arm64: map rodata sections separately for __ro_mostly_after_init section
  From: Ard Biesheuvel
  Date: 2017-02-19 11:35 UTC
  To: linux-arm-kernel

On 19 February 2017 at 10:04, Hoeun Ryu <hoeun.ryu@gmail.com> wrote:
> Map the rodata sections separately to make room for the new
> __ro_mostly_after_init section. The memory attributes of the
> __ro_mostly_after_init section can be changed later, so it needs a
> dedicated vmalloc region for the set_memory_rw/set_memory_ro API.
>
> Signed-off-by: Hoeun Ryu <hoeun.ryu@gmail.com>
> ---
>  arch/arm64/mm/mmu.c | 30 ++++++++++++++++++++++++++----
>  1 file changed, 26 insertions(+), 4 deletions(-)
>
> diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
> index 91271b1..4a89a2e 100644
> --- a/arch/arm64/mm/mmu.c
> +++ b/arch/arm64/mm/mmu.c
> @@ -434,8 +434,22 @@ void mark_rodata_ro(void)
>           * mark .rodata as read only. Use __init_begin rather than __end_rodata
>           * to cover NOTES and EXCEPTION_TABLE.
>           */
> -        section_size = (unsigned long)__init_begin - (unsigned long)__start_rodata;
> -        create_mapping_late(__pa_symbol(__start_rodata), (unsigned long)__start_rodata,
> +        section_size = (unsigned long)__start_data_ro_mostly_after_init -
> +                (unsigned long)__start_rodata;
> +        create_mapping_late(__pa_symbol(__start_rodata),
> +                            (unsigned long)__start_rodata,
> +                            section_size, PAGE_KERNEL_RO);
> +
> +        section_size = (unsigned long)__end_data_ro_mostly_after_init -
> +                (unsigned long)__start_data_ro_mostly_after_init;
> +        create_mapping_late(__pa_symbol(__start_data_ro_mostly_after_init),
> +                            (unsigned long)__start_data_ro_mostly_after_init,
> +                            section_size, PAGE_KERNEL_RO);
> +
> +        section_size = (unsigned long)__init_begin -
> +                (unsigned long)__end_data_ro_mostly_after_init;
> +        create_mapping_late(__pa_symbol(__end_data_ro_mostly_after_init),
> +                            (unsigned long)__end_data_ro_mostly_after_init,
>                              section_size, PAGE_KERNEL_RO);
>
>          /* flush the TLBs after updating live kernel mappings */
> @@ -478,10 +492,18 @@ static void __init map_kernel_segment(pgd_t *pgd, void *va_start, void *va_end,
>   */
>  static void __init map_kernel(pgd_t *pgd)
>  {
> -        static struct vm_struct vmlinux_text, vmlinux_rodata, vmlinux_init, vmlinux_data;
> +        static struct vm_struct vmlinux_text, vmlinux_rodata1, vmlinux_rodata2, vmlinux_ro_mostly_after_init, vmlinux_init, vmlinux_data;
>
>          map_kernel_segment(pgd, _text, _etext, PAGE_KERNEL_EXEC, &vmlinux_text);
> -        map_kernel_segment(pgd, __start_rodata, __init_begin, PAGE_KERNEL, &vmlinux_rodata);
> +        map_kernel_segment(pgd, __start_rodata, __start_data_ro_mostly_after_init, PAGE_KERNEL, &vmlinux_rodata1);
> +        __map_kernel_segment(pgd,
> +                             __start_data_ro_mostly_after_init,
> +                             __end_data_ro_mostly_after_init,
> +                             PAGE_KERNEL,
> +                             &vmlinux_ro_mostly_after_init,
> +                             VM_MAP | VM_ALLOC);
> +        map_kernel_segment(pgd, __end_data_ro_mostly_after_init, __init_begin, PAGE_KERNEL, &vmlinux_rodata2);
> +
>          map_kernel_segment(pgd, __init_begin, __init_end, PAGE_KERNEL_EXEC,
>                             &vmlinux_init);
>          map_kernel_segment(pgd, _data, _end, PAGE_KERNEL, &vmlinux_data);
> --
> 2.7.4
>

While it is correct that you are splitting this into three separate
segments (otherwise we would not be able to change the permissions
later without risking having to split the mappings), I think this
leads to unnecessary fragmentation.

If there is demand for this feature (but you still need to make the
argument for that), I wonder if it wouldn't be sufficient, and much
more straightforward, to redefine the __ro_after_init semantics to
include the kind of subsystem registration and module init context you
are targeting, and implement some hooks to temporarily lift the
__ro_after_init r/o permission restrictions in a controlled manner.

Kees: any thoughts?
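A rough sketch of the kind of "controlled lift" hook Ard is describing is shown below. It builds on the equally hypothetical ro_mostly_after_init_set_rw()/set_ro() helpers sketched earlier in this thread; none of these names appear in the posted series.

/*
 * Hypothetical sketch: open a short, serialized window in which the
 * protected data is writable, wrapped around the registration that needs it.
 */
#include <linux/mutex.h>

int ro_mostly_after_init_set_rw(void);  /* from the earlier sketch */
int ro_mostly_after_init_set_ro(void);

static DEFINE_MUTEX(ro_window_lock);

void ro_after_init_writable_begin(void)
{
        mutex_lock(&ro_window_lock);
        ro_mostly_after_init_set_rw();
}

void ro_after_init_writable_end(void)
{
        ro_mostly_after_init_set_ro();
        mutex_unlock(&ro_window_lock);
}

/* Typical call site during subsystem registration or module init: */
void example_register(int (**slot)(void), int (*fn)(void))
{
        ro_after_init_writable_begin();
        *slot = fn;             /* the slot lives in the protected section */
        ro_after_init_writable_end();
}

The obvious downside, which the rest of the thread touches on, is that such a window makes the data writable for every CPU while it is open, not just for the caller.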
* [RFC 7/7] arm64: map rodata sections separately for __ro_mostly_after_init section
  From: Mark Rutland <mark.rutland@arm.com>
  Date: 2017-02-20 12:45 UTC
  To: linux-arm-kernel

On Sun, Feb 19, 2017 at 11:35:51AM +0000, Ard Biesheuvel wrote:
> On 19 February 2017 at 10:04, Hoeun Ryu <hoeun.ryu@gmail.com> wrote:
> > Map the rodata sections separately to make room for the new
> > __ro_mostly_after_init section. The memory attributes of the
> > __ro_mostly_after_init section can be changed later, so it needs a
> > dedicated vmalloc region for the set_memory_rw/set_memory_ro API.

> While it is correct that you are splitting this into three separate
> segments (otherwise we would not be able to change the permissions
> later without risking having to split the mappings), I think this
> leads to unnecessary fragmentation.
>
> If there is demand for this feature (but you still need to make the
> argument for that), I wonder if it wouldn't be sufficient, and much
> more straightforward, to redefine the __ro_after_init semantics to
> include the kind of subsystem registration and module init context you
> are targeting, and implement some hooks to temporarily lift the
> __ro_after_init r/o permission restrictions in a controlled manner.

From a look over the series, I think this is just __write_rarely in
disguise. I personally think that we should keep __write_rarely and
__ro_after_init separate, the latter being a strictly one-shot affair.

I had some ideas [1] as to how we could implement __write_rarely without
carving up the kernel mapping further (and keeping the RW permissions
local to the thread needing it), but I have not had the time to look
into that further.

Thanks,
Mark.

[1] http://www.openwall.com/lists/kernel-hardening/2016/11/18/3
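Mark's reference [1] is about keeping the writable window private to the thread doing the write. Purely as an illustration of the general idea (and not the mechanism proposed in [1]; the alias below is still globally visible, unlike a per-thread mapping), one way to avoid ever making the permanent mapping writable is to write through a short-lived writable alias:

/*
 * Very rough sketch, illustration only: create a temporary writable alias
 * of the page holding the target, write through the alias, then tear it
 * down. The permanent (read-only) mapping is never made writable; only a
 * transient alias at an unrelated address is.
 */
#include <linux/errno.h>
#include <linux/mm.h>
#include <linux/string.h>
#include <linux/vmalloc.h>

static int write_rarely_sketch(void *dst, const void *src, size_t len)
{
        /*
         * Assumption: dst is in the linear map. For a kernel-image symbol
         * on arm64 one would need phys_to_page(__pa_symbol(dst)) instead.
         */
        struct page *page = virt_to_page(dst);
        unsigned long offset = (unsigned long)dst & ~PAGE_MASK;
        void *alias;

        if (offset + len > PAGE_SIZE)
                return -EINVAL;         /* keep the sketch to a single page */

        alias = vmap(&page, 1, VM_MAP, PAGE_KERNEL);
        if (!alias)
                return -ENOMEM;

        memcpy(alias + offset, src, len);       /* write via the temporary RW alias */
        vunmap(alias);

        return 0;
}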
* [RFC 7/7] arm64: map rodata sections separately for __ro_mostly_after_init section
  From: Kees Cook
  Date: 2017-02-21 20:38 UTC
  To: linux-arm-kernel

On Mon, Feb 20, 2017 at 4:45 AM, Mark Rutland <mark.rutland@arm.com> wrote:
> On Sun, Feb 19, 2017 at 11:35:51AM +0000, Ard Biesheuvel wrote:
>> On 19 February 2017 at 10:04, Hoeun Ryu <hoeun.ryu@gmail.com> wrote:
>> > Map the rodata sections separately to make room for the new
>> > __ro_mostly_after_init section. The memory attributes of the
>> > __ro_mostly_after_init section can be changed later, so it needs a
>> > dedicated vmalloc region for the set_memory_rw/set_memory_ro API.
>
>> While it is correct that you are splitting this into three separate
>> segments (otherwise we would not be able to change the permissions
>> later without risking having to split the mappings), I think this
>> leads to unnecessary fragmentation.
>>
>> If there is demand for this feature (but you still need to make the
>> argument for that), I wonder if it wouldn't be sufficient, and much
>> more straightforward, to redefine the __ro_after_init semantics to
>> include the kind of subsystem registration and module init context you
>> are targeting, and implement some hooks to temporarily lift the
>> __ro_after_init r/o permission restrictions in a controlled manner.
>
> From a look over the series, I think this is just __write_rarely in
> disguise. I personally think that we should keep __write_rarely and
> __ro_after_init separate, the latter being a strictly one-shot affair.

That's my thinking too.

> I had some ideas [1] as to how we could implement __write_rarely without
> carving up the kernel mapping further (and keeping the RW permissions
> local to the thread needing it), but I have not had the time to look
> into that further.

I'm working on a series to do this for x86, but I keep getting
distracted. I hope to get an RFC posted this week.

-Kees

-- 
Kees Cook
Pixel Security