* [PATCH bpf 0/3] Three BPF fixes
@ 2018-06-28 21:34 Daniel Borkmann
2018-06-28 21:34 ` [PATCH bpf 1/3] bpf, arm32: fix to use bpf_jit_binary_lock_ro api Daniel Borkmann
` (3 more replies)
0 siblings, 4 replies; 9+ messages in thread
From: Daniel Borkmann @ 2018-06-28 21:34 UTC (permalink / raw)
To: ast; +Cc: netdev, Daniel Borkmann
This set contains three fixes that are mostly JIT and set_memory_*()
related. The third in the series in particular fixes the syzkaller
bugs that were still pending; aside from local reproduction & testing,
also 'syz test' wasn't able to trigger them anymore. I've tested this
series on x86_64, arm64 and s390x, and kbuild bot wasn't yelling either
for the rest. For details, please see patches as usual, thanks!
Daniel Borkmann (3):
bpf, arm32: fix to use bpf_jit_binary_lock_ro api
bpf, s390: fix potential memleak when later bpf_jit_prog fails
bpf: undo prog rejection on read-only lock failure
arch/arm/net/bpf_jit_32.c | 2 +-
arch/s390/net/bpf_jit_comp.c | 1 +
include/linux/filter.h | 56 +++++++-------------------------------------
kernel/bpf/core.c | 30 +-----------------------
4 files changed, 11 insertions(+), 78 deletions(-)
--
2.9.5
^ permalink raw reply [flat|nested] 9+ messages in thread
* [PATCH bpf 1/3] bpf, arm32: fix to use bpf_jit_binary_lock_ro api
2018-06-28 21:34 [PATCH bpf 0/3] Three BPF fixes Daniel Borkmann
@ 2018-06-28 21:34 ` Daniel Borkmann
2018-06-28 21:34 ` [PATCH bpf 2/3] bpf, s390: fix potential memleak when later bpf_jit_prog fails Daniel Borkmann
` (2 subsequent siblings)
3 siblings, 0 replies; 9+ messages in thread
From: Daniel Borkmann @ 2018-06-28 21:34 UTC (permalink / raw)
To: ast; +Cc: netdev, Daniel Borkmann
Any eBPF JIT whose underlying arch supports ARCH_HAS_SET_MEMORY needs to
use the bpf_jit_binary_{un,}lock_ro() pair instead of calling
set_memory_{ro,rw}() directly, as otherwise future changes to the former
might break such JITs. arm32's eBPF conversion missed this, so fix it up
here.
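A userspace sketch (not kernel code) of why going through the wrapper matters: any bookkeeping the wrapper gains later is picked up by every arch JIT automatically, while a direct set_memory_ro() caller silently misses it. All names below are illustrative mocks, not the real kernel APIs.

```c
/* Mock of the wrapper-vs-direct-call distinction; illustrative only. */
#include <assert.h>
#include <stdbool.h>

struct mock_binary_header {
	unsigned int pages;
	bool ro;
	bool extra_bookkeeping;	/* stands in for state the API may add later */
};

static void mock_set_memory_ro(struct mock_binary_header *hdr)
{
	hdr->ro = true;
}

/* The wrapper: arch JITs call this, never mock_set_memory_ro() directly. */
static void mock_jit_binary_lock_ro(struct mock_binary_header *hdr)
{
	mock_set_memory_ro(hdr);
	hdr->extra_bookkeeping = true;	/* added centrally, once, for all JITs */
}
```

A JIT using the wrapper gets both effects; a direct caller (the arm32 bug) would get only the page-attribute change and miss any bookkeeping the wrapper grows.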
Fixes: 39c13c204bb1 ("arm: eBPF JIT compiler")
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Alexei Starovoitov <ast@kernel.org>
---
arch/arm/net/bpf_jit_32.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/arm/net/bpf_jit_32.c b/arch/arm/net/bpf_jit_32.c
index 6e8b716..f6a62ae 100644
--- a/arch/arm/net/bpf_jit_32.c
+++ b/arch/arm/net/bpf_jit_32.c
@@ -1844,7 +1844,7 @@ struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *prog)
/* there are 2 passes here */
bpf_jit_dump(prog->len, image_size, 2, ctx.target);
- set_memory_ro((unsigned long)header, header->pages);
+ bpf_jit_binary_lock_ro(header);
prog->bpf_func = (void *)ctx.target;
prog->jited = 1;
prog->jited_len = image_size;
--
2.9.5
* [PATCH bpf 2/3] bpf, s390: fix potential memleak when later bpf_jit_prog fails
2018-06-28 21:34 [PATCH bpf 0/3] Three BPF fixes Daniel Borkmann
2018-06-28 21:34 ` [PATCH bpf 1/3] bpf, arm32: fix to use bpf_jit_binary_lock_ro api Daniel Borkmann
@ 2018-06-28 21:34 ` Daniel Borkmann
2018-06-28 21:34 ` [PATCH bpf 3/3] bpf: undo prog rejection on read-only lock failure Daniel Borkmann
2018-06-29 17:54 ` [PATCH bpf 0/3] Three BPF fixes Alexei Starovoitov
3 siblings, 0 replies; 9+ messages in thread
From: Daniel Borkmann @ 2018-06-28 21:34 UTC (permalink / raw)
To: ast; +Cc: netdev, Daniel Borkmann, Martin Schwidefsky
If we ever fail in the bpf_jit_prog() pass that writes the actual insns
to the image after we got the header via bpf_jit_binary_alloc(), then we
also need to make sure to free it through bpf_jit_binary_free() when
bailing out. Given the prior bpf_jit_prog() passes already probe for
clobbered registers and program size and fill in the addrs array for jump
targets, this is more of a theoretical issue, but at least make sure it
doesn't break with future changes.
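A userspace sketch (not the real s390 JIT) of the unwind rule the fix applies: any bail-out taken after the header allocation succeeded must free the header on its way out. All names are illustrative mocks.

```c
/* Mock of the error-path cleanup in bpf_int_jit_compile(); illustrative. */
#include <assert.h>
#include <stdlib.h>

static int mock_live_allocs;	/* counts outstanding header allocations */

static void *mock_jit_binary_alloc(void)
{
	void *header = malloc(64);

	if (header)
		mock_live_allocs++;
	return header;
}

static void mock_jit_binary_free(void *header)
{
	mock_live_allocs--;
	free(header);
}

/* Models the compile flow: 'prog_pass_fails' forces the final
 * bpf_jit_prog()-style pass to fail after the image was allocated. */
static int mock_jit_compile(int prog_pass_fails)
{
	void *header = mock_jit_binary_alloc();

	if (!header)
		return -1;
	if (prog_pass_fails) {
		/* the one-line fix: free the image before bailing out */
		mock_jit_binary_free(header);
		return -1;
	}
	mock_jit_binary_free(header);	/* teardown for this sketch only */
	return 0;
}
```

Without the free on the failure path, the allocation counter would stay nonzero after a failed compile, which is exactly the leak the patch closes.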
Fixes: 054623105728 ("s390/bpf: Add s390x eBPF JIT compiler backend")
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Acked-by: Alexei Starovoitov <ast@kernel.org>
---
arch/s390/net/bpf_jit_comp.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/arch/s390/net/bpf_jit_comp.c b/arch/s390/net/bpf_jit_comp.c
index d2db8ac..5f0234e 100644
--- a/arch/s390/net/bpf_jit_comp.c
+++ b/arch/s390/net/bpf_jit_comp.c
@@ -1286,6 +1286,7 @@ struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *fp)
goto free_addrs;
}
if (bpf_jit_prog(&jit, fp)) {
+ bpf_jit_binary_free(header);
fp = orig_fp;
goto free_addrs;
}
--
2.9.5
* [PATCH bpf 3/3] bpf: undo prog rejection on read-only lock failure
2018-06-28 21:34 [PATCH bpf 0/3] Three BPF fixes Daniel Borkmann
2018-06-28 21:34 ` [PATCH bpf 1/3] bpf, arm32: fix to use bpf_jit_binary_lock_ro api Daniel Borkmann
2018-06-28 21:34 ` [PATCH bpf 2/3] bpf, s390: fix potential memleak when later bpf_jit_prog fails Daniel Borkmann
@ 2018-06-28 21:34 ` Daniel Borkmann
2018-06-29 18:42 ` Kees Cook
2018-06-29 17:54 ` [PATCH bpf 0/3] Three BPF fixes Alexei Starovoitov
3 siblings, 1 reply; 9+ messages in thread
From: Daniel Borkmann @ 2018-06-28 21:34 UTC (permalink / raw)
To: ast; +Cc: netdev, Daniel Borkmann, Laura Abbott, Kees Cook
Partially undo commit 9facc336876f ("bpf: reject any prog that failed
read-only lock") since it caused a regression: syzkaller managed to cause
a panic via fault injection deep in the set_memory_ro() path by letting an
allocation fail. In x86's __change_page_attr_set_clr() it was able to
change the attributes of the primary mapping but not of the alias mapping
via cpa_process_alias(), because the second, inner call to
__change_page_attr() had to split a larger page and failed in
alloc_pages() with the artificially triggered allocation error, which was
then propagated down to the call site.

Thus, set_memory_ro() returned with an error, but debugging with a
probe_kernel_write() revealed EFAULT on that memory since the change to
the primary mapping had succeeded. The subsequent hdr->locked = 0 reset
was therefore performed on read-only memory and triggered the panic. The
call-site assumption that set_memory_ro() would either fully succeed /or/
not succeed at all was in fact wrong, since there is no rollback in the
set_memory_*() calls after a partial change of mappings; in other words,
we are left in a state that is "half done". A later undo via
set_memory_rw() succeeds, though, due to matching permissions on that
part (aka due to try_preserve_large_page() succeeding). While reproducing
locally with explicit triggering of this error, the initial splitting only
happened on rare occasions, and in the real world it would additionally
need oom conditions; but that said, it can partially fail.

Therefore, it is definitely wrong to bail out on set_memory_ro() error and
reject the program with the set_memory_*() semantics we have today. We
shouldn't have gone the extra mile here, since no other user in the tree
today in fact checks for set_memory_*() errors: neither
module_enable_ro() / module_disable_ro() for module RO/NX handling (which
is mostly default these days) nor the kprobes core with alloc_insn_page()
/ free_insn_page() do so, as examples that can be invoked long after
bootup. The original 314beb9bcabf ("x86: bpf_jit_comp: secure bpf jit
against spraying attacks") did not check either when this was first
introduced to BPF, so "improving" it by bailing out was clearly not right
while set_memory_*() cannot handle such failures today.
Kees suggested that if set_memory_*() can fail, we should annotate it with
__must_check, and all callers need to deal with it gracefully given those
set_memory_*() markings aren't "advisory", but they're expected to
actually do what they say. This might be an option worth moving forward
with in future, but it would at the same time require that set_memory_*()
calls from supporting archs are guaranteed to be "atomic" in that they
provide rollback if part of the range fails. Once that is the case, the
transition from RW -> RO could be made more robust that way, while the
subsequent RO -> RW transition /must/ continue guaranteeing to always
succeed the undo part.
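A userspace sketch modeling the "half done" state described above: a mock set_memory_ro() that flips the primary mapping to read-only but fails on the alias mapping, with no rollback, while the later RW undo still succeeds. Names are illustrative, not the real kernel APIs.

```c
/* Mock of a partial set_memory_ro() failure without rollback. */
#include <assert.h>
#include <stdbool.h>

struct mock_mapping {
	bool primary_ro;
	bool alias_ro;
};

/* The primary attribute change happens first; a failed large-page split
 * on the alias returns an error without undoing the primary change. */
static int mock_set_memory_ro(struct mock_mapping *m, bool split_alloc_fails)
{
	m->primary_ro = true;		/* primary mapping already flipped */
	if (split_alloc_fails)
		return -12;		/* -ENOMEM; no rollback of primary */
	m->alias_ro = true;
	return 0;
}

/* Matching permissions mean the RO -> RW undo always succeeds here. */
static int mock_set_memory_rw(struct mock_mapping *m)
{
	m->primary_ro = false;
	m->alias_ro = false;
	return 0;
}
```

The hazard the commit describes is exactly this state: an error return, yet the primary mapping is read-only, so a caller-side write like hdr->locked = 0 would fault.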
Reported-by: syzbot+a4eb8c7766952a1ca872@syzkaller.appspotmail.com
Reported-by: syzbot+d866d1925855328eac3b@syzkaller.appspotmail.com
Fixes: 9facc336876f ("bpf: reject any prog that failed read-only lock")
Cc: Laura Abbott <labbott@redhat.com>
Cc: Kees Cook <keescook@chromium.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Alexei Starovoitov <ast@kernel.org>
---
include/linux/filter.h | 56 ++++++++------------------------------------------
kernel/bpf/core.c | 30 +--------------------------
2 files changed, 9 insertions(+), 77 deletions(-)
diff --git a/include/linux/filter.h b/include/linux/filter.h
index 20f2659..300baad 100644
--- a/include/linux/filter.h
+++ b/include/linux/filter.h
@@ -470,9 +470,7 @@ struct sock_fprog_kern {
};
struct bpf_binary_header {
- u16 pages;
- u16 locked:1;
-
+ u32 pages;
/* Some arches need word alignment for their instructions */
u8 image[] __aligned(4);
};
@@ -481,7 +479,7 @@ struct bpf_prog {
u16 pages; /* Number of allocated pages */
u16 jited:1, /* Is our filter JIT'ed? */
jit_requested:1,/* archs need to JIT the prog */
- locked:1, /* Program image locked? */
+ undo_set_mem:1, /* Passed set_memory_ro() checkpoint */
gpl_compatible:1, /* Is filter GPL compatible? */
cb_access:1, /* Is control block accessed? */
dst_needed:1, /* Do we need dst entry? */
@@ -677,46 +675,24 @@ bpf_ctx_narrow_access_ok(u32 off, u32 size, u32 size_default)
static inline void bpf_prog_lock_ro(struct bpf_prog *fp)
{
-#ifdef CONFIG_ARCH_HAS_SET_MEMORY
- fp->locked = 1;
- if (set_memory_ro((unsigned long)fp, fp->pages))
- fp->locked = 0;
-#endif
+ fp->undo_set_mem = 1;
+ set_memory_ro((unsigned long)fp, fp->pages);
}
static inline void bpf_prog_unlock_ro(struct bpf_prog *fp)
{
-#ifdef CONFIG_ARCH_HAS_SET_MEMORY
- if (fp->locked) {
- WARN_ON_ONCE(set_memory_rw((unsigned long)fp, fp->pages));
- /* In case set_memory_rw() fails, we want to be the first
- * to crash here instead of some random place later on.
- */
- fp->locked = 0;
- }
-#endif
+ if (fp->undo_set_mem)
+ set_memory_rw((unsigned long)fp, fp->pages);
}
static inline void bpf_jit_binary_lock_ro(struct bpf_binary_header *hdr)
{
-#ifdef CONFIG_ARCH_HAS_SET_MEMORY
- hdr->locked = 1;
- if (set_memory_ro((unsigned long)hdr, hdr->pages))
- hdr->locked = 0;
-#endif
+ set_memory_ro((unsigned long)hdr, hdr->pages);
}
static inline void bpf_jit_binary_unlock_ro(struct bpf_binary_header *hdr)
{
-#ifdef CONFIG_ARCH_HAS_SET_MEMORY
- if (hdr->locked) {
- WARN_ON_ONCE(set_memory_rw((unsigned long)hdr, hdr->pages));
- /* In case set_memory_rw() fails, we want to be the first
- * to crash here instead of some random place later on.
- */
- hdr->locked = 0;
- }
-#endif
+ set_memory_rw((unsigned long)hdr, hdr->pages);
}
static inline struct bpf_binary_header *
@@ -728,22 +704,6 @@ bpf_jit_binary_hdr(const struct bpf_prog *fp)
return (void *)addr;
}
-#ifdef CONFIG_ARCH_HAS_SET_MEMORY
-static inline int bpf_prog_check_pages_ro_single(const struct bpf_prog *fp)
-{
- if (!fp->locked)
- return -ENOLCK;
- if (fp->jited) {
- const struct bpf_binary_header *hdr = bpf_jit_binary_hdr(fp);
-
- if (!hdr->locked)
- return -ENOLCK;
- }
-
- return 0;
-}
-#endif
-
int sk_filter_trim_cap(struct sock *sk, struct sk_buff *skb, unsigned int cap);
static inline int sk_filter(struct sock *sk, struct sk_buff *skb)
{
diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c
index a9e6c04..1e5625d 100644
--- a/kernel/bpf/core.c
+++ b/kernel/bpf/core.c
@@ -598,8 +598,6 @@ bpf_jit_binary_alloc(unsigned int proglen, u8 **image_ptr,
bpf_fill_ill_insns(hdr, size);
hdr->pages = size / PAGE_SIZE;
- hdr->locked = 0;
-
hole = min_t(unsigned int, size - (proglen + sizeof(*hdr)),
PAGE_SIZE - sizeof(*hdr));
start = (get_random_int() % hole) & ~(alignment - 1);
@@ -1450,22 +1448,6 @@ static int bpf_check_tail_call(const struct bpf_prog *fp)
return 0;
}
-static int bpf_prog_check_pages_ro_locked(const struct bpf_prog *fp)
-{
-#ifdef CONFIG_ARCH_HAS_SET_MEMORY
- int i, err;
-
- for (i = 0; i < fp->aux->func_cnt; i++) {
- err = bpf_prog_check_pages_ro_single(fp->aux->func[i]);
- if (err)
- return err;
- }
-
- return bpf_prog_check_pages_ro_single(fp);
-#endif
- return 0;
-}
-
static void bpf_prog_select_func(struct bpf_prog *fp)
{
#ifndef CONFIG_BPF_JIT_ALWAYS_ON
@@ -1524,17 +1506,7 @@ struct bpf_prog *bpf_prog_select_runtime(struct bpf_prog *fp, int *err)
* all eBPF JITs might immediately support all features.
*/
*err = bpf_check_tail_call(fp);
- if (*err)
- return fp;
-
- /* Checkpoint: at this point onwards any cBPF -> eBPF or
- * native eBPF program is read-only. If we failed to change
- * the page attributes (e.g. allocation failure from
- * splitting large pages), then reject the whole program
- * in order to guarantee not ending up with any W+X pages
- * from BPF side in kernel.
- */
- *err = bpf_prog_check_pages_ro_locked(fp);
+
return fp;
}
EXPORT_SYMBOL_GPL(bpf_prog_select_runtime);
--
2.9.5
* Re: [PATCH bpf 0/3] Three BPF fixes
2018-06-28 21:34 [PATCH bpf 0/3] Three BPF fixes Daniel Borkmann
` (2 preceding siblings ...)
2018-06-28 21:34 ` [PATCH bpf 3/3] bpf: undo prog rejection on read-only lock failure Daniel Borkmann
@ 2018-06-29 17:54 ` Alexei Starovoitov
3 siblings, 0 replies; 9+ messages in thread
From: Alexei Starovoitov @ 2018-06-29 17:54 UTC (permalink / raw)
To: Daniel Borkmann; +Cc: ast, netdev
On Thu, Jun 28, 2018 at 11:34:56PM +0200, Daniel Borkmann wrote:
> This set contains three fixes that are mostly JIT and set_memory_*()
> related. The third in the series in particular fixes the syzkaller
> bugs that were still pending; aside from local reproduction & testing,
> also 'syz test' wasn't able to trigger them anymore. I've tested this
> series on x86_64, arm64 and s390x, and kbuild bot wasn't yelling either
> for the rest. For details, please see patches as usual, thanks!
Applied, Thanks
* Re: [PATCH bpf 3/3] bpf: undo prog rejection on read-only lock failure
2018-06-28 21:34 ` [PATCH bpf 3/3] bpf: undo prog rejection on read-only lock failure Daniel Borkmann
@ 2018-06-29 18:42 ` Kees Cook
2018-06-29 23:47 ` Daniel Borkmann
0 siblings, 1 reply; 9+ messages in thread
From: Kees Cook @ 2018-06-29 18:42 UTC (permalink / raw)
To: Daniel Borkmann; +Cc: Alexei Starovoitov, Network Development, Laura Abbott
On Thu, Jun 28, 2018 at 2:34 PM, Daniel Borkmann <daniel@iogearbox.net> wrote:
> Kees suggested that if set_memory_*() can fail, we should annotate it with
> __must_check, and all callers need to deal with it gracefully given those
> set_memory_*() markings aren't "advisory", but they're expected to
> actually do what they say. This might be an option worth moving forward
> with in future, but it would at the same time require that set_memory_*()
> calls from supporting archs are guaranteed to be "atomic" in that they
> provide rollback if part of the range fails. Once that is the case, the
> transition from RW -> RO could be made more robust that way, while the
> subsequent RO -> RW transition /must/ continue guaranteeing to always
> succeed the undo part.
Does this mean we can have BPF filters that aren't read-only then?
What's the situation where set_memory_ro() fails? (Can it be induced
by the user?)
-Kees
--
Kees Cook
Pixel Security
* Re: [PATCH bpf 3/3] bpf: undo prog rejection on read-only lock failure
2018-06-29 18:42 ` Kees Cook
@ 2018-06-29 23:47 ` Daniel Borkmann
2018-07-02 18:48 ` Kees Cook
0 siblings, 1 reply; 9+ messages in thread
From: Daniel Borkmann @ 2018-06-29 23:47 UTC (permalink / raw)
To: Kees Cook; +Cc: Alexei Starovoitov, Network Development, Laura Abbott
On 06/29/2018 08:42 PM, Kees Cook wrote:
> On Thu, Jun 28, 2018 at 2:34 PM, Daniel Borkmann <daniel@iogearbox.net> wrote:
>> Kees suggested that if set_memory_*() can fail, we should annotate it with
>> __must_check, and all callers need to deal with it gracefully given those
>> set_memory_*() markings aren't "advisory", but they're expected to
>> actually do what they say. This might be an option worth moving forward
>> with in future, but it would at the same time require that set_memory_*()
>> calls from supporting archs are guaranteed to be "atomic" in that they
>> provide rollback if part of the range fails. Once that is the case, the
>> transition from RW -> RO could be made more robust that way, while the
>> subsequent RO -> RW transition /must/ continue guaranteeing to always
>> succeed the undo part.
>
> Does this mean we can have BPF filters that aren't read-only then?
> What's the situation where set_memory_ro() fails? (Can it be induced
> by the user?)
My understanding is that cpa_process_alias() would attempt to also change
the attributes of the physmap ranges, and it found that a large page had
to be split for this but failed in doing so, thus the attributes couldn't
be updated there due to the page alloc error. The primary mapping, which
is directly the addr passed to set_memory_ro(), was however set to
read-only despite the error. While for reproduction I had a toggle on the
alloc_pages() in split_large_page() to have it fail, I could only trigger
it occasionally; I used the selftest suite in a loop to stress test, and
it hit about once or twice over hours.
Thanks,
Daniel
* Re: [PATCH bpf 3/3] bpf: undo prog rejection on read-only lock failure
2018-06-29 23:47 ` Daniel Borkmann
@ 2018-07-02 18:48 ` Kees Cook
2018-07-02 21:12 ` Daniel Borkmann
0 siblings, 1 reply; 9+ messages in thread
From: Kees Cook @ 2018-07-02 18:48 UTC (permalink / raw)
To: Daniel Borkmann; +Cc: Alexei Starovoitov, Network Development, Laura Abbott
On Fri, Jun 29, 2018 at 4:47 PM, Daniel Borkmann <daniel@iogearbox.net> wrote:
> On 06/29/2018 08:42 PM, Kees Cook wrote:
>> On Thu, Jun 28, 2018 at 2:34 PM, Daniel Borkmann <daniel@iogearbox.net> wrote:
>>> Kees suggested that if set_memory_*() can fail, we should annotate it with
>>> __must_check, and all callers need to deal with it gracefully given those
>>> set_memory_*() markings aren't "advisory", but they're expected to
>>> actually do what they say. This might be an option worth moving forward
>>> with in future, but it would at the same time require that set_memory_*()
>>> calls from supporting archs are guaranteed to be "atomic" in that they
>>> provide rollback if part of the range fails. Once that is the case, the
>>> transition from RW -> RO could be made more robust that way, while the
>>> subsequent RO -> RW transition /must/ continue guaranteeing to always
>>> succeed the undo part.
>>
>> Does this mean we can have BPF filters that aren't read-only then?
>> What's the situation where set_memory_ro() fails? (Can it be induced
>> by the user?)
>
> My understanding is that cpa_process_alias() would attempt to also change
> the attributes of the physmap ranges, and it found that a large page had
> to be split for this but failed in doing so, thus the attributes couldn't
> be updated there due to the page alloc error. The primary mapping, which
> is directly the addr passed to set_memory_ro(), was however set to
> read-only despite the error. While for reproduction I had a toggle on the
> alloc_pages() in split_large_page() to have it fail, I could only trigger
> it occasionally; I used the selftest suite in a loop to stress test, and
> it hit about once or twice over hours.
Okay, so it's pretty rare; that's good! :P
It really seems like this should be a situation that never fails, but
if we ARE going to allow failures, then I think we need to propagate
them up to callers. That means modules could fail to load in these
cases, etc, etc. Since this is a fundamental protection, we need to
either never fail to set things RO or we need to disallow operation
continuing in the face of something NOT being RO.
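A userspace sketch of the policy argued for here: treat a failed RO transition as fatal and propagate the error up, so the loader bails out instead of continuing with writable text. Mock names throughout, not a real kernel interface.

```c
/* Mock of error propagation from a __must_check-style lock; illustrative. */
#include <assert.h>
#include <stdbool.h>

struct mock_image {
	bool ro;
	bool live;	/* whether the program/module was allowed to go live */
};

/* A lock whose return value must be acted on, not ignored. */
static int mock_lock_ro(struct mock_image *img, bool fail)
{
	if (fail)
		return -12;	/* e.g. -ENOMEM from a failed page split */
	img->ro = true;
	return 0;
}

/* Loader that propagates the failure up rather than continuing W+X. */
static int mock_load(struct mock_image *img, bool lock_fails)
{
	int err = mock_lock_ro(img, lock_fails);

	if (err)
		return err;	/* load fails; nothing writable goes live */
	img->live = true;
	return 0;
}
```

Under this model a module or BPF program whose pages cannot be made read-only simply fails to load, which is the "disallow operation continuing" branch of the trade-off above.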
-Kees
--
Kees Cook
Pixel Security
* Re: [PATCH bpf 3/3] bpf: undo prog rejection on read-only lock failure
2018-07-02 18:48 ` Kees Cook
@ 2018-07-02 21:12 ` Daniel Borkmann
0 siblings, 0 replies; 9+ messages in thread
From: Daniel Borkmann @ 2018-07-02 21:12 UTC (permalink / raw)
To: Kees Cook; +Cc: Alexei Starovoitov, Network Development, Laura Abbott
On 07/02/2018 08:48 PM, Kees Cook wrote:
> On Fri, Jun 29, 2018 at 4:47 PM, Daniel Borkmann <daniel@iogearbox.net> wrote:
>> On 06/29/2018 08:42 PM, Kees Cook wrote:
>>> On Thu, Jun 28, 2018 at 2:34 PM, Daniel Borkmann <daniel@iogearbox.net> wrote:
>>>> Kees suggested that if set_memory_*() can fail, we should annotate it with
>>>> __must_check, and all callers need to deal with it gracefully given those
>>>> set_memory_*() markings aren't "advisory", but they're expected to
>>>> actually do what they say. This might be an option worth moving forward
>>>> with in future, but it would at the same time require that set_memory_*()
>>>> calls from supporting archs are guaranteed to be "atomic" in that they
>>>> provide rollback if part of the range fails. Once that is the case, the
>>>> transition from RW -> RO could be made more robust that way, while the
>>>> subsequent RO -> RW transition /must/ continue guaranteeing to always
>>>> succeed the undo part.
>>>
>>> Does this mean we can have BPF filters that aren't read-only then?
>>> What's the situation where set_memory_ro() fails? (Can it be induced
>>> by the user?)
>>
>> My understanding is that cpa_process_alias() would attempt to also change
>> the attributes of the physmap ranges, and it found that a large page had
>> to be split for this but failed in doing so, thus the attributes couldn't
>> be updated there due to the page alloc error. The primary mapping, which
>> is directly the addr passed to set_memory_ro(), was however set to
>> read-only despite the error. While for reproduction I had a toggle on the
>> alloc_pages() in split_large_page() to have it fail, I could only trigger
>> it occasionally; I used the selftest suite in a loop to stress test, and
>> it hit about once or twice over hours.
>
> Okay, so it's pretty rare; that's good! :P
>
> It really seems like this should be a situation that never fails, but
> if we ARE going to allow failures, then I think we need to propagate
> them up to callers. That means modules could fail to load in these
> cases, etc, etc. Since this is a fundamental protection, we need to
> either never fail to set things RO or we need to disallow operation
> continuing in the face of something NOT being RO.
Yeah, fully agree with you, and set_memory_*() would need to be reworked
for the archs implementing them to recover from failure and roll back any
attribute changes already made during the call. Today that is not the
case, so this would be the first step; the second step would then be to
add the error checks and bail out from the various call sites.
Thanks,
Daniel