* FW: BPF-NX+CFI is a good upstreaming candidate
@ 2024-01-03 16:06 Maxwell Bland
2024-01-03 16:27 ` Greg KH
0 siblings, 1 reply; 7+ messages in thread
From: Maxwell Bland @ 2024-01-03 16:06 UTC (permalink / raw)
To: bpf@vger.kernel.org
Forwarding to the BPF mailing list as plaintext to comply with the mail server restrictions.
From what I understand, the Linux security team is reactive rather than proactive, so perhaps the point below is moot, but I would love to see BPF-NX+CFI upstreamed if possible.
Originally sent to di_jin@brown.edu; v.atlidakis@gmail.com; vpk@cs.brown.edu; dborkman@kernel.org; lsf-pc@lists.linux-foundation.org; bpf@vger.kernel.org; Andrew Wheeler <awheeler@motorola.com>; Sammy BS2 Que | 阙斌生 <quebs2@motorola.com>
Dear Jin et al., Daniel Borkmann, and the LSF/BPF mailing lists,
Although a few months late, Jin et al.’s USENIX ATC’23 EPF publication here (https://cs.brown.edu/~vpk/papers/epf.atc23.pdf) is great. It was a relief to see the efforts in https://gitlab.com/brown-ssl/epf/-/blob/master/linux-5.10/patches/0003-Adding-BPF-NX.patch?ref_type=heads and related files.
BPF-NX+CFI would be a strong upstreaming candidate. I am not sure how well it generalizes to the full kernel ecosystem, given that the approach requires a dedicated vmalloc memory region, but it is unfortunate that, because of eBPF, PXN can no longer be enforced at PMD-level granularity.
BPF-ISR is likely overkill performance-wise as a mechanism and could instead be handled via kprobes rather than direct patches.
Jin et al., do you happen to have performance numbers for just NX+CFI, or knowledge of how well this may apply to 6.x kernels? With your blessing, and if the mailing list peers are supportive, we should discuss your work and BPF security at https://events.linuxfoundation.org/lsfmmbpf/program/cfp/.
Maxwell Bland
Motorola
^ permalink raw reply [flat|nested] 7+ messages in thread
* Re: FW: BPF-NX+CFI is a good upstreaming candidate
2024-01-03 16:06 FW: BPF-NX+CFI is a good upstreaming candidate Maxwell Bland
@ 2024-01-03 16:27 ` Greg KH
2024-01-03 18:56 ` Maxwell Bland
0 siblings, 1 reply; 7+ messages in thread
From: Greg KH @ 2024-01-03 16:27 UTC (permalink / raw)
To: Maxwell Bland; +Cc: bpf@vger.kernel.org
On Wed, Jan 03, 2024 at 04:06:32PM +0000, Maxwell Bland wrote:
> Forwarding to the BPF mailing list as plaintext to comply with the mail server restrictions.
>
> From what I understand, the Linux security team is reactive rather than
> proactive, so perhaps the point below is moot, but I would love to see
> BPF-NX+CFI upstreamed if possible.
security@kernel.org is reactive, as that is its mandate, but there
are many other groups that work on proactive security; see the
linux-hardening project for lots of work happening there that is adding
loads of good stuff to the kernel.
>
> Originally sent to di_jin@brown.edu; v.atlidakis@gmail.com; vpk@cs.brown.edu; dborkman@kernel.org; lsf-pc@lists.linux-foundation.org; bpf@vger.kernel.org; Andrew Wheeler <awheeler@motorola.com>; Sammy BS2 Que | 阙斌生 <quebs2@motorola.com>
>
> Dear Jin et al., Daniel Borkmann, and the LSF/BPF mailing lists,
>
> Although a few months late, Jin et al.’s USENIX ATC’23 EPF publication here (https://cs.brown.edu/~vpk/papers/epf.atc23.pdf) is great. It was a relief to see the efforts in https://gitlab.com/brown-ssl/epf/-/blob/master/linux-5.10/patches/0003-Adding-BPF-NX.patch?ref_type=heads and related files.
>
> BPF-NX+CFI would be a strong upstreaming candidate. I am not sure how well it generalizes to the full kernel ecosystem, given that the approach requires a dedicated vmalloc memory region, but it is unfortunate that, because of eBPF, PXN can no longer be enforced at PMD-level granularity.
>
> BPF-ISR is likely overkill performance-wise as a mechanism and could instead be handled via kprobes rather than direct patches.
>
> Jin et al., do you happen to have performance numbers for just NX+CFI, or knowledge of how well this may apply to 6.x kernels? With your blessing, and if the mailing list peers are supportive, we should discuss your work and BPF security at https://events.linuxfoundation.org/lsfmmbpf/program/cfp/.
Are there working patches somewhere? 5.10.y is very old and obsolete.
thanks,
greg k-h
* Re: FW: BPF-NX+CFI is a good upstreaming candidate
2024-01-03 16:27 ` Greg KH
@ 2024-01-03 18:56 ` Maxwell Bland
2024-01-03 19:16 ` [PATCH 1/2] Adding BPF NX Maxwell Bland
0 siblings, 1 reply; 7+ messages in thread
From: Maxwell Bland @ 2024-01-03 18:56 UTC (permalink / raw)
To: Greg KH
Cc: bpf@vger.kernel.org, Andrew Wheeler,
Sammy BS2 Que | 阙斌生, di_jin@brown.edu
> From: Greg KH <gregkh@linuxfoundation.org>
> Sent: Wednesday, January 3, 2024 10:28 AM
> To: Maxwell Bland <mbland@motorola.com>
> Cc: bpf@vger.kernel.org
> Subject: [External] Re: FW: BPF-NX+CFI is a good upstreaming candidate
>
> On Wed, Jan 03, 2024 at 04:06:32PM +0000, Maxwell Bland wrote:
> > Forwarding to the BPF mailing list as plaintext to comply with the mail
> > server restrictions.
> >
> > From what I understand, the Linux security team is reactive rather than
> > proactive, so perhaps the point below is moot, but I would love to see
> > BPF-NX+CFI upstreamed if possible.
>
> security@kernel.org is reactive, as that is its mandate, but there are many
> other groups that work on proactive security; see the linux-hardening project
> for lots of work happening there that is adding loads of good stuff to the
> kernel.
>
> >
> > Originally sent to di_jin@brown.edu; v.atlidakis@gmail.com;
> > vpk@cs.brown.edu; dborkman@kernel.org;
> > lsf-pc@lists.linux-foundation.org; bpf@vger.kernel.org; Andrew Wheeler
> > <awheeler@motorola.com>; Sammy BS2 Que | 阙斌生
> <quebs2@motorola.com>
> >
> > Dear Jin et al., Daniel Borkmann, and the LSF/BPF mailing lists,
> >
> > Although a few months late, Jin et al.’s USENIX ATC’23 EPF publication here
> > (https://cs.brown.edu/~vpk/papers/epf.atc23.pdf) is great. It was a relief
> > to see the efforts in
> > https://gitlab.com/brown-ssl/epf/-/blob/master/linux-5.10/patches/0003-Adding-BPF-NX.patch?ref_type=heads
> > and related files.
> >
> > BPF-NX+CFI would be a strong upstreaming candidate. I am not sure how well
> > it generalizes to the full kernel ecosystem, given that the approach
> > requires a dedicated vmalloc memory region, but it is unfortunate that,
> > because of eBPF, PXN can no longer be enforced at PMD-level granularity.
> >
> > BPF-ISR is likely overkill performance-wise as a mechanism and could
> > instead be handled via kprobes rather than direct patches.
> >
> > Jin et al., do you happen to have performance numbers for just NX+CFI, or
> > knowledge of how well this may apply to 6.x kernels? With your blessing,
> > and if the mailing list peers are supportive, we should discuss your work
> > and BPF security at https://events.linuxfoundation.org/lsfmmbpf/program/cfp/.
>
> Are there working patches somewhere? 5.10.y is very old and obsolete.
>
> thanks,
>
> greg k-h
I went ahead and applied the NX and CFI patches to Torvalds' v6.7-rc8 upstream and sent them to the mailing lists as separate patch emails: "[PATCH 1/2] Adding BPF NX" and "[PATCH 2/2] Adding BPF CFI". They are untested, but should apply cleanly.
I am not sure I fully like the architecture-specific method of handling the vmalloc region or the Kconfig dependence on x86. It would be better to set aside a segment of the virtual address space architecture-agnostically, but I am not sure how.
Regards,
Maxwell Bland
* [PATCH 1/2] Adding BPF NX
2024-01-03 18:56 ` Maxwell Bland
@ 2024-01-03 19:16 ` Maxwell Bland
2024-01-03 19:17 ` [PATCH 2/2] Adding BPF CFI Maxwell Bland
2024-01-03 20:47 ` [PATCH 1/2] Adding BPF NX Alexei Starovoitov
0 siblings, 2 replies; 7+ messages in thread
From: Maxwell Bland @ 2024-01-03 19:16 UTC (permalink / raw)
To: Greg KH
Cc: bpf@vger.kernel.org, Andrew Wheeler,
Sammy BS2 Que | 阙斌生, di_jin@brown.edu
From: Tenut <tenut@Niobium>
Subject: [PATCH 1/2] Adding BPF NX
Reserve a memory region for BPF programs, and check for it in the interpreter. This simulates the effect of non-executable memory for BPF execution.
Signed-off-by: Maxwell Bland <mbland@motorola.com>
---
arch/x86/include/asm/pgtable_64_types.h | 9 +++++++++
arch/x86/mm/fault.c | 6 +++++-
kernel/bpf/Kconfig | 16 +++++++++++++++
kernel/bpf/core.c | 35 ++++++++++++++++++++++++++++++---
4 files changed, 62 insertions(+), 4 deletions(-)
diff --git a/arch/x86/include/asm/pgtable_64_types.h b/arch/x86/include/asm/pgtable_64_types.h
index 38b54b992f32..ad11651eb073 100644
--- a/arch/x86/include/asm/pgtable_64_types.h
+++ b/arch/x86/include/asm/pgtable_64_types.h
@@ -123,6 +123,9 @@ extern unsigned int ptrs_per_p4d;
#define __VMALLOC_BASE_L4 0xffffc90000000000UL
#define __VMALLOC_BASE_L5 0xffa0000000000000UL
+#ifdef CONFIG_BPF_NX
+#define __BPF_VBASE 0xffffeb0000000000UL
+#endif
#define VMALLOC_SIZE_TB_L4 32UL
#define VMALLOC_SIZE_TB_L5 12800UL
@@ -169,6 +172,12 @@ extern unsigned int ptrs_per_p4d;
#define VMALLOC_QUARTER_SIZE ((VMALLOC_SIZE_TB << 40) >> 2)
#define VMALLOC_END (VMALLOC_START + VMALLOC_QUARTER_SIZE - 1)
+#ifdef CONFIG_BPF_NX
+#define BPF_SIZE_GB 512UL
+#define BPF_VSTART __BPF_VBASE
+#define BPF_VEND (BPF_VSTART + _AC(BPF_SIZE_GB << 30, UL))
+#endif /* CONFIG_BPF_NX */
+
/*
* vmalloc metadata addresses are calculated by adding shadow/origin offsets
* to vmalloc address.
diff --git a/arch/x86/mm/fault.c b/arch/x86/mm/fault.c
index ab778eac1952..cfb63ef72168 100644
--- a/arch/x86/mm/fault.c
+++ b/arch/x86/mm/fault.c
@@ -235,7 +235,11 @@ static noinline int vmalloc_fault(unsigned long address)
pte_t *pte_k;
/* Make sure we are in vmalloc area: */
- if (!(address >= VMALLOC_START && address < VMALLOC_END))
+	if (!(address >= VMALLOC_START && address < VMALLOC_END)
+#ifdef CONFIG_BPF_NX
+	    && !(address >= BPF_VSTART && address < BPF_VEND)
+#endif
+	    )
return -1;
/*
diff --git a/kernel/bpf/Kconfig b/kernel/bpf/Kconfig
index 6a906ff93006..7160dcaaa58a 100644
--- a/kernel/bpf/Kconfig
+++ b/kernel/bpf/Kconfig
@@ -86,6 +86,22 @@ config BPF_UNPRIV_DEFAULT_OFF
If you are unsure how to answer this question, answer Y.
+config BPF_HARDENING
+	bool "Enable BPF interpreter hardening"
+	select BPF
+	depends on X86_64 && !RANDOMIZE_MEMORY && !BPF_JIT_ALWAYS_ON
+	default n
+	help
+	  Enhance the BPF interpreter's security.
+
+config BPF_NX
+	bool "Enable bpf NX"
+	depends on BPF_HARDENING && !DYNAMIC_MEMORY_LAYOUT
+	default n
+	help
+	  Allocate eBPF programs in a separate area and make sure the
+	  interpreted programs are in the region.
+
source "kernel/bpf/preload/Kconfig"
config BPF_LSM
diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c
index fe254ae035fe..56d9e8d4a6de 100644
--- a/kernel/bpf/core.c
+++ b/kernel/bpf/core.c
@@ -88,6 +88,34 @@ void *bpf_internal_load_pointer_neg_helper(const struct sk_buff *skb, int k, uns
return NULL;
}
+#ifdef CONFIG_BPF_NX
+#define BPF_MEMORY_ALIGN roundup_pow_of_two(sizeof(struct bpf_prog) + \
+ BPF_MAXINSNS * sizeof(struct bpf_insn))
+static void *__bpf_vmalloc(unsigned long size, gfp_t gfp_mask)
+{
+ return __vmalloc_node_range(size, BPF_MEMORY_ALIGN, BPF_VSTART, BPF_VEND,
+ gfp_mask, PAGE_KERNEL, 0, NUMA_NO_NODE,
+ __builtin_return_address(0));
+}
+
+static void bpf_insn_check_range(const struct bpf_insn *insn)
+{
+ if ((unsigned long)insn < BPF_VSTART
+ || (unsigned long)insn >= BPF_VEND - sizeof(struct bpf_insn))
+ BUG();
+}
+
+#else
+static void *__bpf_vmalloc(unsigned long size, gfp_t gfp_mask)
+{
+ return __vmalloc(size, gfp_mask);
+}
+
+static void bpf_insn_check_range(const struct bpf_insn *insn)
+{
+}
+#endif /* CONFIG_BPF_NX */
+
struct bpf_prog *bpf_prog_alloc_no_stats(unsigned int size, gfp_t gfp_extra_flags)
{
gfp_t gfp_flags = bpf_memcg_flags(GFP_KERNEL | __GFP_ZERO | gfp_extra_flags);
@@ -95,7 +123,7 @@ struct bpf_prog *bpf_prog_alloc_no_stats(unsigned int size, gfp_t gfp_extra_flag
struct bpf_prog *fp;
size = round_up(size, PAGE_SIZE);
- fp = __vmalloc(size, gfp_flags);
+ fp = __bpf_vmalloc(size, gfp_flags);
if (fp == NULL)
return NULL;
@@ -246,7 +274,7 @@ struct bpf_prog *bpf_prog_realloc(struct bpf_prog *fp_old, unsigned int size,
if (pages <= fp_old->pages)
return fp_old;
- fp = __vmalloc(size, gfp_flags);
+ fp = __bpf_vmalloc(size, gfp_flags);
if (fp) {
memcpy(fp, fp_old, fp_old->pages * PAGE_SIZE);
fp->pages = pages;
@@ -1380,7 +1408,7 @@ static struct bpf_prog *bpf_prog_clone_create(struct bpf_prog *fp_other,
gfp_t gfp_flags = GFP_KERNEL | __GFP_ZERO | gfp_extra_flags;
struct bpf_prog *fp;
- fp = __vmalloc(fp_other->pages * PAGE_SIZE, gfp_flags);
+ fp = __bpf_vmalloc(fp_other->pages * PAGE_SIZE, gfp_flags);
if (fp != NULL) {
/* aux->prog still points to the fp_other one, so
* when promoting the clone to the real program,
@@ -1695,6 +1723,7 @@ static u64 ___bpf_prog_run(u64 *regs, const struct bpf_insn *insn)
#define CONT_JMP ({ insn++; goto select_insn; })
select_insn:
+ bpf_insn_check_range(insn);
goto *jumptable[insn->code];
/* Explicitly mask the register-based shift amounts with 63 or 31
* [PATCH 2/2] Adding BPF CFI
2024-01-03 19:16 ` [PATCH 1/2] Adding BPF NX Maxwell Bland
@ 2024-01-03 19:17 ` Maxwell Bland
2024-01-03 20:47 ` [PATCH 1/2] Adding BPF NX Alexei Starovoitov
1 sibling, 0 replies; 7+ messages in thread
From: Maxwell Bland @ 2024-01-03 19:17 UTC (permalink / raw)
To: Greg KH
Cc: bpf@vger.kernel.org, Andrew Wheeler,
Sammy BS2 Que | 阙斌生, di_jin@brown.edu
From: Tenut <tenut@Niobium>
Subject: [PATCH 2/2] Adding BPF CFI
Check the offset of BPF instructions in the interpreter to make sure the BPF program is executed from the correct starting point.
Signed-off-by: Maxwell Bland <mbland@motorola.com>
---
kernel/bpf/Kconfig | 10 +++++++
kernel/bpf/core.c | 79 ++++++++++++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 89 insertions(+)
diff --git a/kernel/bpf/Kconfig b/kernel/bpf/Kconfig
index 7160dcaaa58a..9c64db0ddd63 100644
--- a/kernel/bpf/Kconfig
+++ b/kernel/bpf/Kconfig
@@ -94,6 +94,7 @@ config BPF_HARDENING
 	help
 	  Enhance the BPF interpreter's security.
 
+if BPF_HARDENING
 config BPF_NX
 	bool "Enable bpf NX"
 	depends on BPF_HARDENING && !DYNAMIC_MEMORY_LAYOUT
@@ -102,6 +103,15 @@ bool "Enable bpf NX"
 	  Allocate eBPF programs in a separate area and make sure the
 	  interpreted programs are in the region.
+config BPF_CFI
+ bool "Enable bpf CFI"
+ depends on BPF_NX
+ default n
+ help
+	  Enable alignment checks for eBPF program starting points.
+
+endif
+
source "kernel/bpf/preload/Kconfig"
config BPF_LSM
diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c
index 56d9e8d4a6de..dee0d2713c3b 100644
--- a/kernel/bpf/core.c
+++ b/kernel/bpf/core.c
@@ -116,6 +116,75 @@ static void bpf_insn_check_range(const struct bpf_insn *insn)
 }
 #endif /* CONFIG_BPF_NX */
+#ifdef CONFIG_BPF_CFI
+#define BPF_ON 1
+#define BPF_OFF 0
+
+struct bpf_mode_flag {
+ u8 byte_array[PAGE_SIZE];
+};
+DEFINE_PER_CPU_PAGE_ALIGNED(struct bpf_mode_flag, bpf_exec_mode);
+
+static void __init lock_bpf_exec_mode(void)
+{
+	struct bpf_mode_flag *flag_page;
+	int cpu;
+
+	for_each_possible_cpu(cpu) {
+		flag_page = per_cpu_ptr(&bpf_exec_mode, cpu);
+		set_memory_ro((unsigned long)flag_page, 1);
+	}
+}
+subsys_initcall(lock_bpf_exec_mode);
+
+static void write_cr0_nocheck(unsigned long val)
+{
+	asm volatile("mov %0,%%cr0" : "+r" (val) : : "memory");
+}
+
+/*
+ * Notice that get_cpu_var also disables preemption so no
+ * extra care needed for that.
+ */
+static void enter_bpf_exec_mode(unsigned long *flagsp)
+{
+	struct bpf_mode_flag *flag_page;
+
+	flag_page = &get_cpu_var(bpf_exec_mode);
+	local_irq_save(*flagsp);
+	write_cr0_nocheck(read_cr0() & ~X86_CR0_WP);
+	flag_page->byte_array[0] = BPF_ON;
+	write_cr0_nocheck(read_cr0() | X86_CR0_WP);
+}
+
+static void leave_bpf_exec_mode(unsigned long *flagsp)
+{
+ struct bpf_mode_flag *flag_page;
+ flag_page = this_cpu_ptr(&bpf_exec_mode);
+ write_cr0_nocheck(read_cr0() & ~X86_CR0_WP);
+ flag_page->byte_array[0] = BPF_OFF;
+ write_cr0_nocheck(read_cr0() | X86_CR0_WP);
+ local_irq_restore(*flagsp);
+ put_cpu_var(bpf_exec_mode);
+}
+
+static void check_bpf_exec_mode(void)
+{
+	struct bpf_mode_flag *flag_page;
+
+	flag_page = this_cpu_ptr(&bpf_exec_mode);
+	BUG_ON(flag_page->byte_array[0] != BPF_ON);
+}
+
+static void bpf_check_cfi(const struct bpf_insn *insn)
+{
+ const struct bpf_prog *fp;
+ fp = container_of(insn, struct bpf_prog, insnsi[0]);
+ if (!IS_ALIGNED((unsigned long)fp, BPF_MEMORY_ALIGN))
+ BUG();
+}
+
+#else /* CONFIG_BPF_CFI */
+static void check_bpf_exec_mode(void) {}
+#endif /* CONFIG_BPF_CFI */
+
 struct bpf_prog *bpf_prog_alloc_no_stats(unsigned int size, gfp_t gfp_extra_flags)
 {
 	gfp_t gfp_flags = bpf_memcg_flags(GFP_KERNEL | __GFP_ZERO | gfp_extra_flags);
@@ -1719,11 +1788,18 @@ static u64 ___bpf_prog_run(u64 *regs, const struct bpf_insn *insn)
 #undef BPF_INSN_2_LBL
u32 tail_call_cnt = 0;
+#ifdef CONFIG_BPF_CFI
+ unsigned long flags;
+ enter_bpf_exec_mode(&flags);
+ bpf_check_cfi(insn);
+#endif
+
#define CONT ({ insn++; goto select_insn; })
#define CONT_JMP ({ insn++; goto select_insn; })
select_insn:
bpf_insn_check_range(insn);
+ check_bpf_exec_mode();
goto *jumptable[insn->code];
 	/* Explicitly mask the register-based shift amounts with 63 or 31
@@ -2034,6 +2110,9 @@ static u64 ___bpf_prog_run(u64 *regs, const struct bpf_insn *insn)
insn += insn->imm;
CONT;
JMP_EXIT:
+#ifdef CONFIG_BPF_CFI
+ leave_bpf_exec_mode(&flags);
+#endif
return BPF_R0;
/* JMP */
#define COND_JMP(SIGN, OPCODE, CMP_OP) \
* Re: [PATCH 1/2] Adding BPF NX
2024-01-03 19:16 ` [PATCH 1/2] Adding BPF NX Maxwell Bland
2024-01-03 19:17 ` [PATCH 2/2] Adding BPF CFI Maxwell Bland
@ 2024-01-03 20:47 ` Alexei Starovoitov
2024-01-03 22:36 ` [External] " Maxwell Bland
1 sibling, 1 reply; 7+ messages in thread
From: Alexei Starovoitov @ 2024-01-03 20:47 UTC (permalink / raw)
To: Maxwell Bland
Cc: Greg KH, bpf@vger.kernel.org, Andrew Wheeler,
Sammy BS2 Que | 阙斌生, di_jin@brown.edu
On Wed, Jan 3, 2024 at 11:16 AM Maxwell Bland <mbland@motorola.com> wrote:
>
> From: Tenut <tenut@Niobium>
> Subject: [PATCH 1/2] Adding BPF NX
>
> Reserve a memory region for BPF programs, and check for it in the interpreter. This simulates the effect of non-executable memory for BPF execution.
Hi Maxwell,
interesting ideas in these two patches.
Coding style is not kernel, so if you want to upstream them
you need to follow the patch submission process more closely.
Also checking that you're aware that the interpreter is not secure in general.
Secure systems must use CONFIG_BPF_JIT_ALWAYS_ON.
Adding extra checks to interpreter helps a bit,
but you should really remove the interpreter.
* RE: [External] Re: [PATCH 1/2] Adding BPF NX
2024-01-03 20:47 ` [PATCH 1/2] Adding BPF NX Alexei Starovoitov
@ 2024-01-03 22:36 ` Maxwell Bland
0 siblings, 0 replies; 7+ messages in thread
From: Maxwell Bland @ 2024-01-03 22:36 UTC (permalink / raw)
To: Alexei Starovoitov
Cc: Greg KH, bpf@vger.kernel.org, Andrew Wheeler,
Sammy BS2 Que | 阙斌生, di_jin@brown.edu
> -----Original Message-----
> From: Alexei Starovoitov <alexei.starovoitov@gmail.com>
> Sent: Wednesday, January 3, 2024 2:48 PM
> To: Maxwell Bland <mbland@motorola.com>
> Cc: Greg KH <gregkh@linuxfoundation.org>; bpf@vger.kernel.org; Andrew
> Wheeler <awheeler@motorola.com>; Sammy BS2 Que | 阙斌生
> <quebs2@motorola.com>; di_jin@brown.edu
> Subject: [External] Re: [PATCH 1/2] Adding BPF NX
>
> On Wed, Jan 3, 2024 at 11:16 AM Maxwell Bland <mbland@motorola.com>
> wrote:
> >
> > From: Tenut <tenut@Niobium>
> > Subject: [PATCH 1/2] Adding BPF NX
> >
> > Reserve a memory region for BPF programs, and check for it in the
> > interpreter. This simulates the effect of non-executable memory for BPF
> > execution.
>
> Hi Maxwell,
>
> interesting ideas in these two patches.
> Coding style is not kernel, so if you want to upstream them you need to
> follow the patch submission process more closely.
>
> Also checking that you're aware that the interpreter is not secure in general.
> Secure systems must use CONFIG_BPF_JIT_ALWAYS_ON.
> Adding extra checks to interpreter helps a bit, but you should really remove
> the interpreter.
Thanks Alexei, it looks like my email client ruined the formatting. I will use git send-email in the future.
I was not aware! I see the interpreter is affected by Spectre, which makes this a double-edged sword.
We have the interpreter disabled. Jin et al.'s patches and the approach need reworking.
Without going into too much detail, I will see what I can do.
Regards and thanks again,
Maxwell Bland
end of thread, other threads:[~2024-01-03 22:36 UTC | newest]
Thread overview: 7+ messages
-- links below jump to the message on this page --
2024-01-03 16:06 FW: BPF-NX+CFI is a good upstreaming candidate Maxwell Bland
2024-01-03 16:27 ` Greg KH
2024-01-03 18:56 ` Maxwell Bland
2024-01-03 19:16 ` [PATCH 1/2] Adding BPF NX Maxwell Bland
2024-01-03 19:17 ` [PATCH 2/2] Adding BPF CFI Maxwell Bland
2024-01-03 20:47 ` [PATCH 1/2] Adding BPF NX Alexei Starovoitov
2024-01-03 22:36 ` [External] " Maxwell Bland