public inbox for linux-kernel@vger.kernel.org
 help / color / mirror / Atom feed
* [PATCH 0/3] kprobes: Annotate structs with __counted_by()
@ 2024-08-13 11:53 Jinjie Ruan
  2024-08-13 11:53 ` [PATCH 1/3] " Jinjie Ruan
                   ` (3 more replies)
  0 siblings, 4 replies; 9+ messages in thread
From: Jinjie Ruan @ 2024-08-13 11:53 UTC (permalink / raw)
  To: naveen, anil.s.keshavamurthy, davem, mhiramat, kees, gustavoars,
	linux-kernel, linux-trace-kernel, linux-hardening
  Cc: ruanjinjie

Annotate structs with __counted_by() for kprobe and do some cleanup.

Jinjie Ruan (3):
  kprobes: Annotate structs with __counted_by()
  kprobes: Cleanup the config comment
  kprobes: Cleanup collect_one_slot() and __disable_kprobe()

 kernel/kprobes.c | 87 ++++++++++++++++++++++++------------------------
 1 file changed, 44 insertions(+), 43 deletions(-)

-- 
2.34.1


^ permalink raw reply	[flat|nested] 9+ messages in thread

* [PATCH 1/3] kprobes: Annotate structs with __counted_by()
  2024-08-13 11:53 [PATCH 0/3] kprobes: Annotate structs with __counted_by() Jinjie Ruan
@ 2024-08-13 11:53 ` Jinjie Ruan
  2024-10-22 20:55   ` Nathan Chancellor
  2024-08-13 11:53 ` [PATCH 2/3] kprobes: Cleanup the config comment Jinjie Ruan
                   ` (2 subsequent siblings)
  3 siblings, 1 reply; 9+ messages in thread
From: Jinjie Ruan @ 2024-08-13 11:53 UTC (permalink / raw)
  To: naveen, anil.s.keshavamurthy, davem, mhiramat, kees, gustavoars,
	linux-kernel, linux-trace-kernel, linux-hardening
  Cc: ruanjinjie

Add the __counted_by compiler attribute to the flexible array member
slot_used to improve access bounds-checking via CONFIG_UBSAN_BOUNDS and
CONFIG_FORTIFY_SOURCE.
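The effect of the annotation can be sketched outside the kernel. This is an illustrative stand-in, not kernel code: the struct and helper names mirror `kprobe_insn_page` but are invented here, and `__counted_by` is mapped to the underlying compiler attribute (clang >= 18, gcc >= 15) with a no-op fallback where unsupported. The key constraint the attribute introduces is visible in the helper: instrumented builds treat the flexible array as having exactly `nused` elements, so the counter must hold a valid value before the array is touched.

```c
#include <stdlib.h>
#include <string.h>

/* Map __counted_by() to the compiler attribute where available;
 * otherwise make it a no-op so the sketch compiles everywhere. */
#if defined(__has_attribute)
# if __has_attribute(counted_by)
#  define __counted_by(m) __attribute__((counted_by(m)))
# endif
#endif
#ifndef __counted_by
# define __counted_by(m)
#endif

/* Simplified stand-in for struct kprobe_insn_page. */
struct slot_page {
	int nused;
	char slot_used[] __counted_by(nused);
};

struct slot_page *slot_page_alloc(int nslots)
{
	struct slot_page *p = malloc(sizeof(*p) + nslots);

	if (!p)
		return NULL;
	p->nused = nslots;               /* counter must be set first */
	memset(p->slot_used, 0, nslots); /* now within the declared bounds */
	return p;
}
```

If the `memset()` came before the `p->nused` assignment, a bounds-checking build would see a write into an array of length 0; this ordering hazard is exactly what the later review in this thread reports.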

Signed-off-by: Jinjie Ruan <ruanjinjie@huawei.com>
---
 kernel/kprobes.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/kernel/kprobes.c b/kernel/kprobes.c
index da59c68df841..e6f7b0d3b29c 100644
--- a/kernel/kprobes.c
+++ b/kernel/kprobes.c
@@ -92,7 +92,7 @@ struct kprobe_insn_page {
 	struct kprobe_insn_cache *cache;
 	int nused;
 	int ngarbage;
-	char slot_used[];
+	char slot_used[] __counted_by(nused);
 };
 
 #define KPROBE_INSN_PAGE_SIZE(slots)			\
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 9+ messages in thread

* [PATCH 2/3] kprobes: Cleanup the config comment
  2024-08-13 11:53 [PATCH 0/3] kprobes: Annotate structs with __counted_by() Jinjie Ruan
  2024-08-13 11:53 ` [PATCH 1/3] " Jinjie Ruan
@ 2024-08-13 11:53 ` Jinjie Ruan
  2024-08-13 11:53 ` [PATCH 3/3] kprobes: Cleanup collect_one_slot() and __disable_kprobe() Jinjie Ruan
  2024-10-21 14:01 ` [PATCH 0/3] kprobes: Annotate structs with __counted_by() Masami Hiramatsu
  3 siblings, 0 replies; 9+ messages in thread
From: Jinjie Ruan @ 2024-08-13 11:53 UTC (permalink / raw)
  To: naveen, anil.s.keshavamurthy, davem, mhiramat, kees, gustavoars,
	linux-kernel, linux-trace-kernel, linux-hardening
  Cc: ruanjinjie

The CONFIG_KPROBES_ON_FTRACE #if/#else/#endif section is small and does
not nest additional #ifdefs, so its trailing comment adds nothing and can
be removed. The __ARCH_WANT_KPROBES_INSN_SLOT and CONFIG_OPTPROBES nest,
by contrast, is long, so add comments to the closing #endifs to aid
reading.
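The convention being applied can be shown in a self-contained sketch. The macro names below are stubs that merely echo the kprobes config symbols; the point is only that a long or nested conditional region gets a label on its closing #endif, while a short flat one does not.

```c
/* Stand-ins for the kernel config symbols; values are arbitrary. */
#define WANT_INSN_SLOT 1
#define WANT_OPTPROBES 1

#if WANT_INSN_SLOT
# if WANT_OPTPROBES
/* ... imagine hundreds of lines between here and the #endifs ... */
static int optinsn_slots = 1;
# endif /* WANT_OPTPROBES */
#endif /* WANT_INSN_SLOT */

int slots_available(void)
{
#if WANT_INSN_SLOT && WANT_OPTPROBES
	return optinsn_slots;
#else
	/* Short, flat region: a label on the #endif would add nothing. */
	return 0;
#endif
}
```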

Signed-off-by: Jinjie Ruan <ruanjinjie@huawei.com>
---
 kernel/kprobes.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/kernel/kprobes.c b/kernel/kprobes.c
index e6f7b0d3b29c..ca3fa8652c49 100644
--- a/kernel/kprobes.c
+++ b/kernel/kprobes.c
@@ -353,8 +353,8 @@ struct kprobe_insn_cache kprobe_optinsn_slots = {
 	/* .insn_size is initialized later */
 	.nr_garbage = 0,
 };
-#endif
-#endif
+#endif /* CONFIG_OPTPROBES */
+#endif /* __ARCH_WANT_KPROBES_INSN_SLOT */
 
 /* We have preemption disabled.. so it is safe to use __ versions */
 static inline void set_kprobe_instance(struct kprobe *kp)
@@ -1543,7 +1543,7 @@ static int check_ftrace_location(struct kprobe *p)
 	if (ftrace_location(addr) == addr) {
 #ifdef CONFIG_KPROBES_ON_FTRACE
 		p->flags |= KPROBE_FLAG_FTRACE;
-#else	/* !CONFIG_KPROBES_ON_FTRACE */
+#else
 		return -EINVAL;
 #endif
 	}
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 9+ messages in thread

* [PATCH 3/3] kprobes: Cleanup collect_one_slot() and __disable_kprobe()
  2024-08-13 11:53 [PATCH 0/3] kprobes: Annotate structs with __counted_by() Jinjie Ruan
  2024-08-13 11:53 ` [PATCH 1/3] " Jinjie Ruan
  2024-08-13 11:53 ` [PATCH 2/3] kprobes: Cleanup the config comment Jinjie Ruan
@ 2024-08-13 11:53 ` Jinjie Ruan
  2024-10-21 14:01 ` [PATCH 0/3] kprobes: Annotate structs with __counted_by() Masami Hiramatsu
  3 siblings, 0 replies; 9+ messages in thread
From: Jinjie Ruan @ 2024-08-13 11:53 UTC (permalink / raw)
  To: naveen, anil.s.keshavamurthy, davem, mhiramat, kees, gustavoars,
	linux-kernel, linux-trace-kernel, linux-hardening
  Cc: ruanjinjie

If kip->nused is not zero, collect_one_slot() returns false; otherwise it
performs a series of linked-list operations. Invert the condition and
return early so that the bulk of the function needs one less level of if
nesting. Restructure __disable_kprobe() in the same way.
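The refactor can be reduced to a minimal sketch. This is illustrative only, not the kernel code: `kip_stub` and its `freed` flag stand in for the real page structure and the `list_del_rcu()`/`kfree()` teardown, and both variants below are behaviorally identical — the early-return form simply flattens the nesting.

```c
#include <stdbool.h>

struct kip_stub {
	int nused;
	bool freed;
};

/* Before: the real work nests inside if (nused == 0). */
static bool collect_nested(struct kip_stub *kip, bool last_page)
{
	if (kip->nused == 0) {
		if (!last_page)
			kip->freed = true; /* stands in for list_del + kfree */
		return true;
	}
	return false;
}

/* After: bail out first; the common path loses one indentation level. */
static bool collect_early_return(struct kip_stub *kip, bool last_page)
{
	if (kip->nused != 0)
		return false;

	if (!last_page)
		kip->freed = true; /* same stand-in work as above */
	return true;
}
```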

Signed-off-by: Jinjie Ruan <ruanjinjie@huawei.com>
---
 kernel/kprobes.c | 79 ++++++++++++++++++++++++------------------------
 1 file changed, 40 insertions(+), 39 deletions(-)

diff --git a/kernel/kprobes.c b/kernel/kprobes.c
index ca3fa8652c49..98d71a5acb72 100644
--- a/kernel/kprobes.c
+++ b/kernel/kprobes.c
@@ -206,29 +206,29 @@ static bool collect_one_slot(struct kprobe_insn_page *kip, int idx)
 {
 	kip->slot_used[idx] = SLOT_CLEAN;
 	kip->nused--;
-	if (kip->nused == 0) {
+	if (kip->nused != 0)
+		return false;
+
+	/*
+	 * Page is no longer in use.  Free it unless
+	 * it's the last one.  We keep the last one
+	 * so as not to have to set it up again the
+	 * next time somebody inserts a probe.
+	 */
+	if (!list_is_singular(&kip->list)) {
 		/*
-		 * Page is no longer in use.  Free it unless
-		 * it's the last one.  We keep the last one
-		 * so as not to have to set it up again the
-		 * next time somebody inserts a probe.
+		 * Record perf ksymbol unregister event before removing
+		 * the page.
 		 */
-		if (!list_is_singular(&kip->list)) {
-			/*
-			 * Record perf ksymbol unregister event before removing
-			 * the page.
-			 */
-			perf_event_ksymbol(PERF_RECORD_KSYMBOL_TYPE_OOL,
-					   (unsigned long)kip->insns, PAGE_SIZE, true,
-					   kip->cache->sym);
-			list_del_rcu(&kip->list);
-			synchronize_rcu();
-			kip->cache->free(kip->insns);
-			kfree(kip);
-		}
-		return true;
+		perf_event_ksymbol(PERF_RECORD_KSYMBOL_TYPE_OOL,
+				   (unsigned long)kip->insns, PAGE_SIZE, true,
+				   kip->cache->sym);
+		list_del_rcu(&kip->list);
+		synchronize_rcu();
+		kip->cache->free(kip->insns);
+		kfree(kip);
 	}
-	return false;
+	return true;
 }
 
 static int collect_garbage_slots(struct kprobe_insn_cache *c)
@@ -1725,28 +1725,29 @@ static struct kprobe *__disable_kprobe(struct kprobe *p)
 	if (unlikely(orig_p == NULL))
 		return ERR_PTR(-EINVAL);
 
-	if (!kprobe_disabled(p)) {
-		/* Disable probe if it is a child probe */
-		if (p != orig_p)
-			p->flags |= KPROBE_FLAG_DISABLED;
+	if (kprobe_disabled(p))
+		return orig_p;
 
-		/* Try to disarm and disable this/parent probe */
-		if (p == orig_p || aggr_kprobe_disabled(orig_p)) {
-			/*
-			 * Don't be lazy here.  Even if 'kprobes_all_disarmed'
-			 * is false, 'orig_p' might not have been armed yet.
-			 * Note arm_all_kprobes() __tries__ to arm all kprobes
-			 * on the best effort basis.
-			 */
-			if (!kprobes_all_disarmed && !kprobe_disabled(orig_p)) {
-				ret = disarm_kprobe(orig_p, true);
-				if (ret) {
-					p->flags &= ~KPROBE_FLAG_DISABLED;
-					return ERR_PTR(ret);
-				}
+	/* Disable probe if it is a child probe */
+	if (p != orig_p)
+		p->flags |= KPROBE_FLAG_DISABLED;
+
+	/* Try to disarm and disable this/parent probe */
+	if (p == orig_p || aggr_kprobe_disabled(orig_p)) {
+		/*
+		 * Don't be lazy here.  Even if 'kprobes_all_disarmed'
+		 * is false, 'orig_p' might not have been armed yet.
+		 * Note arm_all_kprobes() __tries__ to arm all kprobes
+		 * on the best effort basis.
+		 */
+		if (!kprobes_all_disarmed && !kprobe_disabled(orig_p)) {
+			ret = disarm_kprobe(orig_p, true);
+			if (ret) {
+				p->flags &= ~KPROBE_FLAG_DISABLED;
+				return ERR_PTR(ret);
 			}
-			orig_p->flags |= KPROBE_FLAG_DISABLED;
 		}
+		orig_p->flags |= KPROBE_FLAG_DISABLED;
 	}
 
 	return orig_p;
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 9+ messages in thread

* Re: [PATCH 0/3] kprobes: Annotate structs with __counted_by()
  2024-08-13 11:53 [PATCH 0/3] kprobes: Annotate structs with __counted_by() Jinjie Ruan
                   ` (2 preceding siblings ...)
  2024-08-13 11:53 ` [PATCH 3/3] kprobes: Cleanup collect_one_slot() and __disable_kprobe() Jinjie Ruan
@ 2024-10-21 14:01 ` Masami Hiramatsu
  3 siblings, 0 replies; 9+ messages in thread
From: Masami Hiramatsu @ 2024-10-21 14:01 UTC (permalink / raw)
  To: Jinjie Ruan
  Cc: naveen, anil.s.keshavamurthy, davem, kees, gustavoars,
	linux-kernel, linux-trace-kernel, linux-hardening

On Tue, 13 Aug 2024 19:53:31 +0800
Jinjie Ruan <ruanjinjie@huawei.com> wrote:

> Annotate structs with __counted_by() for kprobe and do some cleanup.
> 
> Jinjie Ruan (3):
>   kprobes: Annotate structs with __counted_by()
>   kprobes: Cleanup the config comment
>   kprobes: Cleanup collect_one_slot() and __disable_kprobe()
> 
>  kernel/kprobes.c | 87 ++++++++++++++++++++++++------------------------
>  1 file changed, 44 insertions(+), 43 deletions(-)

This series looks good to me.

Thanks for cleaning up!

> 
> -- 
> 2.34.1
> 


-- 
Masami Hiramatsu (Google) <mhiramat@kernel.org>

^ permalink raw reply	[flat|nested] 9+ messages in thread

* Re: [PATCH 1/3] kprobes: Annotate structs with __counted_by()
  2024-08-13 11:53 ` [PATCH 1/3] " Jinjie Ruan
@ 2024-10-22 20:55   ` Nathan Chancellor
  2024-10-29 21:12     ` Nathan Chancellor
  2024-10-31  1:32     ` Masami Hiramatsu
  0 siblings, 2 replies; 9+ messages in thread
From: Nathan Chancellor @ 2024-10-22 20:55 UTC (permalink / raw)
  To: Jinjie Ruan, Masami Hiramatsu
  Cc: naveen, anil.s.keshavamurthy, davem, kees, gustavoars,
	linux-kernel, linux-trace-kernel, linux-hardening

Hi,

On Tue, Aug 13, 2024 at 07:53:32PM +0800, Jinjie Ruan wrote:
> Add the __counted_by compiler attribute to the flexible array member
> slot_used to improve access bounds-checking via CONFIG_UBSAN_BOUNDS and
> CONFIG_FORTIFY_SOURCE.
> 
> Signed-off-by: Jinjie Ruan <ruanjinjie@huawei.com>
> ---
>  kernel/kprobes.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/kernel/kprobes.c b/kernel/kprobes.c
> index da59c68df841..e6f7b0d3b29c 100644
> --- a/kernel/kprobes.c
> +++ b/kernel/kprobes.c
> @@ -92,7 +92,7 @@ struct kprobe_insn_page {
>  	struct kprobe_insn_cache *cache;
>  	int nused;
>  	int ngarbage;
> -	char slot_used[];
> +	char slot_used[] __counted_by(nused);
>  };
>  
>  #define KPROBE_INSN_PAGE_SIZE(slots)			\
> -- 
> 2.34.1
> 

This change was not properly tested. In next-20241022, where this
change appears as commit 0888460c9050 ("kprobes: Annotate structs with
__counted_by()"), I get a fortify failure because ->nused is initialized
after the call to memset(). For example, with Debian's powerpc64le
configuration (which is just the first configuration I happened to see
this warning in, I don't think it is architecture specific):

  $ curl -LSso .config https://github.com/nathanchance/llvm-kernel-testing/raw/096eeab130a9077206d3ddd6f0c6e39187113869/configs/debian/powerpc64le.config

  $ make -skj"$(nproc)" ARCH=powerpc CROSS_COMPILE=powerpc64le-linux-gnu- LLVM=1 olddefconfig zImage.epapr

  $ curl -LSs https://github.com/ClangBuiltLinux/boot-utils/releases/download/20230707-182910/ppc64le-rootfs.cpio.zst | zstd -d >rootfs.cpio

  $ qemu-system-ppc64 \
      -display none \
      -nodefaults \
      -device ipmi-bmc-sim,id=bmc0 \
      -device isa-ipmi-bt,bmc=bmc0,irq=10 \
      -machine powernv \
      -kernel arch/powerpc/boot/zImage.epapr \
      -initrd rootfs.cpio \
      -m 2G \
      -serial mon:stdio
  ...
  [    0.000000] Linux version 6.12.0-rc4-next-20241022 (nathan@n3-xlarge-x86) (ClangBuiltLinux clang version 19.1.2 (https://github.com/llvm/llvm-project.git 7ba7d8e2f7b6445b60679da826210cdde29eaf8b), ClangBuiltLinux LLD 19.1.2 (https://github.com/llvm/llvm-project.git 7ba7d8e2f7b6445b60679da826210cdde29eaf8b)) #1 SMP Tue Oct 22 20:24:37 UTC 2024
  ...
  [    0.138628] ------------[ cut here ]------------
  [    0.138816] memset: detected buffer overflow: 512 byte write of buffer size 0
  [    0.140141] WARNING: CPU: 0 PID: 1 at lib/string_helpers.c:1033 __fortify_report+0x60/0x80
  [    0.142208] Modules linked in:
  [    0.142722] CPU: 0 UID: 0 PID: 1 Comm: swapper/0 Not tainted 6.12.0-rc4-next-20241022 #1
  [    0.143379] Hardware name: IBM PowerNV (emulated by qemu) POWER10 0x801200 opal:v7.1 PowerNV
  [    0.143810] NIP:  c000000000900920 LR: c00000000090091c CTR: 0000000000000000
  [    0.144098] REGS: c000000002acf290 TRAP: 0700   Not tainted  (6.12.0-rc4-next-20241022)
  [    0.144457] MSR:  9000000002029033 <SF,HV,VEC,EE,ME,IR,DR,RI,LE>  CR: 24800280  XER: 00000000
  [    0.145048] CFAR: c00000000014e414 IRQMASK: 0
  [    0.145048] GPR00: c00000000090091c c000000002acf530 c0000000017bf0e0 0000000000000041
  [    0.145048] GPR04: c0000000025911f8 c0000000ffffefff 0000000000000003 0000000000000001
  [    0.145048] GPR08: 0000000000000003 0000000000000004 0000000000000000 9000000002001033
  [    0.145048] GPR12: 0000000000800000 c00000000289e000 c000000000010eb8 0000000000000000
  [    0.145048] GPR16: 0000000000000000 0000000000000000 0000000000000000 0000000000000000
  [    0.145048] GPR20: 0000000000000000 0000000000000000 0000000000000000 c0000000027fba70
  [    0.145048] GPR24: 61c8864680b583eb 0000000000000000 0000000000001000 0000000000000000
  [    0.145048] GPR28: c000000002a83400 c00000000262de68 c00000000262de30 c000000002acf5b0
  [    0.147696] NIP [c000000000900920] __fortify_report+0x60/0x80
  [    0.147934] LR [c00000000090091c] __fortify_report+0x5c/0x80
  [    0.148277] Call Trace:
  [    0.148465] [c000000002acf530] [c00000000090091c] __fortify_report+0x5c/0x80 (unreliable)
  [    0.148855] [c000000002acf590] [c000000000900958] __fortify_panic+0x18/0x40
  [    0.149133] [c000000002acf5b0] [c0000000002e3508] __get_insn_slot+0x308/0x320
  [    0.149413] [c000000002acf620] [c0000000000586c4] arch_prepare_kprobe+0x1e4/0x270
  [    0.149699] [c000000002acf6b0] [c0000000002e496c] register_kprobe+0x4cc/0x8e0
  [    0.149975] [c000000002acf930] [c0000000020154ac] arch_init_kprobes+0x30/0x54
  [    0.150262] [c000000002acf960] [c0000000020411bc] init_kprobes+0x80/0x120
  [    0.150524] [c000000002acf9d0] [c000000000010630] do_one_initcall+0x120/0x3e0
  [    0.150798] [c000000002acfd50] [c000000002005cb8] do_pre_smp_initcalls+0x70/0x114
  [    0.151082] [c000000002acfd90] [c000000002005b70] kernel_init_freeable+0x14c/0x224
  [    0.151365] [c000000002acfde0] [c000000000010eec] kernel_init+0x3c/0x250
  [    0.151622] [c000000002acfe50] [c00000000000de3c] ret_from_kernel_user_thread+0x14/0x1c
  [    0.151935] --- interrupt: 0 at 0x0
  [    0.152433] Code: 3cc2fffc 70630001 3c62ffa0 78841f48 38c6f388 38636489 7c86202a 3cc2ff9c 38c62f30 7cc8305e 4b84d9e1 60000000 <0fe00000> 38210060 e8010010 7c0803a6
  [    0.153155] ---[ end trace 0000000000000000 ]---
  ...

Even if the current ->nused assignment is moved before the call to
memset(), the failure still occurs because 1 is clearly wrong for the
size of memset(), which is 512 bytes for this configuration. Could this
instance of memset() be avoided by using kzalloc() for the kip
allocation? Could also take the opportunity to use struct_size(). The
following diff fixes this issue for me but I did not do any further
testing. If this does not work for some reason and there is no other
obvious solution, I think this change should be reverted.

Cheers,
Nathan

diff --git a/kernel/kprobes.c b/kernel/kprobes.c
index 98d71a5acb72..6adb0cc4e45c 100644
--- a/kernel/kprobes.c
+++ b/kernel/kprobes.c
@@ -95,10 +95,6 @@ struct kprobe_insn_page {
 	char slot_used[] __counted_by(nused);
 };
 
-#define KPROBE_INSN_PAGE_SIZE(slots)			\
-	(offsetof(struct kprobe_insn_page, slot_used) +	\
-	 (sizeof(char) * (slots)))
-
 static int slots_per_page(struct kprobe_insn_cache *c)
 {
 	return PAGE_SIZE/(c->insn_size * sizeof(kprobe_opcode_t));
@@ -175,7 +171,7 @@ kprobe_opcode_t *__get_insn_slot(struct kprobe_insn_cache *c)
 		goto retry;
 
 	/* All out of space.  Need to allocate a new page. */
-	kip = kmalloc(KPROBE_INSN_PAGE_SIZE(slots_per_page(c)), GFP_KERNEL);
+	kip = kzalloc(struct_size(kip, slot_used, slots_per_page(c)), GFP_KERNEL);
 	if (!kip)
 		goto out;
 
@@ -185,10 +181,8 @@ kprobe_opcode_t *__get_insn_slot(struct kprobe_insn_cache *c)
 		goto out;
 	}
 	INIT_LIST_HEAD(&kip->list);
-	memset(kip->slot_used, SLOT_CLEAN, slots_per_page(c));
-	kip->slot_used[0] = SLOT_USED;
 	kip->nused = 1;
-	kip->ngarbage = 0;
+	kip->slot_used[0] = SLOT_USED;
 	kip->cache = c;
 	list_add_rcu(&kip->list, &c->pages);
 	slot = kip->insns;

^ permalink raw reply related	[flat|nested] 9+ messages in thread

* Re: [PATCH 1/3] kprobes: Annotate structs with __counted_by()
  2024-10-22 20:55   ` Nathan Chancellor
@ 2024-10-29 21:12     ` Nathan Chancellor
  2024-10-30  1:18       ` Jinjie Ruan
  2024-10-31  1:32     ` Masami Hiramatsu
  1 sibling, 1 reply; 9+ messages in thread
From: Nathan Chancellor @ 2024-10-29 21:12 UTC (permalink / raw)
  To: Jinjie Ruan, Masami Hiramatsu
  Cc: naveen, anil.s.keshavamurthy, davem, kees, gustavoars,
	linux-kernel, linux-trace-kernel, linux-hardening

On Tue, Oct 22, 2024 at 01:56:01PM -0700, Nathan Chancellor wrote:
> Hi,
> 
> On Tue, Aug 13, 2024 at 07:53:32PM +0800, Jinjie Ruan wrote:
> > Add the __counted_by compiler attribute to the flexible array member
> > slot_used to improve access bounds-checking via CONFIG_UBSAN_BOUNDS and
> > CONFIG_FORTIFY_SOURCE.
> > 
> > Signed-off-by: Jinjie Ruan <ruanjinjie@huawei.com>
> > ---
> >  kernel/kprobes.c | 2 +-
> >  1 file changed, 1 insertion(+), 1 deletion(-)
> > 
> > diff --git a/kernel/kprobes.c b/kernel/kprobes.c
> > index da59c68df841..e6f7b0d3b29c 100644
> > --- a/kernel/kprobes.c
> > +++ b/kernel/kprobes.c
> > @@ -92,7 +92,7 @@ struct kprobe_insn_page {
> >  	struct kprobe_insn_cache *cache;
> >  	int nused;
> >  	int ngarbage;
> > -	char slot_used[];
> > +	char slot_used[] __counted_by(nused);
> >  };
> >  
> >  #define KPROBE_INSN_PAGE_SIZE(slots)			\
> > -- 
> > 2.34.1
> > 
> 
> This change was not properly tested. In next-20241022, where this
> change appears as commit 0888460c9050 ("kprobes: Annotate structs with
> __counted_by()"), I get a fortify failure because ->nused is initialized
> after the call to memset(). For example, with Debian's powerpc64le
> configuration (which is just the first configuration I happened to see
> this warning in, I don't think it is architecture specific):
> 
>   $ curl -LSso .config https://github.com/nathanchance/llvm-kernel-testing/raw/096eeab130a9077206d3ddd6f0c6e39187113869/configs/debian/powerpc64le.config
> 
>   $ make -skj"$(nproc)" ARCH=powerpc CROSS_COMPILE=powerpc64le-linux-gnu- LLVM=1 olddefconfig zImage.epapr
> 
>   $ curl -LSs https://github.com/ClangBuiltLinux/boot-utils/releases/download/20230707-182910/ppc64le-rootfs.cpio.zst | zstd -d >rootfs.cpio
> 
>   $ qemu-system-ppc64 \
>       -display none \
>       -nodefaults \
>       -device ipmi-bmc-sim,id=bmc0 \
>       -device isa-ipmi-bt,bmc=bmc0,irq=10 \
>       -machine powernv \
>       -kernel arch/powerpc/boot/zImage.epapr \
>       -initrd rootfs.cpio \
>       -m 2G \
>       -serial mon:stdio
>   ...
>   [    0.000000] Linux version 6.12.0-rc4-next-20241022 (nathan@n3-xlarge-x86) (ClangBuiltLinux clang version 19.1.2 (https://github.com/llvm/llvm-project.git 7ba7d8e2f7b6445b60679da826210cdde29eaf8b), ClangBuiltLinux LLD 19.1.2 (https://github.com/llvm/llvm-project.git 7ba7d8e2f7b6445b60679da826210cdde29eaf8b)) #1 SMP Tue Oct 22 20:24:37 UTC 2024
>   ...
>   [    0.138628] ------------[ cut here ]------------
>   [    0.138816] memset: detected buffer overflow: 512 byte write of buffer size 0
>   [...]
>   ...
> 
> Even if the current ->nused assignment is moved before the call to
> memset(), the failure still occurs because 1 is clearly wrong for the
> size of memset(), which is 512 bytes for this configuration. Could this
> instance of memset() be avoided by using kzalloc() for the kip
> allocation? Could also take the opportunity to use struct_size(). The
> following diff fixes this issue for me but I did not do any further
> testing. If this does not work for some reason and there is no other
> obvious solution, I think this change should be reverted.

Ping? I still see this on next-20241029. I can also reproduce it with a
tip of tree GCC.

> diff --git a/kernel/kprobes.c b/kernel/kprobes.c
> index 98d71a5acb72..6adb0cc4e45c 100644
> --- a/kernel/kprobes.c
> +++ b/kernel/kprobes.c
> @@ -95,10 +95,6 @@ struct kprobe_insn_page {
>  	char slot_used[] __counted_by(nused);
>  };
>  
> -#define KPROBE_INSN_PAGE_SIZE(slots)			\
> -	(offsetof(struct kprobe_insn_page, slot_used) +	\
> -	 (sizeof(char) * (slots)))
> -
>  static int slots_per_page(struct kprobe_insn_cache *c)
>  {
>  	return PAGE_SIZE/(c->insn_size * sizeof(kprobe_opcode_t));
> @@ -175,7 +171,7 @@ kprobe_opcode_t *__get_insn_slot(struct kprobe_insn_cache *c)
>  		goto retry;
>  
>  	/* All out of space.  Need to allocate a new page. */
> -	kip = kmalloc(KPROBE_INSN_PAGE_SIZE(slots_per_page(c)), GFP_KERNEL);
> +	kip = kzalloc(struct_size(kip, slot_used, slots_per_page(c)), GFP_KERNEL);
>  	if (!kip)
>  		goto out;
>  
> @@ -185,10 +181,8 @@ kprobe_opcode_t *__get_insn_slot(struct kprobe_insn_cache *c)
>  		goto out;
>  	}
>  	INIT_LIST_HEAD(&kip->list);
> -	memset(kip->slot_used, SLOT_CLEAN, slots_per_page(c));
> -	kip->slot_used[0] = SLOT_USED;
>  	kip->nused = 1;
> -	kip->ngarbage = 0;
> +	kip->slot_used[0] = SLOT_USED;
>  	kip->cache = c;
>  	list_add_rcu(&kip->list, &c->pages);
>  	slot = kip->insns;

^ permalink raw reply	[flat|nested] 9+ messages in thread

* Re: [PATCH 1/3] kprobes: Annotate structs with __counted_by()
  2024-10-29 21:12     ` Nathan Chancellor
@ 2024-10-30  1:18       ` Jinjie Ruan
  0 siblings, 0 replies; 9+ messages in thread
From: Jinjie Ruan @ 2024-10-30  1:18 UTC (permalink / raw)
  To: Nathan Chancellor, Masami Hiramatsu
  Cc: naveen, anil.s.keshavamurthy, davem, kees, gustavoars,
	linux-kernel, linux-trace-kernel, linux-hardening



On 2024/10/30 5:12, Nathan Chancellor wrote:
> On Tue, Oct 22, 2024 at 01:56:01PM -0700, Nathan Chancellor wrote:
>> Hi,
>>
>> On Tue, Aug 13, 2024 at 07:53:32PM +0800, Jinjie Ruan wrote:
>>> Add the __counted_by compiler attribute to the flexible array member
>>> slot_used to improve access bounds-checking via CONFIG_UBSAN_BOUNDS and
>>> CONFIG_FORTIFY_SOURCE.
>>>
>>> Signed-off-by: Jinjie Ruan <ruanjinjie@huawei.com>
>>> ---
>>>  kernel/kprobes.c | 2 +-
>>>  1 file changed, 1 insertion(+), 1 deletion(-)
>>>
>>> diff --git a/kernel/kprobes.c b/kernel/kprobes.c
>>> index da59c68df841..e6f7b0d3b29c 100644
>>> --- a/kernel/kprobes.c
>>> +++ b/kernel/kprobes.c
>>> @@ -92,7 +92,7 @@ struct kprobe_insn_page {
>>>  	struct kprobe_insn_cache *cache;
>>>  	int nused;
>>>  	int ngarbage;
>>> -	char slot_used[];
>>> +	char slot_used[] __counted_by(nused);
>>>  };
>>>  
>>>  #define KPROBE_INSN_PAGE_SIZE(slots)			\
>>> -- 
>>> 2.34.1
>>>
>>
>> This change was not properly tested. In next-20241022, where this
>> change appears as commit 0888460c9050 ("kprobes: Annotate structs with
>> __counted_by()"), I get a fortify failure because ->nused is initialized
>> after the call to memset(). For example, with Debian's powerpc64le
>> configuration (which is just the first configuration I happened to see
>> this warning in, I don't think it is architecture specific):
>>
>>   $ curl -LSso .config https://github.com/nathanchance/llvm-kernel-testing/raw/096eeab130a9077206d3ddd6f0c6e39187113869/configs/debian/powerpc64le.config
>>
>>   $ make -skj"$(nproc)" ARCH=powerpc CROSS_COMPILE=powerpc64le-linux-gnu- LLVM=1 olddefconfig zImage.epapr
>>
>>   $ curl -LSs https://github.com/ClangBuiltLinux/boot-utils/releases/download/20230707-182910/ppc64le-rootfs.cpio.zst | zstd -d >rootfs.cpio
>>
>>   $ qemu-system-ppc64 \
>>       -display none \
>>       -nodefaults \
>>       -device ipmi-bmc-sim,id=bmc0 \
>>       -device isa-ipmi-bt,bmc=bmc0,irq=10 \
>>       -machine powernv \
>>       -kernel arch/powerpc/boot/zImage.epapr \
>>       -initrd rootfs.cpio \
>>       -m 2G \
>>       -serial mon:stdio
>>   ...
>>   [    0.000000] Linux version 6.12.0-rc4-next-20241022 (nathan@n3-xlarge-x86) (ClangBuiltLinux clang version 19.1.2 (https://github.com/llvm/llvm-project.git 7ba7d8e2f7b6445b60679da826210cdde29eaf8b), ClangBuiltLinux LLD 19.1.2 (https://github.com/llvm/llvm-project.git 7ba7d8e2f7b6445b60679da826210cdde29eaf8b)) #1 SMP Tue Oct 22 20:24:37 UTC 2024
>>   ...
>>   [    0.138628] ------------[ cut here ]------------
>>   [    0.138816] memset: detected buffer overflow: 512 byte write of buffer size 0
>>   [...]
>>   ...
>>
>> Even if the current ->nused assignment is moved before the call to
>> memset(), the failure still occurs because 1 is clearly wrong for the
>> size of memset(), which is 512 bytes for this configuration. Could this
>> instance of memset() be avoided by using kzalloc() for the kip
>> allocation? Could also take the opportunity to use struct_size(). The
>> following diff fixes this issue for me but I did not do any further
>> testing. If this does not work for some reason and there is no other
>> obvious solution, I think this change should be reverted.
> 
> Ping? I still see this on next-20241029. I can also reproduce it with a
> tip of tree GCC.

Hi, Nathan,

Could you send the patch to fix this?

> 
>> diff --git a/kernel/kprobes.c b/kernel/kprobes.c
>> index 98d71a5acb72..6adb0cc4e45c 100644
>> --- a/kernel/kprobes.c
>> +++ b/kernel/kprobes.c
>> @@ -95,10 +95,6 @@ struct kprobe_insn_page {
>>  	char slot_used[] __counted_by(nused);
>>  };
>>  
>> -#define KPROBE_INSN_PAGE_SIZE(slots)			\
>> -	(offsetof(struct kprobe_insn_page, slot_used) +	\
>> -	 (sizeof(char) * (slots)))
>> -
>>  static int slots_per_page(struct kprobe_insn_cache *c)
>>  {
>>  	return PAGE_SIZE/(c->insn_size * sizeof(kprobe_opcode_t));
>> @@ -175,7 +171,7 @@ kprobe_opcode_t *__get_insn_slot(struct kprobe_insn_cache *c)
>>  		goto retry;
>>  
>>  	/* All out of space.  Need to allocate a new page. */
>> -	kip = kmalloc(KPROBE_INSN_PAGE_SIZE(slots_per_page(c)), GFP_KERNEL);
>> +	kip = kzalloc(struct_size(kip, slot_used, slots_per_page(c)), GFP_KERNEL);
>>  	if (!kip)
>>  		goto out;
>>  
>> @@ -185,10 +181,8 @@ kprobe_opcode_t *__get_insn_slot(struct kprobe_insn_cache *c)
>>  		goto out;
>>  	}
>>  	INIT_LIST_HEAD(&kip->list);
>> -	memset(kip->slot_used, SLOT_CLEAN, slots_per_page(c));
>> -	kip->slot_used[0] = SLOT_USED;
>>  	kip->nused = 1;
>> -	kip->ngarbage = 0;
>> +	kip->slot_used[0] = SLOT_USED;
>>  	kip->cache = c;
>>  	list_add_rcu(&kip->list, &c->pages);
>>  	slot = kip->insns;
> 

^ permalink raw reply	[flat|nested] 9+ messages in thread

* Re: [PATCH 1/3] kprobes: Annotate structs with __counted_by()
  2024-10-22 20:55   ` Nathan Chancellor
  2024-10-29 21:12     ` Nathan Chancellor
@ 2024-10-31  1:32     ` Masami Hiramatsu
  1 sibling, 0 replies; 9+ messages in thread
From: Masami Hiramatsu @ 2024-10-31  1:32 UTC (permalink / raw)
  To: Nathan Chancellor
  Cc: Jinjie Ruan, naveen, anil.s.keshavamurthy, davem, kees,
	gustavoars, linux-kernel, linux-trace-kernel, linux-hardening

Hi Nathan,

On Tue, 22 Oct 2024 13:55:57 -0700
Nathan Chancellor <nathan@kernel.org> wrote:

> Hi,
> 
> On Tue, Aug 13, 2024 at 07:53:32PM +0800, Jinjie Ruan wrote:
> > Add the __counted_by compiler attribute to the flexible array member
> > slot_used to improve access bounds-checking via CONFIG_UBSAN_BOUNDS and
> > CONFIG_FORTIFY_SOURCE.
> > 
> > Signed-off-by: Jinjie Ruan <ruanjinjie@huawei.com>
> > ---
> >  kernel/kprobes.c | 2 +-
> >  1 file changed, 1 insertion(+), 1 deletion(-)
> > 
> > diff --git a/kernel/kprobes.c b/kernel/kprobes.c
> > index da59c68df841..e6f7b0d3b29c 100644
> > --- a/kernel/kprobes.c
> > +++ b/kernel/kprobes.c
> > @@ -92,7 +92,7 @@ struct kprobe_insn_page {
> >  	struct kprobe_insn_cache *cache;
> >  	int nused;
> >  	int ngarbage;
> > -	char slot_used[];
> > +	char slot_used[] __counted_by(nused);
> >  };
> >  
> >  #define KPROBE_INSN_PAGE_SIZE(slots)			\
> > -- 
> > 2.34.1
> > 
> 
> This change was not properly tested. In next-20241022, where this
> change appears as commit 0888460c9050 ("kprobes: Annotate structs with
> __counted_by()"), I get a fortify failure because ->nused is initialized
> after the call to memset(). For example, with Debian's powerpc64le
> configuration (which is just the first configuration I happened to see
> this warning in, I don't think it is architecture specific):
> 
>   $ curl -LSso .config https://github.com/nathanchance/llvm-kernel-testing/raw/096eeab130a9077206d3ddd6f0c6e39187113869/configs/debian/powerpc64le.config
> 
>   $ make -skj"$(nproc)" ARCH=powerpc CROSS_COMPILE=powerpc64le-linux-gnu- LLVM=1 olddefconfig zImage.epapr
> 
>   $ curl -LSs https://github.com/ClangBuiltLinux/boot-utils/releases/download/20230707-182910/ppc64le-rootfs.cpio.zst | zstd -d >rootfs.cpio
> 
>   $ qemu-system-ppc64 \
>       -display none \
>       -nodefaults \
>       -device ipmi-bmc-sim,id=bmc0 \
>       -device isa-ipmi-bt,bmc=bmc0,irq=10 \
>       -machine powernv \
>       -kernel arch/powerpc/boot/zImage.epapr \
>       -initrd rootfs.cpio \
>       -m 2G \
>       -serial mon:stdio
>   ...
>   [    0.000000] Linux version 6.12.0-rc4-next-20241022 (nathan@n3-xlarge-x86) (ClangBuiltLinux clang version 19.1.2 (https://github.com/llvm/llvm-project.git 7ba7d8e2f7b6445b60679da826210cdde29eaf8b), ClangBuiltLinux LLD 19.1.2 (https://github.com/llvm/llvm-project.git 7ba7d8e2f7b6445b60679da826210cdde29eaf8b)) #1 SMP Tue Oct 22 20:24:37 UTC 2024
>   ...
>   [    0.138628] ------------[ cut here ]------------
>   [    0.138816] memset: detected buffer overflow: 512 byte write of buffer size 0
>   [    0.140141] WARNING: CPU: 0 PID: 1 at lib/string_helpers.c:1033 __fortify_report+0x60/0x80
>   [    0.142208] Modules linked in:
>   [    0.142722] CPU: 0 UID: 0 PID: 1 Comm: swapper/0 Not tainted 6.12.0-rc4-next-20241022 #1
>   [    0.143379] Hardware name: IBM PowerNV (emulated by qemu) POWER10 0x801200 opal:v7.1 PowerNV
>   [    0.143810] NIP:  c000000000900920 LR: c00000000090091c CTR: 0000000000000000
>   [    0.144098] REGS: c000000002acf290 TRAP: 0700   Not tainted  (6.12.0-rc4-next-20241022)
>   [    0.144457] MSR:  9000000002029033 <SF,HV,VEC,EE,ME,IR,DR,RI,LE>  CR: 24800280  XER: 00000000
>   [    0.145048] CFAR: c00000000014e414 IRQMASK: 0
>   [    0.145048] GPR00: c00000000090091c c000000002acf530 c0000000017bf0e0 0000000000000041
>   [    0.145048] GPR04: c0000000025911f8 c0000000ffffefff 0000000000000003 0000000000000001
>   [    0.145048] GPR08: 0000000000000003 0000000000000004 0000000000000000 9000000002001033
>   [    0.145048] GPR12: 0000000000800000 c00000000289e000 c000000000010eb8 0000000000000000
>   [    0.145048] GPR16: 0000000000000000 0000000000000000 0000000000000000 0000000000000000
>   [    0.145048] GPR20: 0000000000000000 0000000000000000 0000000000000000 c0000000027fba70
>   [    0.145048] GPR24: 61c8864680b583eb 0000000000000000 0000000000001000 0000000000000000
>   [    0.145048] GPR28: c000000002a83400 c00000000262de68 c00000000262de30 c000000002acf5b0
>   [    0.147696] NIP [c000000000900920] __fortify_report+0x60/0x80
>   [    0.147934] LR [c00000000090091c] __fortify_report+0x5c/0x80
>   [    0.148277] Call Trace:
>   [    0.148465] [c000000002acf530] [c00000000090091c] __fortify_report+0x5c/0x80 (unreliable)
>   [    0.148855] [c000000002acf590] [c000000000900958] __fortify_panic+0x18/0x40
>   [    0.149133] [c000000002acf5b0] [c0000000002e3508] __get_insn_slot+0x308/0x320
>   [    0.149413] [c000000002acf620] [c0000000000586c4] arch_prepare_kprobe+0x1e4/0x270
>   [    0.149699] [c000000002acf6b0] [c0000000002e496c] register_kprobe+0x4cc/0x8e0
>   [    0.149975] [c000000002acf930] [c0000000020154ac] arch_init_kprobes+0x30/0x54
>   [    0.150262] [c000000002acf960] [c0000000020411bc] init_kprobes+0x80/0x120
>   [    0.150524] [c000000002acf9d0] [c000000000010630] do_one_initcall+0x120/0x3e0
>   [    0.150798] [c000000002acfd50] [c000000002005cb8] do_pre_smp_initcalls+0x70/0x114
>   [    0.151082] [c000000002acfd90] [c000000002005b70] kernel_init_freeable+0x14c/0x224
>   [    0.151365] [c000000002acfde0] [c000000000010eec] kernel_init+0x3c/0x250
>   [    0.151622] [c000000002acfe50] [c00000000000de3c] ret_from_kernel_user_thread+0x14/0x1c
>   [    0.151935] --- interrupt: 0 at 0x0
>   [    0.152433] Code: 3cc2fffc 70630001 3c62ffa0 78841f48 38c6f388 38636489 7c86202a 3cc2ff9c 38c62f30 7cc8305e 4b84d9e1 60000000 <0fe00000> 38210060 e8010010 7c0803a6
>   [    0.153155] ---[ end trace 0000000000000000 ]---
>   ...
> 
> Even if the current ->nused assignment is moved before the call to
> memset(), the failure still occurs because 1 is clearly wrong for the
> size of memset(), which is 512 bytes for this configuration. Could this
> instance of memset() be avoided by using kzalloc() for the kip
> allocation? Could also take the opportunity to use struct_size(). The
> following diff fixes this issue for me but I did not do any further
> testing. If this does not work for some reason and there is no other
> obvious solution, I think this change should be reverted.

Yeah, it looks good to me. As Jinjie said, can you send this as a
fix patch? I will pick it.

Thank you,

> 
> Cheers,
> Nathan
> 
> diff --git a/kernel/kprobes.c b/kernel/kprobes.c
> index 98d71a5acb72..6adb0cc4e45c 100644
> --- a/kernel/kprobes.c
> +++ b/kernel/kprobes.c
> @@ -95,10 +95,6 @@ struct kprobe_insn_page {
>  	char slot_used[] __counted_by(nused);
>  };
>  
> -#define KPROBE_INSN_PAGE_SIZE(slots)			\
> -	(offsetof(struct kprobe_insn_page, slot_used) +	\
> -	 (sizeof(char) * (slots)))
> -
>  static int slots_per_page(struct kprobe_insn_cache *c)
>  {
>  	return PAGE_SIZE/(c->insn_size * sizeof(kprobe_opcode_t));
> @@ -175,7 +171,7 @@ kprobe_opcode_t *__get_insn_slot(struct kprobe_insn_cache *c)
>  		goto retry;
>  
>  	/* All out of space.  Need to allocate a new page. */
> -	kip = kmalloc(KPROBE_INSN_PAGE_SIZE(slots_per_page(c)), GFP_KERNEL);
> +	kip = kzalloc(struct_size(kip, slot_used, slots_per_page(c)), GFP_KERNEL);
>  	if (!kip)
>  		goto out;
>  
> @@ -185,10 +181,8 @@ kprobe_opcode_t *__get_insn_slot(struct kprobe_insn_cache *c)
>  		goto out;
>  	}
>  	INIT_LIST_HEAD(&kip->list);
> -	memset(kip->slot_used, SLOT_CLEAN, slots_per_page(c));
> -	kip->slot_used[0] = SLOT_USED;
>  	kip->nused = 1;
> -	kip->ngarbage = 0;
> +	kip->slot_used[0] = SLOT_USED;
>  	kip->cache = c;
>  	list_add_rcu(&kip->list, &c->pages);
>  	slot = kip->insns;


-- 
Masami Hiramatsu (Google) <mhiramat@kernel.org>

^ permalink raw reply	[flat|nested] 9+ messages in thread

end of thread, other threads:[~2024-10-31  1:33 UTC | newest]

Thread overview: 9+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2024-08-13 11:53 [PATCH 0/3] kprobes: Annotate structs with __counted_by() Jinjie Ruan
2024-08-13 11:53 ` [PATCH 1/3] " Jinjie Ruan
2024-10-22 20:55   ` Nathan Chancellor
2024-10-29 21:12     ` Nathan Chancellor
2024-10-30  1:18       ` Jinjie Ruan
2024-10-31  1:32     ` Masami Hiramatsu
2024-08-13 11:53 ` [PATCH 2/3] kprobes: Cleanup the config comment Jinjie Ruan
2024-08-13 11:53 ` [PATCH 3/3] kprobes: Cleanup collect_one_slot() and __disable_kprobe() Jinjie Ruan
2024-10-21 14:01 ` [PATCH 0/3] kprobes: Annotate structs with __counted_by() Masami Hiramatsu

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox