linux-trace-kernel.vger.kernel.org archive mirror
* [PATCH 0/2] kprobes: Adjustments for __counted_by addition
@ 2024-10-30 16:14 Nathan Chancellor
  2024-10-30 16:14 ` [PATCH 1/2] kprobes: Fix __get_insn_slot() after __counted_by annotation Nathan Chancellor
  2024-10-30 16:14 ` [PATCH 2/2] kprobes: Use struct_size() in __get_insn_slot() Nathan Chancellor
  0 siblings, 2 replies; 7+ messages in thread
From: Nathan Chancellor @ 2024-10-30 16:14 UTC (permalink / raw)
  To: Masami Hiramatsu, Naveen N Rao, Anil S Keshavamurthy,
	David S. Miller
  Cc: Kees Cook, Gustavo A. R. Silva, Jinjie Ruan, linux-kernel,
	linux-trace-kernel, linux-hardening, patches, Nathan Chancellor

Hi all,

This series addresses the issues that I brought up at [1]. The first
change is the actual functional fix; the second is a related but
tangential cleanup that I noticed while auditing this code, since the
existing calculation is already correct.

This resolves the issue for me when testing with both clang 19 and GCC
15 (tip of tree, which is the only GCC version that has __counted_by
support).

[1]: https://lore.kernel.org/20241022205557.GA3004519@thelio-3990X/

---
Nathan Chancellor (2):
      kprobes: Fix __get_insn_slot() after __counted_by annotation
      kprobes: Use struct_size() in __get_insn_slot()

 kernel/kprobes.c | 20 ++++++++++----------
 1 file changed, 10 insertions(+), 10 deletions(-)
---
base-commit: b5f348cbce367d5dcd34bf2b6c02c39a5be3fb97
change-id: 20241029-kprobes-fix-counted-by-annotation-ddeb95228b32

Best regards,
-- 
Nathan Chancellor <nathan@kernel.org>


^ permalink raw reply	[flat|nested] 7+ messages in thread

* [PATCH 1/2] kprobes: Fix __get_insn_slot() after __counted_by annotation
  2024-10-30 16:14 [PATCH 0/2] kprobes: Adjustments for __counted_by addition Nathan Chancellor
@ 2024-10-30 16:14 ` Nathan Chancellor
  2024-10-31  1:58   ` Masami Hiramatsu
  2024-10-30 16:14 ` [PATCH 2/2] kprobes: Use struct_size() in __get_insn_slot() Nathan Chancellor
  1 sibling, 1 reply; 7+ messages in thread
From: Nathan Chancellor @ 2024-10-30 16:14 UTC (permalink / raw)
  To: Masami Hiramatsu, Naveen N Rao, Anil S Keshavamurthy,
	David S. Miller
  Cc: Kees Cook, Gustavo A. R. Silva, Jinjie Ruan, linux-kernel,
	linux-trace-kernel, linux-hardening, patches, Nathan Chancellor

Commit 0888460c9050 ("kprobes: Annotate structs with __counted_by()")
added a __counted_by annotation without adjusting the code for the
__counted_by requirements, resulting in a panic when UBSAN_BOUNDS and
FORTIFY_SOURCE are enabled:

  | memset: detected buffer overflow: 512 byte write of buffer size 0
  | WARNING: CPU: 0 PID: 1 at lib/string_helpers.c:1032 __fortify_report+0x64/0x80
  | Call Trace:
  |  __fortify_report+0x60/0x80 (unreliable)
  |  __fortify_panic+0x18/0x1c
  |  __get_insn_slot+0x33c/0x340

__counted_by requires that the counter be set before the flexible array
is accessed, but ->nused is not set until after ->slot_used is accessed
via memset(). Even if the current ->nused assignment were moved up
before the memset(), the value of 1 would be incorrect because the
entire array is being accessed, not just one element.

Set ->nused to the full number of slots from slots_per_page() before
calling memset() to resolve the panic. While it is not strictly
necessary because of the new assignment, move the existing ->nused
assignment above accessing ->slot_used[0] for visual consistency.

The value of slots_per_page() should not change throughout
__get_insn_slot(): ->insn_size is never modified after its initial
assignment (which must have happened by this point, otherwise the
calculation would be incorrect) and the other values are constants, so
cache its result in a new variable and reuse that directly.

Fixes: 0888460c9050 ("kprobes: Annotate structs with __counted_by()")
Signed-off-by: Nathan Chancellor <nathan@kernel.org>
---
 kernel/kprobes.c | 16 ++++++++++------
 1 file changed, 10 insertions(+), 6 deletions(-)

diff --git a/kernel/kprobes.c b/kernel/kprobes.c
index 98d71a5acb723ddfff3efcc44cc6754ee36ec1de..2cf4628bc97ce2ae18547b513cd75b6350e9cc9c 100644
--- a/kernel/kprobes.c
+++ b/kernel/kprobes.c
@@ -145,16 +145,18 @@ kprobe_opcode_t *__get_insn_slot(struct kprobe_insn_cache *c)
 {
 	struct kprobe_insn_page *kip;
 	kprobe_opcode_t *slot = NULL;
+	int num_slots;
 
 	/* Since the slot array is not protected by rcu, we need a mutex */
 	mutex_lock(&c->mutex);
+	num_slots = slots_per_page(c);
  retry:
 	rcu_read_lock();
 	list_for_each_entry_rcu(kip, &c->pages, list) {
-		if (kip->nused < slots_per_page(c)) {
+		if (kip->nused < num_slots) {
 			int i;
 
-			for (i = 0; i < slots_per_page(c); i++) {
+			for (i = 0; i < num_slots; i++) {
 				if (kip->slot_used[i] == SLOT_CLEAN) {
 					kip->slot_used[i] = SLOT_USED;
 					kip->nused++;
@@ -164,7 +166,7 @@ kprobe_opcode_t *__get_insn_slot(struct kprobe_insn_cache *c)
 				}
 			}
 			/* kip->nused is broken. Fix it. */
-			kip->nused = slots_per_page(c);
+			kip->nused = num_slots;
 			WARN_ON(1);
 		}
 	}
@@ -175,7 +177,7 @@ kprobe_opcode_t *__get_insn_slot(struct kprobe_insn_cache *c)
 		goto retry;
 
 	/* All out of space.  Need to allocate a new page. */
-	kip = kmalloc(KPROBE_INSN_PAGE_SIZE(slots_per_page(c)), GFP_KERNEL);
+	kip = kmalloc(KPROBE_INSN_PAGE_SIZE(num_slots), GFP_KERNEL);
 	if (!kip)
 		goto out;
 
@@ -185,9 +187,11 @@ kprobe_opcode_t *__get_insn_slot(struct kprobe_insn_cache *c)
 		goto out;
 	}
 	INIT_LIST_HEAD(&kip->list);
-	memset(kip->slot_used, SLOT_CLEAN, slots_per_page(c));
-	kip->slot_used[0] = SLOT_USED;
+	/* nused must be set before accessing slot_used */
+	kip->nused = num_slots;
+	memset(kip->slot_used, SLOT_CLEAN, num_slots);
 	kip->nused = 1;
+	kip->slot_used[0] = SLOT_USED;
 	kip->ngarbage = 0;
 	kip->cache = c;
 	list_add_rcu(&kip->list, &c->pages);

-- 
2.47.0



* [PATCH 2/2] kprobes: Use struct_size() in __get_insn_slot()
  2024-10-30 16:14 [PATCH 0/2] kprobes: Adjustments for __counted_by addition Nathan Chancellor
  2024-10-30 16:14 ` [PATCH 1/2] kprobes: Fix __get_insn_slot() after __counted_by annotation Nathan Chancellor
@ 2024-10-30 16:14 ` Nathan Chancellor
  2024-10-31  1:58   ` Masami Hiramatsu
  1 sibling, 1 reply; 7+ messages in thread
From: Nathan Chancellor @ 2024-10-30 16:14 UTC (permalink / raw)
  To: Masami Hiramatsu, Naveen N Rao, Anil S Keshavamurthy,
	David S. Miller
  Cc: Kees Cook, Gustavo A. R. Silva, Jinjie Ruan, linux-kernel,
	linux-trace-kernel, linux-hardening, patches, Nathan Chancellor

__get_insn_slot() allocates 'struct kprobe_insn_page' using a custom
structure size calculation macro, KPROBE_INSN_PAGE_SIZE. Replace
KPROBE_INSN_PAGE_SIZE with the struct_size() macro, which is the
preferred way to calculate the size of flexible structures in the kernel
because it handles overflow and makes it easier to change and audit how
flexible structures are allocated across the entire tree.

Signed-off-by: Nathan Chancellor <nathan@kernel.org>
---
 kernel/kprobes.c | 6 +-----
 1 file changed, 1 insertion(+), 5 deletions(-)

diff --git a/kernel/kprobes.c b/kernel/kprobes.c
index 2cf4628bc97ce2ae18547b513cd75b6350e9cc9c..d452e784b31fa69042229ce0f5ffff9d8b671e92 100644
--- a/kernel/kprobes.c
+++ b/kernel/kprobes.c
@@ -95,10 +95,6 @@ struct kprobe_insn_page {
 	char slot_used[] __counted_by(nused);
 };
 
-#define KPROBE_INSN_PAGE_SIZE(slots)			\
-	(offsetof(struct kprobe_insn_page, slot_used) +	\
-	 (sizeof(char) * (slots)))
-
 static int slots_per_page(struct kprobe_insn_cache *c)
 {
 	return PAGE_SIZE/(c->insn_size * sizeof(kprobe_opcode_t));
@@ -177,7 +173,7 @@ kprobe_opcode_t *__get_insn_slot(struct kprobe_insn_cache *c)
 		goto retry;
 
 	/* All out of space.  Need to allocate a new page. */
-	kip = kmalloc(KPROBE_INSN_PAGE_SIZE(num_slots), GFP_KERNEL);
+	kip = kmalloc(struct_size(kip, slot_used, num_slots), GFP_KERNEL);
 	if (!kip)
 		goto out;
 

-- 
2.47.0



* Re: [PATCH 1/2] kprobes: Fix __get_insn_slot() after __counted_by annotation
  2024-10-30 16:14 ` [PATCH 1/2] kprobes: Fix __get_insn_slot() after __counted_by annotation Nathan Chancellor
@ 2024-10-31  1:58   ` Masami Hiramatsu
  2024-10-31  3:37     ` Nathan Chancellor
  0 siblings, 1 reply; 7+ messages in thread
From: Masami Hiramatsu @ 2024-10-31  1:58 UTC (permalink / raw)
  To: Nathan Chancellor
  Cc: Naveen N Rao, Anil S Keshavamurthy, David S. Miller, Kees Cook,
	Gustavo A. R. Silva, Jinjie Ruan, linux-kernel,
	linux-trace-kernel, linux-hardening, patches

On Wed, 30 Oct 2024 09:14:48 -0700
Nathan Chancellor <nathan@kernel.org> wrote:

> Commit 0888460c9050 ("kprobes: Annotate structs with __counted_by()")
> added a __counted_by annotation without adjusting the code for the
> __counted_by requirements, resulting in a panic when UBSAN_BOUNDS and
> FORTIFY_SOURCE are enabled:
> 
>   | memset: detected buffer overflow: 512 byte write of buffer size 0
>   | WARNING: CPU: 0 PID: 1 at lib/string_helpers.c:1032 __fortify_report+0x64/0x80
>   | Call Trace:
>   |  __fortify_report+0x60/0x80 (unreliable)
>   |  __fortify_panic+0x18/0x1c
>   |  __get_insn_slot+0x33c/0x340
> 
> __counted_by requires that the counter be set before accessing the
> flexible array but ->nused is not set until after ->slot_used is
> accessed via memset(). Even if the current ->nused assignment were moved
> up before memset(), the value of 1 would be incorrect because the entire
> array is being accessed, not just one element.

Ah, I think I misunderstood __counted_by(). If so, ->nused can be
smaller than the index of the slot_used[] element being accessed, and I
should revert it. The accessed index and ->nused have no relationship.

For example, say slots_per_page(c) is 10 and 10 kprobes are registered,
and then the 1st and 2nd kprobes are unregistered. At this moment,
->nused is 8 but slot_used[9] is still used. To unregister this 10th
kprobe, we have to access slot_used[9].

So let's just revert the commit 0888460c9050.

Thank you,

> 
> Set ->nused to the full number of slots from slots_per_page() before
> calling memset() to resolve the panic. While it is not strictly
> necessary because of the new assignment, move the existing ->nused
> assignment above accessing ->slot_used[0] for visual consistency.
> 
> The value of slots_per_page() should not change throughout
> __get_insn_slot() because ->insn_size is never modified after its
> initial assignment (which has to be done by this point otherwise it
> would be incorrect) and the other values are constants, so use a new
> variable to reuse its value directly.
> 
> Fixes: 0888460c9050 ("kprobes: Annotate structs with __counted_by()")
> Signed-off-by: Nathan Chancellor <nathan@kernel.org>
> ---
>  kernel/kprobes.c | 16 ++++++++++------
>  1 file changed, 10 insertions(+), 6 deletions(-)
> 
> diff --git a/kernel/kprobes.c b/kernel/kprobes.c
> index 98d71a5acb723ddfff3efcc44cc6754ee36ec1de..2cf4628bc97ce2ae18547b513cd75b6350e9cc9c 100644
> --- a/kernel/kprobes.c
> +++ b/kernel/kprobes.c
> @@ -145,16 +145,18 @@ kprobe_opcode_t *__get_insn_slot(struct kprobe_insn_cache *c)
>  {
>  	struct kprobe_insn_page *kip;
>  	kprobe_opcode_t *slot = NULL;
> +	int num_slots;
>  
>  	/* Since the slot array is not protected by rcu, we need a mutex */
>  	mutex_lock(&c->mutex);
> +	num_slots = slots_per_page(c);
>   retry:
>  	rcu_read_lock();
>  	list_for_each_entry_rcu(kip, &c->pages, list) {
> -		if (kip->nused < slots_per_page(c)) {
> +		if (kip->nused < num_slots) {
>  			int i;
>  
> -			for (i = 0; i < slots_per_page(c); i++) {
> +			for (i = 0; i < num_slots; i++) {
>  				if (kip->slot_used[i] == SLOT_CLEAN) {
>  					kip->slot_used[i] = SLOT_USED;
>  					kip->nused++;
> @@ -164,7 +166,7 @@ kprobe_opcode_t *__get_insn_slot(struct kprobe_insn_cache *c)
>  				}
>  			}
>  			/* kip->nused is broken. Fix it. */
> -			kip->nused = slots_per_page(c);
> +			kip->nused = num_slots;
>  			WARN_ON(1);
>  		}
>  	}
> @@ -175,7 +177,7 @@ kprobe_opcode_t *__get_insn_slot(struct kprobe_insn_cache *c)
>  		goto retry;
>  
>  	/* All out of space.  Need to allocate a new page. */
> -	kip = kmalloc(KPROBE_INSN_PAGE_SIZE(slots_per_page(c)), GFP_KERNEL);
> +	kip = kmalloc(KPROBE_INSN_PAGE_SIZE(num_slots), GFP_KERNEL);
>  	if (!kip)
>  		goto out;
>  
> @@ -185,9 +187,11 @@ kprobe_opcode_t *__get_insn_slot(struct kprobe_insn_cache *c)
>  		goto out;
>  	}
>  	INIT_LIST_HEAD(&kip->list);
> -	memset(kip->slot_used, SLOT_CLEAN, slots_per_page(c));
> -	kip->slot_used[0] = SLOT_USED;
> +	/* nused must be set before accessing slot_used */
> +	kip->nused = num_slots;
> +	memset(kip->slot_used, SLOT_CLEAN, num_slots);
>  	kip->nused = 1;
> +	kip->slot_used[0] = SLOT_USED;
>  	kip->ngarbage = 0;
>  	kip->cache = c;
>  	list_add_rcu(&kip->list, &c->pages);
> 
> -- 
> 2.47.0
> 


-- 
Masami Hiramatsu (Google) <mhiramat@kernel.org>


* Re: [PATCH 2/2] kprobes: Use struct_size() in __get_insn_slot()
  2024-10-30 16:14 ` [PATCH 2/2] kprobes: Use struct_size() in __get_insn_slot() Nathan Chancellor
@ 2024-10-31  1:58   ` Masami Hiramatsu
  0 siblings, 0 replies; 7+ messages in thread
From: Masami Hiramatsu @ 2024-10-31  1:58 UTC (permalink / raw)
  To: Nathan Chancellor
  Cc: Naveen N Rao, Anil S Keshavamurthy, David S. Miller, Kees Cook,
	Gustavo A. R. Silva, Jinjie Ruan, linux-kernel,
	linux-trace-kernel, linux-hardening, patches

On Wed, 30 Oct 2024 09:14:49 -0700
Nathan Chancellor <nathan@kernel.org> wrote:

> __get_insn_slot() allocates 'struct kprobe_insn_page' using a custom
> structure size calculation macro, KPROBE_INSN_PAGE_SIZE. Replace
> KPROBE_INSN_PAGE_SIZE with the struct_size() macro, which is the
> preferred way to calculate the size of flexible structures in the kernel
> because it handles overflow and makes it easier to change and audit how
> flexible structures are allocated across the entire tree.
> 

But I like this patch. I'll pick this.

Thank you!


> Signed-off-by: Nathan Chancellor <nathan@kernel.org>
> ---
>  kernel/kprobes.c | 6 +-----
>  1 file changed, 1 insertion(+), 5 deletions(-)
> 
> diff --git a/kernel/kprobes.c b/kernel/kprobes.c
> index 2cf4628bc97ce2ae18547b513cd75b6350e9cc9c..d452e784b31fa69042229ce0f5ffff9d8b671e92 100644
> --- a/kernel/kprobes.c
> +++ b/kernel/kprobes.c
> @@ -95,10 +95,6 @@ struct kprobe_insn_page {
>  	char slot_used[] __counted_by(nused);
>  };
>  
> -#define KPROBE_INSN_PAGE_SIZE(slots)			\
> -	(offsetof(struct kprobe_insn_page, slot_used) +	\
> -	 (sizeof(char) * (slots)))
> -
>  static int slots_per_page(struct kprobe_insn_cache *c)
>  {
>  	return PAGE_SIZE/(c->insn_size * sizeof(kprobe_opcode_t));
> @@ -177,7 +173,7 @@ kprobe_opcode_t *__get_insn_slot(struct kprobe_insn_cache *c)
>  		goto retry;
>  
>  	/* All out of space.  Need to allocate a new page. */
> -	kip = kmalloc(KPROBE_INSN_PAGE_SIZE(num_slots), GFP_KERNEL);
> +	kip = kmalloc(struct_size(kip, slot_used, num_slots), GFP_KERNEL);
>  	if (!kip)
>  		goto out;
>  
> 
> -- 
> 2.47.0
> 


-- 
Masami Hiramatsu (Google) <mhiramat@kernel.org>


* Re: [PATCH 1/2] kprobes: Fix __get_insn_slot() after __counted_by annotation
  2024-10-31  1:58   ` Masami Hiramatsu
@ 2024-10-31  3:37     ` Nathan Chancellor
  2024-11-01  1:53       ` Masami Hiramatsu
  0 siblings, 1 reply; 7+ messages in thread
From: Nathan Chancellor @ 2024-10-31  3:37 UTC (permalink / raw)
  To: Masami Hiramatsu
  Cc: Naveen N Rao, Anil S Keshavamurthy, David S. Miller, Kees Cook,
	Gustavo A. R. Silva, Jinjie Ruan, linux-kernel,
	linux-trace-kernel, linux-hardening, patches

On Thu, Oct 31, 2024 at 10:58:27AM +0900, Masami Hiramatsu wrote:
> On Wed, 30 Oct 2024 09:14:48 -0700
> Nathan Chancellor <nathan@kernel.org> wrote:
> 
> > Commit 0888460c9050 ("kprobes: Annotate structs with __counted_by()")
> > added a __counted_by annotation without adjusting the code for the
> > __counted_by requirements, resulting in a panic when UBSAN_BOUNDS and
> > FORTIFY_SOURCE are enabled:
> > 
> >   | memset: detected buffer overflow: 512 byte write of buffer size 0
> >   | WARNING: CPU: 0 PID: 1 at lib/string_helpers.c:1032 __fortify_report+0x64/0x80
> >   | Call Trace:
> >   |  __fortify_report+0x60/0x80 (unreliable)
> >   |  __fortify_panic+0x18/0x1c
> >   |  __get_insn_slot+0x33c/0x340
> > 
> > __counted_by requires that the counter be set before accessing the
> > flexible array but ->nused is not set until after ->slot_used is
> > accessed via memset(). Even if the current ->nused assignment were moved
> > up before memset(), the value of 1 would be incorrect because the entire
> > array is being accessed, not just one element.
> 
> Ah, I think I misunderstood the __counted_by(). If so, ->nused can be
> smaller than the accessing element of slot_used[]. I should revert it.
> The accessing index and ->nused should have no relationship.
> 
> for example, slots_per_page(c) is 10, and 10 kprobes are registered
> and then, the 1st and 2nd kprobes are unregistered. At this moment,
> ->nused is 8 but slot_used[9] is still used. To unregister this 10th
> kprobe, we have to access slot_used[9].

Ah, I totally missed that bit of the code, sorry about that. Thanks for
the explanation!

> So let's just revert the commit 0888460c9050.

Reverting that change sounds totally reasonable to me based on the
above. Will you take care of that?

For what it's worth, I think patch #2 should still be applicable, if you
are okay with that one.

Cheers,
Nathan


* Re: [PATCH 1/2] kprobes: Fix __get_insn_slot() after __counted_by annotation
  2024-10-31  3:37     ` Nathan Chancellor
@ 2024-11-01  1:53       ` Masami Hiramatsu
  0 siblings, 0 replies; 7+ messages in thread
From: Masami Hiramatsu @ 2024-11-01  1:53 UTC (permalink / raw)
  To: Nathan Chancellor
  Cc: Naveen N Rao, Anil S Keshavamurthy, David S. Miller, Kees Cook,
	Gustavo A. R. Silva, Jinjie Ruan, linux-kernel,
	linux-trace-kernel, linux-hardening, patches

On Wed, 30 Oct 2024 20:37:31 -0700
Nathan Chancellor <nathan@kernel.org> wrote:

> On Thu, Oct 31, 2024 at 10:58:27AM +0900, Masami Hiramatsu wrote:
> > On Wed, 30 Oct 2024 09:14:48 -0700
> > Nathan Chancellor <nathan@kernel.org> wrote:
> > 
> > > Commit 0888460c9050 ("kprobes: Annotate structs with __counted_by()")
> > > added a __counted_by annotation without adjusting the code for the
> > > __counted_by requirements, resulting in a panic when UBSAN_BOUNDS and
> > > FORTIFY_SOURCE are enabled:
> > > 
> > >   | memset: detected buffer overflow: 512 byte write of buffer size 0
> > >   | WARNING: CPU: 0 PID: 1 at lib/string_helpers.c:1032 __fortify_report+0x64/0x80
> > >   | Call Trace:
> > >   |  __fortify_report+0x60/0x80 (unreliable)
> > >   |  __fortify_panic+0x18/0x1c
> > >   |  __get_insn_slot+0x33c/0x340
> > > 
> > > __counted_by requires that the counter be set before accessing the
> > > flexible array but ->nused is not set until after ->slot_used is
> > > accessed via memset(). Even if the current ->nused assignment were moved
> > > up before memset(), the value of 1 would be incorrect because the entire
> > > array is being accessed, not just one element.
> > 
> > Ah, I think I misunderstood the __counted_by(). If so, ->nused can be
> > smaller than the accessing element of slot_used[]. I should revert it.
> > The accessing index and ->nused should have no relationship.
> > 
> > for example, slots_per_page(c) is 10, and 10 kprobes are registered
> > and then, the 1st and 2nd kprobes are unregistered. At this moment,
> > ->nused is 8 but slot_used[9] is still used. To unregister this 10th
> > kprobe, we have to access slot_used[9].
> 
> Ah, I totally missed that bit of the code, sorry about that. Thanks for
> the explanation!
> 
> > So let's just revert the commit 0888460c9050.
> 
> Reverting that change sounds totally reasonable to me based on the
> above. Will you take care of that?

Yeah, probes/for-next is a working branch. So I just dropped it.

> 
> For what it's worth, I think patch #2 should still be applicable, if you
> are okay with that one.

Yes, other patches look good to me.

Thank you,

> 
> Cheers,
> Nathan


-- 
Masami Hiramatsu (Google) <mhiramat@kernel.org>

