public inbox for bpf@vger.kernel.org
* [PATCH AUTOSEL 6.17-5.4] bpf: Don't use %pK through printk
       [not found] <20251009155752.773732-1-sashal@kernel.org>
@ 2025-10-09 15:54 ` Sasha Levin
  2025-10-09 15:54 ` [PATCH AUTOSEL 6.17-6.1] bpftool: Fix -Wuninitialized-const-pointer warnings with clang >= 21 Sasha Levin
                   ` (13 subsequent siblings)
  14 siblings, 0 replies; 15+ messages in thread
From: Sasha Levin @ 2025-10-09 15:54 UTC (permalink / raw)
  To: patches, stable
  Cc: Thomas Weißschuh, Andrii Nakryiko, Sasha Levin, ast, daniel,
	bpf

From: Thomas Weißschuh <thomas.weissschuh@linutronix.de>

[ Upstream commit 2caa6b88e0ba0231fb4ff0ba8e73cedd5fb81fc8 ]

In the past %pK was preferable to %p as it would not leak raw pointer
values into the kernel log.
Since commit ad67b74d2469 ("printk: hash addresses printed with %p")
the regular %p has been improved to avoid this issue.
Furthermore, restricted pointers ("%pK") were never meant to be used
through printk(). They can still unintentionally leak raw pointers or
acquire sleeping locks in atomic contexts.

Switch to the regular pointer formatting which is safer and
easier to reason about.

Signed-off-by: Thomas Weißschuh <thomas.weissschuh@linutronix.de>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20250811-restricted-pointers-bpf-v1-1-a1d7cc3cb9e7@linutronix.de
Signed-off-by: Sasha Levin <sashal@kernel.org>
---

LLM Generated explanations, may be completely bogus:

## BACKPORT ANALYSIS

**Backport Status: YES**

### Rationale

This commit should be backported to stable kernel trees based on the
following comprehensive analysis:

### 1. **Part of Kernel-Wide Safety Cleanup**

This is one of 70+ commits in a tree-wide effort to remove incorrect %pK
usage through printk(). The commit addresses a real correctness issue:
the %pK format specifier is misused in a context for which it was never
intended.

### 2. **Technical Issue Being Fixed**

The %pK format specifier has specific problems when used through
printk():

**From lib/vsprintf.c:870-878**, the restricted_pointer() function
shows:
```c
/*
 * kptr_restrict==1 cannot be used in IRQ context
 * because its test for CAP_SYSLOG would be meaningless.
 */
if (in_hardirq() || in_serving_softirq() || in_nmi()) {
    if (spec.field_width == -1)
        spec.field_width = 2 * sizeof(ptr);
    return error_string(buf, end, "pK-error", spec);
}
```

This means:
- If `bpf_jit_dump()` is called from interrupt context with
  `kptr_restrict=1`, it outputs "pK-error" instead of the pointer
- The CAP_SYSLOG check in %pK can potentially acquire sleeping locks in
  atomic contexts
- %pK was only designed for seq_file operations (procfs/sysfs), not for
  printk() as documented in Documentation/core-api/printk-formats.rst:94

### 3. **Strong Stable Backporting Precedent**

Similar commits from the same cleanup series have been explicitly
backported to stable:

- **timer_list commit** (a52067c24ccf): Backported to at least 5 stable
  trees (da36c3ad7c177, e563401934e41, 3695ade72a9bc, 41dd0c31ad4a2,
  3fb9ee05ec15f)
- **LoongArch commit** (2362e8124ed21): Tagged with `Cc:
  stable@vger.kernel.org` and backported (248449ba8977c)

Both commits have identical rationale and commit messages, indicating
this class of fix is considered appropriate for stable.

### 4. **Low Risk, High Safety**

The change is minimal and safe:
- **Single line change**: `%pK` → `%p` in include/linux/filter.h:1299
- **No functional logic changes**: Only affects debug output formatting
- **No API changes**: The bpf_jit_dump() function signature remains
  unchanged
- **Improves safety**: Prevents "pK-error" output and potential
  sleeping-in-atomic issues
- **Since 2017**: The %p formatter already hashes addresses (commit
  ad67b74d2469), so no new information leakage

### 5. **Affects Critical Subsystem**

The BPF JIT subsystem is critical for:
- Performance (eBPF program execution)
- Security (sandboxing, filtering)
- Debugging (when developers need JIT dumps)

Having correct debug output is important for troubleshooting BPF JIT
issues.

### 6. **Historical Context**

- **2015** (b13138ef72178a): %pK added to bpf_jit_dump when it was the
  recommended practice
- **2017** (ad67b74d2469): %p improved to hash addresses by default,
  making %pK obsolete for printk()
- **2025**: Tree-wide cleanup to remove obsolete %pK usage from printk()
  calls

### 7. **Matches Stable Kernel Criteria**

From stable kernel rules, this commit:
- ✅ Fixes incorrect API usage (misuse of %pK)
- ✅ Is obviously correct (single format specifier change)
- ✅ Has minimal risk of regression
- ✅ Doesn't introduce new features
- ✅ Part of a series with established backporting precedent

### Code Analysis

The changed code at include/linux/filter.h:1299:
```c
-pr_err("flen=%u proglen=%u pass=%u image=%pK from=%s pid=%d\n", flen,
+pr_err("flen=%u proglen=%u pass=%u image=%p from=%s pid=%d\n", flen,
```

This function is called from arch-specific BPF JIT compilers during JIT
compilation, potentially in contexts where %pK would be problematic.

### Conclusion

This commit should be backported to stable trees because it:
1. Fixes incorrect use of a kernel API (%pK through printk)
2. Prevents potential "pK-error" output in debug messages
3. Aligns with kernel-wide cleanup effort that has stable backporting
   precedent
4. Carries minimal risk while improving code correctness
5. Follows the same pattern as other commits already accepted into
   stable

 include/linux/filter.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/include/linux/filter.h b/include/linux/filter.h
index 1e7fd3ee759e0..52fecb7a1fe36 100644
--- a/include/linux/filter.h
+++ b/include/linux/filter.h
@@ -1296,7 +1296,7 @@ void bpf_jit_prog_release_other(struct bpf_prog *fp, struct bpf_prog *fp_other);
 static inline void bpf_jit_dump(unsigned int flen, unsigned int proglen,
 				u32 pass, void *image)
 {
-	pr_err("flen=%u proglen=%u pass=%u image=%pK from=%s pid=%d\n", flen,
+	pr_err("flen=%u proglen=%u pass=%u image=%p from=%s pid=%d\n", flen,
 	       proglen, pass, image, current->comm, task_pid_nr(current));
 
 	if (image)
-- 
2.51.0


^ permalink raw reply related	[flat|nested] 15+ messages in thread

* [PATCH AUTOSEL 6.17-6.1] bpftool: Fix -Wuninitialized-const-pointer warnings with clang >= 21
       [not found] <20251009155752.773732-1-sashal@kernel.org>
  2025-10-09 15:54 ` [PATCH AUTOSEL 6.17-5.4] bpf: Don't use %pK through printk Sasha Levin
@ 2025-10-09 15:54 ` Sasha Levin
  2025-10-09 15:54 ` [PATCH AUTOSEL 6.17-6.12] bpf: Use tnums for JEQ/JNE is_branch_taken logic Sasha Levin
                   ` (12 subsequent siblings)
  14 siblings, 0 replies; 15+ messages in thread
From: Sasha Levin @ 2025-10-09 15:54 UTC (permalink / raw)
  To: patches, stable
  Cc: Tom Stellard, Andrii Nakryiko, Quentin Monnet, Sasha Levin,
	nathan, bpf, llvm

From: Tom Stellard <tstellar@redhat.com>

[ Upstream commit 5612ea8b554375d45c14cbb0f8ea93ec5d172891 ]

This fixes the build with -Werror -Wall.

btf_dumper.c:71:31: error: variable 'finfo' is uninitialized when passed as a const pointer argument here [-Werror,-Wuninitialized-const-pointer]
   71 |         info.func_info = ptr_to_u64(&finfo);
      |                                      ^~~~~

prog.c:2294:31: error: variable 'func_info' is uninitialized when passed as a const pointer argument here [-Werror,-Wuninitialized-const-pointer]
 2294 |         info.func_info = ptr_to_u64(&func_info);
      |

v2:
  - Initialize instead of using memset.

Signed-off-by: Tom Stellard <tstellar@redhat.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Acked-by: Quentin Monnet <qmo@kernel.org>
Link: https://lore.kernel.org/bpf/20250917183847.318163-1-tstellar@redhat.com
Signed-off-by: Sasha Levin <sashal@kernel.org>
---

LLM Generated explanations, may be completely bogus:

YES

**Rationale**
- Fixes a real build failure with modern toolchains: clang ≥ 21 warns
  about passing the address of an uninitialized object to a function
  taking a const pointer; with `-Werror -Wall` this breaks bpftool
  builds.
- Change is minimal, localized, and non-functional: it only zero-
  initializes two local `struct bpf_func_info` instances so the address
  isn’t of an uninitialized object.
- Consistent with existing code in the same tool: other bpftool paths
  already initialize `bpf_func_info` similarly, so this aligns style and
  avoids surprises.

**Code References**
- In `tools/bpf/bpftool/btf_dumper.c:41`, `struct bpf_func_info finfo;`
  is currently uninitialized but its address is passed to `ptr_to_u64()`
  at `tools/bpf/bpftool/btf_dumper.c:71`, which triggers clang’s
  `-Wuninitialized-const-pointer`. The patch changes the declaration to
  `struct bpf_func_info finfo = {};`, preventing the warning.
- In `tools/bpf/bpftool/prog.c:2265`, `struct bpf_func_info func_info;`
  is uninitialized, and its address is passed to `ptr_to_u64()` at
  `tools/bpf/bpftool/prog.c:2294`. The patch changes the declaration to
  `struct bpf_func_info func_info = {};`, removing the warning.
- `ptr_to_u64()` is declared as taking a `const void *`
  (`tools/bpf/bpftool/main.h:25`), which is why clang applies the const-
  pointer uninitialized check when the address of an uninitialized
  object is passed.
- A precedent in the same codebase already initializes the same type:
  `tools/bpf/bpftool/common.c:416` uses `struct bpf_func_info finfo =
  {};`, demonstrating this is the established and safe pattern.

**Why It’s Safe**
- The variables are used strictly as output buffers for
  `bpf_prog_get_info_by_fd()`:
  - `btf_dumper.c`: Only after `info.nr_func_info` is non-zero and the
    second `bpf_prog_get_info_by_fd()` succeeds do we read
    `finfo.type_id` (`tools/bpf/bpftool/btf_dumper.c:80`). On error
    paths we don’t read `finfo`.
  - `prog.c`: We check `info.nr_func_info != 0` before issuing the
    second `bpf_prog_get_info_by_fd()`, and only on success read
    `func_info.type_id` (`tools/bpf/bpftool/prog.c:2308`).
- Zero-initialization does not change runtime semantics; the kernel
  overwrites these structures on success, and on failure paths they
  aren’t consumed.

**Stable Backport Criteria**
- Important bugfix: restores bpftool buildability with clang ≥ 21 under
  `-Werror -Wall`.
- Small and contained: two initializations; no ABI or behavioral
  changes.
- No architectural changes; no risk to core kernel subsystems (user-
  space tools only).
- Low regression risk; aligns with existing initialization pattern
  already present elsewhere in bpftool.

Given the above, this commit is an excellent candidate for stable
backport to keep tools building with current compilers and to maintain
consistency within bpftool.

 tools/bpf/bpftool/btf_dumper.c | 2 +-
 tools/bpf/bpftool/prog.c       | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/tools/bpf/bpftool/btf_dumper.c b/tools/bpf/bpftool/btf_dumper.c
index 4e896d8a2416e..ff12628593aec 100644
--- a/tools/bpf/bpftool/btf_dumper.c
+++ b/tools/bpf/bpftool/btf_dumper.c
@@ -38,7 +38,7 @@ static int dump_prog_id_as_func_ptr(const struct btf_dumper *d,
 	__u32 info_len = sizeof(info);
 	const char *prog_name = NULL;
 	struct btf *prog_btf = NULL;
-	struct bpf_func_info finfo;
+	struct bpf_func_info finfo = {};
 	__u32 finfo_rec_size;
 	char prog_str[1024];
 	int err;
diff --git a/tools/bpf/bpftool/prog.c b/tools/bpf/bpftool/prog.c
index 9722d841abc05..a89629a9932b5 100644
--- a/tools/bpf/bpftool/prog.c
+++ b/tools/bpf/bpftool/prog.c
@@ -2262,7 +2262,7 @@ static void profile_print_readings(void)
 
 static char *profile_target_name(int tgt_fd)
 {
-	struct bpf_func_info func_info;
+	struct bpf_func_info func_info = {};
 	struct bpf_prog_info info = {};
 	__u32 info_len = sizeof(info);
 	const struct btf_type *t;
-- 
2.51.0



* [PATCH AUTOSEL 6.17-6.12] bpf: Use tnums for JEQ/JNE is_branch_taken logic
       [not found] <20251009155752.773732-1-sashal@kernel.org>
  2025-10-09 15:54 ` [PATCH AUTOSEL 6.17-5.4] bpf: Don't use %pK through printk Sasha Levin
  2025-10-09 15:54 ` [PATCH AUTOSEL 6.17-6.1] bpftool: Fix -Wuninitialized-const-pointer warnings with clang >= 21 Sasha Levin
@ 2025-10-09 15:54 ` Sasha Levin
  2025-10-09 15:54 ` [PATCH AUTOSEL 6.17-6.16] selftests/bpf: Fix incorrect array size calculation Sasha Levin
                   ` (11 subsequent siblings)
  14 siblings, 0 replies; 15+ messages in thread
From: Sasha Levin @ 2025-10-09 15:54 UTC (permalink / raw)
  To: patches, stable
  Cc: Paul Chaignon, Daniel Borkmann, Eduard Zingerman, Shung-Hsi Yu,
	Sasha Levin, ast, andrii, bpf

From: Paul Chaignon <paul.chaignon@gmail.com>

[ Upstream commit f41345f47fb267a9c95ca710c33448f8d0d81d83 ]

In the following toy program (reg states minimized for readability), R0
and R1 always have different values at instruction 6. This is obvious
when reading the program but cannot be guessed from ranges alone as
they overlap (R0 in [0; 0xc0000000], R1 in [1024; 0xc0000400]).

  0: call bpf_get_prandom_u32#7  ; R0_w=scalar()
  1: w0 = w0                     ; R0_w=scalar(var_off=(0x0; 0xffffffff))
  2: r0 >>= 30                   ; R0_w=scalar(var_off=(0x0; 0x3))
  3: r0 <<= 30                   ; R0_w=scalar(var_off=(0x0; 0xc0000000))
  4: r1 = r0                     ; R1_w=scalar(var_off=(0x0; 0xc0000000))
  5: r1 += 1024                  ; R1_w=scalar(var_off=(0x400; 0xc0000000))
  6: if r1 != r0 goto pc+1

Looking at tnums however, we can deduce that R1 is always different from
R0 because their tnums don't agree on known bits. This patch uses this
logic to improve is_scalar_branch_taken in case of BPF_JEQ and BPF_JNE.

This change has a tiny impact on complexity, which was measured with
the Cilium complexity CI test. That test covers 72 programs with
various build and load time configurations for a total of 970 test
cases. For 80% of test cases, the patch has no impact. On the other
test cases, the patch decreases complexity by only 0.08% on average. In
the best case, the verifier needs to walk 3% less instructions and, in
the worst case, 1.5% more. Overall, the patch has a small positive
impact, especially for our largest programs.

Signed-off-by: Paul Chaignon <paul.chaignon@gmail.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Eduard Zingerman <eddyz87@gmail.com>
Acked-by: Shung-Hsi Yu <shung-hsi.yu@suse.com>
Acked-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/be3ee70b6e489c49881cb1646114b1d861b5c334.1755694147.git.paul.chaignon@gmail.com
Signed-off-by: Sasha Levin <sashal@kernel.org>
---

LLM Generated explanations, may be completely bogus:

**YES**

## Extensive Analysis

After conducting a thorough investigation of this commit, I recommend
backporting it to stable kernel trees. Here's my detailed analysis:

### **Nature of the Change**

This commit introduces a new `tnum_overlap()` helper function and uses
it to improve the BPF verifier's branch-taken analysis for JEQ
(equality) and JNE (inequality) operations. The implementation:

1. **Adds `tnum_overlap()` in kernel/bpf/tnum.c (lines 147-153)**:
   ```c
   bool tnum_overlap(struct tnum a, struct tnum b)
   {
           u64 mu;

           mu = ~a.mask & ~b.mask;          /* bits known in both tnums */
           return (a.value & mu) == (b.value & mu); /* do known bits agree? */
   }
   ```

2. **Uses it in `is_scalar_branch_taken()` in kernel/bpf/verifier.c**:
   - For `BPF_JEQ`: If `!tnum_overlap(t1, t2)`, return 0 (values can
     never be equal, branch not taken)
   - For `BPF_JNE`: If `!tnum_overlap(t1, t2)`, return 1 (values always
     different, branch taken)

### **Why This Matters - Bug Fix, Not Just Optimization**

While the commit message describes this as an "improvement" and
discusses complexity reduction, **the accompanying selftest (commit
0780f54ab129b) reveals this actually fixes verification failures**:

```
6: if r1 != r0 goto pc+1
7: r10 = 0
   frame pointer is read only  ← ERROR without the patch
```

The test shows a program that **fails verification** without this patch
because the verifier explores unreachable code (instruction 7) that
tries to modify the frame pointer. With the patch, the verifier
correctly identifies the code as dead and allows the program to load.

### **Technical Correctness**

The `tnum_overlap()` logic is mathematically sound:

- **Tnums (tracked numbers)** represent knowledge about bits: each bit
  is either known (value=0/1, mask=0) or unknown (mask=1)
- **`mu = ~a.mask & ~b.mask`** extracts bits that are **known in both**
  tnums
- **`(a.value & mu) == (b.value & mu)`** checks if those known bits have
  the same value
- If any known bit disagrees, the tnums can never be equal

This is a **pure refinement** - it adds precision without changing the
correctness of the analysis. It can only make `is_branch_taken()` more
accurate, never less.

### **Historical Context and Related Issues**

This commit is part of an evolution of branch-taken logic:

1. **November 2023 (be41a203bb9e0)**: "enhance BPF_JEQ/BPF_JNE
   is_branch_taken logic"
   - Explicitly stated as "**necessary to prevent correctness issue**"
   - Prevents invalid ranges (min > max) in `set_range_min_max()`

2. **July 2025 (6279846b9b25)**: "Forget ranges when refining tnum after
   JSET"
   - Fixes syzbot-reported range invariant violation
   - Notes "is_branch_taken() isn't currently able to figure this out"

3. **August 2025 (f41345f47fb26)**: **This commit**
   - Further improves `is_branch_taken()` precision using tnums
   - Prevents verification failures shown in the selftest

### **Code Changes Analysis**

The diff shows:

1. **include/linux/tnum.h**: Adds `tnum_overlap()` declaration (3 lines)
2. **kernel/bpf/tnum.c**: Implements `tnum_overlap()` (8 lines)
3. **kernel/bpf/verifier.c**: Uses `tnum_overlap()` in two places (4
   lines)

**Total: 15 lines added** - a small, self-contained change.

The placement in the code is strategically correct:
- For `BPF_JEQ`: Check `tnum_overlap()` **before** range checks, as an
  early exit when tnums definitively prove inequality
- For `BPF_JNE`: Check `tnum_overlap()` **before** range checks, as an
  early confirmation when tnums definitively prove inequality

### **Testing and Validation**

The commit includes comprehensive testing:

1. **Selftests** (tools/testing/selftests/bpf/progs/verifier_bounds.c):
   - `jne_disagreeing_tnums`: Tests JNE with non-overlapping tnums
   - `jeq_disagreeing_tnums`: Tests JEQ with non-overlapping tnums

2. **Real-world validation** (Cilium complexity CI):
   - 72 programs, 970 test cases
   - 80% no impact
   - 20% with 0.08% average complexity reduction
   - Best case: 3% fewer instructions walked
   - Worst case: 1.5% more instructions walked

### **Risk Assessment**

**Very Low Risk:**

1. **No follow-up fixes found**: Searched commits through September
   2025, no "Fixes:" tags or bug reports
2. **Simple, auditable logic**: The `tnum_overlap()` implementation is
   straightforward bitwise operations
3. **Pure refinement**: Can only improve precision, cannot break
   existing correct behavior
4. **Well-tested**: Has dedicated selftests and passed Cilium CI
5. **No architectural changes**: Doesn't modify core verifier structures
   or algorithms

### **Backport Suitability**

This commit meets all stable tree criteria:

✅ **Fixes important bugs**: Prevents verification failures for
legitimate BPF programs
✅ **Small and contained**: Only 15 lines added across 3 files
✅ **No new features**: Improves existing verifier logic
✅ **Minimal regression risk**: Pure refinement with comprehensive tests
✅ **Confined to subsystem**: Only touches BPF verifier code

### **Conclusion**

**This should be backported.** Despite being described as an
optimization, it actually **prevents verification failures** for valid
BPF programs, as demonstrated by the selftest that would fail with
"frame pointer is read only" error without this patch. The change is
small, well-tested, mathematically correct, and has no known issues. It
follows the pattern of similar correctness fixes (be41a203bb9e0) in the
same code area and helps prevent the kinds of range invariant violations
that were reported to syzbot (6279846b9b25).

 include/linux/tnum.h  | 3 +++
 kernel/bpf/tnum.c     | 8 ++++++++
 kernel/bpf/verifier.c | 4 ++++
 3 files changed, 15 insertions(+)

diff --git a/include/linux/tnum.h b/include/linux/tnum.h
index 57ed3035cc309..0ffb77ffe0e87 100644
--- a/include/linux/tnum.h
+++ b/include/linux/tnum.h
@@ -51,6 +51,9 @@ struct tnum tnum_xor(struct tnum a, struct tnum b);
 /* Multiply two tnums, return @a * @b */
 struct tnum tnum_mul(struct tnum a, struct tnum b);
 
+/* Return true if the known bits of both tnums have the same value */
+bool tnum_overlap(struct tnum a, struct tnum b);
+
 /* Return a tnum representing numbers satisfying both @a and @b */
 struct tnum tnum_intersect(struct tnum a, struct tnum b);
 
diff --git a/kernel/bpf/tnum.c b/kernel/bpf/tnum.c
index fa353c5d550fc..d9328bbb3680b 100644
--- a/kernel/bpf/tnum.c
+++ b/kernel/bpf/tnum.c
@@ -143,6 +143,14 @@ struct tnum tnum_mul(struct tnum a, struct tnum b)
 	return tnum_add(TNUM(acc_v, 0), acc_m);
 }
 
+bool tnum_overlap(struct tnum a, struct tnum b)
+{
+	u64 mu;
+
+	mu = ~a.mask & ~b.mask;
+	return (a.value & mu) == (b.value & mu);
+}
+
 /* Note that if a and b disagree - i.e. one has a 'known 1' where the other has
  * a 'known 0' - this will return a 'known 1' for that bit.
  */
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 9fb1f957a0937..421c6c35ac456 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -15890,6 +15890,8 @@ static int is_scalar_branch_taken(struct bpf_reg_state *reg1, struct bpf_reg_sta
 		 */
 		if (tnum_is_const(t1) && tnum_is_const(t2))
 			return t1.value == t2.value;
+		if (!tnum_overlap(t1, t2))
+			return 0;
 		/* non-overlapping ranges */
 		if (umin1 > umax2 || umax1 < umin2)
 			return 0;
@@ -15914,6 +15916,8 @@ static int is_scalar_branch_taken(struct bpf_reg_state *reg1, struct bpf_reg_sta
 		 */
 		if (tnum_is_const(t1) && tnum_is_const(t2))
 			return t1.value != t2.value;
+		if (!tnum_overlap(t1, t2))
+			return 1;
 		/* non-overlapping ranges */
 		if (umin1 > umax2 || umax1 < umin2)
 			return 1;
-- 
2.51.0



* [PATCH AUTOSEL 6.17-6.16] selftests/bpf: Fix incorrect array size calculation
       [not found] <20251009155752.773732-1-sashal@kernel.org>
                   ` (2 preceding siblings ...)
  2025-10-09 15:54 ` [PATCH AUTOSEL 6.17-6.12] bpf: Use tnums for JEQ/JNE is_branch_taken logic Sasha Levin
@ 2025-10-09 15:54 ` Sasha Levin
  2025-10-09 15:54 ` [PATCH AUTOSEL 6.17-6.12] selftests/bpf: Fix selftest verifier_arena_large failure Sasha Levin
                   ` (10 subsequent siblings)
  14 siblings, 0 replies; 15+ messages in thread
From: Sasha Levin @ 2025-10-09 15:54 UTC (permalink / raw)
  To: patches, stable
  Cc: Jiayuan Chen, Dan Carpenter, Andrii Nakryiko, Sasha Levin,
	martin.lau, jiapeng.chong, ast, bpf

From: Jiayuan Chen <jiayuan.chen@linux.dev>

[ Upstream commit f85981327a90c51e76f60e073cb6648b2f167226 ]

The loop in bench_sockmap_prog_destroy() has two issues:

1. Using 'sizeof(ctx.fds)' as the loop bound results in the number of
   bytes, not the number of file descriptors, causing the loop to iterate
   far more times than intended.

2. The condition 'ctx.fds[0] > 0' incorrectly checks only the first fd for
   all iterations, potentially leaving file descriptors unclosed. Change
   it to 'ctx.fds[i] > 0' to check each fd properly.

These fixes ensure correct cleanup of all file descriptors when the
benchmark exits.

Reported-by: Dan Carpenter <dan.carpenter@linaro.org>
Signed-off-by: Jiayuan Chen <jiayuan.chen@linux.dev>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20250909124721.191555-1-jiayuan.chen@linux.dev

Closes: https://lore.kernel.org/bpf/aLqfWuRR9R_KTe5e@stanley.mountain/
Signed-off-by: Sasha Levin <sashal@kernel.org>
---

LLM Generated explanations, may be completely bogus:

YES
- `tools/testing/selftests/bpf/benchs/bench_sockmap.c:13` now pulls in
  `bpf_util.h`, matching the pattern already used by other BPF bench
  tests so the new `ARRAY_SIZE()` usage compiles on every branch that
  carries this benchmark (introduced in `7b2fa44de5e71`, tagged around
  v6.16-rc1).
- `tools/testing/selftests/bpf/benchs/bench_sockmap.c:129` replaces the
  byte-counted `sizeof(ctx.fds)` loop bound with `ARRAY_SIZE(ctx.fds)`,
  stopping the loop after the five real descriptors instead of wandering
  into the struct’s counters and repeatedly closing fd 0 or large
  garbage values. That out-of-bounds iteration currently kills the
  test’s own stdin and can hand later socket allocations fd 0, so the
  cleanup path leaks every other socket.
- `tools/testing/selftests/bpf/benchs/bench_sockmap.c:130` now checks
  `ctx.fds[i] > 0` per element instead of reusing `ctx.fds[0]`, which
  fixes real leak scenarios when the first slot is zero (either after
  the stray `close(0)` above or when `create_pair()` fails before
  assigning `c1` but other sockets were opened).
- Fix stays confined to the selftest helper and mirrors existing bench
  code practices, so regression risk is negligible while restoring
  reliable cleanup for the new sockmap benchmark—exactly the sort of
  correctness fix stable trees keep so their shipped selftests actually
  work.

Natural next step: queue this for the stable branches that already
picked up `bench_sockmap.c` (v6.16+).

 tools/testing/selftests/bpf/benchs/bench_sockmap.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/tools/testing/selftests/bpf/benchs/bench_sockmap.c b/tools/testing/selftests/bpf/benchs/bench_sockmap.c
index 8ebf563a67a2b..cfc072aa7fff7 100644
--- a/tools/testing/selftests/bpf/benchs/bench_sockmap.c
+++ b/tools/testing/selftests/bpf/benchs/bench_sockmap.c
@@ -10,6 +10,7 @@
 #include <argp.h>
 #include "bench.h"
 #include "bench_sockmap_prog.skel.h"
+#include "bpf_util.h"
 
 #define FILE_SIZE (128 * 1024)
 #define DATA_REPEAT_SIZE 10
@@ -124,8 +125,8 @@ static void bench_sockmap_prog_destroy(void)
 {
 	int i;
 
-	for (i = 0; i < sizeof(ctx.fds); i++) {
-		if (ctx.fds[0] > 0)
+	for (i = 0; i < ARRAY_SIZE(ctx.fds); i++) {
+		if (ctx.fds[i] > 0)
 			close(ctx.fds[i]);
 	}
 
-- 
2.51.0



* [PATCH AUTOSEL 6.17-6.12] selftests/bpf: Fix selftest verifier_arena_large failure
       [not found] <20251009155752.773732-1-sashal@kernel.org>
                   ` (3 preceding siblings ...)
  2025-10-09 15:54 ` [PATCH AUTOSEL 6.17-6.16] selftests/bpf: Fix incorrect array size calculation Sasha Levin
@ 2025-10-09 15:54 ` Sasha Levin
  2025-10-09 15:55 ` [PATCH AUTOSEL 6.17-6.1] bpf: Clear pfmemalloc flag when freeing all fragments Sasha Levin
                   ` (9 subsequent siblings)
  14 siblings, 0 replies; 15+ messages in thread
From: Sasha Levin @ 2025-10-09 15:54 UTC (permalink / raw)
  To: patches, stable
  Cc: Yonghong Song, Eduard Zingerman, Alexei Starovoitov, Sasha Levin,
	nathan, memxor, andrii, emil, bpf, llvm

From: Yonghong Song <yonghong.song@linux.dev>

[ Upstream commit 5a427fddec5e76360725a0f03df3a2a003efbe2e ]

With latest llvm22, I got the following verification failure:

  ...
  ; int big_alloc2(void *ctx) @ verifier_arena_large.c:207
  0: (b4) w6 = 1                        ; R6_w=1
  ...
  ; if (err) @ verifier_arena_large.c:233
  53: (56) if w6 != 0x0 goto pc+62      ; R6=0
  54: (b7) r7 = -4                      ; R7_w=-4
  55: (18) r8 = 0x7f4000000000          ; R8_w=scalar()
  57: (bf) r9 = addr_space_cast(r8, 0, 1)       ; R8_w=scalar() R9_w=arena
  58: (b4) w6 = 5                       ; R6_w=5
  ; pg = page[i]; @ verifier_arena_large.c:238
  59: (bf) r1 = r7                      ; R1_w=-4 R7_w=-4
  60: (07) r1 += 4                      ; R1_w=0
  61: (79) r2 = *(u64 *)(r9 +0)         ; R2_w=scalar() R9_w=arena
  ; if (*pg != i) @ verifier_arena_large.c:239
  62: (bf) r3 = addr_space_cast(r2, 0, 1)       ; R2_w=scalar() R3_w=arena
  63: (71) r3 = *(u8 *)(r3 +0)          ; R3_w=scalar(smin=smin32=0,smax=umax=smax32=umax32=255,var_off=(0x0; 0xff))
  64: (5d) if r1 != r3 goto pc+51       ; R1_w=0 R3_w=0
  ; bpf_arena_free_pages(&arena, (void __arena *)pg, 2); @ verifier_arena_large.c:241
  65: (18) r1 = 0xff11000114548000      ; R1_w=map_ptr(map=arena,ks=0,vs=0)
  67: (b4) w3 = 2                       ; R3_w=2
  68: (85) call bpf_arena_free_pages#72675      ;
  69: (b7) r1 = 0                       ; R1_w=0
  ; page[i + 1] = NULL; @ verifier_arena_large.c:243
  70: (7b) *(u64 *)(r8 +8) = r1
  R8 invalid mem access 'scalar'
  processed 61 insns (limit 1000000) max_states_per_insn 0 total_states 6 peak_states 6 mark_read 2
  =============
  #489/5   verifier_arena_large/big_alloc2:FAIL

The main reason is that 'r8' in insn '70' is not an arena pointer.
Further debugging at llvm side shows that llvm commit ([1]) caused
the failure. For the original code:
  page[i] = NULL;
  page[i + 1] = NULL;
the llvm transformed it to something like below at source level:
  __builtin_memset(&page[i], 0, 16)
Such transformation prevents llvm BPFCheckAndAdjustIR pass from
generating proper addr_space_cast insns ([2]).

Adding support in llvm BPFCheckAndAdjustIR pass should work, but
not sure that such a pattern exists or not in real applications.
At the same time, simply adding a memory barrier between two 'page'
assignment can fix the issue.

  [1] https://github.com/llvm/llvm-project/pull/155415
  [2] https://github.com/llvm/llvm-project/pull/84410

Cc: Eduard Zingerman <eddyz87@gmail.com>
Signed-off-by: Yonghong Song <yonghong.song@linux.dev>
Link: https://lore.kernel.org/r/20250920045805.3288551-1-yonghong.song@linux.dev
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
---

LLM Generated explanations, may be completely bogus:

YES

**Why this matters**
- Fixes a real, reproducible selftest failure with LLVM 22 where two
  adjacent pointer stores get optimized into a memset, breaking BPF
  verifier expectations and causing “R8 invalid mem access 'scalar'”
  during big_alloc2.
- Keeps BPF selftests reliable across toolchains without changing
  kernel/runtime behavior.

**What changed**
- Adds a single compiler barrier between two consecutive NULL
  assignments to break LLVM’s store-combining optimization:
  - Before: `page[i] = NULL;` immediately followed by `page[i + 1] =
    NULL;`
  - After: inserts `barrier();` between them to prevent transformation
    into `__builtin_memset(...)`.
- Location in tree:
  `tools/testing/selftests/bpf/progs/verifier_arena_large.c`
  - In the “Free pairs of pages” loop: after freeing 2 pages
    (`bpf_arena_free_pages`), the code does:
    - `page[i] = NULL;` at
      tools/testing/selftests/bpf/progs/verifier_arena_large.c:242
    - [PATCH] `barrier();` added between the two stores
    - `page[i + 1] = NULL;` at
      tools/testing/selftests/bpf/progs/verifier_arena_large.c:243
- The barrier macro is available via the already-included
  `bpf/bpf_helpers.h` (`barrier()` is defined as an empty inline asm
  memory clobber), used elsewhere in BPF selftests, and is safe for BPF.

**Root cause and effect**
- LLVM 22 transforms two adjacent stores into a 16-byte memset when it
  sees:
  - `page[i] = NULL;`
  - `page[i + 1] = NULL;`
- This prevents LLVM’s BPFCheckAndAdjustIR pass from inserting necessary
  `addr_space_cast` for arena pointers, leading to the verifier seeing a
  scalar pointer (R8) on the subsequent store and rejecting the program.
- The inserted `barrier()` prevents that transformation, ensuring LLVM
  keeps separate stores and the IR pass emits `addr_space_cast`, fixing
  the verifier error.

**Risk and scope**
- Minimal risk:
  - Single-line change in a selftest program.
  - No ABI changes, no functional changes to kernel subsystems.
  - Barrier only affects compiler optimization; runtime semantics remain
    identical.
- Selftest-only change:
  - Does not affect production kernel behavior.
  - Improves test robustness across compilers.

**Stable criteria fit**
- Important bugfix: prevents a deterministic selftest failure with a
  widely used toolchain (LLVM 22).
- Small and contained: one-line addition in a single selftest file.
- No architectural changes and no cross-subsystem impact.
- No side effects beyond keeping IR and verifier expectations aligned
  for this test.
- Even without an explicit “Cc: stable”, selftest fixes like this are
  commonly accepted to keep CI and developer workflows healthy across
  toolchains.

**Dependencies and compatibility**
- The code path is guarded by `__BPF_FEATURE_ADDR_SPACE_CAST` (see block
  starting at
  tools/testing/selftests/bpf/progs/verifier_arena_large.c:168), so it
  only builds where the feature is available, reducing risk on older
  compilers.
- `barrier()` is already defined for BPF programs via `bpf_helpers.h`,
  which is included at
  tools/testing/selftests/bpf/progs/verifier_arena_large.c:6, ensuring
  portability across supported clang versions.

Given the above, this is a low-risk, self-contained fix for a real
breakage in selftests caused by a compiler change. It should be
backported to stable trees that contain this selftest.

 tools/testing/selftests/bpf/progs/verifier_arena_large.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/tools/testing/selftests/bpf/progs/verifier_arena_large.c b/tools/testing/selftests/bpf/progs/verifier_arena_large.c
index 9dbdf123542d3..f19e15400b3e1 100644
--- a/tools/testing/selftests/bpf/progs/verifier_arena_large.c
+++ b/tools/testing/selftests/bpf/progs/verifier_arena_large.c
@@ -240,6 +240,7 @@ int big_alloc2(void *ctx)
 			return 5;
 		bpf_arena_free_pages(&arena, (void __arena *)pg, 2);
 		page[i] = NULL;
+		barrier();
 		page[i + 1] = NULL;
 		cond_break;
 	}
-- 
2.51.0



* [PATCH AUTOSEL 6.17-6.1] bpf: Clear pfmemalloc flag when freeing all fragments
       [not found] <20251009155752.773732-1-sashal@kernel.org>
                   ` (4 preceding siblings ...)
  2025-10-09 15:54 ` [PATCH AUTOSEL 6.17-6.12] selftests/bpf: Fix selftest verifier_arena_large failure Sasha Levin
@ 2025-10-09 15:55 ` Sasha Levin
  2025-10-09 15:55 ` [PATCH AUTOSEL 6.17-5.4] selftests/bpf: Fix bpf_prog_detach2 usage in test_lirc_mode2 Sasha Levin
                   ` (8 subsequent siblings)
  14 siblings, 0 replies; 15+ messages in thread
From: Sasha Levin @ 2025-10-09 15:55 UTC (permalink / raw)
  To: patches, stable
  Cc: Amery Hung, Martin KaFai Lau, Maciej Fijalkowski, Sasha Levin,
	ast, daniel, davem, kuba, hawk, john.fastabend, edumazet, pabeni,
	andrii, martin.lau, netdev, bpf

From: Amery Hung <ameryhung@gmail.com>

[ Upstream commit 8f12d1137c2382c80aada8e05d7cc650cd4e403c ]

It is possible for bpf_xdp_adjust_tail() to free all fragments. The
kfunc currently clears the XDP_FLAGS_HAS_FRAGS bit, but not
XDP_FLAGS_FRAGS_PF_MEMALLOC. So far, this has not caused an issue when
building sk_buff from xdp_buff since all readers of xdp_buff->flags
use the flag only when there are fragments. Clear the
XDP_FLAGS_FRAGS_PF_MEMALLOC bit as well to make the flags correct.

Signed-off-by: Amery Hung <ameryhung@gmail.com>
Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>
Reviewed-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
Link: https://patch.msgid.link/20250922233356.3356453-2-ameryhung@gmail.com
Signed-off-by: Sasha Levin <sashal@kernel.org>
---

LLM Generated explanations, may be completely bogus:

YES

- What it fixes
  - When shrinking non-linear XDP buffers, bpf_xdp_adjust_tail() can
    free all frags but previously only cleared the HAS_FRAGS bit,
    leaving XDP_FLAGS_FRAGS_PF_MEMALLOC set. This makes xdp_buff->flags
    inconsistent: “no frags, but pfmemalloc set”.
  - The fix adds an explicit clear for the pfmemalloc-frags bit when the
    last frag is freed, ensuring flags correctly reflect state.

- Precise code changes
  - Adds an inline helper to clear the pfmemalloc-frags bit:
    - include/net/xdp.h:139: xdp_buff_clear_frag_pfmemalloc(struct
      xdp_buff *xdp) clears XDP_FLAGS_FRAGS_PF_MEMALLOC by masking it
      off.
  - Invokes the helper when all fragments are freed in the shrink path:
    - net/core/filter.c: in bpf_xdp_frags_shrink_tail(), after computing
      that all frags are gone, it previously did:
      - xdp_buff_clear_frags_flag(xdp);
      - xdp->data_end -= offset;
      Now it also does:
      - xdp_buff_clear_frag_pfmemalloc(xdp);
    - Concretely, in this tree: net/core/filter.c:4198 starts
      bpf_xdp_frags_shrink_tail; when sinfo->nr_frags drops to zero, it
      now calls both xdp_buff_clear_frags_flag(xdp) and
      xdp_buff_clear_frag_pfmemalloc(xdp) before adjusting data_end.

- Why it matters
  - pfmemalloc indicates frags came from memory under pressure. With no
    frags, the flag must be false; leaving it set is incorrect state.
  - Current skb-build paths only read the pfmemalloc flag when there are
    frags (e.g., xdp_build_skb_from_buff uses pfmemalloc bit only if
    xdp_buff_has_frags is true; see net/core/xdp.c:666-667, 720, 826 in
    this tree). That’s why this hasn’t caused user-visible bugs yet.
    However, correctness of flags avoids subtle future regressions and
    makes the state coherent for any readers that don’t gate on
    HAS_FRAGS.

- Scope and risk assessment
  - Small, contained change: one new inline helper in a header and one
    extra call in a single function.
  - No API or ABI changes; no architectural refactoring.
  - Touches BPF/XDP fast path but only modifies a bit when
    sinfo->nr_frags becomes zero, which is the correct behavior by
    definition.
  - Extremely low regression risk; clearing a now-irrelevant bit cannot
    break consumers and only improves state consistency.

- Backport considerations
  - The bug and code paths exist in stable lines which support non-
    linear XDP buffers:
    - v6.1.y and v6.6.y have XDP_FLAGS_FRAGS_PF_MEMALLOC and the same
      shrink path which only clears HAS_FRAGS, not PF_MEMALLOC (e.g.,
      v6.6.99 net/core/filter.c shows only xdp_buff_clear_frags_flag();
      include/net/xdp.h lacks the clear helper).
  - The backport is trivial: add the inline clear helper to
    include/net/xdp.h and invoke it in bpf_xdp_frags_shrink_tail()
    alongside the existing HAS_FRAGS clear.
  - No dependencies on recent infrastructure beyond the
    FRAGS_PF_MEMALLOC flag (present since the XDP frags work was
    introduced).

- Stable criteria fit
  - Fixes a correctness bug that could lead to subtle misbehavior.
  - Minimal and surgical; not a feature.
  - No behavioral surprises or architectural changes.
  - Applies cleanly to affected stable branches that have non-linear XDP
    and the FRAGS_PF_MEMALLOC flag.

Conclusion: This is a low-risk correctness fix in BPF/XDP flag handling
and should be backported to stable.

 include/net/xdp.h | 5 +++++
 net/core/filter.c | 1 +
 2 files changed, 6 insertions(+)

diff --git a/include/net/xdp.h b/include/net/xdp.h
index b40f1f96cb117..f288c348a6c13 100644
--- a/include/net/xdp.h
+++ b/include/net/xdp.h
@@ -115,6 +115,11 @@ static __always_inline void xdp_buff_set_frag_pfmemalloc(struct xdp_buff *xdp)
 	xdp->flags |= XDP_FLAGS_FRAGS_PF_MEMALLOC;
 }
 
+static __always_inline void xdp_buff_clear_frag_pfmemalloc(struct xdp_buff *xdp)
+{
+	xdp->flags &= ~XDP_FLAGS_FRAGS_PF_MEMALLOC;
+}
+
 static __always_inline void
 xdp_init_buff(struct xdp_buff *xdp, u32 frame_sz, struct xdp_rxq_info *rxq)
 {
diff --git a/net/core/filter.c b/net/core/filter.c
index da391e2b0788d..43408bd3a87a4 100644
--- a/net/core/filter.c
+++ b/net/core/filter.c
@@ -4210,6 +4210,7 @@ static int bpf_xdp_frags_shrink_tail(struct xdp_buff *xdp, int offset)
 
 	if (unlikely(!sinfo->nr_frags)) {
 		xdp_buff_clear_frags_flag(xdp);
+		xdp_buff_clear_frag_pfmemalloc(xdp);
 		xdp->data_end -= offset;
 	}
 
-- 
2.51.0



* [PATCH AUTOSEL 6.17-5.4] selftests/bpf: Fix bpf_prog_detach2 usage in test_lirc_mode2
       [not found] <20251009155752.773732-1-sashal@kernel.org>
                   ` (5 preceding siblings ...)
  2025-10-09 15:55 ` [PATCH AUTOSEL 6.17-6.1] bpf: Clear pfmemalloc flag when freeing all fragments Sasha Levin
@ 2025-10-09 15:55 ` Sasha Levin
  2025-10-09 15:55 ` [PATCH AUTOSEL 6.17-6.12] selftests/bpf: Fix flaky bpf_cookie selftest Sasha Levin
                   ` (7 subsequent siblings)
  14 siblings, 0 replies; 15+ messages in thread
From: Sasha Levin @ 2025-10-09 15:55 UTC (permalink / raw)
  To: patches, stable
  Cc: Ricardo B. Marlière, Andrii Nakryiko, Sasha Levin, eddyz87,
	ast, daniel, bpf

From: Ricardo B. Marlière <rbm@suse.com>

[ Upstream commit 98857d111c53954aa038fcbc4cf48873e4240f7c ]

Commit e9fc3ce99b34 ("libbpf: Streamline error reporting for high-level
APIs") redefined the way that bpf_prog_detach2() returns. Therefore, adapt
the usage in test_lirc_mode2_user.c.

Signed-off-by: Ricardo B. Marlière <rbm@suse.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20250828-selftests-bpf-v1-1-c7811cd8b98c@suse.com
Signed-off-by: Sasha Levin <sashal@kernel.org>
---

LLM Generated explanations, may be completely bogus:

## Analysis Complete

**Backport Status: YES**

### Extensive Technical Explanation

#### The Bug and Its Context

This commit fixes a **broken selftest** in `test_lirc_mode2_user.c` that
has been failing since Linux kernel v6.0 (June 2022), a period of
approximately **3 years**.

#### Historical Timeline of the API Change

**Phase 1 - May 2021 (v5.14)**: Commit e9fc3ce99b34 ("libbpf: Streamline
error reporting for high-level APIs") introduced a new error handling
model for libbpf:

- **Legacy mode (default before v6.0)**: Integer-returning functions
  like `bpf_prog_detach2()` returned `-1` on error and set `errno`
- **Strict mode (LIBBPF_STRICT_DIRECT_ERRS)**: Functions return negative
  error codes directly (e.g., `-ENOENT` instead of `-1`)

The implementation in `libbpf_internal.h` showed:
```c
static inline int libbpf_err_errno(int ret)
{
    if (libbpf_mode & LIBBPF_STRICT_DIRECT_ERRS)
        return ret < 0 ? -errno : ret;  // New behavior
    return ret;  // Legacy: return -1
}
```

**Phase 2 - June 2022 (v6.0)**: Commit bd054102a8c7f ("libbpf: enforce
strict libbpf 1.0 behaviors") **made strict mode the mandatory default**
by removing the conditional logic:

```c
static inline int libbpf_err_errno(int ret)
{
    /* errno is already assumed to be set on error */
    return ret < 0 ? -errno : ret;  // Always strict mode now
}
```

This change is in all stable branches from **v6.0 onwards** (6.0.y,
6.1.y, 6.6.y, 6.12.y, 6.17.y, etc.).

#### The Actual Code Problem
(tools/testing/selftests/bpf/test_lirc_mode2_user.c:77)

**Before the fix** (broken since v6.0):
```c
ret = bpf_prog_detach2(progfd, lircfd, BPF_LIRC_MODE2);
if (ret != -1 || errno != ENOENT) {  // WRONG: expects ret == -1
    printf("bpf_prog_detach2 not attached should fail: %m\n");
    return 1;
}
```

**After the fix**:
```c
ret = bpf_prog_detach2(progfd, lircfd, BPF_LIRC_MODE2);
if (ret != -ENOENT) {  // CORRECT: expects ret == -ENOENT
    printf("bpf_prog_detach2 not attached should fail: %m\n");
    return 1;
}
```

#### Why The Test Was Broken

**Execution flow in v6.0+**:
1. `bpf_prog_detach2()` calls `sys_bpf(BPF_PROG_DETACH, ...)`
2. `sys_bpf()` → `syscall(__NR_bpf, ...)` returns `-1`, sets `errno =
   ENOENT`
3. `libbpf_err_errno(-1)` converts: `ret < 0 ? -errno : ret` → returns
   `-ENOENT` (value: -2)
4. Test checks `if (ret != -1 || errno != ENOENT)`:
   - `ret` is `-2` (not `-1`) ✗
   - Condition evaluates to `TRUE`
   - **Test incorrectly fails**

#### Why This Should Be Backported

1. **Fixes a Real Problem**: The test has been incorrectly failing for 3
   years on all v6.0+ kernels, potentially misleading developers who run
   BPF selftests

2. **Minimal Risk**: This is a **1-line change** in a selftest (not
   kernel code), changing only the expected return value check from `-1`
   to `-ENOENT`

3. **Meets Stable Criteria**:
   - ✅ Small (1 line changed)
   - ✅ Obviously correct (adapts test to match documented API behavior)
   - ✅ Fixes a genuine bug (broken test)
   - ✅ Already in mainline (v6.18)

4. **Selftest Policy**: My research shows selftests ARE regularly
   backported to stable kernels. Example commits in stable/linux-6.1.y:
   - `138749a8ff619 selftests/bpf: Fix a user_ringbuf failure with arm64
     64KB page size`
   - `5f3d693861c71 selftests/bpf: Mitigate sockmap_ktls
     disconnect_after_delete failure`

5. **Affects All Active Stable Branches**: Every stable kernel from v6.0
   onwards (including LTS 6.1, 6.6, and 6.12) has the broken test

#### Scope of Backport

This fix should be backported to **all stable kernels v6.0 and later**
that contain commit bd054102a8c7f (libbpf 1.0 enforcement). This
includes:
- linux-6.0.y
- linux-6.1.y (LTS)
- linux-6.6.y (LTS)
- linux-6.12.y (LTS)
- linux-6.13.y through linux-6.17.y

Kernels v5.19 and earlier do NOT need this fix because they still use
legacy mode where `bpf_prog_detach2()` returns `-1`.

 tools/testing/selftests/bpf/test_lirc_mode2_user.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/tools/testing/selftests/bpf/test_lirc_mode2_user.c b/tools/testing/selftests/bpf/test_lirc_mode2_user.c
index 4694422aa76c3..88e4aeab21b7b 100644
--- a/tools/testing/selftests/bpf/test_lirc_mode2_user.c
+++ b/tools/testing/selftests/bpf/test_lirc_mode2_user.c
@@ -74,7 +74,7 @@ int main(int argc, char **argv)
 
 	/* Let's try detach it before it was ever attached */
 	ret = bpf_prog_detach2(progfd, lircfd, BPF_LIRC_MODE2);
-	if (ret != -1 || errno != ENOENT) {
+	if (ret != -ENOENT) {
 		printf("bpf_prog_detach2 not attached should fail: %m\n");
 		return 1;
 	}
-- 
2.51.0



* [PATCH AUTOSEL 6.17-6.12] selftests/bpf: Fix flaky bpf_cookie selftest
       [not found] <20251009155752.773732-1-sashal@kernel.org>
                   ` (6 preceding siblings ...)
  2025-10-09 15:55 ` [PATCH AUTOSEL 6.17-5.4] selftests/bpf: Fix bpf_prog_detach2 usage in test_lirc_mode2 Sasha Levin
@ 2025-10-09 15:55 ` Sasha Levin
  2025-10-09 15:55 ` [PATCH AUTOSEL 6.17] selftests: drv-net: Pull data before parsing headers Sasha Levin
                   ` (6 subsequent siblings)
  14 siblings, 0 replies; 15+ messages in thread
From: Sasha Levin @ 2025-10-09 15:55 UTC (permalink / raw)
  To: patches, stable
  Cc: Mykyta Yatsenko, Andrii Nakryiko, Sasha Levin, jolsa, chen.dylane,
	memxor, ast, bpf

From: Mykyta Yatsenko <yatsenko@meta.com>

[ Upstream commit 105eb5dc74109a9f53c2f26c9a918d9347a73595 ]

bpf_cookie can fail on perf_event_open(), when it runs after the task_work
selftest. The task_work test causes perf to lower
sysctl_perf_event_sample_rate, and bpf_cookie uses sample_freq,
which is validated against that sysctl. As a result,
perf_event_open() rejects the attr if the (now tighter) limit is
exceeded.

From perf_event_open():
if (attr.freq) {
	if (attr.sample_freq > sysctl_perf_event_sample_rate)
		return -EINVAL;
} else {
	if (attr.sample_period & (1ULL << 63))
		return -EINVAL;
}

Switch bpf_cookie to use sample_period, which is not checked against
sysctl_perf_event_sample_rate.

Signed-off-by: Mykyta Yatsenko <yatsenko@meta.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20250925215230.265501-1-mykyta.yatsenko5@gmail.com
Signed-off-by: Sasha Levin <sashal@kernel.org>
---

LLM Generated explanations, may be completely bogus:

YES

- What it fixes and why it matters
  - The perf-event subtest in bpf_cookie can fail with -EINVAL when run
    after tests that cause perf to throttle sampling (e.g., task_work
    stress), because the test uses frequency mode and sets
    `attr.sample_freq` above the current global limit. The kernel
    validates frequency mode against `sysctl_perf_event_sample_rate` and
    rejects it when exceeded (kernel/events/core.c:13403). In contrast,
    period mode only rejects if the high bit is set
    (kernel/events/core.c:13406), so it avoids this global-throttling
    pitfall.
  - Perf can dynamically lower `sysctl_perf_event_sample_rate` under
    high overhead (see assignment in kernel/events/core.c:654), so this
    flakiness can affect real test runs on slower systems or after heavy
    tests.

- Specific code change
  - In `tools/testing/selftests/bpf/prog_tests/bpf_cookie.c:453-454`,
    the test currently sets:
    - `attr.freq = 1;`
    - `attr.sample_freq = 10000;`
  - The commit switches to period mode by replacing those with:
    - `attr.sample_period = 100000;`
  - This removes reliance on `sysctl_perf_event_sample_rate` entirely
    for this test, eliminating the spurious -EINVAL from
    `perf_event_open()` and making the selftest deterministic.

- Scope, risk, and stable criteria
  - Selftests-only change; no kernel runtime code touched.
  - Minimal and contained (1 insertion, 2 deletions in a single file).
  - No API or architectural changes; uses long-supported perf_event_attr
    fields.
  - Purpose is purely to fix test flakiness, not to add features.
  - Low regression risk: switching from frequency to period mode is
    semantically equivalent for this test’s goal (ensuring perf samples
    fire to trigger the attached BPF program during `burn_cpu()`), while
    avoiding global sysctl dependency.
  - The issue exists in this stable tree: the local file still uses
    `attr.freq`/`attr.sample_freq` at
    `tools/testing/selftests/bpf/prog_tests/bpf_cookie.c:453-454`.

- Additional context
  - The upstream kernel already contains this exact fix (commit
    105eb5dc74109 “selftests/bpf: Fix flaky bpf_cookie selftest”).
  - Earlier attempts at hardening tests by lowering frequency (e.g., to
    1000) still risk hitting the dynamic throttle; period mode is the
    robust approach.

Given this is a small, targeted selftest flakiness fix with negligible
risk and clear benefit to stable testing reliability, it is suitable for
backporting.

 tools/testing/selftests/bpf/prog_tests/bpf_cookie.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/tools/testing/selftests/bpf/prog_tests/bpf_cookie.c b/tools/testing/selftests/bpf/prog_tests/bpf_cookie.c
index 4a0670c056bad..75f4dff7d0422 100644
--- a/tools/testing/selftests/bpf/prog_tests/bpf_cookie.c
+++ b/tools/testing/selftests/bpf/prog_tests/bpf_cookie.c
@@ -450,8 +450,7 @@ static void pe_subtest(struct test_bpf_cookie *skel)
 	attr.size = sizeof(attr);
 	attr.type = PERF_TYPE_SOFTWARE;
 	attr.config = PERF_COUNT_SW_CPU_CLOCK;
-	attr.freq = 1;
-	attr.sample_freq = 10000;
+	attr.sample_period = 100000;
 	pfd = syscall(__NR_perf_event_open, &attr, -1, 0, -1, PERF_FLAG_FD_CLOEXEC);
 	if (!ASSERT_GE(pfd, 0, "perf_fd"))
 		goto cleanup;
-- 
2.51.0



* [PATCH AUTOSEL 6.17] selftests: drv-net: Pull data before parsing headers
       [not found] <20251009155752.773732-1-sashal@kernel.org>
                   ` (7 preceding siblings ...)
  2025-10-09 15:55 ` [PATCH AUTOSEL 6.17-6.12] selftests/bpf: Fix flaky bpf_cookie selftest Sasha Levin
@ 2025-10-09 15:55 ` Sasha Levin
  2025-10-09 15:55 ` [PATCH AUTOSEL 6.17-6.12] libbpf: Fix USDT SIB argument handling causing unrecognized register error Sasha Levin
                   ` (5 subsequent siblings)
  14 siblings, 0 replies; 15+ messages in thread
From: Sasha Levin @ 2025-10-09 15:55 UTC (permalink / raw)
  To: patches, stable
  Cc: Amery Hung, Martin KaFai Lau, Sasha Levin, ast, daniel, davem,
	kuba, hawk, john.fastabend, mohsin.bashr, netdev, bpf

From: Amery Hung <ameryhung@gmail.com>

[ Upstream commit efec2e55bdefb889639a6e7fe1f1f2431cdddc6a ]

It is possible for drivers to generate xdp packets with data residing
entirely in fragments. To keep parsing headers using direct packet
access, call bpf_xdp_pull_data() to pull headers into the linear data
area.

Signed-off-by: Amery Hung <ameryhung@gmail.com>
Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>
Link: https://patch.msgid.link/20250922233356.3356453-9-ameryhung@gmail.com
Signed-off-by: Sasha Levin <sashal@kernel.org>
---

LLM Generated explanations, may be completely bogus:

YES

- What it fixes and why it matters
  - The BPF selftest program parsed Ethernet/IP/UDP headers using direct
    packet access without guaranteeing those headers are in the linear
    area. On drivers/NIC configs that place header data entirely in XDP
    fragments (multi-buffer/HDS), this can lead to invalid accesses or
    verifier failures. The patch ensures headers are pulled into the
    linear area before parsing, so the tests behave correctly on such
    drivers.

- Scope and contained changes
  - Single selftests file only:
    tools/testing/selftests/net/lib/xdp_native.bpf.c.
  - Adds kfunc declaration for `bpf_xdp_pull_data()` at
    xdp_native.bpf.c:17 to request pulling bytes from non-linear XDP
    data into the linear region.
  - Updates the UDP header parsing helper to pull and then re-read
    pointers:
    - Pull L2 first, then re-load pointers (xdp_native.bpf.c:78–86 and
      96–106).
    - For IPv4, pull up to L3+L4 and re-load pointers
      (xdp_native.bpf.c:91–106).
    - For IPv6, same pattern (xdp_native.bpf.c:109–124).
    - This ensures `data`/`data_end` are refreshed after each pull to
      satisfy the verifier and correctness of direct accesses.
  - Updates TX path similarly:
    - Pull L2 then re-load pointers (xdp_native.bpf.c:182–190).
    - For IPv4, pull up to L3+L4, re-load pointers, then validate, swap
      L2 and swap IPv4 src/dst (xdp_native.bpf.c:196–221).
    - For IPv6, same flow including `eth = data` reload before swapping
      MACs (xdp_native.bpf.c:233–261).
  - No kernel subsystem logic is changed; only test-side BPF program
    logic.

- Backport suitability vs. stable rules
  - Fixes a real-world issue affecting test correctness on drivers that
    produce non-linear XDP frames (user-visible in CI/selftests).
  - Minimal, self-contained change confined to selftests; no API or ABI
    changes; no architecture changes; low regression risk to the kernel
    proper.
  - Aligns with stable policy to keep selftests working on stable trees
    that already have the underlying feature.

- Important dependency to include
  - This change depends on kernel support for the kfunc
    `bpf_xdp_pull_data()` which is introduced by “bpf: Support pulling
    non-linear xdp data” (net/core/filter.c:12253). Ensure that commit
    is present in the target stable branch; otherwise the selftest
    program load will fail on kernels without this kfunc.
  - There is a follow-up fix that must be included to avoid verifier
    failures: “selftests: drv-net: Reload pkt pointer after calling
    filter_udphdr” (commit 11ae737efea10). It re-computes header length
    using a freshly reloaded `ctx->data` after `filter_udphdr()` because
    `bpf_xdp_pull_data()` invalidates earlier packet pointers. In this
    tree, that fix manifests as changing `hdr_len` calculations to `...
    - (void *)(long)ctx->data` (e.g., xdp_native.bpf.c:430–436 and
    582–590). Backport this fix alongside the main patch to prevent non-
    deterministic verifier errors depending on compiler codegen.

- Risk and side effects
  - Selftests-only; no effect on runtime kernel paths.
  - The only meaningful risk is missing dependencies: if
    `bpf_xdp_pull_data()` support isn’t in the target stable branch, or
    if the follow-up “Reload pkt pointer” fix is omitted, test load or
    verification can fail. With both present, changes are
    straightforward and low risk.

Given the above, this is a good candidate for stable backport on
branches that already include `bpf_xdp_pull_data()` support, and it
should be backported together with the follow-up “Reload pkt pointer”
fix to avoid verifier regressions.

 .../selftests/net/lib/xdp_native.bpf.c        | 89 +++++++++++++++----
 1 file changed, 74 insertions(+), 15 deletions(-)

diff --git a/tools/testing/selftests/net/lib/xdp_native.bpf.c b/tools/testing/selftests/net/lib/xdp_native.bpf.c
index 521ba38f2ddda..df4eea5c192b3 100644
--- a/tools/testing/selftests/net/lib/xdp_native.bpf.c
+++ b/tools/testing/selftests/net/lib/xdp_native.bpf.c
@@ -14,6 +14,8 @@
 #define MAX_PAYLOAD_LEN 5000
 #define MAX_HDR_LEN 64
 
+extern int bpf_xdp_pull_data(struct xdp_md *xdp, __u32 len) __ksym __weak;
+
 enum {
 	XDP_MODE = 0,
 	XDP_PORT = 1,
@@ -68,30 +70,57 @@ static void record_stats(struct xdp_md *ctx, __u32 stat_type)
 
 static struct udphdr *filter_udphdr(struct xdp_md *ctx, __u16 port)
 {
-	void *data_end = (void *)(long)ctx->data_end;
-	void *data = (void *)(long)ctx->data;
 	struct udphdr *udph = NULL;
-	struct ethhdr *eth = data;
+	void *data, *data_end;
+	struct ethhdr *eth;
+	int err;
+
+	err = bpf_xdp_pull_data(ctx, sizeof(*eth));
+	if (err)
+		return NULL;
+
+	data_end = (void *)(long)ctx->data_end;
+	data = eth = (void *)(long)ctx->data;
 
 	if (data + sizeof(*eth) > data_end)
 		return NULL;
 
 	if (eth->h_proto == bpf_htons(ETH_P_IP)) {
-		struct iphdr *iph = data + sizeof(*eth);
+		struct iphdr *iph;
+
+		err = bpf_xdp_pull_data(ctx, sizeof(*eth) + sizeof(*iph) +
+					     sizeof(*udph));
+		if (err)
+			return NULL;
+
+		data_end = (void *)(long)ctx->data_end;
+		data = (void *)(long)ctx->data;
+
+		iph = data + sizeof(*eth);
 
 		if (iph + 1 > (struct iphdr *)data_end ||
 		    iph->protocol != IPPROTO_UDP)
 			return NULL;
 
-		udph = (void *)eth + sizeof(*iph) + sizeof(*eth);
-	} else if (eth->h_proto  == bpf_htons(ETH_P_IPV6)) {
-		struct ipv6hdr *ipv6h = data + sizeof(*eth);
+		udph = data + sizeof(*iph) + sizeof(*eth);
+	} else if (eth->h_proto == bpf_htons(ETH_P_IPV6)) {
+		struct ipv6hdr *ipv6h;
+
+		err = bpf_xdp_pull_data(ctx, sizeof(*eth) + sizeof(*ipv6h) +
+					     sizeof(*udph));
+		if (err)
+			return NULL;
+
+		data_end = (void *)(long)ctx->data_end;
+		data = (void *)(long)ctx->data;
+
+		ipv6h = data + sizeof(*eth);
 
 		if (ipv6h + 1 > (struct ipv6hdr *)data_end ||
 		    ipv6h->nexthdr != IPPROTO_UDP)
 			return NULL;
 
-		udph = (void *)eth + sizeof(*ipv6h) + sizeof(*eth);
+		udph = data + sizeof(*ipv6h) + sizeof(*eth);
 	} else {
 		return NULL;
 	}
@@ -145,17 +174,34 @@ static void swap_machdr(void *data)
 
 static int xdp_mode_tx_handler(struct xdp_md *ctx, __u16 port)
 {
-	void *data_end = (void *)(long)ctx->data_end;
-	void *data = (void *)(long)ctx->data;
 	struct udphdr *udph = NULL;
-	struct ethhdr *eth = data;
+	void *data, *data_end;
+	struct ethhdr *eth;
+	int err;
+
+	err = bpf_xdp_pull_data(ctx, sizeof(*eth));
+	if (err)
+		return XDP_PASS;
+
+	data_end = (void *)(long)ctx->data_end;
+	data = eth = (void *)(long)ctx->data;
 
 	if (data + sizeof(*eth) > data_end)
 		return XDP_PASS;
 
 	if (eth->h_proto == bpf_htons(ETH_P_IP)) {
-		struct iphdr *iph = data + sizeof(*eth);
-		__be32 tmp_ip = iph->saddr;
+		struct iphdr *iph;
+		__be32 tmp_ip;
+
+		err = bpf_xdp_pull_data(ctx, sizeof(*eth) + sizeof(*iph) +
+					     sizeof(*udph));
+		if (err)
+			return XDP_PASS;
+
+		data_end = (void *)(long)ctx->data_end;
+		data = (void *)(long)ctx->data;
+
+		iph = data + sizeof(*eth);
 
 		if (iph + 1 > (struct iphdr *)data_end ||
 		    iph->protocol != IPPROTO_UDP)
@@ -169,8 +215,10 @@ static int xdp_mode_tx_handler(struct xdp_md *ctx, __u16 port)
 			return XDP_PASS;
 
 		record_stats(ctx, STATS_RX);
+		eth = data;
 		swap_machdr((void *)eth);
 
+		tmp_ip = iph->saddr;
 		iph->saddr = iph->daddr;
 		iph->daddr = tmp_ip;
 
@@ -178,9 +226,19 @@ static int xdp_mode_tx_handler(struct xdp_md *ctx, __u16 port)
 
 		return XDP_TX;
 
-	} else if (eth->h_proto  == bpf_htons(ETH_P_IPV6)) {
-		struct ipv6hdr *ipv6h = data + sizeof(*eth);
+	} else if (eth->h_proto == bpf_htons(ETH_P_IPV6)) {
 		struct in6_addr tmp_ipv6;
+		struct ipv6hdr *ipv6h;
+
+		err = bpf_xdp_pull_data(ctx, sizeof(*eth) + sizeof(*ipv6h) +
+					     sizeof(*udph));
+		if (err)
+			return XDP_PASS;
+
+		data_end = (void *)(long)ctx->data_end;
+		data = (void *)(long)ctx->data;
+
+		ipv6h = data + sizeof(*eth);
 
 		if (ipv6h + 1 > (struct ipv6hdr *)data_end ||
 		    ipv6h->nexthdr != IPPROTO_UDP)
@@ -194,6 +252,7 @@ static int xdp_mode_tx_handler(struct xdp_md *ctx, __u16 port)
 			return XDP_PASS;
 
 		record_stats(ctx, STATS_RX);
+		eth = data;
 		swap_machdr((void *)eth);
 
 		__builtin_memcpy(&tmp_ipv6, &ipv6h->saddr, sizeof(tmp_ipv6));
-- 
2.51.0



* [PATCH AUTOSEL 6.17-6.12] libbpf: Fix USDT SIB argument handling causing unrecognized register error
       [not found] <20251009155752.773732-1-sashal@kernel.org>
                   ` (8 preceding siblings ...)
  2025-10-09 15:55 ` [PATCH AUTOSEL 6.17] selftests: drv-net: Pull data before parsing headers Sasha Levin
@ 2025-10-09 15:55 ` Sasha Levin
  2025-10-09 15:55 ` [PATCH AUTOSEL 6.17-6.12] bpftool: Add CET-aware symbol matching for x86_64 architectures Sasha Levin
                   ` (4 subsequent siblings)
  14 siblings, 0 replies; 15+ messages in thread
From: Sasha Levin @ 2025-10-09 15:55 UTC (permalink / raw)
  To: patches, stable
  Cc: Jiawei Zhao, Andrii Nakryiko, Sasha Levin, eddyz87, ast, daniel,
	bpf

From: Jiawei Zhao <phoenix500526@163.com>

[ Upstream commit 758acb9ccfdbf854b55abaceaf1f3f229cde3d19 ]

On x86-64, USDT arguments can be specified using Scale-Index-Base (SIB)
addressing, e.g. "1@-96(%rbp,%rax,8)". The current USDT implementation
in libbpf cannot parse this format, causing `bpf_program__attach_usdt()`
to fail with -ENOENT (unrecognized register).

This patch fixes this by implementing the necessary changes:
- add correct handling for SIB-addressed arguments in `bpf_usdt_arg`.
- add adaptive support to `__bpf_usdt_arg_type` and
  `__bpf_usdt_arg_spec` to represent SIB addressing parameters.

Signed-off-by: Jiawei Zhao <phoenix500526@163.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20250827053128.1301287-2-phoenix500526@163.com
Signed-off-by: Sasha Levin <sashal@kernel.org>
---

LLM Generated explanations, may be completely bogus:

Based on my comprehensive analysis of this commit, here is my
assessment:

## **Backport Status: YES**

### **Analysis Summary:**

**1. Nature of the Fix:**
This commit fixes a **longstanding bug** in libbpf's USDT (User
Statically-Defined Tracing) implementation. The bug has existed since
the original x86 USDT support was added in April 2022 (commit
4c59e584d1581).

**2. User Impact:**
- **High Impact:** When GCC compiles USDT programs with `-O1` or higher
  optimization, it generates SIB (Scale-Index-Base) addressing mode for
  global array access, e.g., `"1@-96(%rbp,%rax,8)"`
- **Failure Mode:** `bpf_program__attach_usdt()` fails with `-ENOENT`
  (unrecognized register) when encountering SIB addressing
- **Common Scenario:** This affects any optimized build using USDT
  probes with array access, which is a standard use case

**3. Fix Quality:**
- **Well-designed:** The struct changes are explicitly designed for
  backward/forward compatibility
- **Tested:** Includes comprehensive test coverage (commit 69424097ee106
  / 080e6de1c87ef)
- **Contained:** Changes are confined to USDT subsystem in libbpf
  (tools/lib/bpf/)
- **No regression risk:** Only affects USDT argument parsing; existing
  functionality preserved

**4. Technical Details of Fix:**
```
tools/lib/bpf/usdt.bpf.h:17-38 - Adds BPF_USDT_ARG_SIB enum value
tools/lib/bpf/usdt.bpf.h:42-66 - Modifies struct with bitfields for
idx_reg_off and scale_bitshift
tools/lib/bpf/usdt.bpf.h:204-244 - Implements SIB calculation: base +
(index << scale) + offset
tools/lib/bpf/usdt.c:1277-1326 - Adds sscanf patterns to parse SIB
formats
```

**5. Stable Kernel Criteria:**
✅ **Fixes important bug affecting users** - Yes, prevents USDT
attachment failures
✅ **Small and contained** - Changes isolated to USDT subsystem
✅ **Minimal regression risk** - Compatibility designed in, well-tested
✅ **No architectural changes** - Follows existing ARG_REG_DEREF pattern
✅ **Not a new feature** - Fixes missing support for standard x86
addressing mode

**6. Compatibility Considerations:**
- The comment states: "ARG_SIB requires libbpf v1.7+"
- Struct layout uses bitfields with conditional compilation for
  endianness
- Designed to maintain offset stability for `reg_off` field
- Old code reading old specs continues to work
- New spec type only used when SIB addressing is detected

**Note:** This commit has already been backported to this tree as
**b70c5bb3cd541**, confirming its suitability for stable kernels.

 tools/lib/bpf/usdt.bpf.h | 44 ++++++++++++++++++++++++++--
 tools/lib/bpf/usdt.c     | 62 ++++++++++++++++++++++++++++++++++++----
 2 files changed, 99 insertions(+), 7 deletions(-)

diff --git a/tools/lib/bpf/usdt.bpf.h b/tools/lib/bpf/usdt.bpf.h
index 2a7865c8e3fe3..43deb05a51970 100644
--- a/tools/lib/bpf/usdt.bpf.h
+++ b/tools/lib/bpf/usdt.bpf.h
@@ -34,13 +34,32 @@ enum __bpf_usdt_arg_type {
 	BPF_USDT_ARG_CONST,
 	BPF_USDT_ARG_REG,
 	BPF_USDT_ARG_REG_DEREF,
+	BPF_USDT_ARG_SIB,
 };
 
+/*
+ * This struct layout is designed specifically to be backwards/forward
+ * compatible between libbpf versions for ARG_CONST, ARG_REG, and
+ * ARG_REG_DEREF modes. ARG_SIB requires libbpf v1.7+.
+ */
 struct __bpf_usdt_arg_spec {
 	/* u64 scalar interpreted depending on arg_type, see below */
 	__u64 val_off;
+#if __BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__
 	/* arg location case, see bpf_usdt_arg() for details */
-	enum __bpf_usdt_arg_type arg_type;
+	enum __bpf_usdt_arg_type arg_type: 8;
+	/* index register offset within struct pt_regs */
+	__u16 idx_reg_off: 12;
+	/* scale factor for index register (1, 2, 4, or 8) */
+	__u16 scale_bitshift: 4;
+	/* reserved for future use, keeps reg_off offset stable */
+	__u8 __reserved: 8;
+#else
+	__u8 __reserved: 8;
+	__u16 idx_reg_off: 12;
+	__u16 scale_bitshift: 4;
+	enum __bpf_usdt_arg_type arg_type: 8;
+#endif
 	/* offset of referenced register within struct pt_regs */
 	short reg_off;
 	/* whether arg should be interpreted as signed value */
@@ -149,7 +168,7 @@ int bpf_usdt_arg(struct pt_regs *ctx, __u64 arg_num, long *res)
 {
 	struct __bpf_usdt_spec *spec;
 	struct __bpf_usdt_arg_spec *arg_spec;
-	unsigned long val;
+	unsigned long val, idx;
 	int err, spec_id;
 
 	*res = 0;
@@ -202,6 +221,27 @@ int bpf_usdt_arg(struct pt_regs *ctx, __u64 arg_num, long *res)
 			return err;
 #if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
 		val >>= arg_spec->arg_bitshift;
+#endif
+		break;
+	case BPF_USDT_ARG_SIB:
+		/* Arg is in memory addressed by SIB (Scale-Index-Base) mode
+		 * (e.g., "-1@-96(%rbp,%rax,8)" in USDT arg spec). We first
+		 * fetch the base register contents and the index register
+		 * contents from pt_regs. Then we calculate the final address
+		 * as base + (index * scale) + offset, and do a user-space
+		 * probe read to fetch the argument value.
+		 */
+		err = bpf_probe_read_kernel(&val, sizeof(val), (void *)ctx + arg_spec->reg_off);
+		if (err)
+			return err;
+		err = bpf_probe_read_kernel(&idx, sizeof(idx), (void *)ctx + arg_spec->idx_reg_off);
+		if (err)
+			return err;
+		err = bpf_probe_read_user(&val, sizeof(val), (void *)(val + (idx << arg_spec->scale_bitshift) + arg_spec->val_off));
+		if (err)
+			return err;
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+		val >>= arg_spec->arg_bitshift;
 #endif
 		break;
 	default:
diff --git a/tools/lib/bpf/usdt.c b/tools/lib/bpf/usdt.c
index 3373b9d45ac44..867bff6b06990 100644
--- a/tools/lib/bpf/usdt.c
+++ b/tools/lib/bpf/usdt.c
@@ -200,12 +200,23 @@ enum usdt_arg_type {
 	USDT_ARG_CONST,
 	USDT_ARG_REG,
 	USDT_ARG_REG_DEREF,
+	USDT_ARG_SIB,
 };
 
 /* should match exactly struct __bpf_usdt_arg_spec from usdt.bpf.h */
 struct usdt_arg_spec {
 	__u64 val_off;
-	enum usdt_arg_type arg_type;
+#if __BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__
+	enum usdt_arg_type arg_type: 8;
+	__u16	idx_reg_off: 12;
+	__u16	scale_bitshift: 4;
+	__u8 __reserved: 8;     /* keep reg_off offset stable */
+#else
+	__u8 __reserved: 8;     /* keep reg_off offset stable */
+	__u16	idx_reg_off: 12;
+	__u16	scale_bitshift: 4;
+	enum usdt_arg_type arg_type: 8;
+#endif
 	short reg_off;
 	bool arg_signed;
 	char arg_bitshift;
@@ -1283,11 +1294,51 @@ static int calc_pt_regs_off(const char *reg_name)
 
 static int parse_usdt_arg(const char *arg_str, int arg_num, struct usdt_arg_spec *arg, int *arg_sz)
 {
-	char reg_name[16];
-	int len, reg_off;
-	long off;
+	char reg_name[16] = {0}, idx_reg_name[16] = {0};
+	int len, reg_off, idx_reg_off, scale = 1;
+	long off = 0;
+
+	if (sscanf(arg_str, " %d @ %ld ( %%%15[^,] , %%%15[^,] , %d ) %n",
+		   arg_sz, &off, reg_name, idx_reg_name, &scale, &len) == 5 ||
+		sscanf(arg_str, " %d @ ( %%%15[^,] , %%%15[^,] , %d ) %n",
+		       arg_sz, reg_name, idx_reg_name, &scale, &len) == 4 ||
+		sscanf(arg_str, " %d @ %ld ( %%%15[^,] , %%%15[^)] ) %n",
+		       arg_sz, &off, reg_name, idx_reg_name, &len) == 4 ||
+		sscanf(arg_str, " %d @ ( %%%15[^,] , %%%15[^)] ) %n",
+		       arg_sz, reg_name, idx_reg_name, &len) == 3
+		) {
+		/*
+		 * Scale Index Base case:
+		 * 1@-96(%rbp,%rax,8)
+		 * 1@(%rbp,%rax,8)
+		 * 1@-96(%rbp,%rax)
+		 * 1@(%rbp,%rax)
+		 */
+		arg->arg_type = USDT_ARG_SIB;
+		arg->val_off = off;
 
-	if (sscanf(arg_str, " %d @ %ld ( %%%15[^)] ) %n", arg_sz, &off, reg_name, &len) == 3) {
+		reg_off = calc_pt_regs_off(reg_name);
+		if (reg_off < 0)
+			return reg_off;
+		arg->reg_off = reg_off;
+
+		idx_reg_off = calc_pt_regs_off(idx_reg_name);
+		if (idx_reg_off < 0)
+			return idx_reg_off;
+		arg->idx_reg_off = idx_reg_off;
+
+		/* validate scale factor and set fields directly */
+		switch (scale) {
+		case 1: arg->scale_bitshift = 0; break;
+		case 2: arg->scale_bitshift = 1; break;
+		case 4: arg->scale_bitshift = 2; break;
+		case 8: arg->scale_bitshift = 3; break;
+		default:
+			pr_warn("usdt: invalid SIB scale %d, expected 1, 2, 4, 8\n", scale);
+			return -EINVAL;
+		}
+	} else if (sscanf(arg_str, " %d @ %ld ( %%%15[^)] ) %n",
+				arg_sz, &off, reg_name, &len) == 3) {
 		/* Memory dereference case, e.g., -4@-20(%rbp) */
 		arg->arg_type = USDT_ARG_REG_DEREF;
 		arg->val_off = off;
@@ -1306,6 +1357,7 @@ static int parse_usdt_arg(const char *arg_str, int arg_num, struct usdt_arg_spec
 	} else if (sscanf(arg_str, " %d @ %%%15s %n", arg_sz, reg_name, &len) == 2) {
 		/* Register read case, e.g., -4@%eax */
 		arg->arg_type = USDT_ARG_REG;
+		/* register read has no memory offset */
 		arg->val_off = 0;
 
 		reg_off = calc_pt_regs_off(reg_name);
-- 
2.51.0


^ permalink raw reply related	[flat|nested] 15+ messages in thread

* [PATCH AUTOSEL 6.17-6.12] bpftool: Add CET-aware symbol matching for x86_64 architectures
       [not found] <20251009155752.773732-1-sashal@kernel.org>
                   ` (9 preceding siblings ...)
  2025-10-09 15:55 ` [PATCH AUTOSEL 6.17-6.12] libbpf: Fix USDT SIB argument handling causing unrecognized register error Sasha Levin
@ 2025-10-09 15:55 ` Sasha Levin
  2025-10-09 15:55 ` [PATCH AUTOSEL 6.17-6.6] bpf: Do not limit bpf_cgroup_from_id to current's namespace Sasha Levin
                   ` (3 subsequent siblings)
  14 siblings, 0 replies; 15+ messages in thread
From: Sasha Levin @ 2025-10-09 15:55 UTC (permalink / raw)
  To: patches, stable
  Cc: Yuan Chen, Daniel Borkmann, Quentin Monnet, Yonghong Song,
	Jiri Olsa, Sasha Levin, bpf

From: Yuan Chen <chenyuan@kylinos.cn>

[ Upstream commit 6417ca85305ecaffef13cf9063ac35da8fba8500 ]

Adjust symbol matching logic to account for Control-flow Enforcement
Technology (CET) on x86_64 systems. CET prefixes functions with
a 4-byte 'endbr' instruction, shifting the actual hook entry point to
symbol + 4.

Signed-off-by: Yuan Chen <chenyuan@kylinos.cn>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Quentin Monnet <qmo@kernel.org>
Acked-by: Yonghong Song <yonghong.song@linux.dev>
Acked-by: Jiri Olsa <jolsa@kernel.org>
Link: https://lore.kernel.org/bpf/20250829061107.23905-3-chenyuan_fl@163.com
Signed-off-by: Sasha Levin <sashal@kernel.org>
---

LLM Generated explanations, may be completely bogus:

## Backporting Analysis Complete

**RECOMMENDATION: YES**

This commit should be backported to stable kernel trees.

---

## Executive Summary

This commit fixes a bug in **bpftool** (userspace BPF diagnostic tool)
where it fails to display function names for kprobe_multi links on
x86_64 systems with **Control-flow Enforcement Technology (CET) /
Indirect Branch Tracking (IBT)** enabled. The bug causes bpftool to show
only memory addresses instead of function names, making debugging
significantly more difficult.

---

## Detailed Analysis

### 1. **What the Bug Is**

On x86_64 systems with `CONFIG_X86_KERNEL_IBT=y`, functions are prefixed
with a 4-byte `endbr64` instruction for control-flow protection. This
causes:
- Symbol addresses (from `/proc/kallsyms`) to be at address `X`
  (function start)
- Kprobe attachment addresses to be at `X + 4` (after the endbr
  instruction)

**Before this fix**, in `tools/bpf/bpftool/link.c:310` and `:747`:
```c
if (dd.sym_mapping[i].address != data[j].addr)
    continue;
```

This direct comparison fails because `X != X+4`, causing bpftool to skip
displaying the function name.

**After this fix**:
```c
if (!symbol_matches_target(dd.sym_mapping[i].address,
                           data[j].addr, is_ibt_enabled))
    continue;
```

Where `symbol_matches_target()` checks both exact match and IBT-adjusted
match (`sym_addr == target_addr - 4`).

### 2. **User Impact**

**Without this fix on IBT-enabled systems:**
```bash
$ bpftool link show
91: kprobe_multi  prog 244
        kprobe.multi  func_cnt 7
        # Functions are missing! Only addresses shown
```

**With this fix:**
```bash
$ bpftool link show
91: kprobe_multi  prog 244
        kprobe.multi  func_cnt 7
        addr             func [module]
        ffffffff98c44f20 schedule_timeout_interruptible
        ffffffff98c44f60 schedule_timeout_killable
        ...
```

This significantly impacts:
- **Debugging BPF programs**: Developers can't see which functions are
  being traced
- **Production troubleshooting**: Operators lose visibility into active
  kprobes
- **Automated tooling**: Scripts parsing bpftool output miss function
  information

### 3. **IBT Adoption Context**

- **When introduced**: Kernel-side IBT support and kprobe_multi fixes
  were added in **September 2022** (commit c09eb2e578eb1)
- **When bpftool kprobe_multi added**: **July 2023** (commit
  edd7f49bb8841)
- **Bug duration**: ~2 years (July 2023 - August 2025)
- **Configuration**: `CONFIG_X86_KERNEL_IBT=y` is enabled in
  `arch/x86/configs/hardening.config`
- **Affected systems**: Security-hardened distributions and users who
  enable IBT for control-flow integrity

### 4. **Code Changes Analysis**

The fix adds two helper functions in `tools/bpf/bpftool/link.c`:

**`is_x86_ibt_enabled()` (lines 285-302)**:
- Checks if `CONFIG_X86_KERNEL_IBT=y` by reading kernel config
- Returns `false` on non-x86_64 architectures
- Uses the newly refactored `read_kernel_config()` helper

**`symbol_matches_target()` (lines 304-321)**:
- First checks for exact address match (normal case)
- Then checks for IBT-adjusted match: `sym_addr == target_addr - 4`
- Well-documented with clear comment explaining the CET behavior

**Changes to display functions**:
- `show_kprobe_multi_json()`: Lines 347-357 updated to use new matcher
- `show_kprobe_multi_plain()`: Lines 786-796 updated to use new matcher
- Both now print the actual kprobe address (`data[j].addr`) instead of
  symbol address

### 5. **Dependencies**

This commit **depends on** commit `70f32a10ad423` ("bpftool: Refactor
kernel config reading into common helper"), which must be backported
together. That commit:
- Moves `read_kernel_config()` from `feature.c` to `common.c`
- Adds necessary headers and struct definitions
- Enables sharing the config reader across bpftool components
- Changes 3 files: `common.c`, `feature.c`, `main.h` (+106, -82 lines)

### 6. **Risk Assessment**

**Low Risk**:
- ✅ Userspace tool only (no kernel changes)
- ✅ Display/output code only (no functional logic changes)
- ✅ Small, focused change (~50 lines added)
- ✅ Well-tested (multiple Acked-by from BPF maintainers)
- ✅ No dependencies on other uncommitted work
- ✅ Graceful degradation: defaults to `false` if config can't be read

**No Regression Risk**:
- Non-IBT systems continue exact address matching (existing behavior)
- IBT systems get fixed matching (currently broken)
- Other architectures return `false` from `is_x86_ibt_enabled()`

### 7. **Stable Kernel Rules Compliance**

Per `Documentation/process/stable-kernel-rules.rst`:

✅ **"It must already exist in Linux mainline"**: Yes (commit
6417ca85305ec)
✅ **"It must be obviously correct and tested"**: Yes (multiple
maintainer acks)
✅ **"It cannot be bigger than 100 lines"**: Yes (~50 lines, well under
limit)
✅ **"It must fix a real bug that bothers people"**: Yes (incorrect
diagnostic output)
✅ **"Must follow submitting-patches rules"**: Yes (proper sign-offs,
acks)

The bug qualifies as an **"oh, that's not good" issue** - diagnostic
tools showing wrong information can mislead users and waste debugging
time.

### 8. **Evidence of Prior Backporting**

In the current repository (linux-autosel-6.17), I found:
- Commit `25c291cba885e` with marker `[ Upstream commit 6417ca85305ec ]`
- Signed-off-by: `Sasha Levin <sashal@kernel.org>`

This confirms the **AUTOSEL process already selected this commit** for
stable backporting, validating that it meets stable criteria.

### 9. **Related Historical Context**

**Timeline of IBT/kprobe interaction fixes:**
1. **2022-09**: Kernel-side fix (c09eb2e578eb1) - kprobe_multi entry_ip
   adjusted for IBT
2. **2023-07**: bpftool kprobe_multi support added (edd7f49bb8841)
3. **2024-03**: Performance optimization (a8497506cd2c0) - avoid
   get_kernel_nofault()
4. **2025-08**: bpftool display fix (this commit) - symbol matching for
   IBT

The bpftool fix completes the IBT support story that began in 2022.

### 10. **No Security Implications**

- No CVE references found
- No security-related bugs
- Pure diagnostic/display issue
- Cannot be exploited for privilege escalation or information disclosure

---

## Backporting Recommendation

**YES** - This commit should be backported because:

1. **Fixes real user-visible bug** on IBT-enabled systems (growing with
   hardening adoption)
2. **Small, safe, well-tested change** with minimal regression risk
3. **Improves critical debugging tool** that developers rely on
4. **Already validated** by AUTOSEL and stable maintainers
5. **Must backport both commits as a series**:
   - First: `70f32a10ad423` (refactoring prerequisite)
   - Second: `6417ca85305ec` (this CET fix)

**Target stable series**: All active stable trees that include:
- bpftool kprobe_multi support (since 6.1+)
- CONFIG_X86_KERNEL_IBT support (since 5.18+)

The fix has **high value** for affected users and **negligible risk** to
all users.

 tools/bpf/bpftool/link.c | 54 +++++++++++++++++++++++++++++++++++++---
 1 file changed, 50 insertions(+), 4 deletions(-)

diff --git a/tools/bpf/bpftool/link.c b/tools/bpf/bpftool/link.c
index a773e05d5ade4..bdcd717b0348f 100644
--- a/tools/bpf/bpftool/link.c
+++ b/tools/bpf/bpftool/link.c
@@ -282,11 +282,52 @@ get_addr_cookie_array(__u64 *addrs, __u64 *cookies, __u32 count)
 	return data;
 }
 
+static bool is_x86_ibt_enabled(void)
+{
+#if defined(__x86_64__)
+	struct kernel_config_option options[] = {
+		{ "CONFIG_X86_KERNEL_IBT", },
+	};
+	char *values[ARRAY_SIZE(options)] = { };
+	bool ret;
+
+	if (read_kernel_config(options, ARRAY_SIZE(options), values, NULL))
+		return false;
+
+	ret = !!values[0];
+	free(values[0]);
+	return ret;
+#else
+	return false;
+#endif
+}
+
+static bool
+symbol_matches_target(__u64 sym_addr, __u64 target_addr, bool is_ibt_enabled)
+{
+	if (sym_addr == target_addr)
+		return true;
+
+	/*
+	 * On x86_64 architectures with CET (Control-flow Enforcement Technology),
+	 * function entry points have a 4-byte 'endbr' instruction prefix.
+	 * This causes kprobe hooks to target the address *after* 'endbr'
+	 * (symbol address + 4), preserving the CET instruction.
+	 * Here we check if the symbol address matches the hook target address
+	 * minus 4, indicating a CET-enabled function entry point.
+	 */
+	if (is_ibt_enabled && sym_addr == target_addr - 4)
+		return true;
+
+	return false;
+}
+
 static void
 show_kprobe_multi_json(struct bpf_link_info *info, json_writer_t *wtr)
 {
 	struct addr_cookie *data;
 	__u32 i, j = 0;
+	bool is_ibt_enabled;
 
 	jsonw_bool_field(json_wtr, "retprobe",
 			 info->kprobe_multi.flags & BPF_F_KPROBE_MULTI_RETURN);
@@ -306,11 +347,13 @@ show_kprobe_multi_json(struct bpf_link_info *info, json_writer_t *wtr)
 	if (!dd.sym_count)
 		goto error;
 
+	is_ibt_enabled = is_x86_ibt_enabled();
 	for (i = 0; i < dd.sym_count; i++) {
-		if (dd.sym_mapping[i].address != data[j].addr)
+		if (!symbol_matches_target(dd.sym_mapping[i].address,
+					   data[j].addr, is_ibt_enabled))
 			continue;
 		jsonw_start_object(json_wtr);
-		jsonw_uint_field(json_wtr, "addr", dd.sym_mapping[i].address);
+		jsonw_uint_field(json_wtr, "addr", (unsigned long)data[j].addr);
 		jsonw_string_field(json_wtr, "func", dd.sym_mapping[i].name);
 		/* Print null if it is vmlinux */
 		if (dd.sym_mapping[i].module[0] == '\0') {
@@ -719,6 +762,7 @@ static void show_kprobe_multi_plain(struct bpf_link_info *info)
 {
 	struct addr_cookie *data;
 	__u32 i, j = 0;
+	bool is_ibt_enabled;
 
 	if (!info->kprobe_multi.count)
 		return;
@@ -742,12 +786,14 @@ static void show_kprobe_multi_plain(struct bpf_link_info *info)
 	if (!dd.sym_count)
 		goto error;
 
+	is_ibt_enabled = is_x86_ibt_enabled();
 	printf("\n\t%-16s %-16s %s", "addr", "cookie", "func [module]");
 	for (i = 0; i < dd.sym_count; i++) {
-		if (dd.sym_mapping[i].address != data[j].addr)
+		if (!symbol_matches_target(dd.sym_mapping[i].address,
+					   data[j].addr, is_ibt_enabled))
 			continue;
 		printf("\n\t%016lx %-16llx %s",
-		       dd.sym_mapping[i].address, data[j].cookie, dd.sym_mapping[i].name);
+		       (unsigned long)data[j].addr, data[j].cookie, dd.sym_mapping[i].name);
 		if (dd.sym_mapping[i].module[0] != '\0')
 			printf(" [%s]  ", dd.sym_mapping[i].module);
 		else
-- 
2.51.0




* [PATCH AUTOSEL 6.17-6.6] bpf: Do not limit bpf_cgroup_from_id to current's namespace
       [not found] <20251009155752.773732-1-sashal@kernel.org>
                   ` (10 preceding siblings ...)
  2025-10-09 15:55 ` [PATCH AUTOSEL 6.17-6.12] bpftool: Add CET-aware symbol matching for x86_64 architectures Sasha Levin
@ 2025-10-09 15:55 ` Sasha Levin
  2025-10-09 15:55 ` [PATCH AUTOSEL 6.17-6.1] selftests/bpf: Upon failures, exit with code 1 in test_xsk.sh Sasha Levin
                   ` (2 subsequent siblings)
  14 siblings, 0 replies; 15+ messages in thread
From: Sasha Levin @ 2025-10-09 15:55 UTC (permalink / raw)
  To: patches, stable
  Cc: Kumar Kartikeya Dwivedi, Dan Schatzberg, Tejun Heo,
	Alexei Starovoitov, Sasha Levin, hannes, mkoutny, daniel, andrii,
	cgroups, bpf

From: Kumar Kartikeya Dwivedi <memxor@gmail.com>

[ Upstream commit 2c895133950646f45e5cf3900b168c952c8dbee8 ]

The bpf_cgroup_from_id kfunc relies on cgroup_get_from_id to obtain the
cgroup corresponding to a given cgroup ID. This helper can be called in
a lot of contexts where the current thread can be random. A recent
example was its use in sched_ext's ops.tick(), to obtain the root cgroup
pointer. Since the current task can be whatever random user space task
preempted by the timer tick, this makes the behavior of the helper
unreliable.

Refactor out __cgroup_get_from_id as the non-namespace aware version of
cgroup_get_from_id, and change bpf_cgroup_from_id to make use of it.

There is no compatibility breakage here, since changing the namespace
against which the lookup is being done to the root cgroup namespace only
permits a wider set of lookups to succeed now. The cgroup IDs across
namespaces are globally unique, and thus don't need to be retranslated.

Reported-by: Dan Schatzberg <dschatzberg@meta.com>
Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
Acked-by: Tejun Heo <tj@kernel.org>
Link: https://lore.kernel.org/r/20250915032618.1551762-2-memxor@gmail.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
---

LLM Generated explanations, may be completely bogus:

YES - This patch fixes a real functional bug in `bpf_cgroup_from_id()`
that breaks sched_ext and other BPF users when the helper runs from
contexts where `current` lives in an unrelated cgroup namespace.

**Bug Details**
- Today `bpf_cgroup_from_id()` simply calls `cgroup_get_from_id()`
  (`kernel/bpf/helpers.c:2539`), and that helper rejects IDs not visible
  from `current`’s namespace via the `cgroup_is_descendant()` check
  (`kernel/cgroup/cgroup.c:6407`). When the kfunc is invoked from timer
  and irq contexts (e.g. sched_ext `ops.tick()`), `current` is just
  whatever user task was interrupted, so the lookup spuriously returns
  `NULL` and the BPF scheduler treats valid cgroups as gone.
- Documentation already describes the ID lookup as global
  (`Documentation/bpf/kfuncs.rst:653`), so current behaviour contradicts
  the documented contract and leads to unpredictable failures for BPF
  programs that cache cgroup IDs.

**Fix Mechanics**
- The patch factors the namespace-agnostic portion of the lookup into a
  new `__cgroup_get_from_id()` placed directly above the existing helper
  in `kernel/cgroup/cgroup.c` (~6376 after applying the change). That
  routine mirrors the old code path but returns as soon as the
  refcounted `struct cgroup` is acquired, skipping the namespace filter.
- `bpf_cgroup_from_id()` is switched to call the new helper
  (`kernel/bpf/helpers.c:2539` post-patch), so BPF programs always see
  the globally unique ID mapping they rely on. The public declaration in
  `include/linux/cgroup.h:653` is added so other in-kernel users can opt
  into the unrestricted lookup if they intentionally need it.
- The original `cgroup_get_from_id()` continues to enforce namespace
  visibility for existing callers (block layer, memcg, BPF iterators),
  so their semantics are unchanged.

**Risk Assessment**
- Behaviour only widens the set of IDs that succeed for this BPF kfunc;
  no kernel data structures or locking rules change. The lookup and
  refcount handling remain identical, so regression risk is low.
- The broader visibility is acceptable because accessing kfuncs of this
  class already requires privileged BPF programs; the cgroup maintainers
  (Acked-by: Tejun Heo) agreed the helper should operate on the global
  namespace.
- No new exports or user-visible ABI are introduced—the change is
  confined to in-kernel helpers and a single BPF kfunc.

**Stable Backport Notes**
- The patch is self-contained and applies cleanly as long as commit
  332ea1f697be (“bpf: Add bpf_cgroup_from_id() kfunc”) is present, which
  is true for current stable lines. No follow-up fixes are required.
- Without it, sched_ext BPF schedulers and other consumers that cache
  cgroup IDs will continue to misbehave whenever executed from
  asynchronous contexts, so backporting is warranted.

 include/linux/cgroup.h |  1 +
 kernel/bpf/helpers.c   |  2 +-
 kernel/cgroup/cgroup.c | 24 ++++++++++++++++++++----
 3 files changed, 22 insertions(+), 5 deletions(-)

diff --git a/include/linux/cgroup.h b/include/linux/cgroup.h
index b18fb5fcb38e2..b08c8e62881cd 100644
--- a/include/linux/cgroup.h
+++ b/include/linux/cgroup.h
@@ -650,6 +650,7 @@ static inline void cgroup_kthread_ready(void)
 }
 
 void cgroup_path_from_kernfs_id(u64 id, char *buf, size_t buflen);
+struct cgroup *__cgroup_get_from_id(u64 id);
 struct cgroup *cgroup_get_from_id(u64 id);
 #else /* !CONFIG_CGROUPS */
 
diff --git a/kernel/bpf/helpers.c b/kernel/bpf/helpers.c
index 8af62cb243d9e..0bde01edf5e6e 100644
--- a/kernel/bpf/helpers.c
+++ b/kernel/bpf/helpers.c
@@ -2540,7 +2540,7 @@ __bpf_kfunc struct cgroup *bpf_cgroup_from_id(u64 cgid)
 {
 	struct cgroup *cgrp;
 
-	cgrp = cgroup_get_from_id(cgid);
+	cgrp = __cgroup_get_from_id(cgid);
 	if (IS_ERR(cgrp))
 		return NULL;
 	return cgrp;
diff --git a/kernel/cgroup/cgroup.c b/kernel/cgroup/cgroup.c
index 77d02f87f3f12..c62b98f027f99 100644
--- a/kernel/cgroup/cgroup.c
+++ b/kernel/cgroup/cgroup.c
@@ -6373,15 +6373,15 @@ void cgroup_path_from_kernfs_id(u64 id, char *buf, size_t buflen)
 }
 
 /*
- * cgroup_get_from_id : get the cgroup associated with cgroup id
+ * __cgroup_get_from_id : get the cgroup associated with cgroup id
  * @id: cgroup id
  * On success return the cgrp or ERR_PTR on failure
- * Only cgroups within current task's cgroup NS are valid.
+ * There are no cgroup NS restrictions.
  */
-struct cgroup *cgroup_get_from_id(u64 id)
+struct cgroup *__cgroup_get_from_id(u64 id)
 {
 	struct kernfs_node *kn;
-	struct cgroup *cgrp, *root_cgrp;
+	struct cgroup *cgrp;
 
 	kn = kernfs_find_and_get_node_by_id(cgrp_dfl_root.kf_root, id);
 	if (!kn)
@@ -6403,6 +6403,22 @@ struct cgroup *cgroup_get_from_id(u64 id)
 
 	if (!cgrp)
 		return ERR_PTR(-ENOENT);
+	return cgrp;
+}
+
+/*
+ * cgroup_get_from_id : get the cgroup associated with cgroup id
+ * @id: cgroup id
+ * On success return the cgrp or ERR_PTR on failure
+ * Only cgroups within current task's cgroup NS are valid.
+ */
+struct cgroup *cgroup_get_from_id(u64 id)
+{
+	struct cgroup *cgrp, *root_cgrp;
+
+	cgrp = __cgroup_get_from_id(id);
+	if (IS_ERR(cgrp))
+		return cgrp;
 
 	root_cgrp = current_cgns_cgroup_dfl();
 	if (!cgroup_is_descendant(cgrp, root_cgrp)) {
-- 
2.51.0



* [PATCH AUTOSEL 6.17-6.1] selftests/bpf: Upon failures, exit with code 1 in test_xsk.sh
       [not found] <20251009155752.773732-1-sashal@kernel.org>
                   ` (11 preceding siblings ...)
  2025-10-09 15:55 ` [PATCH AUTOSEL 6.17-6.6] bpf: Do not limit bpf_cgroup_from_id to current's namespace Sasha Levin
@ 2025-10-09 15:55 ` Sasha Levin
  2025-10-09 15:55 ` [PATCH AUTOSEL 6.17-6.16] selftests/bpf: Fix arena_spin_lock selftest failure Sasha Levin
  2025-10-09 15:56 ` [PATCH AUTOSEL 6.17-6.6] riscv: bpf: Fix uninitialized symbol 'retval_off' Sasha Levin
  14 siblings, 0 replies; 15+ messages in thread
From: Sasha Levin @ 2025-10-09 15:55 UTC (permalink / raw)
  To: patches, stable
  Cc: Ricardo B. Marlière, Andrii Nakryiko, Magnus Karlsson,
	Sasha Levin, maciej.fijalkowski, ast, daniel, eddyz87, netdev,
	bpf

From: Ricardo B. Marlière <rbm@suse.com>

[ Upstream commit 2a912258c90e895363c0ffc0be8a47f112ab67b7 ]

Currently, even if some subtests fail, the end result will still yield
"ok 1 selftests: bpf: test_xsk.sh". Fix it by exiting with 1 if there are
any failures.

Signed-off-by: Ricardo B. Marlière <rbm@suse.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Acked-by: Magnus Karlsson <magnus.karlsson@intel.com>
Link: https://lore.kernel.org/bpf/20250828-selftests-bpf-test_xsk_ret-v1-1-e6656c01f397@suse.com
Signed-off-by: Sasha Levin <sashal@kernel.org>
---

LLM Generated explanations, may be completely bogus:

Based on my comprehensive analysis, here is my determination:

**Backport Status: YES**

## Analysis

### Code Change
The commit adds a simple 2-line fix to ensure the test script exits with
code 1 when failures occur:
```bash
+else
+       exit 1
```

### Rationale for Backporting

**1. Pattern of Similar Backports**
My research shows that similar test exit code fixes have been
consistently backported to stable kernels:
- `selftests/net: have gro.sh -t return a correct exit code` (commit
  784e6abd99f24) was backported by AUTOSEL
- `selftests: ksft: Fix finished() helper exit code on skipped tests`
  (commit 170c966cbe274) was backported by AUTOSEL
- `selftests: xsk: fix reporting of failed tests` (commit 895b62eed2ab4)
  was backported to stable 6.1 branches

**2. Critical for Testing Infrastructure**
This fix addresses a real bug in test reporting that affects:
- **CI/Automated Testing**: Systems running selftests on stable kernels
  rely on correct exit codes to detect regressions
- **False Positives**: The current behavior reports "ok" even when tests
  fail, masking real problems
- **Quality Assurance**: Proper exit codes are essential for stable
  kernel validation

**3. Meets Stable Kernel Rules**
According to Documentation/process/stable-kernel-rules.rst:
- ✅ **Obviously correct**: Trivial 2-line addition with clear intent
- ✅ **Small and contained**: Only 2 lines in a single shell script
- ✅ **Fixes a real bug**: Test infrastructure incorrectly reporting
  success on failures
- ✅ **Already in mainline**: Commit 2a912258c90e exists in upstream
- ✅ **Minimal risk**: Changes only test infrastructure, cannot affect
  kernel runtime
- ✅ **Benefits users**: Helps developers and organizations running tests
  on stable kernels

**4. Historical Evidence**
The commit 8f610b24a1a44 shows this has already been selected by AUTOSEL
for backporting, with the marker `[ Upstream commit
2a912258c90e895363c0ffc0be8a47f112ab67b7 ]` and signed by Sasha Levin.

### Conclusion
This is a clear candidate for stable backporting. It fixes test
infrastructure that provides critical validation for stable kernels,
follows established backporting patterns for similar fixes, and meets
all stable kernel rules criteria.

 tools/testing/selftests/bpf/test_xsk.sh | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/tools/testing/selftests/bpf/test_xsk.sh b/tools/testing/selftests/bpf/test_xsk.sh
index 65aafe0003db0..62db060298a4a 100755
--- a/tools/testing/selftests/bpf/test_xsk.sh
+++ b/tools/testing/selftests/bpf/test_xsk.sh
@@ -241,4 +241,6 @@ done
 
 if [ $failures -eq 0 ]; then
         echo "All tests successful!"
+else
+	exit 1
 fi
-- 
2.51.0



* [PATCH AUTOSEL 6.17-6.16] selftests/bpf: Fix arena_spin_lock selftest failure
       [not found] <20251009155752.773732-1-sashal@kernel.org>
                   ` (12 preceding siblings ...)
  2025-10-09 15:55 ` [PATCH AUTOSEL 6.17-6.1] selftests/bpf: Upon failures, exit with code 1 in test_xsk.sh Sasha Levin
@ 2025-10-09 15:55 ` Sasha Levin
  2025-10-09 15:56 ` [PATCH AUTOSEL 6.17-6.6] riscv: bpf: Fix uninitialized symbol 'retval_off' Sasha Levin
  14 siblings, 0 replies; 15+ messages in thread
From: Sasha Levin @ 2025-10-09 15:55 UTC (permalink / raw)
  To: patches, stable
  Cc: Saket Kumar Bhaskar, Venkat Rao Bagalkote, Alexei Starovoitov,
	Sasha Levin, memxor, iii, andrii, bpf

From: Saket Kumar Bhaskar <skb99@linux.ibm.com>

[ Upstream commit a9d4e9f0e871352a48a82da11a50df7196fe567a ]

For systems with CONFIG_NR_CPUS set to > 1024 in the kernel config,
the selftest fails because arena_spin_lock_irqsave() returns EOPNOTSUPP
(e.g., on powerpc the default value of CONFIG_NR_CPUS is 8192).

The selftest is now skipped in case the bpf program returns EOPNOTSUPP,
with a descriptive message logged.

Tested-by: Venkat Rao Bagalkote <venkat88@linux.ibm.com>
Signed-off-by: Saket Kumar Bhaskar <skb99@linux.ibm.com>
Link: https://lore.kernel.org/r/20250913091337.1841916-1-skb99@linux.ibm.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
---

LLM Generated explanations, may be completely bogus:

YES
- `arena_spin_lock` returns `-EOPNOTSUPP` once `CONFIG_NR_CPUS` exceeds
  1024 (`tools/testing/selftests/bpf/progs/bpf_arena_spin_lock.h:497`),
  so on platforms like powerpc (default 8192 CPUs) every test run exits
  early and the user space harness currently asserts that the retval
  must be zero
  (`tools/testing/selftests/bpf/prog_tests/arena_spin_lock.c:41-47`
  before this change), causing the subtests to fail outright.
- The patch teaches the harness to recognize that specific failure mode:
  `spin_lock_thread()` now short‑circuits when it sees `-EOPNOTSUPP`
  instead of tripping the ASSERT
  (`tools/testing/selftests/bpf/prog_tests/arena_spin_lock.c:44-50`),
  and the BPF program annotates the condition by setting `test_skip = 3`
  before returning
  (`tools/testing/selftests/bpf/progs/arena_spin_lock.c:40-44`).
- After all worker threads complete, the host test checks that flag,
  prints an explicit skip message, and marks the subtest as skipped
  instead of comparing the counter and failing
  (`tools/testing/selftests/bpf/prog_tests/arena_spin_lock.c:94-101`).
  This lets kselftest succeed on high-NR_CPUS systems while still
  reporting the unsupported configuration.
- The change is entirely confined to selftests, has no runtime or ABI
  impact, and aligns the tests with the documented hardware limitation,
  making it a low-risk fix for a real, reproducible failure on existing
  platforms.

 .../selftests/bpf/prog_tests/arena_spin_lock.c      | 13 +++++++++++++
 tools/testing/selftests/bpf/progs/arena_spin_lock.c |  5 ++++-
 2 files changed, 17 insertions(+), 1 deletion(-)

diff --git a/tools/testing/selftests/bpf/prog_tests/arena_spin_lock.c b/tools/testing/selftests/bpf/prog_tests/arena_spin_lock.c
index 0223fce4db2bc..693fd86fbde62 100644
--- a/tools/testing/selftests/bpf/prog_tests/arena_spin_lock.c
+++ b/tools/testing/selftests/bpf/prog_tests/arena_spin_lock.c
@@ -40,8 +40,13 @@ static void *spin_lock_thread(void *arg)
 
 	err = bpf_prog_test_run_opts(prog_fd, &topts);
 	ASSERT_OK(err, "test_run err");
+
+	if (topts.retval == -EOPNOTSUPP)
+		goto end;
+
 	ASSERT_EQ((int)topts.retval, 0, "test_run retval");
 
+end:
 	pthread_exit(arg);
 }
 
@@ -63,6 +68,7 @@ static void test_arena_spin_lock_size(int size)
 	skel = arena_spin_lock__open_and_load();
 	if (!ASSERT_OK_PTR(skel, "arena_spin_lock__open_and_load"))
 		return;
+
 	if (skel->data->test_skip == 2) {
 		test__skip();
 		goto end;
@@ -86,6 +92,13 @@ static void test_arena_spin_lock_size(int size)
 			goto end_barrier;
 	}
 
+	if (skel->data->test_skip == 3) {
+		printf("%s:SKIP: CONFIG_NR_CPUS exceed the maximum supported by arena spinlock\n",
+		       __func__);
+		test__skip();
+		goto end_barrier;
+	}
+
 	ASSERT_EQ(skel->bss->counter, repeat * nthreads, "check counter value");
 
 end_barrier:
diff --git a/tools/testing/selftests/bpf/progs/arena_spin_lock.c b/tools/testing/selftests/bpf/progs/arena_spin_lock.c
index c4500c37f85e0..086b57a426cf5 100644
--- a/tools/testing/selftests/bpf/progs/arena_spin_lock.c
+++ b/tools/testing/selftests/bpf/progs/arena_spin_lock.c
@@ -37,8 +37,11 @@ int prog(void *ctx)
 #if defined(ENABLE_ATOMICS_TESTS) && defined(__BPF_FEATURE_ADDR_SPACE_CAST)
 	unsigned long flags;
 
-	if ((ret = arena_spin_lock_irqsave(&lock, flags)))
+	if ((ret = arena_spin_lock_irqsave(&lock, flags))) {
+		if (ret == -EOPNOTSUPP)
+			test_skip = 3;
 		return ret;
+	}
 	if (counter != limit)
 		counter++;
 	bpf_repeat(cs_count);
-- 
2.51.0


^ permalink raw reply related	[flat|nested] 15+ messages in thread

* [PATCH AUTOSEL 6.17-6.6] riscv: bpf: Fix uninitialized symbol 'retval_off'
       [not found] <20251009155752.773732-1-sashal@kernel.org>
                   ` (13 preceding siblings ...)
  2025-10-09 15:55 ` [PATCH AUTOSEL 6.17-6.16] selftests/bpf: Fix arena_spin_lock selftest failure Sasha Levin
@ 2025-10-09 15:56 ` Sasha Levin
  14 siblings, 0 replies; 15+ messages in thread
From: Sasha Levin @ 2025-10-09 15:56 UTC (permalink / raw)
  To: patches, stable
  Cc: Chenghao Duan, Pu Lehui, Alexei Starovoitov, Sasha Levin, daniel,
	andrii, bjorn, pjw, palmer, aou, bpf, linux-riscv

From: Chenghao Duan <duanchenghao@kylinos.cn>

[ Upstream commit d0bf7cd5df18466d969bb60e8890b74cf96081ca ]

In the __arch_prepare_bpf_trampoline() function, retval_off is only
meaningful when save_ret is true, so the current logic is correct.
However, retval_off was only initialized under certain conditions; in
the fmod_ret path, for example, the compiler cannot see that the flags
of an fmod_ret program have BPF_TRAMP_F_CALL_ORIG set, which results
in an uninitialized-symbol compilation warning.

So initialize retval_off unconditionally to fix it.

Signed-off-by: Chenghao Duan <duanchenghao@kylinos.cn>
Reviewed-by: Pu Lehui <pulehui@huawei.com>
Link: https://lore.kernel.org/r/20250922062244.822937-2-duanchenghao@kylinos.cn
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
---

LLM Generated explanations, may be completely bogus:

YES

- What changed: The patch moves `retval_off = stack_size;` out of the
  `if (save_ret)` block so it’s always initialized. Previously
  `retval_off` was only assigned when `save_ret` was true.
  - Before: `retval_off` assigned only inside `if (save_ret) { ... }`
  - After: `retval_off` assigned unconditionally immediately after the
    optional `stack_size += 16`
  - Reference: arch/riscv/net/bpf_jit_comp64.c:1066

- Why it matters: `retval_off` is used in code paths not explicitly
  guarded by `save_ret`, which makes compilers think it can be used
  uninitialized and emit a warning (e.g., -Wmaybe-uninitialized), even
  though, logically, those paths only occur with flags that imply
  `save_ret` must be true.
  - Unconditional uses in fmod_ret path:
    - Zeroing return value slot: arch/riscv/net/bpf_jit_comp64.c:1157
    - Loading return value: arch/riscv/net/bpf_jit_comp64.c:1163
  - Unconditional uses in call-orig path:
    - Store original function’s return in reserved slot:
      arch/riscv/net/bpf_jit_comp64.c:1176
    - Store BPF R0: arch/riscv/net/bpf_jit_comp64.c:1177
  - Final restore guarded by `save_ret`, confirming the semantic intent:
    arch/riscv/net/bpf_jit_comp64.c:1209

- Bug scope and user impact:
  - This is a build correctness fix that eliminates spurious “maybe-
    uninitialized” warnings that can be promoted to errors in some
    configurations or toolchains. It does not change runtime behavior
    because the only meaningful use of `retval_off` (e.g., restoring
    return values) is already guarded by `save_ret`. When `save_ret` is
    false, `retval_off`’s value is ignored by the logic that matters.
  - The warning can affect users building with stricter warning settings
    or newer compilers; resolving it improves build reliability for
    RISC-V with BPF trampolines.

- Containment and risk:
  - The change is tiny and contained to a single file/function in the
    RISC-V BPF JIT trampoline.
  - No new features, APIs, or architectural changes; no functional logic
    changed for valid flag combinations.
  - Safe even if misused flags were ever passed: `retval_off` now has a
    defined value, avoiding UB from uninitialized use.

- Applicability to stable trees:
  - The affected pattern exists in stable series that have the RISC-V
    BPF trampoline (e.g., v6.6 shows the same conditional
    initialization, with unconditional uses later). See v6.6 code where
    `retval_off` is only set under `if (save_ret)` and is used in the
    fmod_ret block and call-orig sequence without an explicit `save_ret`
    guard, mirroring the warning scenario.
  - Mainline commit: d0bf7cd5df184 (“riscv: bpf: Fix uninitialized
    symbol 'retval_off'”).
  - Likely Fixes: 25ad10658dc10 (“riscv, bpf: Adapt bpf trampoline to
    optimized riscv ftrace framework”), which introduced the trampoline
    structure that uses `retval_off` this way.

- Stable criteria check:
  - Fixes a real build issue (warnings that can become errors).
  - Small, self-contained change in one function and one file.
  - No functional side effects; does not alter behavior except removing
    undefined initialization state.
  - Not a feature or refactor; low regression risk; localized to RISC-V
    BPF trampoline.

Conclusion: This is a good and safe candidate for backporting to all
stable trees that include the RISC-V BPF trampoline code path (e.g.,
6.6.y and newer where applicable).

 arch/riscv/net/bpf_jit_comp64.c | 5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)

diff --git a/arch/riscv/net/bpf_jit_comp64.c b/arch/riscv/net/bpf_jit_comp64.c
index 9883a55d61b5b..8475a8ab57151 100644
--- a/arch/riscv/net/bpf_jit_comp64.c
+++ b/arch/riscv/net/bpf_jit_comp64.c
@@ -1079,10 +1079,9 @@ static int __arch_prepare_bpf_trampoline(struct bpf_tramp_image *im,
 	stack_size += 16;
 
 	save_ret = flags & (BPF_TRAMP_F_CALL_ORIG | BPF_TRAMP_F_RET_FENTRY_RET);
-	if (save_ret) {
+	if (save_ret)
 		stack_size += 16; /* Save both A5 (BPF R0) and A0 */
-		retval_off = stack_size;
-	}
+	retval_off = stack_size;
 
 	stack_size += nr_arg_slots * 8;
 	args_off = stack_size;
-- 
2.51.0


^ permalink raw reply related	[flat|nested] 15+ messages in thread

end of thread, other threads:[~2025-10-09 16:01 UTC | newest]

Thread overview: 15+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
     [not found] <20251009155752.773732-1-sashal@kernel.org>
2025-10-09 15:54 ` [PATCH AUTOSEL 6.17-5.4] bpf: Don't use %pK through printk Sasha Levin
2025-10-09 15:54 ` [PATCH AUTOSEL 6.17-6.1] bpftool: Fix -Wuninitialized-const-pointer warnings with clang >= 21 Sasha Levin
2025-10-09 15:54 ` [PATCH AUTOSEL 6.17-6.12] bpf: Use tnums for JEQ/JNE is_branch_taken logic Sasha Levin
2025-10-09 15:54 ` [PATCH AUTOSEL 6.17-6.16] selftests/bpf: Fix incorrect array size calculation Sasha Levin
2025-10-09 15:54 ` [PATCH AUTOSEL 6.17-6.12] selftests/bpf: Fix selftest verifier_arena_large failure Sasha Levin
2025-10-09 15:55 ` [PATCH AUTOSEL 6.17-6.1] bpf: Clear pfmemalloc flag when freeing all fragments Sasha Levin
2025-10-09 15:55 ` [PATCH AUTOSEL 6.17-5.4] selftests/bpf: Fix bpf_prog_detach2 usage in test_lirc_mode2 Sasha Levin
2025-10-09 15:55 ` [PATCH AUTOSEL 6.17-6.12] selftests/bpf: Fix flaky bpf_cookie selftest Sasha Levin
2025-10-09 15:55 ` [PATCH AUTOSEL 6.17] selftests: drv-net: Pull data before parsing headers Sasha Levin
2025-10-09 15:55 ` [PATCH AUTOSEL 6.17-6.12] libbpf: Fix USDT SIB argument handling causing unrecognized register error Sasha Levin
2025-10-09 15:55 ` [PATCH AUTOSEL 6.17-6.12] bpftool: Add CET-aware symbol matching for x86_64 architectures Sasha Levin
2025-10-09 15:55 ` [PATCH AUTOSEL 6.17-6.6] bpf: Do not limit bpf_cgroup_from_id to current's namespace Sasha Levin
2025-10-09 15:55 ` [PATCH AUTOSEL 6.17-6.1] selftests/bpf: Upon failures, exit with code 1 in test_xsk.sh Sasha Levin
2025-10-09 15:55 ` [PATCH AUTOSEL 6.17-6.16] selftests/bpf: Fix arena_spin_lock selftest failure Sasha Levin
2025-10-09 15:56 ` [PATCH AUTOSEL 6.17-6.6] riscv: bpf: Fix uninitialized symbol 'retval_off' Sasha Levin

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox