* [PATCH 01/12] bpf: Implement an internal helper for SHA256 hashing
2025-06-06 23:29 [PATCH 00/12] Signed BPF programs KP Singh
@ 2025-06-06 23:29 ` KP Singh
2025-06-09 9:31 ` kernel test robot
` (2 more replies)
2025-06-06 23:29 ` [PATCH 02/12] bpf: Update the bpf_prog_calc_tag to use SHA256 KP Singh
` (12 subsequent siblings)
13 siblings, 3 replies; 79+ messages in thread
From: KP Singh @ 2025-06-06 23:29 UTC (permalink / raw)
To: bpf, linux-security-module
Cc: bboscaccy, paul, kys, ast, daniel, andrii, KP Singh
This patch introduces bpf_sha256, an internal helper function
that wraps the standard kernel crypto API to compute SHA256 digests of
the program insns and map content.
Signed-off-by: KP Singh <kpsingh@kernel.org>
---
include/linux/bpf.h | 1 +
kernel/bpf/core.c | 39 +++++++++++++++++++++++++++++++++++++++
2 files changed, 40 insertions(+)
diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index 5b25d278409b..d5ae43b36e68 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -2086,6 +2086,7 @@ static inline bool map_type_contains_progs(struct bpf_map *map)
}
bool bpf_prog_map_compatible(struct bpf_map *map, const struct bpf_prog *fp);
+int bpf_sha256(u8 *data, size_t data_size, u8 *output_digest);
int bpf_prog_calc_tag(struct bpf_prog *fp);
const struct bpf_func_proto *bpf_get_trace_printk_proto(void);
diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c
index a3e571688421..607d5322ef94 100644
--- a/kernel/bpf/core.c
+++ b/kernel/bpf/core.c
@@ -17,6 +17,7 @@
* Kris Katterjohn - Added many additional checks in bpf_check_classic()
*/
+#include <crypto/hash.h>
#include <uapi/linux/btf.h>
#include <linux/filter.h>
#include <linux/skbuff.h>
@@ -287,6 +288,44 @@ void __bpf_prog_free(struct bpf_prog *fp)
vfree(fp);
}
+int bpf_sha256(u8 *data, size_t data_size, u8 *output_digest)
+{
+ struct crypto_shash *tfm;
+ struct shash_desc *shash_desc;
+ size_t desc_size;
+ int ret = 0;
+
+ tfm = crypto_alloc_shash("sha256", 0, 0);
+ if (IS_ERR(tfm))
+ return PTR_ERR(tfm);
+
+
+ desc_size = crypto_shash_descsize(tfm) + sizeof(*shash_desc);
+ shash_desc = kmalloc(desc_size, GFP_KERNEL);
+ if (!shash_desc) {
+ crypto_free_shash(tfm);
+ return -ENOMEM;
+ }
+
+ shash_desc->tfm = tfm;
+ ret = crypto_shash_init(shash_desc);
+ if (ret)
+ goto out_free_desc;
+
+ ret = crypto_shash_update(shash_desc, data, data_size);
+ if (ret)
+ goto out_free_desc;
+
+ ret = crypto_shash_final(shash_desc, output_digest);
+ if (ret)
+ goto out_free_desc;
+
+out_free_desc:
+ kfree(shash_desc);
+ crypto_free_shash(tfm);
+ return ret;
+}
+
int bpf_prog_calc_tag(struct bpf_prog *fp)
{
const u32 bits_offset = SHA1_BLOCK_SIZE - sizeof(__be64);
--
2.43.0
^ permalink raw reply related [flat|nested] 79+ messages in thread
* Re: [PATCH 01/12] bpf: Implement an internal helper for SHA256 hashing
2025-06-06 23:29 ` [PATCH 01/12] bpf: Implement an internal helper for SHA256 hashing KP Singh
@ 2025-06-09 9:31 ` kernel test robot
2025-06-09 16:56 ` Alexei Starovoitov
2025-06-12 19:07 ` Eric Biggers
2 siblings, 0 replies; 79+ messages in thread
From: kernel test robot @ 2025-06-09 9:31 UTC (permalink / raw)
To: KP Singh, bpf, linux-security-module
Cc: oe-kbuild-all, bboscaccy, paul, kys, ast, daniel, andrii,
KP Singh
Hi KP,
kernel test robot noticed the following build errors:
[auto build test ERROR on bpf-next/net]
[also build test ERROR on bpf-next/master bpf/master linus/master v6.16-rc1 next-20250606]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting a patch, we suggest using '--base' as documented in
https://git-scm.com/docs/git-format-patch#_base_tree_information]
url: https://github.com/intel-lab-lkp/linux/commits/KP-Singh/bpf-Implement-an-internal-helper-for-SHA256-hashing/20250607-073052
base: https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next.git net
patch link: https://lore.kernel.org/r/20250606232914.317094-2-kpsingh%40kernel.org
patch subject: [PATCH 01/12] bpf: Implement an internal helper for SHA256 hashing
config: alpha-defconfig (https://download.01.org/0day-ci/archive/20250609/202506091719.RN2qjs3P-lkp@intel.com/config)
compiler: alpha-linux-gcc (GCC) 15.1.0
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20250609/202506091719.RN2qjs3P-lkp@intel.com/reproduce)
If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add the following tags:
| Reported-by: kernel test robot <lkp@intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202506091719.RN2qjs3P-lkp@intel.com/
All errors (new ones prefixed by >>):
alpha-linux-ld: kernel/bpf/core.o: in function `bpf_sha256':
>> kernel/bpf/core.c:298:(.text+0x502c): undefined reference to `crypto_alloc_shash'
>> alpha-linux-ld: kernel/bpf/core.c:298:(.text+0x5058): undefined reference to `crypto_alloc_shash'
>> alpha-linux-ld: kernel/bpf/core.c:311:(.text+0x50a4): undefined reference to `crypto_shash_init'
alpha-linux-ld: kernel/bpf/core.c:311:(.text+0x50b0): undefined reference to `crypto_shash_init'
alpha-linux-ld: kernel/bpf/core.o: in function `crypto_free_shash':
>> include/crypto/hash.h:765:(.text+0x50e0): undefined reference to `crypto_destroy_tfm'
>> alpha-linux-ld: include/crypto/hash.h:765:(.text+0x50e4): undefined reference to `crypto_destroy_tfm'
alpha-linux-ld: kernel/bpf/core.o: in function `crypto_shash_update':
>> include/crypto/hash.h:992:(.text+0x5120): undefined reference to `crypto_shash_finup'
>> alpha-linux-ld: include/crypto/hash.h:992:(.text+0x5124): undefined reference to `crypto_shash_finup'
alpha-linux-ld: kernel/bpf/core.o: in function `crypto_shash_final':
include/crypto/hash.h:1011:(.text+0x5138): undefined reference to `crypto_shash_finup'
alpha-linux-ld: include/crypto/hash.h:1011:(.text+0x5148): undefined reference to `crypto_shash_finup'
alpha-linux-ld: kernel/bpf/core.o: in function `crypto_free_shash':
include/crypto/hash.h:765:(.text+0x5158): undefined reference to `crypto_destroy_tfm'
alpha-linux-ld: include/crypto/hash.h:765:(.text+0x5164): undefined reference to `crypto_destroy_tfm'
vim +298 kernel/bpf/core.c
290
291 int bpf_sha256(u8 *data, size_t data_size, u8 *output_digest)
292 {
293 struct crypto_shash *tfm;
294 struct shash_desc *shash_desc;
295 size_t desc_size;
296 int ret = 0;
297
> 298 tfm = crypto_alloc_shash("sha256", 0, 0);
299 if (IS_ERR(tfm))
300 return PTR_ERR(tfm);
301
302
303 desc_size = crypto_shash_descsize(tfm) + sizeof(*shash_desc);
304 shash_desc = kmalloc(desc_size, GFP_KERNEL);
305 if (!shash_desc) {
306 crypto_free_shash(tfm);
307 return -ENOMEM;
308 }
309
310 shash_desc->tfm = tfm;
> 311 ret = crypto_shash_init(shash_desc);
312 if (ret)
313 goto out_free_desc;
314
315 ret = crypto_shash_update(shash_desc, data, data_size);
316 if (ret)
317 goto out_free_desc;
318
319 ret = crypto_shash_final(shash_desc, output_digest);
320 if (ret)
321 goto out_free_desc;
322
323 out_free_desc:
324 kfree(shash_desc);
325 crypto_free_shash(tfm);
326 return ret;
327 }
328
--
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki
^ permalink raw reply [flat|nested] 79+ messages in thread
* Re: [PATCH 01/12] bpf: Implement an internal helper for SHA256 hashing
2025-06-06 23:29 ` [PATCH 01/12] bpf: Implement an internal helper for SHA256 hashing KP Singh
2025-06-09 9:31 ` kernel test robot
@ 2025-06-09 16:56 ` Alexei Starovoitov
2025-06-12 19:07 ` Eric Biggers
2 siblings, 0 replies; 79+ messages in thread
From: Alexei Starovoitov @ 2025-06-09 16:56 UTC (permalink / raw)
To: KP Singh
Cc: bpf, LSM List, Blaise Boscaccy, Paul Moore, K. Y. Srinivasan,
Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko
On Fri, Jun 6, 2025 at 4:29 PM KP Singh <kpsingh@kernel.org> wrote:
>
> This patch introduces bpf_sha256, an internal helper function
> that wraps the standard kernel crypto API to compute SHA256 digests of
> the program insns and map content.
>
> Signed-off-by: KP Singh <kpsingh@kernel.org>
> ---
> include/linux/bpf.h | 1 +
> kernel/bpf/core.c | 39 +++++++++++++++++++++++++++++++++++++++
> 2 files changed, 40 insertions(+)
>
> diff --git a/include/linux/bpf.h b/include/linux/bpf.h
> index 5b25d278409b..d5ae43b36e68 100644
> --- a/include/linux/bpf.h
> +++ b/include/linux/bpf.h
> @@ -2086,6 +2086,7 @@ static inline bool map_type_contains_progs(struct bpf_map *map)
> }
>
> bool bpf_prog_map_compatible(struct bpf_map *map, const struct bpf_prog *fp);
> +int bpf_sha256(u8 *data, size_t data_size, u8 *output_digest);
> int bpf_prog_calc_tag(struct bpf_prog *fp);
>
> const struct bpf_func_proto *bpf_get_trace_printk_proto(void);
> diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c
> index a3e571688421..607d5322ef94 100644
> --- a/kernel/bpf/core.c
> +++ b/kernel/bpf/core.c
> @@ -17,6 +17,7 @@
> * Kris Katterjohn - Added many additional checks in bpf_check_classic()
> */
>
> +#include <crypto/hash.h>
> #include <uapi/linux/btf.h>
> #include <linux/filter.h>
> #include <linux/skbuff.h>
> @@ -287,6 +288,44 @@ void __bpf_prog_free(struct bpf_prog *fp)
> vfree(fp);
> }
>
> +int bpf_sha256(u8 *data, size_t data_size, u8 *output_digest)
> +{
> + struct crypto_shash *tfm;
> + struct shash_desc *shash_desc;
> + size_t desc_size;
> + int ret = 0;
> +
> + tfm = crypto_alloc_shash("sha256", 0, 0);
In kernel/bpf/Kconfig we use:

config BPF
	bool
	select CRYPTO_LIB_SHA1

I think it's fine to add "select CRYPTO_LIB_SHA256" in this patch,
and remove the CRYPTO_LIB_SHA1 line in patch 2,
since the only user will be gone.
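i.e. something like the following in kernel/bpf/Kconfig (a sketch of the
suggested change; the CRYPTO_LIB_SHA1 select is then dropped in patch 2 once
its only user is gone):

config BPF
	bool
	select CRYPTO_LIB_SHA1
	select CRYPTO_LIB_SHA256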
^ permalink raw reply [flat|nested] 79+ messages in thread
* Re: [PATCH 01/12] bpf: Implement an internal helper for SHA256 hashing
2025-06-06 23:29 ` [PATCH 01/12] bpf: Implement an internal helper for SHA256 hashing KP Singh
2025-06-09 9:31 ` kernel test robot
2025-06-09 16:56 ` Alexei Starovoitov
@ 2025-06-12 19:07 ` Eric Biggers
2025-06-16 23:40 ` KP Singh
2 siblings, 1 reply; 79+ messages in thread
From: Eric Biggers @ 2025-06-12 19:07 UTC (permalink / raw)
To: KP Singh
Cc: bpf, linux-security-module, bboscaccy, paul, kys, ast, daniel,
andrii
On Sat, Jun 07, 2025 at 01:29:03AM +0200, KP Singh wrote:
> This patch introduces bpf_sha256, an internal helper function
> that wraps the standard kernel crypto API to compute SHA256 digests of
> the program insns and map content.
>
> Signed-off-by: KP Singh <kpsingh@kernel.org>
> ---
> include/linux/bpf.h | 1 +
> kernel/bpf/core.c | 39 +++++++++++++++++++++++++++++++++++++++
> 2 files changed, 40 insertions(+)
>
> diff --git a/include/linux/bpf.h b/include/linux/bpf.h
> index 5b25d278409b..d5ae43b36e68 100644
> --- a/include/linux/bpf.h
> +++ b/include/linux/bpf.h
> @@ -2086,6 +2086,7 @@ static inline bool map_type_contains_progs(struct bpf_map *map)
> }
>
> bool bpf_prog_map_compatible(struct bpf_map *map, const struct bpf_prog *fp);
> +int bpf_sha256(u8 *data, size_t data_size, u8 *output_digest);
> int bpf_prog_calc_tag(struct bpf_prog *fp);
>
> const struct bpf_func_proto *bpf_get_trace_printk_proto(void);
> diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c
> index a3e571688421..607d5322ef94 100644
> --- a/kernel/bpf/core.c
> +++ b/kernel/bpf/core.c
> @@ -17,6 +17,7 @@
> * Kris Katterjohn - Added many additional checks in bpf_check_classic()
> */
>
> +#include <crypto/hash.h>
> #include <uapi/linux/btf.h>
> #include <linux/filter.h>
> #include <linux/skbuff.h>
> @@ -287,6 +288,44 @@ void __bpf_prog_free(struct bpf_prog *fp)
> vfree(fp);
> }
>
> +int bpf_sha256(u8 *data, size_t data_size, u8 *output_digest)
> +{
> + struct crypto_shash *tfm;
> + struct shash_desc *shash_desc;
> + size_t desc_size;
> + int ret = 0;
> +
> + tfm = crypto_alloc_shash("sha256", 0, 0);
> + if (IS_ERR(tfm))
> + return PTR_ERR(tfm);
> +
> +
> + desc_size = crypto_shash_descsize(tfm) + sizeof(*shash_desc);
> + shash_desc = kmalloc(desc_size, GFP_KERNEL);
> + if (!shash_desc) {
> + crypto_free_shash(tfm);
> + return -ENOMEM;
> + }
> +
> + shash_desc->tfm = tfm;
> + ret = crypto_shash_init(shash_desc);
> + if (ret)
> + goto out_free_desc;
> +
> + ret = crypto_shash_update(shash_desc, data, data_size);
> + if (ret)
> + goto out_free_desc;
> +
> + ret = crypto_shash_final(shash_desc, output_digest);
> + if (ret)
> + goto out_free_desc;
> +
> +out_free_desc:
> + kfree(shash_desc);
> + crypto_free_shash(tfm);
> + return ret;
> +}
> +
You're looking for sha256() from <crypto/sha2.h>. Just use that instead.
You'll just need to select CRYPTO_LIB_SHA256.
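With the library interface the whole helper collapses to roughly the following
(a sketch, assuming the one-shot sha256() signature from <crypto/sha2.h>):

#include <crypto/sha2.h>

/* sketch: the one-shot library helper cannot fail, so no error handling */
int bpf_sha256(u8 *data, size_t data_size, u8 *output_digest)
{
	sha256(data, data_size, output_digest);
	return 0;
}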
- Eric
^ permalink raw reply [flat|nested] 79+ messages in thread
* Re: [PATCH 01/12] bpf: Implement an internal helper for SHA256 hashing
2025-06-12 19:07 ` Eric Biggers
@ 2025-06-16 23:40 ` KP Singh
2025-06-16 23:48 ` Eric Biggers
0 siblings, 1 reply; 79+ messages in thread
From: KP Singh @ 2025-06-16 23:40 UTC (permalink / raw)
To: Eric Biggers
Cc: bpf, linux-security-module, bboscaccy, paul, kys, ast, daniel,
andrii
On Thu, Jun 12, 2025 at 9:08 PM Eric Biggers <ebiggers@kernel.org> wrote:
>
[...]
>
> You're looking for sha256() from <crypto/sha2.h>. Just use that instead.
I did look at it but my understanding is that it will always use the
non-accelerated version and in theory the program can be megabytes in
size, so might be worth using the accelerated crypto API. What do you
think?
- KP
>
> You'll just need to select CRYPTO_LIB_SHA256.
>
> - Eric
^ permalink raw reply [flat|nested] 79+ messages in thread
* Re: [PATCH 01/12] bpf: Implement an internal helper for SHA256 hashing
2025-06-16 23:40 ` KP Singh
@ 2025-06-16 23:48 ` Eric Biggers
2025-06-17 0:04 ` KP Singh
0 siblings, 1 reply; 79+ messages in thread
From: Eric Biggers @ 2025-06-16 23:48 UTC (permalink / raw)
To: KP Singh
Cc: bpf, linux-security-module, bboscaccy, paul, kys, ast, daniel,
andrii
On Tue, Jun 17, 2025 at 01:40:22AM +0200, KP Singh wrote:
> On Thu, Jun 12, 2025 at 9:08 PM Eric Biggers <ebiggers@kernel.org> wrote:
> >
> [...]
> >
> > You're looking for sha256() from <crypto/sha2.h>. Just use that instead.
>
> I did look at it but my understanding is that it will always use the
> non-accelerated version and in theory the program can be megabytes in
> size, so might be worth using the accelerated crypto API. What do you
> think?
>
I fixed that in 6.16. sha256() gives you the accelerated version now.
- Eric
^ permalink raw reply [flat|nested] 79+ messages in thread
* Re: [PATCH 01/12] bpf: Implement an internal helper for SHA256 hashing
2025-06-16 23:48 ` Eric Biggers
@ 2025-06-17 0:04 ` KP Singh
0 siblings, 0 replies; 79+ messages in thread
From: KP Singh @ 2025-06-17 0:04 UTC (permalink / raw)
To: Eric Biggers
Cc: bpf, linux-security-module, bboscaccy, paul, kys, ast, daniel,
andrii
On Tue, Jun 17, 2025 at 1:49 AM Eric Biggers <ebiggers@kernel.org> wrote:
>
> On Tue, Jun 17, 2025 at 01:40:22AM +0200, KP Singh wrote:
> > On Thu, Jun 12, 2025 at 9:08 PM Eric Biggers <ebiggers@kernel.org> wrote:
> > >
> > [...]
> > >
> > > You're looking for sha256() from <crypto/sha2.h>. Just use that instead.
> >
> > I did look at it but my understanding is that it will always use the
> > non-accelerated version and in theory the program can be megabytes in
> > size, so might be worth using the accelerated crypto API. What do you
> > think?
> >
>
> I fixed that in 6.16. sha256() gives you the accelerated version now.
This is awesome, I will drop this patch.
- KP
>
> - Eric
^ permalink raw reply [flat|nested] 79+ messages in thread
* [PATCH 02/12] bpf: Update the bpf_prog_calc_tag to use SHA256
2025-06-06 23:29 [PATCH 00/12] Signed BPF programs KP Singh
2025-06-06 23:29 ` [PATCH 01/12] bpf: Implement an internal helper for SHA256 hashing KP Singh
@ 2025-06-06 23:29 ` KP Singh
2025-06-09 17:46 ` Alexei Starovoitov
2025-06-06 23:29 ` [PATCH 03/12] bpf: Implement exclusive map creation KP Singh
` (11 subsequent siblings)
13 siblings, 1 reply; 79+ messages in thread
From: KP Singh @ 2025-06-06 23:29 UTC (permalink / raw)
To: bpf, linux-security-module
Cc: bboscaccy, paul, kys, ast, daniel, andrii, KP Singh
Exclusive maps restrict map access to specific programs using a hash.
The current hash used for this is SHA1, which is prone to collisions.
This patch uses SHA256, which is more resilient against
collisions. This new hash is stored in bpf_prog and used by the verifier
to determine if a program can access a given exclusive map.
The original 64-bit tags are kept, as they are used by users as a short,
possibly colliding program identifier for non-security purposes.
Signed-off-by: KP Singh <kpsingh@kernel.org>
---
include/linux/bpf.h | 8 ++++++-
include/linux/filter.h | 6 ------
kernel/bpf/core.c | 49 ++++++------------------------------------
3 files changed, 14 insertions(+), 49 deletions(-)
diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index d5ae43b36e68..77d62c74a4e7 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -31,6 +31,7 @@
#include <linux/memcontrol.h>
#include <linux/cfi.h>
#include <asm/rqspinlock.h>
+#include <crypto/sha2.h>
struct bpf_verifier_env;
struct bpf_verifier_log;
@@ -1669,7 +1670,12 @@ struct bpf_prog {
enum bpf_attach_type expected_attach_type; /* For some prog types */
u32 len; /* Number of filter blocks */
u32 jited_len; /* Size of jited insns in bytes */
- u8 tag[BPF_TAG_SIZE];
+ union {
+ u8 digest[SHA256_DIGEST_SIZE];
+ struct {
+ u8 tag[BPF_TAG_SIZE];
+ };
+ };
struct bpf_prog_stats __percpu *stats;
int __percpu *active;
unsigned int (*bpf_func)(const void *ctx,
diff --git a/include/linux/filter.h b/include/linux/filter.h
index f5cf4d35d83e..3aa33e904a4e 100644
--- a/include/linux/filter.h
+++ b/include/linux/filter.h
@@ -997,12 +997,6 @@ static inline u32 bpf_prog_insn_size(const struct bpf_prog *prog)
return prog->len * sizeof(struct bpf_insn);
}
-static inline u32 bpf_prog_tag_scratch_size(const struct bpf_prog *prog)
-{
- return round_up(bpf_prog_insn_size(prog) +
- sizeof(__be64) + 1, SHA1_BLOCK_SIZE);
-}
-
static inline unsigned int bpf_prog_size(unsigned int proglen)
{
return max(sizeof(struct bpf_prog),
diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c
index 607d5322ef94..f280de0a306c 100644
--- a/kernel/bpf/core.c
+++ b/kernel/bpf/core.c
@@ -328,28 +328,18 @@ int bpf_sha256(u8 *data, size_t data_size, u8 *output_digest)
int bpf_prog_calc_tag(struct bpf_prog *fp)
{
- const u32 bits_offset = SHA1_BLOCK_SIZE - sizeof(__be64);
- u32 raw_size = bpf_prog_tag_scratch_size(fp);
- u32 digest[SHA1_DIGEST_WORDS];
- u32 ws[SHA1_WORKSPACE_WORDS];
- u32 i, bsize, psize, blocks;
+ u32 insn_size = bpf_prog_insn_size(fp);
struct bpf_insn *dst;
bool was_ld_map;
- u8 *raw, *todo;
- __be32 *result;
- __be64 *bits;
+ int i, ret = 0;
- raw = vmalloc(raw_size);
- if (!raw)
+ dst = vmalloc(insn_size);
+ if (!dst)
return -ENOMEM;
- sha1_init(digest);
- memset(ws, 0, sizeof(ws));
-
/* We need to take out the map fd for the digest calculation
* since they are unstable from user space side.
*/
- dst = (void *)raw;
for (i = 0, was_ld_map = false; i < fp->len; i++) {
dst[i] = fp->insnsi[i];
if (!was_ld_map &&
@@ -369,34 +359,9 @@ int bpf_prog_calc_tag(struct bpf_prog *fp)
was_ld_map = false;
}
}
-
- psize = bpf_prog_insn_size(fp);
- memset(&raw[psize], 0, raw_size - psize);
- raw[psize++] = 0x80;
-
- bsize = round_up(psize, SHA1_BLOCK_SIZE);
- blocks = bsize / SHA1_BLOCK_SIZE;
- todo = raw;
- if (bsize - psize >= sizeof(__be64)) {
- bits = (__be64 *)(todo + bsize - sizeof(__be64));
- } else {
- bits = (__be64 *)(todo + bsize + bits_offset);
- blocks++;
- }
- *bits = cpu_to_be64((psize - 1) << 3);
-
- while (blocks--) {
- sha1_transform(digest, todo, ws);
- todo += SHA1_BLOCK_SIZE;
- }
-
- result = (__force __be32 *)digest;
- for (i = 0; i < SHA1_DIGEST_WORDS; i++)
- result[i] = cpu_to_be32(digest[i]);
- memcpy(fp->tag, result, sizeof(fp->tag));
-
- vfree(raw);
- return 0;
+ ret = bpf_sha256((u8 *)dst, insn_size, fp->digest);
+ vfree(dst);
+ return ret;
}
static int bpf_adj_delta_to_imm(struct bpf_insn *insn, u32 pos, s32 end_old,
--
2.43.0
^ permalink raw reply related [flat|nested] 79+ messages in thread
* Re: [PATCH 02/12] bpf: Update the bpf_prog_calc_tag to use SHA256
2025-06-06 23:29 ` [PATCH 02/12] bpf: Update the bpf_prog_calc_tag to use SHA256 KP Singh
@ 2025-06-09 17:46 ` Alexei Starovoitov
0 siblings, 0 replies; 79+ messages in thread
From: Alexei Starovoitov @ 2025-06-09 17:46 UTC (permalink / raw)
To: KP Singh
Cc: bpf, LSM List, Blaise Boscaccy, Paul Moore, K. Y. Srinivasan,
Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko
On Fri, Jun 6, 2025 at 4:29 PM KP Singh <kpsingh@kernel.org> wrote:
>
> Exclusive maps restrict map access to specific programs using a hash.
> The current hash used for this is SHA1, which is prone to collisions.
> This patch uses SHA256, which is more resilient against
> collisions. This new hash is stored in bpf_prog and used by the verifier
> to determine if a program can access a given exclusive map.
>
> The original 64-bit tags are kept, as they are used by users as a short,
> possibly colliding program identifier for non-security purposes.
>
> Signed-off-by: KP Singh <kpsingh@kernel.org>
> ---
> include/linux/bpf.h | 8 ++++++-
> include/linux/filter.h | 6 ------
> kernel/bpf/core.c | 49 ++++++------------------------------------
> 3 files changed, 14 insertions(+), 49 deletions(-)
>
> diff --git a/include/linux/bpf.h b/include/linux/bpf.h
> index d5ae43b36e68..77d62c74a4e7 100644
> --- a/include/linux/bpf.h
> +++ b/include/linux/bpf.h
> @@ -31,6 +31,7 @@
> #include <linux/memcontrol.h>
> #include <linux/cfi.h>
> #include <asm/rqspinlock.h>
> +#include <crypto/sha2.h>
>
> struct bpf_verifier_env;
> struct bpf_verifier_log;
> @@ -1669,7 +1670,12 @@ struct bpf_prog {
> enum bpf_attach_type expected_attach_type; /* For some prog types */
> u32 len; /* Number of filter blocks */
> u32 jited_len; /* Size of jited insns in bytes */
> - u8 tag[BPF_TAG_SIZE];
> + union {
> + u8 digest[SHA256_DIGEST_SIZE];
> + struct {
> + u8 tag[BPF_TAG_SIZE];
> + };
> + };
Why extra anon struct ?
union {
u8 digest[SHA256_DIGEST_SIZE];
u8 tag[BPF_TAG_SIZE];
};
should work ?
> struct bpf_prog_stats __percpu *stats;
> int __percpu *active;
> unsigned int (*bpf_func)(const void *ctx,
> diff --git a/include/linux/filter.h b/include/linux/filter.h
> index f5cf4d35d83e..3aa33e904a4e 100644
> --- a/include/linux/filter.h
> +++ b/include/linux/filter.h
> @@ -997,12 +997,6 @@ static inline u32 bpf_prog_insn_size(const struct bpf_prog *prog)
> return prog->len * sizeof(struct bpf_insn);
> }
>
> -static inline u32 bpf_prog_tag_scratch_size(const struct bpf_prog *prog)
> -{
> - return round_up(bpf_prog_insn_size(prog) +
> - sizeof(__be64) + 1, SHA1_BLOCK_SIZE);
> -}
Nice that we don't need this roundup anymore.
^ permalink raw reply [flat|nested] 79+ messages in thread
* [PATCH 03/12] bpf: Implement exclusive map creation
2025-06-06 23:29 [PATCH 00/12] Signed BPF programs KP Singh
2025-06-06 23:29 ` [PATCH 01/12] bpf: Implement an internal helper for SHA256 hashing KP Singh
2025-06-06 23:29 ` [PATCH 02/12] bpf: Update the bpf_prog_calc_tag to use SHA256 KP Singh
@ 2025-06-06 23:29 ` KP Singh
2025-06-09 20:58 ` Alexei Starovoitov
2025-06-06 23:29 ` [PATCH 04/12] libbpf: Implement SHA256 internal helper KP Singh
` (10 subsequent siblings)
13 siblings, 1 reply; 79+ messages in thread
From: KP Singh @ 2025-06-06 23:29 UTC (permalink / raw)
To: bpf, linux-security-module
Cc: bboscaccy, paul, kys, ast, daniel, andrii, KP Singh
Exclusive maps allow maps to only be accessed by a trusted loader
program with a matching hash. This allows the trusted loader program
to load the map and verify the integrity.
Both maps of maps (array, hash) cannot be exclusive and exclusive maps
cannot be added as inner maps. This is because one would need to
guarantee the exclusivity of the inner maps and would require
significant changes in the verifier.
Signed-off-by: KP Singh <kpsingh@kernel.org>
---
include/linux/bpf.h | 1 +
include/uapi/linux/bpf.h | 3 ++-
kernel/bpf/arraymap.c | 4 ++++
kernel/bpf/hashtab.c | 15 +++++++++------
kernel/bpf/syscall.c | 35 ++++++++++++++++++++++++++++++----
kernel/bpf/verifier.c | 7 +++++++
tools/include/uapi/linux/bpf.h | 3 ++-
7 files changed, 56 insertions(+), 12 deletions(-)
diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index 77d62c74a4e7..cb1bea99702a 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -311,6 +311,7 @@ struct bpf_map {
bool free_after_rcu_gp;
atomic64_t sleepable_refcnt;
s64 __percpu *elem_count;
+ char *excl_prog_sha;
};
static inline const char *btf_field_type_name(enum btf_field_type type)
diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
index 16e95398c91c..6f2f4f3b3822 100644
--- a/include/uapi/linux/bpf.h
+++ b/include/uapi/linux/bpf.h
@@ -1504,6 +1504,8 @@ union bpf_attr {
* If provided, map_flags should have BPF_F_TOKEN_FD flag set.
*/
__s32 map_token_fd;
+ __u32 excl_prog_hash_size;
+ __aligned_u64 excl_prog_hash;
};
struct { /* anonymous struct used by BPF_MAP_*_ELEM and BPF_MAP_FREEZE commands */
@@ -1841,7 +1843,6 @@ union bpf_attr {
__u32 flags;
__u32 bpffs_fd;
} token_create;
-
} __attribute__((aligned(8)));
/* The description below is an attempt at providing documentation to eBPF
diff --git a/kernel/bpf/arraymap.c b/kernel/bpf/arraymap.c
index eb28c0f219ee..8719aa821b63 100644
--- a/kernel/bpf/arraymap.c
+++ b/kernel/bpf/arraymap.c
@@ -896,6 +896,10 @@ int bpf_fd_array_map_update_elem(struct bpf_map *map, struct file *map_file,
if (IS_ERR(new_ptr))
return PTR_ERR(new_ptr);
+ if (map->map_type == BPF_MAP_TYPE_ARRAY_OF_MAPS &&
+ ((struct bpf_map *)new_ptr)->excl_prog_sha)
+ return -EOPNOTSUPP;
+
if (map->ops->map_poke_run) {
mutex_lock(&array->aux->poke_mutex);
old_ptr = xchg(array->ptrs + index, new_ptr);
diff --git a/kernel/bpf/hashtab.c b/kernel/bpf/hashtab.c
index 71f9931ac64c..2732b4a23c27 100644
--- a/kernel/bpf/hashtab.c
+++ b/kernel/bpf/hashtab.c
@@ -2537,22 +2537,25 @@ int bpf_fd_htab_map_lookup_elem(struct bpf_map *map, void *key, u32 *value)
int bpf_fd_htab_map_update_elem(struct bpf_map *map, struct file *map_file,
void *key, void *value, u64 map_flags)
{
- void *ptr;
+ struct bpf_map *inner_map;
int ret;
- ptr = map->ops->map_fd_get_ptr(map, map_file, *(int *)value);
- if (IS_ERR(ptr))
- return PTR_ERR(ptr);
+ inner_map = map->ops->map_fd_get_ptr(map, map_file, *(int *)value);
+ if (IS_ERR(inner_map))
+ return PTR_ERR(inner_map);
+
+ if (inner_map->excl_prog_sha)
+ return -EOPNOTSUPP;
/* The htab bucket lock is always held during update operations in fd
* htab map, and the following rcu_read_lock() is only used to avoid
* the WARN_ON_ONCE in htab_map_update_elem_in_place().
*/
rcu_read_lock();
- ret = htab_map_update_elem_in_place(map, key, &ptr, map_flags, false, false);
+ ret = htab_map_update_elem_in_place(map, key, &inner_map, map_flags, false, false);
rcu_read_unlock();
if (ret)
- map->ops->map_fd_put_ptr(map, ptr, false);
+ map->ops->map_fd_put_ptr(map, inner_map, false);
return ret;
}
diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c
index 4b5f29168618..bef9edcfdb76 100644
--- a/kernel/bpf/syscall.c
+++ b/kernel/bpf/syscall.c
@@ -858,6 +858,7 @@ static void bpf_map_free(struct bpf_map *map)
* the free of values or special fields allocated from bpf memory
* allocator.
*/
+ kfree(map->excl_prog_sha);
migrate_disable();
map->ops->map_free(map);
migrate_enable();
@@ -1335,9 +1336,9 @@ static bool bpf_net_capable(void)
return capable(CAP_NET_ADMIN) || capable(CAP_SYS_ADMIN);
}
-#define BPF_MAP_CREATE_LAST_FIELD map_token_fd
+#define BPF_MAP_CREATE_LAST_FIELD excl_prog_hash
/* called via syscall */
-static int map_create(union bpf_attr *attr, bool kernel)
+static int map_create(union bpf_attr *attr, bpfptr_t uattr)
{
const struct bpf_map_ops *ops;
struct bpf_token *token = NULL;
@@ -1527,7 +1528,33 @@ static int map_create(union bpf_attr *attr, bool kernel)
attr->btf_vmlinux_value_type_id;
}
- err = security_bpf_map_create(map, attr, token, kernel);
+ if (attr->excl_prog_hash) {
+ bpfptr_t uprog_hash = make_bpfptr(attr->excl_prog_hash, uattr.is_kernel);
+
+ if (map->inner_map_meta) {
+ err = -EOPNOTSUPP;
+ goto free_map;
+ }
+
+ map->excl_prog_sha = kzalloc(SHA256_DIGEST_SIZE, GFP_KERNEL);
+ if (!map->excl_prog_sha) {
+ err = -EINVAL;
+ goto free_map;
+ }
+
+ if (attr->excl_prog_hash_size < SHA256_DIGEST_SIZE) {
+ err = -EINVAL;
+ goto free_map;
+ }
+
+ if (copy_from_bpfptr(map->excl_prog_sha, uprog_hash,
+ SHA256_DIGEST_SIZE)) {
+ err = -EFAULT;
+ goto free_map;
+ }
+ }
+
+ err = security_bpf_map_create(map, attr, token, uattr.is_kernel);
if (err)
goto free_map_sec;
@@ -5815,7 +5842,7 @@ static int __sys_bpf(enum bpf_cmd cmd, bpfptr_t uattr, unsigned int size)
switch (cmd) {
case BPF_MAP_CREATE:
- err = map_create(&attr, uattr.is_kernel);
+ err = map_create(&attr, uattr);
break;
case BPF_MAP_LOOKUP_ELEM:
err = map_lookup_elem(&attr);
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index d5807d2efc92..15fdd63bdcf9 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -19943,6 +19943,12 @@ static int check_map_prog_compatibility(struct bpf_verifier_env *env,
{
enum bpf_prog_type prog_type = resolve_prog_type(prog);
+ if (map->excl_prog_sha &&
+ memcmp(map->excl_prog_sha, prog->digest, SHA256_DIGEST_SIZE)) {
+ verbose(env, "exclusive map access denied\n");
+ return -EACCES;
+ }
+
if (btf_record_has_field(map->record, BPF_LIST_HEAD) ||
btf_record_has_field(map->record, BPF_RB_ROOT)) {
if (is_tracing_prog_type(prog_type)) {
@@ -20051,6 +20057,7 @@ static int __add_used_map(struct bpf_verifier_env *env, struct bpf_map *map)
{
int i, err;
+ /* check if the map is used already*/
/* check whether we recorded this map already */
for (i = 0; i < env->used_map_cnt; i++)
if (env->used_maps[i] == map)
diff --git a/tools/include/uapi/linux/bpf.h b/tools/include/uapi/linux/bpf.h
index 16e95398c91c..6f2f4f3b3822 100644
--- a/tools/include/uapi/linux/bpf.h
+++ b/tools/include/uapi/linux/bpf.h
@@ -1504,6 +1504,8 @@ union bpf_attr {
* If provided, map_flags should have BPF_F_TOKEN_FD flag set.
*/
__s32 map_token_fd;
+ __u32 excl_prog_hash_size;
+ __aligned_u64 excl_prog_hash;
};
struct { /* anonymous struct used by BPF_MAP_*_ELEM and BPF_MAP_FREEZE commands */
@@ -1841,7 +1843,6 @@ union bpf_attr {
__u32 flags;
__u32 bpffs_fd;
} token_create;
-
} __attribute__((aligned(8)));
/* The description below is an attempt at providing documentation to eBPF
--
2.43.0
^ permalink raw reply related [flat|nested] 79+ messages in thread
* Re: [PATCH 03/12] bpf: Implement exclusive map creation
2025-06-06 23:29 ` [PATCH 03/12] bpf: Implement exclusive map creation KP Singh
@ 2025-06-09 20:58 ` Alexei Starovoitov
2025-06-11 21:44 ` KP Singh
0 siblings, 1 reply; 79+ messages in thread
From: Alexei Starovoitov @ 2025-06-09 20:58 UTC (permalink / raw)
To: KP Singh
Cc: bpf, LSM List, Blaise Boscaccy, Paul Moore, K. Y. Srinivasan,
Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko
On Fri, Jun 6, 2025 at 4:29 PM KP Singh <kpsingh@kernel.org> wrote:
>
> Exclusive maps allow maps to only be accessed by a trusted loader
> program with a matching hash. This allows the trusted loader program
> to load the map and verify the integrity.
>
> Both maps of maps (array, hash) cannot be exclusive and exclusive maps
> cannot be added as inner maps. This is because one would need to
> guarantee the exclusivity of the inner maps and would require
> significant changes in the verifier.
I was back and forth on it early, but after sleeping on it
I think we should think of exclusive maps as a generic concept and
not tied to trusted loader and prog signatures.
So any map type should be allowed to be exclusive and this patch
can handle it fine without adding more complexity.
In map-in-map case the outer map can be created exclusive
to a particular program, but inner maps don't have to be exclusive,
and it's fine. The lskel loader won't be using map-in-map anyway,
so no issues there.
> Signed-off-by: KP Singh <kpsingh@kernel.org>
> ---
> include/linux/bpf.h | 1 +
> include/uapi/linux/bpf.h | 3 ++-
> kernel/bpf/arraymap.c | 4 ++++
> kernel/bpf/hashtab.c | 15 +++++++++------
> kernel/bpf/syscall.c | 35 ++++++++++++++++++++++++++++++----
> kernel/bpf/verifier.c | 7 +++++++
> tools/include/uapi/linux/bpf.h | 3 ++-
> 7 files changed, 56 insertions(+), 12 deletions(-)
>
> diff --git a/include/linux/bpf.h b/include/linux/bpf.h
> index 77d62c74a4e7..cb1bea99702a 100644
> --- a/include/linux/bpf.h
> +++ b/include/linux/bpf.h
> @@ -311,6 +311,7 @@ struct bpf_map {
> bool free_after_rcu_gp;
> atomic64_t sleepable_refcnt;
> s64 __percpu *elem_count;
> + char *excl_prog_sha;
> };
>
> static inline const char *btf_field_type_name(enum btf_field_type type)
> diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
> index 16e95398c91c..6f2f4f3b3822 100644
> --- a/include/uapi/linux/bpf.h
> +++ b/include/uapi/linux/bpf.h
> @@ -1504,6 +1504,8 @@ union bpf_attr {
> * If provided, map_flags should have BPF_F_TOKEN_FD flag set.
> */
> __s32 map_token_fd;
> + __u32 excl_prog_hash_size;
> + __aligned_u64 excl_prog_hash;
> };
>
> struct { /* anonymous struct used by BPF_MAP_*_ELEM and BPF_MAP_FREEZE commands */
> @@ -1841,7 +1843,6 @@ union bpf_attr {
> __u32 flags;
> __u32 bpffs_fd;
> } token_create;
> -
> } __attribute__((aligned(8)));
>
> /* The description below is an attempt at providing documentation to eBPF
> diff --git a/kernel/bpf/arraymap.c b/kernel/bpf/arraymap.c
> index eb28c0f219ee..8719aa821b63 100644
> --- a/kernel/bpf/arraymap.c
> +++ b/kernel/bpf/arraymap.c
> @@ -896,6 +896,10 @@ int bpf_fd_array_map_update_elem(struct bpf_map *map, struct file *map_file,
> if (IS_ERR(new_ptr))
> return PTR_ERR(new_ptr);
>
> + if (map->map_type == BPF_MAP_TYPE_ARRAY_OF_MAPS &&
> + ((struct bpf_map *)new_ptr)->excl_prog_sha)
> + return -EOPNOTSUPP;
> +
bpf_fd_array_map_update_elem() is called for prog_array too,
so new_ptr can be a pointer to a bpf_prog.
If we support all map types this check can be dropped.
> if (map->ops->map_poke_run) {
> mutex_lock(&array->aux->poke_mutex);
> old_ptr = xchg(array->ptrs + index, new_ptr);
> diff --git a/kernel/bpf/hashtab.c b/kernel/bpf/hashtab.c
> index 71f9931ac64c..2732b4a23c27 100644
> --- a/kernel/bpf/hashtab.c
> +++ b/kernel/bpf/hashtab.c
> @@ -2537,22 +2537,25 @@ int bpf_fd_htab_map_lookup_elem(struct bpf_map *map, void *key, u32 *value)
> int bpf_fd_htab_map_update_elem(struct bpf_map *map, struct file *map_file,
> void *key, void *value, u64 map_flags)
> {
> - void *ptr;
> + struct bpf_map *inner_map;
> int ret;
>
> - ptr = map->ops->map_fd_get_ptr(map, map_file, *(int *)value);
> - if (IS_ERR(ptr))
> - return PTR_ERR(ptr);
> + inner_map = map->ops->map_fd_get_ptr(map, map_file, *(int *)value);
> + if (IS_ERR(inner_map))
> + return PTR_ERR(inner_map);
> +
> + if (inner_map->excl_prog_sha)
> + return -EOPNOTSUPP;
I would simply drop these checks too.
> /* The htab bucket lock is always held during update operations in fd
> * htab map, and the following rcu_read_lock() is only used to avoid
> * the WARN_ON_ONCE in htab_map_update_elem_in_place().
> */
> rcu_read_lock();
> - ret = htab_map_update_elem_in_place(map, key, &ptr, map_flags, false, false);
> + ret = htab_map_update_elem_in_place(map, key, &inner_map, map_flags, false, false);
> rcu_read_unlock();
> if (ret)
> - map->ops->map_fd_put_ptr(map, ptr, false);
> + map->ops->map_fd_put_ptr(map, inner_map, false);
>
> return ret;
> }
> diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c
> index 4b5f29168618..bef9edcfdb76 100644
> --- a/kernel/bpf/syscall.c
> +++ b/kernel/bpf/syscall.c
> @@ -858,6 +858,7 @@ static void bpf_map_free(struct bpf_map *map)
> * the free of values or special fields allocated from bpf memory
> * allocator.
> */
> + kfree(map->excl_prog_sha);
> migrate_disable();
> map->ops->map_free(map);
> migrate_enable();
> @@ -1335,9 +1336,9 @@ static bool bpf_net_capable(void)
> return capable(CAP_NET_ADMIN) || capable(CAP_SYS_ADMIN);
> }
>
> -#define BPF_MAP_CREATE_LAST_FIELD map_token_fd
> +#define BPF_MAP_CREATE_LAST_FIELD excl_prog_hash
> /* called via syscall */
> -static int map_create(union bpf_attr *attr, bool kernel)
> +static int map_create(union bpf_attr *attr, bpfptr_t uattr)
> {
> const struct bpf_map_ops *ops;
> struct bpf_token *token = NULL;
> @@ -1527,7 +1528,33 @@ static int map_create(union bpf_attr *attr, bool kernel)
> attr->btf_vmlinux_value_type_id;
> }
>
> - err = security_bpf_map_create(map, attr, token, kernel);
> + if (attr->excl_prog_hash) {
> + bpfptr_t uprog_hash = make_bpfptr(attr->excl_prog_hash, uattr.is_kernel);
> +
> + if (map->inner_map_meta) {
> + err = -EOPNOTSUPP;
> + goto free_map;
> + }
drop this one too.
> +
> + map->excl_prog_sha = kzalloc(SHA256_DIGEST_SIZE, GFP_KERNEL);
> + if (!map->excl_prog_sha) {
> + err = -EINVAL;
ENOMEM
> + goto free_map;
> + }
> +
> + if (attr->excl_prog_hash_size < SHA256_DIGEST_SIZE) {
The idea here is to allow extensibility with different sizes?
Then use == here.
> + err = -EINVAL;
> + goto free_map;
> + }
> +
> + if (copy_from_bpfptr(map->excl_prog_sha, uprog_hash,
> + SHA256_DIGEST_SIZE)) {
> + err = -EFAULT;
> + goto free_map;
> + }
> + }
in the 'else' part let's also check that excl_prog_hash_size != 0
while excl_prog_hash == 0 is an invalid combination.
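
Put together, the validation would then look roughly like this (a sketch
reusing the patch's field and label names):

	if (attr->excl_prog_hash) {
		bpfptr_t uprog_hash = make_bpfptr(attr->excl_prog_hash, uattr.is_kernel);

		/* only one digest size is supported for now */
		if (attr->excl_prog_hash_size != SHA256_DIGEST_SIZE) {
			err = -EINVAL;
			goto free_map;
		}

		map->excl_prog_sha = kzalloc(SHA256_DIGEST_SIZE, GFP_KERNEL);
		if (!map->excl_prog_sha) {
			err = -ENOMEM;
			goto free_map;
		}

		if (copy_from_bpfptr(map->excl_prog_sha, uprog_hash, SHA256_DIGEST_SIZE)) {
			err = -EFAULT;
			goto free_map;
		}
	} else if (attr->excl_prog_hash_size) {
		/* a non-zero size without a hash pointer is an invalid combination */
		err = -EINVAL;
		goto free_map;
	}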
> +
> + err = security_bpf_map_create(map, attr, token, uattr.is_kernel);
> if (err)
> goto free_map_sec;
>
> @@ -5815,7 +5842,7 @@ static int __sys_bpf(enum bpf_cmd cmd, bpfptr_t uattr, unsigned int size)
>
> switch (cmd) {
> case BPF_MAP_CREATE:
> - err = map_create(&attr, uattr.is_kernel);
> + err = map_create(&attr, uattr);
> break;
> case BPF_MAP_LOOKUP_ELEM:
> err = map_lookup_elem(&attr);
> diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> index d5807d2efc92..15fdd63bdcf9 100644
> --- a/kernel/bpf/verifier.c
> +++ b/kernel/bpf/verifier.c
> @@ -19943,6 +19943,12 @@ static int check_map_prog_compatibility(struct bpf_verifier_env *env,
> {
> enum bpf_prog_type prog_type = resolve_prog_type(prog);
>
> + if (map->excl_prog_sha &&
> + memcmp(map->excl_prog_sha, prog->digest, SHA256_DIGEST_SIZE)) {
> + verbose(env, "exclusive map access denied\n");
May be make it a bit more precise:
"program's digest doesn't match map's digest" ?
> + return -EACCES;
> + }
> +
> if (btf_record_has_field(map->record, BPF_LIST_HEAD) ||
> btf_record_has_field(map->record, BPF_RB_ROOT)) {
> if (is_tracing_prog_type(prog_type)) {
> @@ -20051,6 +20057,7 @@ static int __add_used_map(struct bpf_verifier_env *env, struct bpf_map *map)
> {
> int i, err;
>
> + /* check if the map is used already*/
left over comment?
> /* check whether we recorded this map already */
> for (i = 0; i < env->used_map_cnt; i++)
> if (env->used_maps[i] == map)
> diff --git a/tools/include/uapi/linux/bpf.h b/tools/include/uapi/linux/bpf.h
> index 16e95398c91c..6f2f4f3b3822 100644
> --- a/tools/include/uapi/linux/bpf.h
> +++ b/tools/include/uapi/linux/bpf.h
> @@ -1504,6 +1504,8 @@ union bpf_attr {
> * If provided, map_flags should have BPF_F_TOKEN_FD flag set.
> */
> __s32 map_token_fd;
> + __u32 excl_prog_hash_size;
> + __aligned_u64 excl_prog_hash;
> };
>
> struct { /* anonymous struct used by BPF_MAP_*_ELEM and BPF_MAP_FREEZE commands */
> @@ -1841,7 +1843,6 @@ union bpf_attr {
> __u32 flags;
> __u32 bpffs_fd;
> } token_create;
> -
> } __attribute__((aligned(8)));
>
> /* The description below is an attempt at providing documentation to eBPF
> --
> 2.43.0
>
^ permalink raw reply [flat|nested] 79+ messages in thread
* Re: [PATCH 03/12] bpf: Implement exclusive map creation
2025-06-09 20:58 ` Alexei Starovoitov
@ 2025-06-11 21:44 ` KP Singh
2025-06-11 22:55 ` Alexei Starovoitov
0 siblings, 1 reply; 79+ messages in thread
From: KP Singh @ 2025-06-11 21:44 UTC (permalink / raw)
To: Alexei Starovoitov
Cc: bpf, LSM List, Blaise Boscaccy, Paul Moore, K. Y. Srinivasan,
Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko
On Mon, Jun 9, 2025 at 10:58 PM Alexei Starovoitov
<alexei.starovoitov@gmail.com> wrote:
>
> On Fri, Jun 6, 2025 at 4:29 PM KP Singh <kpsingh@kernel.org> wrote:
> >
> > Exclusive maps allow maps to only be accessed by a trusted loader
> > program with a matching hash. This allows the trusted loader program
> > to load the map and verify the integrity.
> >
> > Both maps of maps (array, hash) cannot be exclusive and exclusive maps
> > cannot be added as inner maps. This is because one would need to
> > guarantee the exclusivity of the inner maps and would require
> > significant changes in the verifier.
>
> I was back and forth on it early, but after sleeping on it
> I think we should think of exclusive maps as a generic concept and
> not tied to trusted loader and prog signatures.
> So any map type should be allowed to be exclusive and this patch
> can handle it fine without adding more complexity.
> In map-in-map case the outer map can be created exclusive
> to a particular program, but inner maps don't have to be exclusive,
> and it's fine. The lskel loader won't be using map-in-map anyway,
> so no issues there.
So the idea here is that if an outer map has exclusive access, only it
can add inner maps. I think this is a valid combination as it would
still retain exclusivity over the outer maps elements.
- KP
>
> > Signed-off-by: KP Singh <kpsingh@kernel.org>
> > ---
[...]
> >
^ permalink raw reply [flat|nested] 79+ messages in thread
* Re: [PATCH 03/12] bpf: Implement exclusive map creation
2025-06-11 21:44 ` KP Singh
@ 2025-06-11 22:55 ` Alexei Starovoitov
2025-06-11 23:05 ` KP Singh
0 siblings, 1 reply; 79+ messages in thread
From: Alexei Starovoitov @ 2025-06-11 22:55 UTC (permalink / raw)
To: KP Singh
Cc: bpf, LSM List, Blaise Boscaccy, Paul Moore, K. Y. Srinivasan,
Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko
On Wed, Jun 11, 2025 at 2:44 PM KP Singh <kpsingh@kernel.org> wrote:
>
> On Mon, Jun 9, 2025 at 10:58 PM Alexei Starovoitov
> <alexei.starovoitov@gmail.com> wrote:
> >
> > On Fri, Jun 6, 2025 at 4:29 PM KP Singh <kpsingh@kernel.org> wrote:
> > >
> > > Exclusive maps allow maps to only be accessed by a trusted loader
> > > program with a matching hash. This allows the trusted loader program
> > > to load the map and verify the integrity.
> > >
> > > Both maps of maps (array, hash) cannot be exclusive and exclusive maps
> > > cannot be added as inner maps. This is because one would need to
> > > guarantee the exclusivity of the inner maps and would require
> > > significant changes in the verifier.
> >
> > I was back and forth on it early, but after sleeping on it
> > I think we should think of exclusive maps as a generic concept and
> > not tied to trusted loader and prog signatures.
> > So any map type should be allowed to be exclusive and this patch
> > can handle it fine without adding more complexity.
> > In map-in-map case the outer map can be created exclusive
> > to a particular program, but inner maps don't have to be exclusive,
> > and it's fine. The lskel loader won't be using map-in-map anyway,
> > so no issues there.
>
> So the idea here is that if an outer map has exclusive access, only it
> can add inner maps. I think this is a valid combination as it would
> still retain exclusivity over the outer maps elements.
I don't follow.
What do you mean by "map can add inner maps ?"
The exclusivity is a contract between prog<->map.
It doesn't matter whether the map is outer or inner.
The prog cannot add an inner map.
Only the user space can and such inner maps are detached
from anything.
Technically we can come up with a requirement that inner maps
have to have the same prog sha as outer map.
This can be enforced by bpf_map_meta_equal() logic.
But that feels like overkill.
The user space can query prog's sha, create an inner map with
such prog sha and add it to outer map. So the additional check
in bpf_map_meta_equal() would be easy to bypass.
Since so, I would not add such artificial obstacle.
Let all types of maps have this exclusive feature.
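
For illustration, with the libbpf opts added later in the series (patch 05),
user space could make any map type exclusive along these lines (a sketch; the
32-byte digest is assumed to have been obtained from the program beforehand):

	__u8 prog_sha[32];	/* the program's SHA256, queried elsewhere */
	LIBBPF_OPTS(bpf_map_create_opts, opts,
		    .excl_prog_hash = prog_sha,
		    .excl_prog_hash_size = sizeof(prog_sha));
	int map_fd;

	map_fd = bpf_map_create(BPF_MAP_TYPE_HASH, "excl_map",
				sizeof(__u32), sizeof(__u64), 128, &opts);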
^ permalink raw reply [flat|nested] 79+ messages in thread
* Re: [PATCH 03/12] bpf: Implement exclusive map creation
2025-06-11 22:55 ` Alexei Starovoitov
@ 2025-06-11 23:05 ` KP Singh
0 siblings, 0 replies; 79+ messages in thread
From: KP Singh @ 2025-06-11 23:05 UTC (permalink / raw)
To: Alexei Starovoitov
Cc: bpf, LSM List, Blaise Boscaccy, Paul Moore, K. Y. Srinivasan,
Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko
On Thu, Jun 12, 2025 at 12:55 AM Alexei Starovoitov
<alexei.starovoitov@gmail.com> wrote:
>
> On Wed, Jun 11, 2025 at 2:44 PM KP Singh <kpsingh@kernel.org> wrote:
> >
> > On Mon, Jun 9, 2025 at 10:58 PM Alexei Starovoitov
> > <alexei.starovoitov@gmail.com> wrote:
[...]
> > can add inner maps. I think this is a valid combination as it would
> > still retain exclusivity over the outer maps elements.
>
> I don't follow.
> What do you mean by "map can add inner maps ?"
Ah, I missed this bit: a program cannot call bpf_map_update_elem on
maps of maps; such updates happen only in userspace.
Thanks, updated the code.
- KP
> The exclusivity is a contract between prog<->map.
> It doesn't matter whether the map is outer or inner.
> The prog cannot add an inner map.
> Only the user space can and such inner maps are detached
> from anything.
> Technically we can come up with a requirement that inner maps
> have to have the same prog sha as outer map.
> This can be enforced by bpf_map_meta_equal() logic.
> But that feels like overkill.
> The user space can query prog's sha, create an inner map with
> such prog sha and add it to outer map. So the additional check
> in bpf_map_meta_equal() would be easy to bypass.
> Since so, I would not add such artificial obstacle.
> Let all types of maps have this exclusive feature.
^ permalink raw reply [flat|nested] 79+ messages in thread
* [PATCH 04/12] libbpf: Implement SHA256 internal helper
2025-06-06 23:29 [PATCH 00/12] Signed BPF programs KP Singh
` (2 preceding siblings ...)
2025-06-06 23:29 ` [PATCH 03/12] bpf: Implement exclusive map creation KP Singh
@ 2025-06-06 23:29 ` KP Singh
2025-06-12 22:55 ` Andrii Nakryiko
2025-06-06 23:29 ` [PATCH 05/12] libbpf: Support exclusive map creation KP Singh
` (9 subsequent siblings)
13 siblings, 1 reply; 79+ messages in thread
From: KP Singh @ 2025-06-06 23:29 UTC (permalink / raw)
To: bpf, linux-security-module
Cc: bboscaccy, paul, kys, ast, daniel, andrii, KP Singh
Use AF_ALG sockets to not have libbpf depend on OpenSSL. The helper is
used for the loader generation code to embed the metadata hash in the
loader program and also by the bpf_map__make_exclusive API to calculate
the hash of the program the map is exclusive to.
Signed-off-by: KP Singh <kpsingh@kernel.org>
---
tools/lib/bpf/libbpf.c | 57 +++++++++++++++++++++++++++++++++
tools/lib/bpf/libbpf_internal.h | 9 ++++++
2 files changed, 66 insertions(+)
diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
index e9c641a2fb20..475038d04cb4 100644
--- a/tools/lib/bpf/libbpf.c
+++ b/tools/lib/bpf/libbpf.c
@@ -43,6 +43,9 @@
#include <sys/vfs.h>
#include <sys/utsname.h>
#include <sys/resource.h>
+#include <sys/socket.h>
+#include <linux/if_alg.h>
+#include <linux/socket.h>
#include <libelf.h>
#include <gelf.h>
#include <zlib.h>
@@ -14161,3 +14164,57 @@ void bpf_object__destroy_skeleton(struct bpf_object_skeleton *s)
free(s->progs);
free(s);
}
+
+int libbpf_sha256(const void *data, size_t data_size, void *sha_out)
+{
+ int sock_fd = -1;
+ int op_fd = -1;
+ int err = 0;
+
+ struct sockaddr_alg sa = {
+ .salg_family = AF_ALG,
+ .salg_type = "hash",
+ .salg_name = "sha256"
+ };
+
+ if (!data || !sha_out)
+ return -EINVAL;
+
+ sock_fd = socket(AF_ALG, SOCK_SEQPACKET, 0);
+ if (sock_fd < 0) {
+ err = -errno;
+ pr_warn("failed to create AF_ALG socket for SHA256: %s\n", errstr(err));
+ return libbpf_err(err);
+ }
+
+ if (bind(sock_fd, (struct sockaddr *)&sa, sizeof(sa)) < 0) {
+ err = -errno;
+ pr_warn("failed to bind to AF_ALG socket for SHA256: %s\n", errstr(err));
+ goto out_sock;
+ }
+
+ op_fd = accept(sock_fd, NULL, 0);
+ if (op_fd < 0) {
+ err = -errno;
+ pr_warn("failed to accept from AF_ALG socket for SHA256: %s\n", errstr(err));
+ goto out_sock;
+ }
+
+ if (write(op_fd, data, data_size) != data_size) {
+ err = -errno;
+ pr_warn("failed to write data to AF_ALG socket for SHA256: %s\n", errstr(err));
+ goto out;
+ }
+
+ if (read(op_fd, sha_out, SHA256_DIGEST_LENGTH) != SHA256_DIGEST_LENGTH) {
+ err = -errno;
+ pr_warn("failed to read SHA256 from AF_ALG socket: %s\n", errstr(err));
+ goto out;
+ }
+
+out:
+ close(op_fd);
+out_sock:
+ close(sock_fd);
+ return libbpf_err(err);
+}
diff --git a/tools/lib/bpf/libbpf_internal.h b/tools/lib/bpf/libbpf_internal.h
index 477a3b3389a0..79c6c0dac878 100644
--- a/tools/lib/bpf/libbpf_internal.h
+++ b/tools/lib/bpf/libbpf_internal.h
@@ -736,4 +736,13 @@ int elf_resolve_pattern_offsets(const char *binary_path, const char *pattern,
int probe_fd(int fd);
+#ifndef SHA256_DIGEST_LENGTH
+#define SHA256_DIGEST_LENGTH 32
+#endif
+
+#ifndef SHA256_DWORD_SIZE
+#define SHA256_DWORD_SIZE SHA256_DIGEST_LENGTH / sizeof(__u64)
+#endif
+
+int libbpf_sha256(const void *data, size_t data_size, void *sha_out);
#endif /* __LIBBPF_LIBBPF_INTERNAL_H */
--
2.43.0
^ permalink raw reply related [flat|nested] 79+ messages in thread
* Re: [PATCH 04/12] libbpf: Implement SHA256 internal helper
2025-06-06 23:29 ` [PATCH 04/12] libbpf: Implement SHA256 internal helper KP Singh
@ 2025-06-12 22:55 ` Andrii Nakryiko
0 siblings, 0 replies; 79+ messages in thread
From: Andrii Nakryiko @ 2025-06-12 22:55 UTC (permalink / raw)
To: KP Singh
Cc: bpf, linux-security-module, bboscaccy, paul, kys, ast, daniel,
andrii
On Fri, Jun 6, 2025 at 4:29 PM KP Singh <kpsingh@kernel.org> wrote:
>
> Use AF_ALG sockets to not have libbpf depend on OpenSSL. The helper is
> used for the loader generation code to embed the metadata hash in the
> loader program and also by the bpf_map__make_exclusive API to calculate
> the hash of the program the map is exclusive to.
>
> Signed-off-by: KP Singh <kpsingh@kernel.org>
> ---
> tools/lib/bpf/libbpf.c | 57 +++++++++++++++++++++++++++++++++
> tools/lib/bpf/libbpf_internal.h | 9 ++++++
> 2 files changed, 66 insertions(+)
>
> diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
> index e9c641a2fb20..475038d04cb4 100644
> --- a/tools/lib/bpf/libbpf.c
> +++ b/tools/lib/bpf/libbpf.c
> @@ -43,6 +43,9 @@
> #include <sys/vfs.h>
> #include <sys/utsname.h>
> #include <sys/resource.h>
> +#include <sys/socket.h>
> +#include <linux/if_alg.h>
> +#include <linux/socket.h>
> #include <libelf.h>
> #include <gelf.h>
> #include <zlib.h>
> @@ -14161,3 +14164,57 @@ void bpf_object__destroy_skeleton(struct bpf_object_skeleton *s)
> free(s->progs);
> free(s);
> }
> +
> +int libbpf_sha256(const void *data, size_t data_size, void *sha_out)
naming convention nit: in libbpf sources we usually use _sz suffix for size
> +{
> + int sock_fd = -1;
> + int op_fd = -1;
> + int err = 0;
> +
nit: unnecessary empty line, please keep all variable decls in one
contiguous block
> + struct sockaddr_alg sa = {
> + .salg_family = AF_ALG,
> + .salg_type = "hash",
> + .salg_name = "sha256"
> + };
> +
> + if (!data || !sha_out)
> + return -EINVAL;
this is internal API, no need for this (and we don't really check for
NULL for mandatory arguments even in public APIs), so let's just drop
this check
if anything, I'd probably require passing sha_out_sz and validate that
it's equal to SHA256_DIGEST_LENGTH to prevent silent corruptions
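i.e. something along these lines (a sketch of the suggested signature; the
AF_ALG body itself stays as in the patch and is elided here):

	int libbpf_sha256(const void *data, size_t data_sz, void *sha_out, size_t sha_out_sz)
	{
		if (sha_out_sz != SHA256_DIGEST_LENGTH)
			return -EINVAL;
		/* ... AF_ALG hashing of data into sha_out, as in the patch ... */
		return 0;
	}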
> +
> + sock_fd = socket(AF_ALG, SOCK_SEQPACKET, 0);
> + if (sock_fd < 0) {
> + err = -errno;
> + pr_warn("failed to create AF_ALG socket for SHA256: %s\n", errstr(err));
> + return libbpf_err(err);
> + }
> +
> + if (bind(sock_fd, (struct sockaddr *)&sa, sizeof(sa)) < 0) {
> + err = -errno;
> + pr_warn("failed to bind to AF_ALG socket for SHA256: %s\n", errstr(err));
> + goto out_sock;
> + }
> +
> + op_fd = accept(sock_fd, NULL, 0);
> + if (op_fd < 0) {
> + err = -errno;
> + pr_warn("failed to accept from AF_ALG socket for SHA256: %s\n", errstr(err));
> + goto out_sock;
> + }
> +
> + if (write(op_fd, data, data_size) != data_size) {
> + err = -errno;
> + pr_warn("failed to write data to AF_ALG socket for SHA256: %s\n", errstr(err));
> + goto out;
> + }
> +
> + if (read(op_fd, sha_out, SHA256_DIGEST_LENGTH) != SHA256_DIGEST_LENGTH) {
> + err = -errno;
> + pr_warn("failed to read SHA256 from AF_ALG socket: %s\n", errstr(err));
> + goto out;
> + }
> +
> +out:
> + close(op_fd);
> +out_sock:
> + close(sock_fd);
nit: given you init fds to -1, you can simplify out* jumping to just
single out: clause with if (fd >= 0) close(fd); sequence
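i.e. roughly (a sketch of the single-label cleanup):

	out:
		if (op_fd >= 0)
			close(op_fd);
		if (sock_fd >= 0)
			close(sock_fd);
		return libbpf_err(err);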
> + return libbpf_err(err);
> +}
> diff --git a/tools/lib/bpf/libbpf_internal.h b/tools/lib/bpf/libbpf_internal.h
> index 477a3b3389a0..79c6c0dac878 100644
> --- a/tools/lib/bpf/libbpf_internal.h
> +++ b/tools/lib/bpf/libbpf_internal.h
> @@ -736,4 +736,13 @@ int elf_resolve_pattern_offsets(const char *binary_path, const char *pattern,
>
> int probe_fd(int fd);
>
> +#ifndef SHA256_DIGEST_LENGTH
> +#define SHA256_DIGEST_LENGTH 32
> +#endif
> +
> +#ifndef SHA256_DWORD_SIZE
> +#define SHA256_DWORD_SIZE SHA256_DIGEST_LENGTH / sizeof(__u64)
> +#endif
do we really need ifndef guarding these?...
> +
> +int libbpf_sha256(const void *data, size_t data_size, void *sha_out);
> #endif /* __LIBBPF_LIBBPF_INTERNAL_H */
> --
> 2.43.0
>
^ permalink raw reply [flat|nested] 79+ messages in thread
* [PATCH 05/12] libbpf: Support exclusive map creation
2025-06-06 23:29 [PATCH 00/12] Signed BPF programs KP Singh
` (3 preceding siblings ...)
2025-06-06 23:29 ` [PATCH 04/12] libbpf: Implement SHA256 internal helper KP Singh
@ 2025-06-06 23:29 ` KP Singh
2025-06-07 9:16 ` kernel test robot
2025-06-12 22:55 ` Andrii Nakryiko
2025-06-06 23:29 ` [PATCH 06/12] selftests/bpf: Add tests for exclusive maps KP Singh
` (8 subsequent siblings)
13 siblings, 2 replies; 79+ messages in thread
From: KP Singh @ 2025-06-06 23:29 UTC (permalink / raw)
To: bpf, linux-security-module
Cc: bboscaccy, paul, kys, ast, daniel, andrii, KP Singh
Implement a convenience API, bpf_map__make_exclusive(), which
calculates the hash of the program and registers it with the map so
that the map is created as exclusive when the object is loaded.
The hash of the program must be computed after all the relocations are
done.
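For example, with a skeleton the call would sit between open and load, roughly
(a sketch; the skeleton, map and program names are made up for illustration):

	struct loader_bpf *skel = loader_bpf__open();

	err = bpf_map__make_exclusive(skel->maps.data, skel->progs.loader);
	if (!err)
		err = loader_bpf__load(skel);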
Signed-off-by: KP Singh <kpsingh@kernel.org>
---
tools/lib/bpf/bpf.c | 4 +-
tools/lib/bpf/bpf.h | 4 +-
tools/lib/bpf/libbpf.c | 68 +++++++++++++++++++++++++++++++++-
tools/lib/bpf/libbpf.h | 13 +++++++
tools/lib/bpf/libbpf.map | 5 +++
tools/lib/bpf/libbpf_version.h | 2 +-
6 files changed, 92 insertions(+), 4 deletions(-)
diff --git a/tools/lib/bpf/bpf.c b/tools/lib/bpf/bpf.c
index a9c3e33d0f8a..11fa2d64ccca 100644
--- a/tools/lib/bpf/bpf.c
+++ b/tools/lib/bpf/bpf.c
@@ -172,7 +172,7 @@ int bpf_map_create(enum bpf_map_type map_type,
__u32 max_entries,
const struct bpf_map_create_opts *opts)
{
- const size_t attr_sz = offsetofend(union bpf_attr, map_token_fd);
+ const size_t attr_sz = offsetofend(union bpf_attr, excl_prog_hash);
union bpf_attr attr;
int fd;
@@ -203,6 +203,8 @@ int bpf_map_create(enum bpf_map_type map_type,
attr.map_ifindex = OPTS_GET(opts, map_ifindex, 0);
attr.map_token_fd = OPTS_GET(opts, token_fd, 0);
+ attr.excl_prog_hash = ptr_to_u64(OPTS_GET(opts, excl_prog_hash, NULL));
+ attr.excl_prog_hash_size = OPTS_GET(opts, excl_prog_hash_size, 0);
fd = sys_bpf_fd(BPF_MAP_CREATE, &attr, attr_sz);
return libbpf_err_errno(fd);
diff --git a/tools/lib/bpf/bpf.h b/tools/lib/bpf/bpf.h
index 777627d33d25..a82b79c0c349 100644
--- a/tools/lib/bpf/bpf.h
+++ b/tools/lib/bpf/bpf.h
@@ -54,9 +54,11 @@ struct bpf_map_create_opts {
__s32 value_type_btf_obj_fd;
__u32 token_fd;
+ __u32 excl_prog_hash_size;
+ const void *excl_prog_hash;
size_t :0;
};
-#define bpf_map_create_opts__last_field token_fd
+#define bpf_map_create_opts__last_field excl_prog_hash
LIBBPF_API int bpf_map_create(enum bpf_map_type map_type,
const char *map_name,
diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
index 475038d04cb4..17de756973f4 100644
--- a/tools/lib/bpf/libbpf.c
+++ b/tools/lib/bpf/libbpf.c
@@ -499,6 +499,7 @@ struct bpf_program {
__u32 line_info_rec_size;
__u32 line_info_cnt;
__u32 prog_flags;
+ __u8 hash[SHA256_DIGEST_LENGTH];
};
struct bpf_struct_ops {
@@ -578,6 +579,8 @@ struct bpf_map {
bool autocreate;
bool autoattach;
__u64 map_extra;
+ const void *excl_prog_sha;
+ __u32 excl_prog_sha_size;
};
enum extern_type {
@@ -4485,6 +4488,43 @@ bpf_object__section_to_libbpf_map_type(const struct bpf_object *obj, int shndx)
}
}
+static int bpf_program__compute_hash(struct bpf_program *prog)
+{
+ struct bpf_insn *purged;
+ bool was_ld_map;
+ int i, err;
+
+ purged = calloc(1, BPF_INSN_SZ * prog->insns_cnt);
+ if (!purged)
+ return -ENOMEM;
+
+ /* If relocations have been done, the map_fd needs to be
+ * discarded for the digest calculation.
+ */
+ for (i = 0, was_ld_map = false; i < prog->insns_cnt; i++) {
+ purged[i] = prog->insns[i];
+ if (!was_ld_map &&
+ purged[i].code == (BPF_LD | BPF_IMM | BPF_DW) &&
+ (purged[i].src_reg == BPF_PSEUDO_MAP_FD ||
+ purged[i].src_reg == BPF_PSEUDO_MAP_VALUE)) {
+ was_ld_map = true;
+ purged[i].imm = 0;
+ } else if (was_ld_map && purged[i].code == 0 &&
+ purged[i].dst_reg == 0 && purged[i].src_reg == 0 &&
+ purged[i].off == 0) {
+ was_ld_map = false;
+ purged[i].imm = 0;
+ } else {
+ was_ld_map = false;
+ }
+ }
+ err = libbpf_sha256(purged,
+ prog->insns_cnt * sizeof(struct bpf_insn),
+ prog->hash);
+ free(purged);
+ return err;
+}
+
static int bpf_program__record_reloc(struct bpf_program *prog,
struct reloc_desc *reloc_desc,
__u32 insn_idx, const char *sym_name,
@@ -5214,6 +5254,10 @@ static int bpf_object__create_map(struct bpf_object *obj, struct bpf_map *map, b
create_attr.token_fd = obj->token_fd;
if (obj->token_fd)
create_attr.map_flags |= BPF_F_TOKEN_FD;
+ if (map->excl_prog_sha) {
+ create_attr.excl_prog_hash = map->excl_prog_sha;
+ create_attr.excl_prog_hash_size = map->excl_prog_sha_size;
+ }
if (bpf_map__is_struct_ops(map)) {
create_attr.btf_vmlinux_value_type_id = map->btf_vmlinux_value_type_id;
@@ -7933,6 +7977,11 @@ static int bpf_object_prepare_progs(struct bpf_object *obj)
err = bpf_object__sanitize_prog(obj, prog);
if (err)
return err;
+ /* Now that the instruction buffer is stable finalize the hash
+ */
+ err = bpf_program__compute_hash(&obj->programs[i]);
+ if (err)
+ return err;
}
return 0;
}
@@ -8602,8 +8651,8 @@ static int bpf_object_prepare(struct bpf_object *obj, const char *target_btf_pat
err = err ? : bpf_object_adjust_struct_ops_autoload(obj);
err = err ? : bpf_object__relocate(obj, obj->btf_custom_path ? : target_btf_path);
err = err ? : bpf_object__sanitize_and_load_btf(obj);
- err = err ? : bpf_object__create_maps(obj);
err = err ? : bpf_object_prepare_progs(obj);
+ err = err ? : bpf_object__create_maps(obj);
if (err) {
bpf_object_unpin(obj);
@@ -10502,6 +10551,23 @@ int bpf_map__set_inner_map_fd(struct bpf_map *map, int fd)
return 0;
}
+int bpf_map__make_exclusive(struct bpf_map *map, struct bpf_program *prog)
+{
+ if (map_is_created(map)) {
+ pr_warn("%s must be called before creation\n", __func__);
+ return libbpf_err(-EINVAL);
+ }
+
+ if (prog->obj->state == OBJ_LOADED) {
+ pr_warn("%s must be called before the prog load\n", __func__);
+ return libbpf_err(-EINVAL);
+ }
+ map->excl_prog_sha = prog->hash;
+ map->excl_prog_sha_size = SHA256_DIGEST_LENGTH;
+ return 0;
+}
+
+
static struct bpf_map *
__bpf_map__iter(const struct bpf_map *m, const struct bpf_object *obj, int i)
{
diff --git a/tools/lib/bpf/libbpf.h b/tools/lib/bpf/libbpf.h
index d39f19c8396d..b6ee9870523a 100644
--- a/tools/lib/bpf/libbpf.h
+++ b/tools/lib/bpf/libbpf.h
@@ -1249,6 +1249,19 @@ LIBBPF_API int bpf_map__lookup_and_delete_elem(const struct bpf_map *map,
*/
LIBBPF_API int bpf_map__get_next_key(const struct bpf_map *map,
const void *cur_key, void *next_key, size_t key_sz);
+/**
+ * @brief **bpf_map__make_exclusive()** makes the map exclusive to a single program.
+ * @param map BPF map to make exclusive.
+ * @param prog BPF program to be the exclusive user of the map.
+ * @return 0 on success; a negative error code otherwise.
+ *
+ * Once a map is made exclusive, only the specified program can access its
+ * contents. **bpf_map__make_exclusive** must be called before the objects are
+ * loaded.
+ */
+LIBBPF_API int bpf_map__make_exclusive(struct bpf_map *map, struct bpf_program *prog);
+
+int bpf_map__make_exclusive(struct bpf_map *map, struct bpf_program *prog);
struct bpf_xdp_set_link_opts {
size_t sz;
diff --git a/tools/lib/bpf/libbpf.map b/tools/lib/bpf/libbpf.map
index 1205f9a4fe04..67b1ff4202a1 100644
--- a/tools/lib/bpf/libbpf.map
+++ b/tools/lib/bpf/libbpf.map
@@ -444,3 +444,8 @@ LIBBPF_1.6.0 {
btf__add_decl_attr;
btf__add_type_attr;
} LIBBPF_1.5.0;
+
+LIBBPF_1.7.0 {
+ global:
+ bpf_map__make_exclusive;
+} LIBBPF_1.6.0;
diff --git a/tools/lib/bpf/libbpf_version.h b/tools/lib/bpf/libbpf_version.h
index 28c58fb17250..99331e317dee 100644
--- a/tools/lib/bpf/libbpf_version.h
+++ b/tools/lib/bpf/libbpf_version.h
@@ -4,6 +4,6 @@
#define __LIBBPF_VERSION_H
#define LIBBPF_MAJOR_VERSION 1
-#define LIBBPF_MINOR_VERSION 6
+#define LIBBPF_MINOR_VERSION 7
#endif /* __LIBBPF_VERSION_H */
--
2.43.0
^ permalink raw reply related [flat|nested] 79+ messages in thread
* Re: [PATCH 05/12] libbpf: Support exclusive map creation
2025-06-06 23:29 ` [PATCH 05/12] libbpf: Support exclusive map creation KP Singh
@ 2025-06-07 9:16 ` kernel test robot
2025-06-12 22:55 ` Andrii Nakryiko
1 sibling, 0 replies; 79+ messages in thread
From: kernel test robot @ 2025-06-07 9:16 UTC (permalink / raw)
To: KP Singh, bpf, linux-security-module
Cc: oe-kbuild-all, bboscaccy, paul, kys, ast, daniel, andrii,
KP Singh
Hi KP,
kernel test robot noticed the following build errors:
[auto build test ERROR on bpf-next/net]
[also build test ERROR on bpf-next/master bpf/master linus/master next-20250606]
[cannot apply to v6.15]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch#_base_tree_information]
url: https://github.com/intel-lab-lkp/linux/commits/KP-Singh/bpf-Implement-an-internal-helper-for-SHA256-hashing/20250607-073052
base: https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next.git net
patch link: https://lore.kernel.org/r/20250606232914.317094-6-kpsingh%40kernel.org
patch subject: [PATCH 05/12] libbpf: Support exclusive map creation
config: i386-buildonly-randconfig-003-20250607 (https://download.01.org/0day-ci/archive/20250607/202506071746.cWvht6xb-lkp@intel.com/config)
compiler: gcc-12 (Debian 12.2.0-14) 12.2.0
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20250607/202506071746.cWvht6xb-lkp@intel.com/reproduce)
If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <lkp@intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202506071746.cWvht6xb-lkp@intel.com/
All errors (new ones prefixed by >>):
In file included from libbpf_errno.c:14:
>> libbpf.h:1264:5: error: redundant redeclaration of 'bpf_map__make_exclusive' [-Werror=redundant-decls]
1264 | int bpf_map__make_exclusive(struct bpf_map *map, struct bpf_program *prog);
| ^~~~~~~~~~~~~~~~~~~~~~~
libbpf.h:1262:16: note: previous declaration of 'bpf_map__make_exclusive' with type 'int(struct bpf_map *, struct bpf_program *)'
1262 | LIBBPF_API int bpf_map__make_exclusive(struct bpf_map *map, struct bpf_program *prog);
| ^~~~~~~~~~~~~~~~~~~~~~~
In file included from btf_relocate.c:31:
>> libbpf.h:1264:5: error: redundant redeclaration of 'bpf_map__make_exclusive' [-Werror=redundant-decls]
1264 | int bpf_map__make_exclusive(struct bpf_map *map, struct bpf_program *prog);
| ^~~~~~~~~~~~~~~~~~~~~~~
libbpf.h:1262:16: note: previous declaration of 'bpf_map__make_exclusive' with type 'int(struct bpf_map *, struct bpf_program *)'
1262 | LIBBPF_API int bpf_map__make_exclusive(struct bpf_map *map, struct bpf_program *prog);
| ^~~~~~~~~~~~~~~~~~~~~~~
In file included from libbpf_internal.h:43,
from nlattr.c:14:
>> libbpf.h:1264:5: error: redundant redeclaration of 'bpf_map__make_exclusive' [-Werror=redundant-decls]
1264 | int bpf_map__make_exclusive(struct bpf_map *map, struct bpf_program *prog);
| ^~~~~~~~~~~~~~~~~~~~~~~
In file included from relo_core.c:64:
>> libbpf.h:1264:5: error: redundant redeclaration of 'bpf_map__make_exclusive' [-Werror=redundant-decls]
1264 | int bpf_map__make_exclusive(struct bpf_map *map, struct bpf_program *prog);
| ^~~~~~~~~~~~~~~~~~~~~~~
libbpf.h:1262:16: note: previous declaration of 'bpf_map__make_exclusive' with type 'int(struct bpf_map *, struct bpf_program *)'
1262 | LIBBPF_API int bpf_map__make_exclusive(struct bpf_map *map, struct bpf_program *prog);
| ^~~~~~~~~~~~~~~~~~~~~~~
libbpf.h:1262:16: note: previous declaration of 'bpf_map__make_exclusive' with type 'int(struct bpf_map *, struct bpf_program *)'
1262 | LIBBPF_API int bpf_map__make_exclusive(struct bpf_map *map, struct bpf_program *prog);
| ^~~~~~~~~~~~~~~~~~~~~~~
In file included from linker.c:24:
>> libbpf.h:1264:5: error: redundant redeclaration of 'bpf_map__make_exclusive' [-Werror=redundant-decls]
1264 | int bpf_map__make_exclusive(struct bpf_map *map, struct bpf_program *prog);
| ^~~~~~~~~~~~~~~~~~~~~~~
libbpf.h:1262:16: note: previous declaration of 'bpf_map__make_exclusive' with type 'int(struct bpf_map *, struct bpf_program *)'
1262 | LIBBPF_API int bpf_map__make_exclusive(struct bpf_map *map, struct bpf_program *prog);
| ^~~~~~~~~~~~~~~~~~~~~~~
In file included from libbpf_internal.h:43,
from strset.c:9:
>> libbpf.h:1264:5: error: redundant redeclaration of 'bpf_map__make_exclusive' [-Werror=redundant-decls]
1264 | int bpf_map__make_exclusive(struct bpf_map *map, struct bpf_program *prog);
| ^~~~~~~~~~~~~~~~~~~~~~~
libbpf.h:1262:16: note: previous declaration of 'bpf_map__make_exclusive' with type 'int(struct bpf_map *, struct bpf_program *)'
1262 | LIBBPF_API int bpf_map__make_exclusive(struct bpf_map *map, struct bpf_program *prog);
| ^~~~~~~~~~~~~~~~~~~~~~~
In file included from bpf_prog_linfo.c:8:
>> libbpf.h:1264:5: error: redundant redeclaration of 'bpf_map__make_exclusive' [-Werror=redundant-decls]
1264 | int bpf_map__make_exclusive(struct bpf_map *map, struct bpf_program *prog);
| ^~~~~~~~~~~~~~~~~~~~~~~
libbpf.h:1262:16: note: previous declaration of 'bpf_map__make_exclusive' with type 'int(struct bpf_map *, struct bpf_program *)'
1262 | LIBBPF_API int bpf_map__make_exclusive(struct bpf_map *map, struct bpf_program *prog);
| ^~~~~~~~~~~~~~~~~~~~~~~
In file included from btf_dump.c:22:
>> libbpf.h:1264:5: error: redundant redeclaration of 'bpf_map__make_exclusive' [-Werror=redundant-decls]
1264 | int bpf_map__make_exclusive(struct bpf_map *map, struct bpf_program *prog);
| ^~~~~~~~~~~~~~~~~~~~~~~
libbpf.h:1262:16: note: previous declaration of 'bpf_map__make_exclusive' with type 'int(struct bpf_map *, struct bpf_program *)'
1262 | LIBBPF_API int bpf_map__make_exclusive(struct bpf_map *map, struct bpf_program *prog);
| ^~~~~~~~~~~~~~~~~~~~~~~
In file included from ringbuf.c:21:
>> libbpf.h:1264:5: error: redundant redeclaration of 'bpf_map__make_exclusive' [-Werror=redundant-decls]
1264 | int bpf_map__make_exclusive(struct bpf_map *map, struct bpf_program *prog);
| ^~~~~~~~~~~~~~~~~~~~~~~
libbpf.h:1262:16: note: previous declaration of 'bpf_map__make_exclusive' with type 'int(struct bpf_map *, struct bpf_program *)'
1262 | LIBBPF_API int bpf_map__make_exclusive(struct bpf_map *map, struct bpf_program *prog);
| ^~~~~~~~~~~~~~~~~~~~~~~
In file included from libbpf_internal.h:43,
from elf.c:11:
>> libbpf.h:1264:5: error: redundant redeclaration of 'bpf_map__make_exclusive' [-Werror=redundant-decls]
1264 | int bpf_map__make_exclusive(struct bpf_map *map, struct bpf_program *prog);
| ^~~~~~~~~~~~~~~~~~~~~~~
libbpf.h:1262:16: note: previous declaration of 'bpf_map__make_exclusive' with type 'int(struct bpf_map *, struct bpf_program *)'
1262 | LIBBPF_API int bpf_map__make_exclusive(struct bpf_map *map, struct bpf_program *prog);
| ^~~~~~~~~~~~~~~~~~~~~~~
In file included from netlink.c:18:
>> libbpf.h:1264:5: error: redundant redeclaration of 'bpf_map__make_exclusive' [-Werror=redundant-decls]
1264 | int bpf_map__make_exclusive(struct bpf_map *map, struct bpf_program *prog);
| ^~~~~~~~~~~~~~~~~~~~~~~
libbpf.h:1262:16: note: previous declaration of 'bpf_map__make_exclusive' with type 'int(struct bpf_map *, struct bpf_program *)'
1262 | LIBBPF_API int bpf_map__make_exclusive(struct bpf_map *map, struct bpf_program *prog);
| ^~~~~~~~~~~~~~~~~~~~~~~
In file included from features.c:6:
>> libbpf.h:1264:5: error: redundant redeclaration of 'bpf_map__make_exclusive' [-Werror=redundant-decls]
1264 | int bpf_map__make_exclusive(struct bpf_map *map, struct bpf_program *prog);
| ^~~~~~~~~~~~~~~~~~~~~~~
libbpf.h:1262:16: note: previous declaration of 'bpf_map__make_exclusive' with type 'int(struct bpf_map *, struct bpf_program *)'
1262 | LIBBPF_API int bpf_map__make_exclusive(struct bpf_map *map, struct bpf_program *prog);
| ^~~~~~~~~~~~~~~~~~~~~~~
In file included from usdt.c:19:
>> libbpf.h:1264:5: error: redundant redeclaration of 'bpf_map__make_exclusive' [-Werror=redundant-decls]
1264 | int bpf_map__make_exclusive(struct bpf_map *map, struct bpf_program *prog);
| ^~~~~~~~~~~~~~~~~~~~~~~
libbpf.h:1262:16: note: previous declaration of 'bpf_map__make_exclusive' with type 'int(struct bpf_map *, struct bpf_program *)'
1262 | LIBBPF_API int bpf_map__make_exclusive(struct bpf_map *map, struct bpf_program *prog);
| ^~~~~~~~~~~~~~~~~~~~~~~
In file included from gen_loader.c:11:
>> libbpf.h:1264:5: error: redundant redeclaration of 'bpf_map__make_exclusive' [-Werror=redundant-decls]
1264 | int bpf_map__make_exclusive(struct bpf_map *map, struct bpf_program *prog);
| ^~~~~~~~~~~~~~~~~~~~~~~
libbpf.h:1262:16: note: previous declaration of 'bpf_map__make_exclusive' with type 'int(struct bpf_map *, struct bpf_program *)'
1262 | LIBBPF_API int bpf_map__make_exclusive(struct bpf_map *map, struct bpf_program *prog);
| ^~~~~~~~~~~~~~~~~~~~~~~
In file included from btf.c:22:
>> libbpf.h:1264:5: error: redundant redeclaration of 'bpf_map__make_exclusive' [-Werror=redundant-decls]
1264 | int bpf_map__make_exclusive(struct bpf_map *map, struct bpf_program *prog);
| ^~~~~~~~~~~~~~~~~~~~~~~
libbpf.h:1262:16: note: previous declaration of 'bpf_map__make_exclusive' with type 'int(struct bpf_map *, struct bpf_program *)'
1262 | LIBBPF_API int bpf_map__make_exclusive(struct bpf_map *map, struct bpf_program *prog);
| ^~~~~~~~~~~~~~~~~~~~~~~
In file included from libbpf_internal.h:43,
from zip.c:16:
>> libbpf.h:1264:5: error: redundant redeclaration of 'bpf_map__make_exclusive' [-Werror=redundant-decls]
1264 | int bpf_map__make_exclusive(struct bpf_map *map, struct bpf_program *prog);
| ^~~~~~~~~~~~~~~~~~~~~~~
libbpf.h:1262:16: note: previous declaration of 'bpf_map__make_exclusive' with type 'int(struct bpf_map *, struct bpf_program *)'
1262 | LIBBPF_API int bpf_map__make_exclusive(struct bpf_map *map, struct bpf_program *prog);
| ^~~~~~~~~~~~~~~~~~~~~~~
In file included from libbpf_probes.c:18:
>> libbpf.h:1264:5: error: redundant redeclaration of 'bpf_map__make_exclusive' [-Werror=redundant-decls]
1264 | int bpf_map__make_exclusive(struct bpf_map *map, struct bpf_program *prog);
| ^~~~~~~~~~~~~~~~~~~~~~~
libbpf.h:1262:16: note: previous declaration of 'bpf_map__make_exclusive' with type 'int(struct bpf_map *, struct bpf_program *)'
1262 | LIBBPF_API int bpf_map__make_exclusive(struct bpf_map *map, struct bpf_program *prog);
| ^~~~~~~~~~~~~~~~~~~~~~~
cc1: all warnings being treated as errors
In file included from libbpf_internal.h:43,
from btf_iter.c:13:
>> libbpf.h:1264:5: error: redundant redeclaration of 'bpf_map__make_exclusive' [-Werror=redundant-decls]
1264 | int bpf_map__make_exclusive(struct bpf_map *map, struct bpf_program *prog);
| ^~~~~~~~~~~~~~~~~~~~~~~
libbpf.h:1262:16: note: previous declaration of 'bpf_map__make_exclusive' with type 'int(struct bpf_map *, struct bpf_program *)'
1262 | LIBBPF_API int bpf_map__make_exclusive(struct bpf_map *map, struct bpf_program *prog);
| ^~~~~~~~~~~~~~~~~~~~~~~
make[6]: *** [tools/build/Makefile.build:85: tools/bpf/resolve_btfids/libbpf/staticobjs/libbpf_errno.o] Error 1 shuffle=3326748311
In file included from libbpf.c:53:
>> libbpf.h:1264:5: error: redundant redeclaration of 'bpf_map__make_exclusive' [-Werror=redundant-decls]
1264 | int bpf_map__make_exclusive(struct bpf_map *map, struct bpf_program *prog);
| ^~~~~~~~~~~~~~~~~~~~~~~
libbpf.h:1262:16: note: previous declaration of 'bpf_map__make_exclusive' with type 'int(struct bpf_map *, struct bpf_program *)'
1262 | LIBBPF_API int bpf_map__make_exclusive(struct bpf_map *map, struct bpf_program *prog);
| ^~~~~~~~~~~~~~~~~~~~~~~
cc1: all warnings being treated as errors
In file included from bpf.c:36:
>> libbpf.h:1264:5: error: redundant redeclaration of 'bpf_map__make_exclusive' [-Werror=redundant-decls]
1264 | int bpf_map__make_exclusive(struct bpf_map *map, struct bpf_program *prog);
| ^~~~~~~~~~~~~~~~~~~~~~~
make[6]: *** [tools/build/Makefile.build:85: tools/bpf/resolve_btfids/libbpf/staticobjs/nlattr.o] Error 1 shuffle=3326748311
libbpf.h:1262:16: note: previous declaration of 'bpf_map__make_exclusive' with type 'int(struct bpf_map *, struct bpf_program *)'
1262 | LIBBPF_API int bpf_map__make_exclusive(struct bpf_map *map, struct bpf_program *prog);
| ^~~~~~~~~~~~~~~~~~~~~~~
cc1: all warnings being treated as errors
make[6]: *** [tools/build/Makefile.build:85: tools/bpf/resolve_btfids/libbpf/staticobjs/strset.o] Error 1 shuffle=3326748311
cc1: all warnings being treated as errors
make[6]: *** [tools/build/Makefile.build:85: tools/bpf/resolve_btfids/libbpf/staticobjs/btf_iter.o] Error 1 shuffle=3326748311
cc1: all warnings being treated as errors
make[6]: *** [tools/build/Makefile.build:85: tools/bpf/resolve_btfids/libbpf/staticobjs/bpf_prog_linfo.o] Error 1 shuffle=3326748311
cc1: all warnings being treated as errors
make[6]: *** [tools/build/Makefile.build:85: tools/bpf/resolve_btfids/libbpf/staticobjs/zip.o] Error 1 shuffle=3326748311
cc1: all warnings being treated as errors
make[6]: *** [tools/build/Makefile.build:85: tools/bpf/resolve_btfids/libbpf/staticobjs/libbpf_probes.o] Error 1 shuffle=3326748311
cc1: all warnings being treated as errors
make[6]: *** [tools/build/Makefile.build:85: tools/bpf/resolve_btfids/libbpf/staticobjs/elf.o] Error 1 shuffle=3326748311
cc1: all warnings being treated as errors
make[6]: *** [tools/build/Makefile.build:85: tools/bpf/resolve_btfids/libbpf/staticobjs/btf_relocate.o] Error 1 shuffle=3326748311
cc1: all warnings being treated as errors
cc1: all warnings being treated as errors
make[6]: *** [tools/build/Makefile.build:85: tools/bpf/resolve_btfids/libbpf/staticobjs/features.o] Error 1 shuffle=3326748311
make[6]: *** [tools/build/Makefile.build:85: tools/bpf/resolve_btfids/libbpf/staticobjs/ringbuf.o] Error 1 shuffle=3326748311
cc1: all warnings being treated as errors
make[6]: *** [tools/build/Makefile.build:85: tools/bpf/resolve_btfids/libbpf/staticobjs/netlink.o] Error 1 shuffle=3326748311
cc1: all warnings being treated as errors
make[6]: *** [tools/build/Makefile.build:85: tools/bpf/resolve_btfids/libbpf/staticobjs/usdt.o] Error 1 shuffle=3326748311
cc1: all warnings being treated as errors
cc1: all warnings being treated as errors
make[6]: *** [tools/build/Makefile.build:85: tools/bpf/resolve_btfids/libbpf/staticobjs/gen_loader.o] Error 1 shuffle=3326748311
make[6]: *** [tools/build/Makefile.build:85: tools/bpf/resolve_btfids/libbpf/staticobjs/relo_core.o] Error 1 shuffle=3326748311
cc1: all warnings being treated as errors
make[6]: *** [tools/build/Makefile.build:85: tools/bpf/resolve_btfids/libbpf/staticobjs/btf_dump.o] Error 1 shuffle=3326748311
cc1: all warnings being treated as errors
make[6]: *** [tools/build/Makefile.build:85: tools/bpf/resolve_btfids/libbpf/staticobjs/bpf.o] Error 1 shuffle=3326748311
cc1: all warnings being treated as errors
make[6]: *** [tools/build/Makefile.build:85: tools/bpf/resolve_btfids/libbpf/staticobjs/linker.o] Error 1 shuffle=3326748311
cc1: all warnings being treated as errors
make[6]: *** [tools/build/Makefile.build:85: tools/bpf/resolve_btfids/libbpf/staticobjs/btf.o] Error 1 shuffle=3326748311
cc1: all warnings being treated as errors
make[6]: *** [tools/build/Makefile.build:85: tools/bpf/resolve_btfids/libbpf/staticobjs/libbpf.o] Error 1 shuffle=3326748311
make[6]: Target '__build' not remade because of errors.
make[5]: *** [Makefile:152: tools/bpf/resolve_btfids/libbpf/staticobjs/libbpf-in.o] Error 2 shuffle=3326748311
make[5]: Target 'tools/bpf/resolve_btfids/libbpf/libbpf.a' not remade because of errors.
make[4]: *** [Makefile:61: tools/bpf/resolve_btfids//libbpf/libbpf.a] Error 2 shuffle=3326748311
make[4]: Target 'all' not remade because of errors.
make[3]: *** [Makefile:76: bpf/resolve_btfids] Error 2 shuffle=3326748311
make[2]: *** [Makefile:1448: tools/bpf/resolve_btfids] Error 2 shuffle=3326748311
make[2]: Target 'prepare' not remade because of errors.
make[1]: *** [Makefile:248: __sub-make] Error 2 shuffle=3326748311
make[1]: Target 'prepare' not remade because of errors.
make: *** [Makefile:248: __sub-make] Error 2 shuffle=3326748311
make: Target 'prepare' not remade because of errors.
--
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki
^ permalink raw reply [flat|nested] 79+ messages in thread
* Re: [PATCH 05/12] libbpf: Support exclusive map creation
2025-06-06 23:29 ` [PATCH 05/12] libbpf: Support exclusive map creation KP Singh
2025-06-07 9:16 ` kernel test robot
@ 2025-06-12 22:55 ` Andrii Nakryiko
2025-06-12 23:41 ` KP Singh
` (2 more replies)
1 sibling, 3 replies; 79+ messages in thread
From: Andrii Nakryiko @ 2025-06-12 22:55 UTC (permalink / raw)
To: KP Singh
Cc: bpf, linux-security-module, bboscaccy, paul, kys, ast, daniel,
andrii
On Fri, Jun 6, 2025 at 4:29 PM KP Singh <kpsingh@kernel.org> wrote:
>
> Implement a convenient method i.e. bpf_map__make_exclusive which
> calculates the hash for the program and registers it with the map for
> creation as an exclusive map when the objects are loaded.
>
> The hash of the program must be computed after all the relocations are
> done.
>
> Signed-off-by: KP Singh <kpsingh@kernel.org>
> ---
> tools/lib/bpf/bpf.c | 4 +-
> tools/lib/bpf/bpf.h | 4 +-
> tools/lib/bpf/libbpf.c | 68 +++++++++++++++++++++++++++++++++-
> tools/lib/bpf/libbpf.h | 13 +++++++
> tools/lib/bpf/libbpf.map | 5 +++
> tools/lib/bpf/libbpf_version.h | 2 +-
> 6 files changed, 92 insertions(+), 4 deletions(-)
>
> diff --git a/tools/lib/bpf/bpf.c b/tools/lib/bpf/bpf.c
> index a9c3e33d0f8a..11fa2d64ccca 100644
> --- a/tools/lib/bpf/bpf.c
> +++ b/tools/lib/bpf/bpf.c
> @@ -172,7 +172,7 @@ int bpf_map_create(enum bpf_map_type map_type,
> __u32 max_entries,
> const struct bpf_map_create_opts *opts)
> {
> - const size_t attr_sz = offsetofend(union bpf_attr, map_token_fd);
> + const size_t attr_sz = offsetofend(union bpf_attr, excl_prog_hash);
> union bpf_attr attr;
> int fd;
>
> @@ -203,6 +203,8 @@ int bpf_map_create(enum bpf_map_type map_type,
> attr.map_ifindex = OPTS_GET(opts, map_ifindex, 0);
>
> attr.map_token_fd = OPTS_GET(opts, token_fd, 0);
> + attr.excl_prog_hash = ptr_to_u64(OPTS_GET(opts, excl_prog_hash, NULL));
> + attr.excl_prog_hash_size = OPTS_GET(opts, excl_prog_hash_size, 0);
>
> fd = sys_bpf_fd(BPF_MAP_CREATE, &attr, attr_sz);
> return libbpf_err_errno(fd);
> diff --git a/tools/lib/bpf/bpf.h b/tools/lib/bpf/bpf.h
> index 777627d33d25..a82b79c0c349 100644
> --- a/tools/lib/bpf/bpf.h
> +++ b/tools/lib/bpf/bpf.h
> @@ -54,9 +54,11 @@ struct bpf_map_create_opts {
> __s32 value_type_btf_obj_fd;
>
> __u32 token_fd;
> + __u32 excl_prog_hash_size;
> + const void *excl_prog_hash;
> size_t :0;
> };
> -#define bpf_map_create_opts__last_field token_fd
> +#define bpf_map_create_opts__last_field excl_prog_hash
>
> LIBBPF_API int bpf_map_create(enum bpf_map_type map_type,
> const char *map_name,
> diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
> index 475038d04cb4..17de756973f4 100644
> --- a/tools/lib/bpf/libbpf.c
> +++ b/tools/lib/bpf/libbpf.c
> @@ -499,6 +499,7 @@ struct bpf_program {
> __u32 line_info_rec_size;
> __u32 line_info_cnt;
> __u32 prog_flags;
> + __u8 hash[SHA256_DIGEST_LENGTH];
> };
>
> struct bpf_struct_ops {
> @@ -578,6 +579,8 @@ struct bpf_map {
> bool autocreate;
> bool autoattach;
> __u64 map_extra;
> + const void *excl_prog_sha;
> + __u32 excl_prog_sha_size;
> };
>
> enum extern_type {
> @@ -4485,6 +4488,43 @@ bpf_object__section_to_libbpf_map_type(const struct bpf_object *obj, int shndx)
> }
> }
>
> +static int bpf_program__compute_hash(struct bpf_program *prog)
> +{
> + struct bpf_insn *purged;
> + bool was_ld_map;
> + int i, err;
> +
> + purged = calloc(1, BPF_INSN_SZ * prog->insns_cnt);
> + if (!purged)
> + return -ENOMEM;
> +
> + /* If relocations have been done, the map_fd needs to be
> + * discarded for the digest calculation.
> + */
all this looks sketchy, let's think about some more robust approach
here rather than randomly clearing some fields of some instructions...
> + for (i = 0, was_ld_map = false; i < prog->insns_cnt; i++) {
> + purged[i] = prog->insns[i];
> + if (!was_ld_map &&
> + purged[i].code == (BPF_LD | BPF_IMM | BPF_DW) &&
> + (purged[i].src_reg == BPF_PSEUDO_MAP_FD ||
> + purged[i].src_reg == BPF_PSEUDO_MAP_VALUE)) {
> + was_ld_map = true;
> + purged[i].imm = 0;
> + } else if (was_ld_map && purged[i].code == 0 &&
> + purged[i].dst_reg == 0 && purged[i].src_reg == 0 &&
> + purged[i].off == 0) {
> + was_ld_map = false;
> + purged[i].imm = 0;
> + } else {
> + was_ld_map = false;
> + }
> + }
this was_ld_map business is... unnecessary? Just access purged[i + 1]
(checking i + 1 < prog->insns_cnt, of course), and i += 1. This
stateful approach is an unnecessary complication, IMO
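E.g. something like this (completely untested, just to illustrate the
shape, assuming well-formed two-insn ldimm64):

  for (i = 0; i < prog->insns_cnt; i++) {
          purged[i] = prog->insns[i];
          if (purged[i].code == (BPF_LD | BPF_IMM | BPF_DW) &&
              (purged[i].src_reg == BPF_PSEUDO_MAP_FD ||
               purged[i].src_reg == BPF_PSEUDO_MAP_VALUE) &&
              i + 1 < prog->insns_cnt) {
                  /* zero out the unstable map FD/value reference */
                  purged[i].imm = 0;
                  purged[i + 1] = prog->insns[i + 1];
                  purged[i + 1].imm = 0;
                  i++;
          }
  }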
> + err = libbpf_sha256(purged,
> + prog->insns_cnt * sizeof(struct bpf_insn),
> + prog->hash);
fits on a single line?
> + free(purged);
> + return err;
> +}
> +
> static int bpf_program__record_reloc(struct bpf_program *prog,
> struct reloc_desc *reloc_desc,
> __u32 insn_idx, const char *sym_name,
> @@ -5214,6 +5254,10 @@ static int bpf_object__create_map(struct bpf_object *obj, struct bpf_map *map, b
> create_attr.token_fd = obj->token_fd;
> if (obj->token_fd)
> create_attr.map_flags |= BPF_F_TOKEN_FD;
> + if (map->excl_prog_sha) {
> + create_attr.excl_prog_hash = map->excl_prog_sha;
> + create_attr.excl_prog_hash_size = map->excl_prog_sha_size;
> + }
>
> if (bpf_map__is_struct_ops(map)) {
> create_attr.btf_vmlinux_value_type_id = map->btf_vmlinux_value_type_id;
> @@ -7933,6 +7977,11 @@ static int bpf_object_prepare_progs(struct bpf_object *obj)
> err = bpf_object__sanitize_prog(obj, prog);
> if (err)
> return err;
> + /* Now that the instruction buffer is stable finalize the hash
> + */
> + err = bpf_program__compute_hash(&obj->programs[i]);
> + if (err)
> + return err;
we'll do this unconditionally for any program?.. why?
> }
> return 0;
> }
> @@ -8602,8 +8651,8 @@ static int bpf_object_prepare(struct bpf_object *obj, const char *target_btf_pat
> err = err ? : bpf_object_adjust_struct_ops_autoload(obj);
> err = err ? : bpf_object__relocate(obj, obj->btf_custom_path ? : target_btf_path);
> err = err ? : bpf_object__sanitize_and_load_btf(obj);
> - err = err ? : bpf_object__create_maps(obj);
> err = err ? : bpf_object_prepare_progs(obj);
> + err = err ? : bpf_object__create_maps(obj);
>
> if (err) {
> bpf_object_unpin(obj);
> @@ -10502,6 +10551,23 @@ int bpf_map__set_inner_map_fd(struct bpf_map *map, int fd)
> return 0;
> }
>
> +int bpf_map__make_exclusive(struct bpf_map *map, struct bpf_program *prog)
> +{
> + if (map_is_created(map)) {
> + pr_warn("%s must be called before creation\n", __func__);
we don't really add __func__ for a long while now, please drop, we
have a consistent "map '%s': what the problem is" format
but for checks like this we also just return -EBUSY or something like
that without error message, so I'd just drop the message altogether
> + return libbpf_err(-EINVAL);
> + }
> +
> + if (prog->obj->state == OBJ_LOADED) {
> + pr_warn("%s must be called before the prog load\n", __func__);
> + return libbpf_err(-EINVAL);
> + }
this is unnecessary, map_is_created() takes care of this
> + map->excl_prog_sha = prog->hash;
> + map->excl_prog_sha_size = SHA256_DIGEST_LENGTH;
this is a hack, I assume that's why you compute that hash for any
program all the time, right? Well, first, if this is called before
bpf_object_prepare(), it will silently do the wrong thing.
But also I don't think we should calculate hash proactively, we could
do this lazily.
> + return 0;
> +}
> +
> +
> static struct bpf_map *
> __bpf_map__iter(const struct bpf_map *m, const struct bpf_object *obj, int i)
> {
> diff --git a/tools/lib/bpf/libbpf.h b/tools/lib/bpf/libbpf.h
> index d39f19c8396d..b6ee9870523a 100644
> --- a/tools/lib/bpf/libbpf.h
> +++ b/tools/lib/bpf/libbpf.h
> @@ -1249,6 +1249,19 @@ LIBBPF_API int bpf_map__lookup_and_delete_elem(const struct bpf_map *map,
> */
> LIBBPF_API int bpf_map__get_next_key(const struct bpf_map *map,
> const void *cur_key, void *next_key, size_t key_sz);
> +/**
> + * @brief **bpf_map__make_exclusive()** makes the map exclusive to a single program.
we should also probably error out if map was already marked as
exclusive to some other program
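e.g. something like this against the fields added in this patch:

  if (map->excl_prog_sha)
          return libbpf_err(-EBUSY);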
> + * @param map BPF map to make exclusive.
> + * @param prog BPF program to be the exclusive user of the map.
> + * @return 0 on success; a negative error code otherwise.
> + *
> + * Once a map is made exclusive, only the specified program can access its
> + * contents. **bpf_map__make_exclusive** must be called before the objects are
> + * loaded.
> + */
> +LIBBPF_API int bpf_map__make_exclusive(struct bpf_map *map, struct bpf_program *prog);
> +
> +int bpf_map__make_exclusive(struct bpf_map *map, struct bpf_program *prog);
>
> struct bpf_xdp_set_link_opts {
> size_t sz;
> diff --git a/tools/lib/bpf/libbpf.map b/tools/lib/bpf/libbpf.map
> index 1205f9a4fe04..67b1ff4202a1 100644
> --- a/tools/lib/bpf/libbpf.map
> +++ b/tools/lib/bpf/libbpf.map
> @@ -444,3 +444,8 @@ LIBBPF_1.6.0 {
> btf__add_decl_attr;
> btf__add_type_attr;
> } LIBBPF_1.5.0;
> +
> +LIBBPF_1.7.0 {
> + global:
> + bpf_map__make_exclusive;
> +} LIBBPF_1.6.0;
we are still in v1.6 dev phase, no need to add 1.7 just yet
> diff --git a/tools/lib/bpf/libbpf_version.h b/tools/lib/bpf/libbpf_version.h
> index 28c58fb17250..99331e317dee 100644
> --- a/tools/lib/bpf/libbpf_version.h
> +++ b/tools/lib/bpf/libbpf_version.h
> @@ -4,6 +4,6 @@
> #define __LIBBPF_VERSION_H
>
> #define LIBBPF_MAJOR_VERSION 1
> -#define LIBBPF_MINOR_VERSION 6
> +#define LIBBPF_MINOR_VERSION 7
>
> #endif /* __LIBBPF_VERSION_H */
> --
> 2.43.0
>
^ permalink raw reply [flat|nested] 79+ messages in thread
* Re: [PATCH 05/12] libbpf: Support exclusive map creation
2025-06-12 22:55 ` Andrii Nakryiko
@ 2025-06-12 23:41 ` KP Singh
2025-06-13 16:51 ` Andrii Nakryiko
2025-07-12 0:53 ` KP Singh
2025-07-14 12:29 ` KP Singh
2 siblings, 1 reply; 79+ messages in thread
From: KP Singh @ 2025-06-12 23:41 UTC (permalink / raw)
To: Andrii Nakryiko
Cc: bpf, linux-security-module, bboscaccy, paul, kys, ast, daniel,
andrii
On Fri, Jun 13, 2025 at 12:56 AM Andrii Nakryiko
<andrii.nakryiko@gmail.com> wrote:
>
> On Fri, Jun 6, 2025 at 4:29 PM KP Singh <kpsingh@kernel.org> wrote:
> >
> > Implement a convenient method i.e. bpf_map__make_exclusive which
> > calculates the hash for the program and registers it with the map for
> > creation as an exclusive map when the objects are loaded.
> >
> > The hash of the program must be computed after all the relocations are
> > done.
> >
> > Signed-off-by: KP Singh <kpsingh@kernel.org>
> > ---
> > tools/lib/bpf/bpf.c | 4 +-
> > tools/lib/bpf/bpf.h | 4 +-
> > tools/lib/bpf/libbpf.c | 68 +++++++++++++++++++++++++++++++++-
> > tools/lib/bpf/libbpf.h | 13 +++++++
> > tools/lib/bpf/libbpf.map | 5 +++
> > tools/lib/bpf/libbpf_version.h | 2 +-
> > 6 files changed, 92 insertions(+), 4 deletions(-)
> >
> > diff --git a/tools/lib/bpf/bpf.c b/tools/lib/bpf/bpf.c
> > index a9c3e33d0f8a..11fa2d64ccca 100644
> > --- a/tools/lib/bpf/bpf.c
> > +++ b/tools/lib/bpf/bpf.c
> > @@ -172,7 +172,7 @@ int bpf_map_create(enum bpf_map_type map_type,
> > __u32 max_entries,
> > const struct bpf_map_create_opts *opts)
> > {
> > - const size_t attr_sz = offsetofend(union bpf_attr, map_token_fd);
> > + const size_t attr_sz = offsetofend(union bpf_attr, excl_prog_hash);
> > union bpf_attr attr;
> > int fd;
> >
> > @@ -203,6 +203,8 @@ int bpf_map_create(enum bpf_map_type map_type,
> > attr.map_ifindex = OPTS_GET(opts, map_ifindex, 0);
> >
> > attr.map_token_fd = OPTS_GET(opts, token_fd, 0);
> > + attr.excl_prog_hash = ptr_to_u64(OPTS_GET(opts, excl_prog_hash, NULL));
> > + attr.excl_prog_hash_size = OPTS_GET(opts, excl_prog_hash_size, 0);
> >
> > fd = sys_bpf_fd(BPF_MAP_CREATE, &attr, attr_sz);
> > return libbpf_err_errno(fd);
> > diff --git a/tools/lib/bpf/bpf.h b/tools/lib/bpf/bpf.h
> > index 777627d33d25..a82b79c0c349 100644
> > --- a/tools/lib/bpf/bpf.h
> > +++ b/tools/lib/bpf/bpf.h
> > @@ -54,9 +54,11 @@ struct bpf_map_create_opts {
> > __s32 value_type_btf_obj_fd;
> >
> > __u32 token_fd;
> > + __u32 excl_prog_hash_size;
> > + const void *excl_prog_hash;
> > size_t :0;
> > };
> > -#define bpf_map_create_opts__last_field token_fd
> > +#define bpf_map_create_opts__last_field excl_prog_hash
> >
> > LIBBPF_API int bpf_map_create(enum bpf_map_type map_type,
> > const char *map_name,
> > diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
> > index 475038d04cb4..17de756973f4 100644
> > --- a/tools/lib/bpf/libbpf.c
> > +++ b/tools/lib/bpf/libbpf.c
> > @@ -499,6 +499,7 @@ struct bpf_program {
> > __u32 line_info_rec_size;
> > __u32 line_info_cnt;
> > __u32 prog_flags;
> > + __u8 hash[SHA256_DIGEST_LENGTH];
> > };
> >
> > struct bpf_struct_ops {
> > @@ -578,6 +579,8 @@ struct bpf_map {
> > bool autocreate;
> > bool autoattach;
> > __u64 map_extra;
> > + const void *excl_prog_sha;
> > + __u32 excl_prog_sha_size;
> > };
> >
> > enum extern_type {
> > @@ -4485,6 +4488,43 @@ bpf_object__section_to_libbpf_map_type(const struct bpf_object *obj, int shndx)
> > }
> > }
> >
> > +static int bpf_program__compute_hash(struct bpf_program *prog)
> > +{
> > + struct bpf_insn *purged;
> > + bool was_ld_map;
> > + int i, err;
> > +
> > + purged = calloc(1, BPF_INSN_SZ * prog->insns_cnt);
> > + if (!purged)
> > + return -ENOMEM;
> > +
> > + /* If relocations have been done, the map_fd needs to be
> > + * discarded for the digest calculation.
> > + */
>
> all this looks sketchy, let's think about some more robust approach
> here rather than randomly clearing some fields of some instructions...
This is exactly what the kernel does:
https://elixir.bootlin.com/linux/v6.15.1/source/kernel/bpf/core.c#L314
We will need to update both. It does not clear arbitrary fields of the
instructions; it clears the immediate value that holds the map FD, which
is unstable.
>
> > + for (i = 0, was_ld_map = false; i < prog->insns_cnt; i++) {
> > + purged[i] = prog->insns[i];
> > + if (!was_ld_map &&
> > + purged[i].code == (BPF_LD | BPF_IMM | BPF_DW) &&
> > + (purged[i].src_reg == BPF_PSEUDO_MAP_FD ||
> > + purged[i].src_reg == BPF_PSEUDO_MAP_VALUE)) {
> > + was_ld_map = true;
> > + purged[i].imm = 0;
> > + } else if (was_ld_map && purged[i].code == 0 &&
> > + purged[i].dst_reg == 0 && purged[i].src_reg == 0 &&
> > + purged[i].off == 0) {
> > + was_ld_map = false;
> > + purged[i].imm = 0;
> > + } else {
> > + was_ld_map = false;
> > + }
> > + }
>
> this was_ld_map business is... unnecessary? Just access purged[i + 1]
> (checking i + 1 < prog->insns_cnt, of course), and i += 1. This
> stateful approach is an unnecessary complication, IMO
Again, I did not do much here. Happy to make it better though.
- KP
>
> > + err = libbpf_sha256(purged,
> > + prog->insns_cnt * sizeof(struct bpf_insn),
> > + prog->hash);
>
> fits on a single line?
>
> > + free(purged);
> > + return err;
> > +}
> > +
> > static int bpf_program__record_reloc(struct bpf_program *prog,
> > struct reloc_desc *reloc_desc,
> > __u32 insn_idx, const char *sym_name,
> > @@ -5214,6 +5254,10 @@ static int bpf_object__create_map(struct bpf_object *obj, struct bpf_map *map, b
> > create_attr.token_fd = obj->token_fd;
> > if (obj->token_fd)
> > create_attr.map_flags |= BPF_F_TOKEN_FD;
> > + if (map->excl_prog_sha) {
> > + create_attr.excl_prog_hash = map->excl_prog_sha;
> > + create_attr.excl_prog_hash_size = map->excl_prog_sha_size;
> > + }
> >
> > if (bpf_map__is_struct_ops(map)) {
> > create_attr.btf_vmlinux_value_type_id = map->btf_vmlinux_value_type_id;
> > @@ -7933,6 +7977,11 @@ static int bpf_object_prepare_progs(struct bpf_object *obj)
> > err = bpf_object__sanitize_prog(obj, prog);
> > if (err)
> > return err;
> > + /* Now that the instruction buffer is stable finalize the hash
> > + */
> > + err = bpf_program__compute_hash(&obj->programs[i]);
> > + if (err)
> > + return err;
>
> we'll do this unconditionally for any program?.. why?
>
> > }
> > return 0;
> > }
> > @@ -8602,8 +8651,8 @@ static int bpf_object_prepare(struct bpf_object *obj, const char *target_btf_pat
> > err = err ? : bpf_object_adjust_struct_ops_autoload(obj);
> > err = err ? : bpf_object__relocate(obj, obj->btf_custom_path ? : target_btf_path);
> > err = err ? : bpf_object__sanitize_and_load_btf(obj);
> > - err = err ? : bpf_object__create_maps(obj);
> > err = err ? : bpf_object_prepare_progs(obj);
> > + err = err ? : bpf_object__create_maps(obj);
> >
> > if (err) {
> > bpf_object_unpin(obj);
> > @@ -10502,6 +10551,23 @@ int bpf_map__set_inner_map_fd(struct bpf_map *map, int fd)
> > return 0;
> > }
> >
> > +int bpf_map__make_exclusive(struct bpf_map *map, struct bpf_program *prog)
> > +{
> > + if (map_is_created(map)) {
> > + pr_warn("%s must be called before creation\n", __func__);
>
> we don't really add __func__ for a long while now, please drop, we
> have a consistent "map '%s': what the problem is" format
>
> but for checks like this we also just return -EBUSY or something like
> that without error message, so I'd just drop the message altogether
>
> > + return libbpf_err(-EINVAL);
> > + }
> > +
> > + if (prog->obj->state == OBJ_LOADED) {
> > + pr_warn("%s must be called before the prog load\n", __func__);
> > + return libbpf_err(-EINVAL);
> > + }
>
> this is unnecessary, map_is_created() takes care of this
>
> > + map->excl_prog_sha = prog->hash;
> > + map->excl_prog_sha_size = SHA256_DIGEST_LENGTH;
>
> this is a hack, I assume that's why you compute that hash for any
> program all the time, right? Well, first, if this is called before
> bpf_object_prepare(), it will silently do the wrong thing.
>
> But also I don't think we should calculate hash proactively, we could
> do this lazily.
>
> > + return 0;
> > +}
> > +
> > +
> > static struct bpf_map *
> > __bpf_map__iter(const struct bpf_map *m, const struct bpf_object *obj, int i)
> > {
> > diff --git a/tools/lib/bpf/libbpf.h b/tools/lib/bpf/libbpf.h
> > index d39f19c8396d..b6ee9870523a 100644
> > --- a/tools/lib/bpf/libbpf.h
> > +++ b/tools/lib/bpf/libbpf.h
> > @@ -1249,6 +1249,19 @@ LIBBPF_API int bpf_map__lookup_and_delete_elem(const struct bpf_map *map,
> > */
> > LIBBPF_API int bpf_map__get_next_key(const struct bpf_map *map,
> > const void *cur_key, void *next_key, size_t key_sz);
> > +/**
> > + * @brief **bpf_map__make_exclusive()** makes the map exclusive to a single program.
>
> we should also probably error out if map was already marked as
> exclusive to some other program
>
> > + * @param map BPF map to make exclusive.
> > + * @param prog BPF program to be the exclusive user of the map.
> > + * @return 0 on success; a negative error code otherwise.
> > + *
> > + * Once a map is made exclusive, only the specified program can access its
> > + * contents. **bpf_map__make_exclusive** must be called before the objects are
> > + * loaded.
> > + */
> > +LIBBPF_API int bpf_map__make_exclusive(struct bpf_map *map, struct bpf_program *prog);
> > +
> > +int bpf_map__make_exclusive(struct bpf_map *map, struct bpf_program *prog);
> >
> > struct bpf_xdp_set_link_opts {
> > size_t sz;
> > diff --git a/tools/lib/bpf/libbpf.map b/tools/lib/bpf/libbpf.map
> > index 1205f9a4fe04..67b1ff4202a1 100644
> > --- a/tools/lib/bpf/libbpf.map
> > +++ b/tools/lib/bpf/libbpf.map
> > @@ -444,3 +444,8 @@ LIBBPF_1.6.0 {
> > btf__add_decl_attr;
> > btf__add_type_attr;
> > } LIBBPF_1.5.0;
> > +
> > +LIBBPF_1.7.0 {
> > + global:
> > + bpf_map__make_exclusive;
> > +} LIBBPF_1.6.0;
>
> we are still in v1.6 dev phase, no need to add 1.7 just yet
>
>
> > diff --git a/tools/lib/bpf/libbpf_version.h b/tools/lib/bpf/libbpf_version.h
> > index 28c58fb17250..99331e317dee 100644
> > --- a/tools/lib/bpf/libbpf_version.h
> > +++ b/tools/lib/bpf/libbpf_version.h
> > @@ -4,6 +4,6 @@
> > #define __LIBBPF_VERSION_H
> >
> > #define LIBBPF_MAJOR_VERSION 1
> > -#define LIBBPF_MINOR_VERSION 6
> > +#define LIBBPF_MINOR_VERSION 7
> >
> > #endif /* __LIBBPF_VERSION_H */
> > --
> > 2.43.0
> >
^ permalink raw reply [flat|nested] 79+ messages in thread
* Re: [PATCH 05/12] libbpf: Support exclusive map creation
2025-06-12 23:41 ` KP Singh
@ 2025-06-13 16:51 ` Andrii Nakryiko
2025-07-12 0:50 ` KP Singh
0 siblings, 1 reply; 79+ messages in thread
From: Andrii Nakryiko @ 2025-06-13 16:51 UTC (permalink / raw)
To: KP Singh
Cc: bpf, linux-security-module, bboscaccy, paul, kys, ast, daniel,
andrii
On Thu, Jun 12, 2025 at 4:42 PM KP Singh <kpsingh@kernel.org> wrote:
>
> On Fri, Jun 13, 2025 at 12:56 AM Andrii Nakryiko
> <andrii.nakryiko@gmail.com> wrote:
> >
> > On Fri, Jun 6, 2025 at 4:29 PM KP Singh <kpsingh@kernel.org> wrote:
> > >
> > > Implement a convenient method i.e. bpf_map__make_exclusive which
> > > calculates the hash for the program and registers it with the map for
> > > creation as an exclusive map when the objects are loaded.
> > >
> > > The hash of the program must be computed after all the relocations are
> > > done.
> > >
> > > Signed-off-by: KP Singh <kpsingh@kernel.org>
> > > ---
> > > tools/lib/bpf/bpf.c | 4 +-
> > > tools/lib/bpf/bpf.h | 4 +-
> > > tools/lib/bpf/libbpf.c | 68 +++++++++++++++++++++++++++++++++-
> > > tools/lib/bpf/libbpf.h | 13 +++++++
> > > tools/lib/bpf/libbpf.map | 5 +++
> > > tools/lib/bpf/libbpf_version.h | 2 +-
> > > 6 files changed, 92 insertions(+), 4 deletions(-)
> > >
> > > diff --git a/tools/lib/bpf/bpf.c b/tools/lib/bpf/bpf.c
> > > index a9c3e33d0f8a..11fa2d64ccca 100644
> > > --- a/tools/lib/bpf/bpf.c
> > > +++ b/tools/lib/bpf/bpf.c
> > > @@ -172,7 +172,7 @@ int bpf_map_create(enum bpf_map_type map_type,
> > > __u32 max_entries,
> > > const struct bpf_map_create_opts *opts)
> > > {
> > > - const size_t attr_sz = offsetofend(union bpf_attr, map_token_fd);
> > > + const size_t attr_sz = offsetofend(union bpf_attr, excl_prog_hash);
> > > union bpf_attr attr;
> > > int fd;
> > >
> > > @@ -203,6 +203,8 @@ int bpf_map_create(enum bpf_map_type map_type,
> > > attr.map_ifindex = OPTS_GET(opts, map_ifindex, 0);
> > >
> > > attr.map_token_fd = OPTS_GET(opts, token_fd, 0);
> > > + attr.excl_prog_hash = ptr_to_u64(OPTS_GET(opts, excl_prog_hash, NULL));
> > > + attr.excl_prog_hash_size = OPTS_GET(opts, excl_prog_hash_size, 0);
> > >
> > > fd = sys_bpf_fd(BPF_MAP_CREATE, &attr, attr_sz);
> > > return libbpf_err_errno(fd);
> > > diff --git a/tools/lib/bpf/bpf.h b/tools/lib/bpf/bpf.h
> > > index 777627d33d25..a82b79c0c349 100644
> > > --- a/tools/lib/bpf/bpf.h
> > > +++ b/tools/lib/bpf/bpf.h
> > > @@ -54,9 +54,11 @@ struct bpf_map_create_opts {
> > > __s32 value_type_btf_obj_fd;
> > >
> > > __u32 token_fd;
> > > + __u32 excl_prog_hash_size;
> > > + const void *excl_prog_hash;
> > > size_t :0;
> > > };
> > > -#define bpf_map_create_opts__last_field token_fd
> > > +#define bpf_map_create_opts__last_field excl_prog_hash
> > >
> > > LIBBPF_API int bpf_map_create(enum bpf_map_type map_type,
> > > const char *map_name,
> > > diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
> > > index 475038d04cb4..17de756973f4 100644
> > > --- a/tools/lib/bpf/libbpf.c
> > > +++ b/tools/lib/bpf/libbpf.c
> > > @@ -499,6 +499,7 @@ struct bpf_program {
> > > __u32 line_info_rec_size;
> > > __u32 line_info_cnt;
> > > __u32 prog_flags;
> > > + __u8 hash[SHA256_DIGEST_LENGTH];
> > > };
> > >
> > > struct bpf_struct_ops {
> > > @@ -578,6 +579,8 @@ struct bpf_map {
> > > bool autocreate;
> > > bool autoattach;
> > > __u64 map_extra;
> > > + const void *excl_prog_sha;
> > > + __u32 excl_prog_sha_size;
> > > };
> > >
> > > enum extern_type {
> > > @@ -4485,6 +4488,43 @@ bpf_object__section_to_libbpf_map_type(const struct bpf_object *obj, int shndx)
> > > }
> > > }
> > >
> > > +static int bpf_program__compute_hash(struct bpf_program *prog)
> > > +{
> > > + struct bpf_insn *purged;
> > > + bool was_ld_map;
> > > + int i, err;
> > > +
> > > + purged = calloc(1, BPF_INSN_SZ * prog->insns_cnt);
> > > + if (!purged)
> > > + return -ENOMEM;
> > > +
> > > + /* If relocations have been done, the map_fd needs to be
> > > + * discarded for the digest calculation.
> > > + */
> >
> > all this looks sketchy, let's think about some more robust approach
> > here rather than randomly clearing some fields of some instructions...
>
> This is exactly what the kernel does:
>
> https://elixir.bootlin.com/linux/v6.15.1/source/kernel/bpf/core.c#L314
>
> We will need to update both. It does not clear arbitrary fields of the
> instructions; it clears the immediate value that holds the map FD, which
> is unstable.
Looking at what libbpf is doing with relocations, we are missing the
case of src_reg == BPF_PSEUDO_BTF_ID in which we are setting
insn[1].imm to kernel module BTF FD (so unstable value as well). So I
guess we should fix kernel-side logic there as well?
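I.e. the masking loop would also need something along these lines
(untested sketch, same approach as the map FD case):

  } else if (purged[i].code == (BPF_LD | BPF_IMM | BPF_DW) &&
             purged[i].src_reg == BPF_PSEUDO_BTF_ID &&
             i + 1 < prog->insns_cnt) {
          purged[i + 1] = prog->insns[i + 1];
          /* insn[1].imm carries the module BTF FD, not a stable value */
          purged[i + 1].imm = 0;
          i++;
  }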
But overall, it's of course funny, because for a long while we've been
saying that calculating the signature/hash of a BPF program by masking
some parts of instructions (containing FDs and addresses) is not good
and not secure. Now we are doing exactly that to "predict" and define
which BPF program has exclusivity rights. This shouldn't be a problem
for lskel as BPF program code is supposed to be stable, but it feels
weird to do it as a general case.
>
> >
> > > + for (i = 0, was_ld_map = false; i < prog->insns_cnt; i++) {
> > > + purged[i] = prog->insns[i];
> > > + if (!was_ld_map &&
> > > + purged[i].code == (BPF_LD | BPF_IMM | BPF_DW) &&
> > > + (purged[i].src_reg == BPF_PSEUDO_MAP_FD ||
> > > + purged[i].src_reg == BPF_PSEUDO_MAP_VALUE)) {
> > > + was_ld_map = true;
> > > + purged[i].imm = 0;
> > > + } else if (was_ld_map && purged[i].code == 0 &&
> > > + purged[i].dst_reg == 0 && purged[i].src_reg == 0 &&
> > > + purged[i].off == 0) {
> > > + was_ld_map = false;
> > > + purged[i].imm = 0;
> > > + } else {
> > > + was_ld_map = false;
> > > + }
> > > + }
> >
> > this was_ld_map business is... unnecessary? Just access purged[i + 1]
> > (checking i + 1 < prog->insns_cnt, of course), and i += 1. This
> > stateful approach is an unnecessary complication, IMO
>
> Again, I did not do much here. Happy to make it better though.
>
I don't know why kernel code was written in this more stateful form,
but I find it much harder to follow, especially given that it's
trivial to handle two-instruction ldimm64 in one go. And both libbpf
and verifier code does handle ldimm64 as insn[0] and insn[1] parts at
the same time elsewhere (e.g., see resolve_pseudo_ldimm64()). So at
least for libbpf side, let's do a simpler implementation (I'd do it
for kernel code as well, but I'm not insisting).
[...]
> > > + map->excl_prog_sha = prog->hash;
> > > + map->excl_prog_sha_size = SHA256_DIGEST_LENGTH;
> >
> > this is a hack, I assume that's why you compute that hash for any
> > program all the time, right? Well, first, if this is called before
> > bpf_object_prepare(), it will silently do the wrong thing.
> >
> > But also I don't think we should calculate hash proactively, we could
> > do this lazily.
> >
So this bothered me and felt wrong. And I realized that you are doing
it at the wrong abstraction level here. Instead of trying to calculate
exclusivity hash in bpf_map__make_exclusive(), we should just record
`struct bpf_program *` pointer as an exclusive target bpf program. And
only calculate hash at map creation time where we know that the BPF
program has been relocated.
At that time we can also error out if bpf_program turned out to be
disabled to be loaded, etc.
I'd also call this a bit more like any other setter (and would add a
getter for completeness): bpf_map__set_exclusive_program() and
bpf_map__exclusive_program(). And I guess we can allow overwriting
exclusive program pointer, given this explicit setter semantics.
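Roughly (just a sketch; the excl_prog pointer field would be new):

  int bpf_map__set_exclusive_program(struct bpf_map *map, struct bpf_program *prog)
  {
          if (map_is_created(map))
                  return libbpf_err(-EBUSY);

          map->excl_prog = prog;
          return 0;
  }

  struct bpf_program *bpf_map__exclusive_program(const struct bpf_map *map)
  {
          return map->excl_prog;
  }

and bpf_object__create_map() would then compute the hash lazily, only
for maps that actually have an exclusive program set:

  if (map->excl_prog) {
          err = bpf_program__compute_hash(map->excl_prog);
          if (err)
                  return err;
          create_attr.excl_prog_hash = map->excl_prog->hash;
          create_attr.excl_prog_hash_size = SHA256_DIGEST_LENGTH;
  }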
> > > + return 0;
> > > +}
> > > +
> > > +
> > > static struct bpf_map *
> > > __bpf_map__iter(const struct bpf_map *m, const struct bpf_object *obj, int i)
> > > {
[...]
^ permalink raw reply [flat|nested] 79+ messages in thread
* Re: [PATCH 05/12] libbpf: Support exclusive map creation
2025-06-13 16:51 ` Andrii Nakryiko
@ 2025-07-12 0:50 ` KP Singh
0 siblings, 0 replies; 79+ messages in thread
From: KP Singh @ 2025-07-12 0:50 UTC (permalink / raw)
To: Andrii Nakryiko
Cc: bpf, linux-security-module, bboscaccy, paul, kys, ast, daniel,
andrii
On Fri, Jun 13, 2025 at 6:51 PM Andrii Nakryiko
<andrii.nakryiko@gmail.com> wrote:
>
> On Thu, Jun 12, 2025 at 4:42 PM KP Singh <kpsingh@kernel.org> wrote:
> >
> > On Fri, Jun 13, 2025 at 12:56 AM Andrii Nakryiko
> > <andrii.nakryiko@gmail.com> wrote:
> > >
> > > On Fri, Jun 6, 2025 at 4:29 PM KP Singh <kpsingh@kernel.org> wrote:
> > > >
> > > > Implement a convenient method i.e. bpf_map__make_exclusive which
> > > > calculates the hash for the program and registers it with the map for
> > > > creation as an exclusive map when the objects are loaded.
> > > >
> > > > The hash of the program must be computed after all the relocations are
> > > > done.
> > > >
> > > > Signed-off-by: KP Singh <kpsingh@kernel.org>
> > > > ---
> > > > tools/lib/bpf/bpf.c | 4 +-
> > > > tools/lib/bpf/bpf.h | 4 +-
> > > > tools/lib/bpf/libbpf.c | 68 +++++++++++++++++++++++++++++++++-
> > > > tools/lib/bpf/libbpf.h | 13 +++++++
> > > > tools/lib/bpf/libbpf.map | 5 +++
> > > > tools/lib/bpf/libbpf_version.h | 2 +-
> > > > 6 files changed, 92 insertions(+), 4 deletions(-)
> > > >
> > > > diff --git a/tools/lib/bpf/bpf.c b/tools/lib/bpf/bpf.c
> > > > index a9c3e33d0f8a..11fa2d64ccca 100644
> > > > --- a/tools/lib/bpf/bpf.c
> > > > +++ b/tools/lib/bpf/bpf.c
> > > > @@ -172,7 +172,7 @@ int bpf_map_create(enum bpf_map_type map_type,
> > > > __u32 max_entries,
> > > > const struct bpf_map_create_opts *opts)
> > > > {
> > > > - const size_t attr_sz = offsetofend(union bpf_attr, map_token_fd);
> > > > + const size_t attr_sz = offsetofend(union bpf_attr, excl_prog_hash);
> > > > union bpf_attr attr;
> > > > int fd;
> > > >
> > > > @@ -203,6 +203,8 @@ int bpf_map_create(enum bpf_map_type map_type,
> > > > attr.map_ifindex = OPTS_GET(opts, map_ifindex, 0);
> > > >
> > > > attr.map_token_fd = OPTS_GET(opts, token_fd, 0);
> > > > + attr.excl_prog_hash = ptr_to_u64(OPTS_GET(opts, excl_prog_hash, NULL));
> > > > + attr.excl_prog_hash_size = OPTS_GET(opts, excl_prog_hash_size, 0);
> > > >
> > > > fd = sys_bpf_fd(BPF_MAP_CREATE, &attr, attr_sz);
> > > > return libbpf_err_errno(fd);
> > > > diff --git a/tools/lib/bpf/bpf.h b/tools/lib/bpf/bpf.h
> > > > index 777627d33d25..a82b79c0c349 100644
> > > > --- a/tools/lib/bpf/bpf.h
> > > > +++ b/tools/lib/bpf/bpf.h
> > > > @@ -54,9 +54,11 @@ struct bpf_map_create_opts {
> > > > __s32 value_type_btf_obj_fd;
> > > >
> > > > __u32 token_fd;
> > > > + __u32 excl_prog_hash_size;
> > > > + const void *excl_prog_hash;
> > > > size_t :0;
> > > > };
> > > > -#define bpf_map_create_opts__last_field token_fd
> > > > +#define bpf_map_create_opts__last_field excl_prog_hash
> > > >
> > > > LIBBPF_API int bpf_map_create(enum bpf_map_type map_type,
> > > > const char *map_name,
> > > > diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
> > > > index 475038d04cb4..17de756973f4 100644
> > > > --- a/tools/lib/bpf/libbpf.c
> > > > +++ b/tools/lib/bpf/libbpf.c
> > > > @@ -499,6 +499,7 @@ struct bpf_program {
> > > > __u32 line_info_rec_size;
> > > > __u32 line_info_cnt;
> > > > __u32 prog_flags;
> > > > + __u8 hash[SHA256_DIGEST_LENGTH];
> > > > };
> > > >
> > > > struct bpf_struct_ops {
> > > > @@ -578,6 +579,8 @@ struct bpf_map {
> > > > bool autocreate;
> > > > bool autoattach;
> > > > __u64 map_extra;
> > > > + const void *excl_prog_sha;
> > > > + __u32 excl_prog_sha_size;
> > > > };
> > > >
> > > > enum extern_type {
> > > > @@ -4485,6 +4488,43 @@ bpf_object__section_to_libbpf_map_type(const struct bpf_object *obj, int shndx)
> > > > }
> > > > }
> > > >
> > > > +static int bpf_program__compute_hash(struct bpf_program *prog)
> > > > +{
> > > > + struct bpf_insn *purged;
> > > > + bool was_ld_map;
> > > > + int i, err;
> > > > +
> > > > + purged = calloc(1, BPF_INSN_SZ * prog->insns_cnt);
> > > > + if (!purged)
> > > > + return -ENOMEM;
> > > > +
> > > > + /* If relocations have been done, the map_fd needs to be
> > > > + * discarded for the digest calculation.
> > > > + */
> > >
> > > all this looks sketchy, let's think about some more robust approach
> > > here rather than randomly clearing some fields of some instructions...
> >
> > This is exactly what the kernel does:
> >
> > https://elixir.bootlin.com/linux/v6.15.1/source/kernel/bpf/core.c#L314
> >
> > We will need to update both. It does not clear arbitrary fields of the
> > instructions; it clears the immediate value that holds the map FD, which
> > is unstable.
>
> Looking at what libbpf is doing with relocations, we are missing the
> case of src_reg == BPF_PSEUDO_BTF_ID in which we are setting
> insn[1].imm to kernel module BTF FD (so unstable value as well). So I
> guess we should fix kernel-side logic there as well?
One can consider a magic number here, although really zero is fine.
It's an unstable parameter, and it just means that the signature does
not attest to this value, since it is only obtained at runtime.
>
>
> But overall, it's of course funny, because for a long while we've been
> saying that calculating the signature/hash of a BPF program by masking
> some parts of instructions (containing FDs and addresses) is not good
> and not secure. Now we are doing exactly that to "predict" and define
> which BPF program has exclusivity rights. This shouldn't be a problem
> for lskel as BPF program code is supposed to be stable, but it feels
> weird to do it as a general case.
>
> >
> > >
> > > > + for (i = 0, was_ld_map = false; i < prog->insns_cnt; i++) {
> > > > + purged[i] = prog->insns[i];
> > > > + if (!was_ld_map &&
> > > > + purged[i].code == (BPF_LD | BPF_IMM | BPF_DW) &&
> > > > + (purged[i].src_reg == BPF_PSEUDO_MAP_FD ||
> > > > + purged[i].src_reg == BPF_PSEUDO_MAP_VALUE)) {
> > > > + was_ld_map = true;
> > > > + purged[i].imm = 0;
> > > > + } else if (was_ld_map && purged[i].code == 0 &&
> > > > + purged[i].dst_reg == 0 && purged[i].src_reg == 0 &&
> > > > + purged[i].off == 0) {
> > > > + was_ld_map = false;
> > > > + purged[i].imm = 0;
> > > > + } else {
> > > > + was_ld_map = false;
> > > > + }
> > > > + }
> > >
> > > this was_ld_map business is... unnecessary? Just access purged[i + 1]
> > > (checking i + 1 < prog->insns_cnt, of course), and i += 1. This
> > > stateful approach is an unnecessary complication, IMO
> >
> > Again, I did not do much here. Happy to make it better though.
> >
>
> I don't know why kernel code was written in this more stateful form,
> but I find it much harder to follow, especially given that it's
> trivial to handle two-instruction ldimm64 in one go. And both libbpf
> and verifier code does handle ldimm64 as insn[0] and insn[1] parts at
> the same time elsewhere (e.g., see resolve_pseudo_ldimm64()). So at
> least for libbpf side, let's do a simpler implementation (I'd do it
> for kernel code as well, but I'm not insisting).
>
> [...]
>
> > > > + map->excl_prog_sha = prog->hash;
> > > > + map->excl_prog_sha_size = SHA256_DIGEST_LENGTH;
> > >
> > > this is a hack, I assume that's why you compute that hash for any
> > > program all the time, right? Well, first, if this is called before
> > > bpf_object_prepare(), it will silently do the wrong thing.
> > >
> > > But also I don't think we should calculate hash proactively, we could
> > > do this lazily.
> > >
>
> So this bothered me and felt wrong. And I realized that you are doing
> it at the wrong abstraction level here. Instead of trying to calculate
> exclusivity hash in bpf_map__make_exclusive(), we should just record
> `struct bpf_program *` pointer as an exclusive target bpf program. And
> only calculate hash at map creation time where we know that the BPF
> program has been relocated.
>
> At that time we can also error out if bpf_program turned out to be
> disabled to be loaded, etc.
>
> I'd also call this a bit more like any other setter (and would add a
> getter for completeness): bpf_map__set_exclusive_program() and
> bpf_map__exclusive_program(). And I guess we can allow overwriting
> exclusive program pointer, given this explicit setter semantics.
>
> > > > + return 0;
> > > > +}
> > > > +
> > > > +
> > > > static struct bpf_map *
> > > > __bpf_map__iter(const struct bpf_map *m, const struct bpf_object *obj, int i)
> > > > {
>
> [...]
^ permalink raw reply [flat|nested] 79+ messages in thread
* Re: [PATCH 05/12] libbpf: Support exclusive map creation
2025-06-12 22:55 ` Andrii Nakryiko
2025-06-12 23:41 ` KP Singh
@ 2025-07-12 0:53 ` KP Singh
2025-07-14 20:56 ` Andrii Nakryiko
2025-07-14 12:29 ` KP Singh
2 siblings, 1 reply; 79+ messages in thread
From: KP Singh @ 2025-07-12 0:53 UTC (permalink / raw)
To: Andrii Nakryiko
Cc: bpf, linux-security-module, bboscaccy, paul, kys, ast, daniel,
andrii
On Fri, Jun 13, 2025 at 12:56 AM Andrii Nakryiko
<andrii.nakryiko@gmail.com> wrote:
>
> On Fri, Jun 6, 2025 at 4:29 PM KP Singh <kpsingh@kernel.org> wrote:
> >
> > Implement a convenient method i.e. bpf_map__make_exclusive which
> > calculates the hash for the program and registers it with the map for
> > creation as an exclusive map when the objects are loaded.
> >
> > The hash of the program must be computed after all the relocations are
> > done.
> >
> > Signed-off-by: KP Singh <kpsingh@kernel.org>
> > ---
> > tools/lib/bpf/bpf.c | 4 +-
> > tools/lib/bpf/bpf.h | 4 +-
> > tools/lib/bpf/libbpf.c | 68 +++++++++++++++++++++++++++++++++-
> > tools/lib/bpf/libbpf.h | 13 +++++++
> > tools/lib/bpf/libbpf.map | 5 +++
> > tools/lib/bpf/libbpf_version.h | 2 +-
> > 6 files changed, 92 insertions(+), 4 deletions(-)
> >
> > diff --git a/tools/lib/bpf/bpf.c b/tools/lib/bpf/bpf.c
> > index a9c3e33d0f8a..11fa2d64ccca 100644
> > --- a/tools/lib/bpf/bpf.c
> > +++ b/tools/lib/bpf/bpf.c
> > @@ -172,7 +172,7 @@ int bpf_map_create(enum bpf_map_type map_type,
> > __u32 max_entries,
> > const struct bpf_map_create_opts *opts)
> > {
> > - const size_t attr_sz = offsetofend(union bpf_attr, map_token_fd);
> > + const size_t attr_sz = offsetofend(union bpf_attr, excl_prog_hash);
> > union bpf_attr attr;
> > int fd;
> >
> > @@ -203,6 +203,8 @@ int bpf_map_create(enum bpf_map_type map_type,
> > attr.map_ifindex = OPTS_GET(opts, map_ifindex, 0);
> >
> > attr.map_token_fd = OPTS_GET(opts, token_fd, 0);
> > + attr.excl_prog_hash = ptr_to_u64(OPTS_GET(opts, excl_prog_hash, NULL));
> > + attr.excl_prog_hash_size = OPTS_GET(opts, excl_prog_hash_size, 0);
> >
> > fd = sys_bpf_fd(BPF_MAP_CREATE, &attr, attr_sz);
> > return libbpf_err_errno(fd);
> > diff --git a/tools/lib/bpf/bpf.h b/tools/lib/bpf/bpf.h
> > index 777627d33d25..a82b79c0c349 100644
> > --- a/tools/lib/bpf/bpf.h
> > +++ b/tools/lib/bpf/bpf.h
> > @@ -54,9 +54,11 @@ struct bpf_map_create_opts {
> > __s32 value_type_btf_obj_fd;
> >
> > __u32 token_fd;
> > + __u32 excl_prog_hash_size;
> > + const void *excl_prog_hash;
> > size_t :0;
> > };
> > -#define bpf_map_create_opts__last_field token_fd
> > +#define bpf_map_create_opts__last_field excl_prog_hash
> >
> > LIBBPF_API int bpf_map_create(enum bpf_map_type map_type,
> > const char *map_name,
> > diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
> > index 475038d04cb4..17de756973f4 100644
> > --- a/tools/lib/bpf/libbpf.c
> > +++ b/tools/lib/bpf/libbpf.c
> > @@ -499,6 +499,7 @@ struct bpf_program {
> > __u32 line_info_rec_size;
> > __u32 line_info_cnt;
> > __u32 prog_flags;
> > + __u8 hash[SHA256_DIGEST_LENGTH];
> > };
> >
> > struct bpf_struct_ops {
> > @@ -578,6 +579,8 @@ struct bpf_map {
> > bool autocreate;
> > bool autoattach;
> > __u64 map_extra;
> > + const void *excl_prog_sha;
> > + __u32 excl_prog_sha_size;
> > };
> >
> > enum extern_type {
> > @@ -4485,6 +4488,43 @@ bpf_object__section_to_libbpf_map_type(const struct bpf_object *obj, int shndx)
> > }
> > }
> >
> > +static int bpf_program__compute_hash(struct bpf_program *prog)
> > +{
> > + struct bpf_insn *purged;
> > + bool was_ld_map;
> > + int i, err;
> > +
> > + purged = calloc(1, BPF_INSN_SZ * prog->insns_cnt);
> > + if (!purged)
> > + return -ENOMEM;
> > +
> > + /* If relocations have been done, the map_fd needs to be
> > + * discarded for the digest calculation.
> > + */
>
> all this looks sketchy, let's think about some more robust approach
> here rather than randomly clearing some fields of some instructions...
>
> > + for (i = 0, was_ld_map = false; i < prog->insns_cnt; i++) {
> > + purged[i] = prog->insns[i];
> > + if (!was_ld_map &&
> > + purged[i].code == (BPF_LD | BPF_IMM | BPF_DW) &&
> > + (purged[i].src_reg == BPF_PSEUDO_MAP_FD ||
> > + purged[i].src_reg == BPF_PSEUDO_MAP_VALUE)) {
> > + was_ld_map = true;
> > + purged[i].imm = 0;
> > + } else if (was_ld_map && purged[i].code == 0 &&
> > + purged[i].dst_reg == 0 && purged[i].src_reg == 0 &&
> > + purged[i].off == 0) {
> > + was_ld_map = false;
> > + purged[i].imm = 0;
> > + } else {
> > + was_ld_map = false;
> > + }
> > + }
>
> this was_ld_map business is... unnecessary? Just access purged[i + 1]
> (checking i + 1 < prog->insns_cnt, of course), and i += 1. This
> stateful approach is an unnecessary complication, IMO
>
> > + err = libbpf_sha256(purged,
> > + prog->insns_cnt * sizeof(struct bpf_insn),
> > + prog->hash);
>
> fits on a single line?
>
> > + free(purged);
> > + return err;
> > +}
> > +
> > static int bpf_program__record_reloc(struct bpf_program *prog,
> > struct reloc_desc *reloc_desc,
> > __u32 insn_idx, const char *sym_name,
> > @@ -5214,6 +5254,10 @@ static int bpf_object__create_map(struct bpf_object *obj, struct bpf_map *map, b
> > create_attr.token_fd = obj->token_fd;
> > if (obj->token_fd)
> > create_attr.map_flags |= BPF_F_TOKEN_FD;
> > + if (map->excl_prog_sha) {
> > + create_attr.excl_prog_hash = map->excl_prog_sha;
> > + create_attr.excl_prog_hash_size = map->excl_prog_sha_size;
> > + }
> >
> > if (bpf_map__is_struct_ops(map)) {
> > create_attr.btf_vmlinux_value_type_id = map->btf_vmlinux_value_type_id;
> > @@ -7933,6 +7977,11 @@ static int bpf_object_prepare_progs(struct bpf_object *obj)
> > err = bpf_object__sanitize_prog(obj, prog);
> > if (err)
> > return err;
> > + /* Now that the instruction buffer is stable finalize the hash
> > + */
> > + err = bpf_program__compute_hash(&obj->programs[i]);
> > + if (err)
> > + return err;
>
> we'll do this unconditionally for any program?.. why?
>
> > }
> > return 0;
> > }
> > @@ -8602,8 +8651,8 @@ static int bpf_object_prepare(struct bpf_object *obj, const char *target_btf_pat
> > err = err ? : bpf_object_adjust_struct_ops_autoload(obj);
> > err = err ? : bpf_object__relocate(obj, obj->btf_custom_path ? : target_btf_path);
> > err = err ? : bpf_object__sanitize_and_load_btf(obj);
> > - err = err ? : bpf_object__create_maps(obj);
> > err = err ? : bpf_object_prepare_progs(obj);
> > + err = err ? : bpf_object__create_maps(obj);
> >
> > if (err) {
> > bpf_object_unpin(obj);
> > @@ -10502,6 +10551,23 @@ int bpf_map__set_inner_map_fd(struct bpf_map *map, int fd)
> > return 0;
> > }
> >
> > +int bpf_map__make_exclusive(struct bpf_map *map, struct bpf_program *prog)
> > +{
> > + if (map_is_created(map)) {
> > + pr_warn("%s must be called before creation\n", __func__);
>
> we don't really add __func__ for a long while now, please drop, we
> have a consistent "map '%s': what the problem is" format
>
> but for checks like this we also just return -EBUSY or something like
> that without error message, so I'd just drop the message altogether
>
> > + return libbpf_err(-EINVAL);
> > + }
> > +
> > + if (prog->obj->state == OBJ_LOADED) {
> > + pr_warn("%s must be called before the prog load\n", __func__);
> > + return libbpf_err(-EINVAL);
> > + }
>
> this is unnecessary, map_is_created() takes care of this
No, it does not. This check is about the program and the other one is
about the map; how does map_is_created() tell whether the program is
already loaded? A map needs to be marked as exclusive to the program
before the program is loaded.
>
> > + map->excl_prog_sha = prog->hash;
> > + map->excl_prog_sha_size = SHA256_DIGEST_LENGTH;
>
> this is a hack, I assume that's why you compute that hash for any
> program all the time, right? Well, first, if this is called before
> bpf_object_prepare(), it will silently do the wrong thing.
>
> But also I don't think we should calculate hash proactively, we could
> do this lazily.
>
> > + return 0;
> > +}
> > +
> > +
> > static struct bpf_map *
> > __bpf_map__iter(const struct bpf_map *m, const struct bpf_object *obj, int i)
> > {
> > diff --git a/tools/lib/bpf/libbpf.h b/tools/lib/bpf/libbpf.h
> > index d39f19c8396d..b6ee9870523a 100644
> > --- a/tools/lib/bpf/libbpf.h
> > +++ b/tools/lib/bpf/libbpf.h
> > @@ -1249,6 +1249,19 @@ LIBBPF_API int bpf_map__lookup_and_delete_elem(const struct bpf_map *map,
> > */
> > LIBBPF_API int bpf_map__get_next_key(const struct bpf_map *map,
> > const void *cur_key, void *next_key, size_t key_sz);
> > +/**
> > + * @brief **bpf_map__make_exclusive()** makes the map exclusive to a single program.
>
> we should also probably error out if map was already marked as
> exclusive to some other program
>
> > + * @param map BPF map to make exclusive.
> > + * @param prog BPF program to be the exclusive user of the map.
> > + * @return 0 on success; a negative error code otherwise.
> > + *
> > + * Once a map is made exclusive, only the specified program can access its
> > + * contents. **bpf_map__make_exclusive** must be called before the objects are
> > + * loaded.
> > + */
> > +LIBBPF_API int bpf_map__make_exclusive(struct bpf_map *map, struct bpf_program *prog);
> > +
> > +int bpf_map__make_exclusive(struct bpf_map *map, struct bpf_program *prog);
> >
> > struct bpf_xdp_set_link_opts {
> > size_t sz;
> > diff --git a/tools/lib/bpf/libbpf.map b/tools/lib/bpf/libbpf.map
> > index 1205f9a4fe04..67b1ff4202a1 100644
> > --- a/tools/lib/bpf/libbpf.map
> > +++ b/tools/lib/bpf/libbpf.map
> > @@ -444,3 +444,8 @@ LIBBPF_1.6.0 {
> > btf__add_decl_attr;
> > btf__add_type_attr;
> > } LIBBPF_1.5.0;
> > +
> > +LIBBPF_1.7.0 {
> > + global:
> > + bpf_map__make_exclusive;
> > +} LIBBPF_1.6.0;
>
> we are still in v1.6 dev phase, no need to add 1.7 just yet
>
>
> > diff --git a/tools/lib/bpf/libbpf_version.h b/tools/lib/bpf/libbpf_version.h
> > index 28c58fb17250..99331e317dee 100644
> > --- a/tools/lib/bpf/libbpf_version.h
> > +++ b/tools/lib/bpf/libbpf_version.h
> > @@ -4,6 +4,6 @@
> > #define __LIBBPF_VERSION_H
> >
> > #define LIBBPF_MAJOR_VERSION 1
> > -#define LIBBPF_MINOR_VERSION 6
> > +#define LIBBPF_MINOR_VERSION 7
> >
> > #endif /* __LIBBPF_VERSION_H */
> > --
> > 2.43.0
> >
^ permalink raw reply [flat|nested] 79+ messages in thread
* Re: [PATCH 05/12] libbpf: Support exclusive map creation
2025-07-12 0:53 ` KP Singh
@ 2025-07-14 20:56 ` Andrii Nakryiko
0 siblings, 0 replies; 79+ messages in thread
From: Andrii Nakryiko @ 2025-07-14 20:56 UTC (permalink / raw)
To: KP Singh
Cc: bpf, linux-security-module, bboscaccy, paul, kys, ast, daniel,
andrii
On Fri, Jul 11, 2025 at 5:53 PM KP Singh <kpsingh@kernel.org> wrote:
>
> On Fri, Jun 13, 2025 at 12:56 AM Andrii Nakryiko
> <andrii.nakryiko@gmail.com> wrote:
> >
> > On Fri, Jun 6, 2025 at 4:29 PM KP Singh <kpsingh@kernel.org> wrote:
> > >
> > > Implement a convenient method i.e. bpf_map__make_exclusive which
> > > calculates the hash for the program and registers it with the map for
> > > creation as an exclusive map when the objects are loaded.
> > >
> > > The hash of the program must be computed after all the relocations are
> > > done.
> > >
> > > Signed-off-by: KP Singh <kpsingh@kernel.org>
> > > ---
> > > tools/lib/bpf/bpf.c | 4 +-
> > > tools/lib/bpf/bpf.h | 4 +-
> > > tools/lib/bpf/libbpf.c | 68 +++++++++++++++++++++++++++++++++-
> > > tools/lib/bpf/libbpf.h | 13 +++++++
> > > tools/lib/bpf/libbpf.map | 5 +++
> > > tools/lib/bpf/libbpf_version.h | 2 +-
> > > 6 files changed, 92 insertions(+), 4 deletions(-)
> > >
[...]
> > > +int bpf_map__make_exclusive(struct bpf_map *map, struct bpf_program *prog)
> > > +{
> > > + if (map_is_created(map)) {
> > > + pr_warn("%s must be called before creation\n", __func__);
> >
> > we don't really add __func__ for a long while now, please drop, we
> > have a consistent "map '%s': what the problem is" format
> >
> > but for checks like this we also just return -EBUSY or something like
> > that without error message, so I'd just drop the message altogether
> >
> > > + return libbpf_err(-EINVAL);
> > > + }
> > > +
> > > + if (prog->obj->state == OBJ_LOADED) {
> > > + pr_warn("%s must be called before the prog load\n", __func__);
> > > + return libbpf_err(-EINVAL);
> > > + }
> >
> > this is unnecessary, map_is_created() takes care of this
>
> No it does not? This is about the program and the latter is about the
> map, how does map_is_created check if the program is already loaded. A
> map needs to be marked as an exclusive to the program before the
> program is loaded.
Um... both map_is_created() and your `prog->obj->state == OBJ_LOADED`
check *object* state, making sure it didn't progress past some
specific stage. excl_prog_sha is a *map* attribute, and *maps* are
created at the preparation stage (OBJ_PREPARED), which comes before the
OBJ_LOADED step. OBJ_PREPARED is already too late, and so the OBJ_LOADED
check is meaningless altogether because map_is_created() will return
true before that.
What am I missing?
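To spell out the ordering I have in mind (just a sketch; OBJ_OPEN as the
initial state is my assumption here, the other names are from libbpf.c
and your patch):

/* object states progress as OBJ_OPEN -> OBJ_PREPARED -> OBJ_LOADED,
 * and maps are created when the object reaches OBJ_PREPARED, so
 * map_is_created() already returns true before OBJ_LOADED is possible
 */
int bpf_map__make_exclusive(struct bpf_map *map, struct bpf_program *prog)
{
        if (map_is_created(map))
                return libbpf_err(-EBUSY);

        map->excl_prog_sha = prog->hash;
        map->excl_prog_sha_size = SHA256_DIGEST_LENGTH;
        return 0;
}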
>
>
> >
> > > + map->excl_prog_sha = prog->hash;
> > > + map->excl_prog_sha_size = SHA256_DIGEST_LENGTH;
> >
> > this is a hack, I assume that's why you compute that hash for any
> > program all the time, right? Well, first, if this is called before
> > bpf_object_prepare(), it will silently do the wrong thing.
> >
> > But also I don't think we should calculate hash proactively, we could
> > do this lazily.
> >
> > > + return 0;
> > > +}
> > > +
> > > +
[...]
^ permalink raw reply [flat|nested] 79+ messages in thread
* Re: [PATCH 05/12] libbpf: Support exclusive map creation
2025-06-12 22:55 ` Andrii Nakryiko
2025-06-12 23:41 ` KP Singh
2025-07-12 0:53 ` KP Singh
@ 2025-07-14 12:29 ` KP Singh
2025-07-14 12:55 ` KP Singh
2 siblings, 1 reply; 79+ messages in thread
From: KP Singh @ 2025-07-14 12:29 UTC (permalink / raw)
To: Andrii Nakryiko
Cc: bpf, linux-security-module, bboscaccy, paul, kys, ast, daniel,
andrii
On Fri, Jun 13, 2025 at 12:56 AM Andrii Nakryiko
<andrii.nakryiko@gmail.com> wrote:
>
> On Fri, Jun 6, 2025 at 4:29 PM KP Singh <kpsingh@kernel.org> wrote:
> >
> > Implement a convenient method i.e. bpf_map__make_exclusive which
> > calculates the hash for the program and registers it with the map for
> > creation as an exclusive map when the objects are loaded.
> >
> > The hash of the program must be computed after all the relocations are
> > done.
> >
> > Signed-off-by: KP Singh <kpsingh@kernel.org>
> > ---
> > tools/lib/bpf/bpf.c | 4 +-
> > tools/lib/bpf/bpf.h | 4 +-
> > tools/lib/bpf/libbpf.c | 68 +++++++++++++++++++++++++++++++++-
> > tools/lib/bpf/libbpf.h | 13 +++++++
> > tools/lib/bpf/libbpf.map | 5 +++
> > tools/lib/bpf/libbpf_version.h | 2 +-
> > 6 files changed, 92 insertions(+), 4 deletions(-)
> >
> > diff --git a/tools/lib/bpf/bpf.c b/tools/lib/bpf/bpf.c
> > index a9c3e33d0f8a..11fa2d64ccca 100644
> > --- a/tools/lib/bpf/bpf.c
> > +++ b/tools/lib/bpf/bpf.c
> > @@ -172,7 +172,7 @@ int bpf_map_create(enum bpf_map_type map_type,
> > __u32 max_entries,
> > const struct bpf_map_create_opts *opts)
> > {
> > - const size_t attr_sz = offsetofend(union bpf_attr, map_token_fd);
> > + const size_t attr_sz = offsetofend(union bpf_attr, excl_prog_hash);
> > union bpf_attr attr;
> > int fd;
> >
> > @@ -203,6 +203,8 @@ int bpf_map_create(enum bpf_map_type map_type,
> > attr.map_ifindex = OPTS_GET(opts, map_ifindex, 0);
> >
> > attr.map_token_fd = OPTS_GET(opts, token_fd, 0);
> > + attr.excl_prog_hash = ptr_to_u64(OPTS_GET(opts, excl_prog_hash, NULL));
> > + attr.excl_prog_hash_size = OPTS_GET(opts, excl_prog_hash_size, 0);
> >
> > fd = sys_bpf_fd(BPF_MAP_CREATE, &attr, attr_sz);
> > return libbpf_err_errno(fd);
> > diff --git a/tools/lib/bpf/bpf.h b/tools/lib/bpf/bpf.h
> > index 777627d33d25..a82b79c0c349 100644
> > --- a/tools/lib/bpf/bpf.h
> > +++ b/tools/lib/bpf/bpf.h
> > @@ -54,9 +54,11 @@ struct bpf_map_create_opts {
> > __s32 value_type_btf_obj_fd;
> >
> > __u32 token_fd;
> > + __u32 excl_prog_hash_size;
> > + const void *excl_prog_hash;
> > size_t :0;
> > };
> > -#define bpf_map_create_opts__last_field token_fd
> > +#define bpf_map_create_opts__last_field excl_prog_hash
> >
> > LIBBPF_API int bpf_map_create(enum bpf_map_type map_type,
> > const char *map_name,
> > diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
> > index 475038d04cb4..17de756973f4 100644
> > --- a/tools/lib/bpf/libbpf.c
> > +++ b/tools/lib/bpf/libbpf.c
> > @@ -499,6 +499,7 @@ struct bpf_program {
> > __u32 line_info_rec_size;
> > __u32 line_info_cnt;
> > __u32 prog_flags;
> > + __u8 hash[SHA256_DIGEST_LENGTH];
> > };
> >
> > struct bpf_struct_ops {
> > @@ -578,6 +579,8 @@ struct bpf_map {
> > bool autocreate;
> > bool autoattach;
> > __u64 map_extra;
> > + const void *excl_prog_sha;
> > + __u32 excl_prog_sha_size;
> > };
> >
> > enum extern_type {
> > @@ -4485,6 +4488,43 @@ bpf_object__section_to_libbpf_map_type(const struct bpf_object *obj, int shndx)
> > }
> > }
> >
> > +static int bpf_program__compute_hash(struct bpf_program *prog)
> > +{
> > + struct bpf_insn *purged;
> > + bool was_ld_map;
> > + int i, err;
> > +
> > + purged = calloc(1, BPF_INSN_SZ * prog->insns_cnt);
> > + if (!purged)
> > + return -ENOMEM;
> > +
> > + /* If relocations have been done, the map_fd needs to be
> > + * discarded for the digest calculation.
> > + */
>
> all this looks sketchy, let's think about some more robust approach
> here rather than randomly clearing some fields of some instructions...
>
> > + for (i = 0, was_ld_map = false; i < prog->insns_cnt; i++) {
> > + purged[i] = prog->insns[i];
> > + if (!was_ld_map &&
> > + purged[i].code == (BPF_LD | BPF_IMM | BPF_DW) &&
> > + (purged[i].src_reg == BPF_PSEUDO_MAP_FD ||
> > + purged[i].src_reg == BPF_PSEUDO_MAP_VALUE)) {
> > + was_ld_map = true;
> > + purged[i].imm = 0;
> > + } else if (was_ld_map && purged[i].code == 0 &&
> > + purged[i].dst_reg == 0 && purged[i].src_reg == 0 &&
> > + purged[i].off == 0) {
> > + was_ld_map = false;
> > + purged[i].imm = 0;
> > + } else {
> > + was_ld_map = false;
> > + }
> > + }
>
> this was_ld_map business is... unnecessary? Just access purged[i + 1]
> (checking i + 1 < prog->insns_cnt, of course), and i += 1. This
> stateful approach is an unnecessary complication, IMO
Does this look better to you? The next instruction has to be the second
half of the double word, right?
for (int i = 0; i < prog->insns_cnt; i++) {
        purged[i] = prog->insns[i];
        if (purged[i].code == (BPF_LD | BPF_IMM | BPF_DW) &&
            (purged[i].src_reg == BPF_PSEUDO_MAP_FD ||
             purged[i].src_reg == BPF_PSEUDO_MAP_VALUE)) {
                purged[i].imm = 0;
                i++;
                if (i >= prog->insns_cnt ||
                    prog->insns[i].code != 0 ||
                    prog->insns[i].dst_reg != 0 ||
                    prog->insns[i].src_reg != 0 ||
                    prog->insns[i].off != 0) {
                        return -EINVAL;
                }
                purged[i] = prog->insns[i];
                purged[i].imm = 0;
        }
}
>
> > + err = libbpf_sha256(purged,
> > + prog->insns_cnt * sizeof(struct bpf_insn),
> > + prog->hash);
>
> fits on a single line?
>
> > + free(purged);
> > + return err;
> > +}
> > +
> > static int bpf_program__record_reloc(struct bpf_program *prog,
> > struct reloc_desc *reloc_desc,
> > __u32 insn_idx, const char *sym_name,
> > @@ -5214,6 +5254,10 @@ static int bpf_object__create_map(struct bpf_object *obj, struct bpf_map *map, b
> > create_attr.token_fd = obj->token_fd;
> > if (obj->token_fd)
> > create_attr.map_flags |= BPF_F_TOKEN_FD;
> > + if (map->excl_prog_sha) {
> > + create_attr.excl_prog_hash = map->excl_prog_sha;
> > + create_attr.excl_prog_hash_size = map->excl_prog_sha_size;
> > + }
> >
> > if (bpf_map__is_struct_ops(map)) {
> > create_attr.btf_vmlinux_value_type_id = map->btf_vmlinux_value_type_id;
> > @@ -7933,6 +7977,11 @@ static int bpf_object_prepare_progs(struct bpf_object *obj)
> > err = bpf_object__sanitize_prog(obj, prog);
> > if (err)
> > return err;
> > + /* Now that the instruction buffer is stable finalize the hash
> > + */
> > + err = bpf_program__compute_hash(&obj->programs[i]);
> > + if (err)
> > + return err;
>
> we'll do this unconditionally for any program?.. why?
>
> > }
> > return 0;
> > }
> > @@ -8602,8 +8651,8 @@ static int bpf_object_prepare(struct bpf_object *obj, const char *target_btf_pat
> > err = err ? : bpf_object_adjust_struct_ops_autoload(obj);
> > err = err ? : bpf_object__relocate(obj, obj->btf_custom_path ? : target_btf_path);
> > err = err ? : bpf_object__sanitize_and_load_btf(obj);
> > - err = err ? : bpf_object__create_maps(obj);
> > err = err ? : bpf_object_prepare_progs(obj);
> > + err = err ? : bpf_object__create_maps(obj);
> >
> > if (err) {
> > bpf_object_unpin(obj);
> > @@ -10502,6 +10551,23 @@ int bpf_map__set_inner_map_fd(struct bpf_map *map, int fd)
> > return 0;
> > }
> >
> > +int bpf_map__make_exclusive(struct bpf_map *map, struct bpf_program *prog)
> > +{
> > + if (map_is_created(map)) {
> > + pr_warn("%s must be called before creation\n", __func__);
>
> we don't really add __func__ for a long while now, please drop, we
> have a consistent "map '%s': what the problem is" format
>
> but for checks like this we also just return -EBUSY or something like
> that without error message, so I'd just drop the message altogether
>
> > + return libbpf_err(-EINVAL);
> > + }
> > +
> > + if (prog->obj->state == OBJ_LOADED) {
> > + pr_warn("%s must be called before the prog load\n", __func__);
> > + return libbpf_err(-EINVAL);
> > + }
>
> this is unnecessary, map_is_created() takes care of this
>
> > + map->excl_prog_sha = prog->hash;
> > + map->excl_prog_sha_size = SHA256_DIGEST_LENGTH;
>
> this is a hack, I assume that's why you compute that hash for any
> program all the time, right? Well, first, if this is called before
> bpf_object_prepare(), it will silently do the wrong thing.
>
> But also I don't think we should calculate hash proactively, we could
> do this lazily.
>
> > + return 0;
> > +}
> > +
> > +
> > static struct bpf_map *
> > __bpf_map__iter(const struct bpf_map *m, const struct bpf_object *obj, int i)
> > {
> > diff --git a/tools/lib/bpf/libbpf.h b/tools/lib/bpf/libbpf.h
> > index d39f19c8396d..b6ee9870523a 100644
> > --- a/tools/lib/bpf/libbpf.h
> > +++ b/tools/lib/bpf/libbpf.h
> > @@ -1249,6 +1249,19 @@ LIBBPF_API int bpf_map__lookup_and_delete_elem(const struct bpf_map *map,
> > */
> > LIBBPF_API int bpf_map__get_next_key(const struct bpf_map *map,
> > const void *cur_key, void *next_key, size_t key_sz);
> > +/**
> > + * @brief **bpf_map__make_exclusive()** makes the map exclusive to a single program.
>
> we should also probably error out if map was already marked as
> exclusive to some other program
>
> > + * @param map BPF map to make exclusive.
> > + * @param prog BPF program to be the exclusive user of the map.
> > + * @return 0 on success; a negative error code otherwise.
> > + *
> > + * Once a map is made exclusive, only the specified program can access its
> > + * contents. **bpf_map__make_exclusive** must be called before the objects are
> > + * loaded.
> > + */
> > +LIBBPF_API int bpf_map__make_exclusive(struct bpf_map *map, struct bpf_program *prog);
> > +
> > +int bpf_map__make_exclusive(struct bpf_map *map, struct bpf_program *prog);
> >
> > struct bpf_xdp_set_link_opts {
> > size_t sz;
> > diff --git a/tools/lib/bpf/libbpf.map b/tools/lib/bpf/libbpf.map
> > index 1205f9a4fe04..67b1ff4202a1 100644
> > --- a/tools/lib/bpf/libbpf.map
> > +++ b/tools/lib/bpf/libbpf.map
> > @@ -444,3 +444,8 @@ LIBBPF_1.6.0 {
> > btf__add_decl_attr;
> > btf__add_type_attr;
> > } LIBBPF_1.5.0;
> > +
> > +LIBBPF_1.7.0 {
> > + global:
> > + bpf_map__make_exclusive;
> > +} LIBBPF_1.6.0;
>
> we are still in v1.6 dev phase, no need to add 1.7 just yet
>
>
> > diff --git a/tools/lib/bpf/libbpf_version.h b/tools/lib/bpf/libbpf_version.h
> > index 28c58fb17250..99331e317dee 100644
> > --- a/tools/lib/bpf/libbpf_version.h
> > +++ b/tools/lib/bpf/libbpf_version.h
> > @@ -4,6 +4,6 @@
> > #define __LIBBPF_VERSION_H
> >
> > #define LIBBPF_MAJOR_VERSION 1
> > -#define LIBBPF_MINOR_VERSION 6
> > +#define LIBBPF_MINOR_VERSION 7
> >
> > #endif /* __LIBBPF_VERSION_H */
> > --
> > 2.43.0
> >
^ permalink raw reply [flat|nested] 79+ messages in thread
* Re: [PATCH 05/12] libbpf: Support exclusive map creation
2025-07-14 12:29 ` KP Singh
@ 2025-07-14 12:55 ` KP Singh
2025-07-14 21:05 ` Andrii Nakryiko
0 siblings, 1 reply; 79+ messages in thread
From: KP Singh @ 2025-07-14 12:55 UTC (permalink / raw)
To: Andrii Nakryiko
Cc: bpf, linux-security-module, bboscaccy, paul, kys, ast, daniel,
andrii
On Mon, Jul 14, 2025 at 2:29 PM KP Singh <kpsingh@kernel.org> wrote:
>
> On Fri, Jun 13, 2025 at 12:56 AM Andrii Nakryiko
> <andrii.nakryiko@gmail.com> wrote:
> >
> > On Fri, Jun 6, 2025 at 4:29 PM KP Singh <kpsingh@kernel.org> wrote:
> > >
> > > Implement a convenient method i.e. bpf_map__make_exclusive which
> > > calculates the hash for the program and registers it with the map for
> > > creation as an exclusive map when the objects are loaded.
> > >
> > > The hash of the program must be computed after all the relocations are
> > > done.
> > >
> > > Signed-off-by: KP Singh <kpsingh@kernel.org>
> > > ---
> > > tools/lib/bpf/bpf.c | 4 +-
> > > tools/lib/bpf/bpf.h | 4 +-
> > > tools/lib/bpf/libbpf.c | 68 +++++++++++++++++++++++++++++++++-
> > > tools/lib/bpf/libbpf.h | 13 +++++++
> > > tools/lib/bpf/libbpf.map | 5 +++
> > > tools/lib/bpf/libbpf_version.h | 2 +-
> > > 6 files changed, 92 insertions(+), 4 deletions(-)
> > >
> > > diff --git a/tools/lib/bpf/bpf.c b/tools/lib/bpf/bpf.c
> > > index a9c3e33d0f8a..11fa2d64ccca 100644
> > > --- a/tools/lib/bpf/bpf.c
> > > +++ b/tools/lib/bpf/bpf.c
> > > @@ -172,7 +172,7 @@ int bpf_map_create(enum bpf_map_type map_type,
> > > __u32 max_entries,
> > > const struct bpf_map_create_opts *opts)
> > > {
> > > - const size_t attr_sz = offsetofend(union bpf_attr, map_token_fd);
> > > + const size_t attr_sz = offsetofend(union bpf_attr, excl_prog_hash);
> > > union bpf_attr attr;
> > > int fd;
> > >
> > > @@ -203,6 +203,8 @@ int bpf_map_create(enum bpf_map_type map_type,
> > > attr.map_ifindex = OPTS_GET(opts, map_ifindex, 0);
> > >
> > > attr.map_token_fd = OPTS_GET(opts, token_fd, 0);
> > > + attr.excl_prog_hash = ptr_to_u64(OPTS_GET(opts, excl_prog_hash, NULL));
> > > + attr.excl_prog_hash_size = OPTS_GET(opts, excl_prog_hash_size, 0);
> > >
> > > fd = sys_bpf_fd(BPF_MAP_CREATE, &attr, attr_sz);
> > > return libbpf_err_errno(fd);
> > > diff --git a/tools/lib/bpf/bpf.h b/tools/lib/bpf/bpf.h
> > > index 777627d33d25..a82b79c0c349 100644
> > > --- a/tools/lib/bpf/bpf.h
> > > +++ b/tools/lib/bpf/bpf.h
> > > @@ -54,9 +54,11 @@ struct bpf_map_create_opts {
> > > __s32 value_type_btf_obj_fd;
> > >
> > > __u32 token_fd;
> > > + __u32 excl_prog_hash_size;
> > > + const void *excl_prog_hash;
> > > size_t :0;
> > > };
> > > -#define bpf_map_create_opts__last_field token_fd
> > > +#define bpf_map_create_opts__last_field excl_prog_hash
> > >
> > > LIBBPF_API int bpf_map_create(enum bpf_map_type map_type,
> > > const char *map_name,
> > > diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
> > > index 475038d04cb4..17de756973f4 100644
> > > --- a/tools/lib/bpf/libbpf.c
> > > +++ b/tools/lib/bpf/libbpf.c
> > > @@ -499,6 +499,7 @@ struct bpf_program {
> > > __u32 line_info_rec_size;
> > > __u32 line_info_cnt;
> > > __u32 prog_flags;
> > > + __u8 hash[SHA256_DIGEST_LENGTH];
> > > };
> > >
> > > struct bpf_struct_ops {
> > > @@ -578,6 +579,8 @@ struct bpf_map {
> > > bool autocreate;
> > > bool autoattach;
> > > __u64 map_extra;
> > > + const void *excl_prog_sha;
> > > + __u32 excl_prog_sha_size;
> > > };
> > >
> > > enum extern_type {
> > > @@ -4485,6 +4488,43 @@ bpf_object__section_to_libbpf_map_type(const struct bpf_object *obj, int shndx)
> > > }
> > > }
> > >
> > > +static int bpf_program__compute_hash(struct bpf_program *prog)
> > > +{
> > > + struct bpf_insn *purged;
> > > + bool was_ld_map;
> > > + int i, err;
> > > +
> > > + purged = calloc(1, BPF_INSN_SZ * prog->insns_cnt);
> > > + if (!purged)
> > > + return -ENOMEM;
> > > +
> > > + /* If relocations have been done, the map_fd needs to be
> > > + * discarded for the digest calculation.
> > > + */
> >
> > all this looks sketchy, let's think about some more robust approach
> > here rather than randomly clearing some fields of some instructions...
> >
> > > + for (i = 0, was_ld_map = false; i < prog->insns_cnt; i++) {
> > > + purged[i] = prog->insns[i];
> > > + if (!was_ld_map &&
> > > + purged[i].code == (BPF_LD | BPF_IMM | BPF_DW) &&
> > > + (purged[i].src_reg == BPF_PSEUDO_MAP_FD ||
> > > + purged[i].src_reg == BPF_PSEUDO_MAP_VALUE)) {
> > > + was_ld_map = true;
> > > + purged[i].imm = 0;
> > > + } else if (was_ld_map && purged[i].code == 0 &&
> > > + purged[i].dst_reg == 0 && purged[i].src_reg == 0 &&
> > > + purged[i].off == 0) {
> > > + was_ld_map = false;
> > > + purged[i].imm = 0;
> > > + } else {
> > > + was_ld_map = false;
> > > + }
> > > + }
> >
> > this was_ld_map business is... unnecessary? Just access purged[i + 1]
> > (checking i + 1 < prog->insns_cnt, of course), and i += 1. This
> > stateful approach is an unnecessary complication, IMO
>
> Does this look better to you, the next instruction has to be the
> second half of the double word right?
>
> for (int i = 0; i < prog->insns_cnt; i++) {
> purged[i] = prog->insns[i];
> if (purged[i].code == (BPF_LD | BPF_IMM | BPF_DW) &&
> (purged[i].src_reg == BPF_PSEUDO_MAP_FD ||
> purged[i].src_reg == BPF_PSEUDO_MAP_VALUE)) {
> purged[i].imm = 0;
> i++;
> if (i >= prog->insns_cnt ||
> prog->insns[i].code != 0 ||
> prog->insns[i].dst_reg != 0 ||
> prog->insns[i].src_reg != 0 ||
> prog->insns[i].off != 0) {
> return -EINVAL;
> }
I mean of course
        err = -EINVAL;
        goto out;
to free the buffer.
- KP
> purged[i] = prog->insns[i];
> purged[i].imm = 0;
> }
> }
>
>
>
> >
> > > + err = libbpf_sha256(purged,
> > > + prog->insns_cnt * sizeof(struct bpf_insn),
> > > + prog->hash);
> >
> > fits on a single line?
> >
> > > + free(purged);
> > > + return err;
> > > +}
> > > +
> > > static int bpf_program__record_reloc(struct bpf_program *prog,
> > > struct reloc_desc *reloc_desc,
> > > __u32 insn_idx, const char *sym_name,
> > > @@ -5214,6 +5254,10 @@ static int bpf_object__create_map(struct bpf_object *obj, struct bpf_map *map, b
> > > create_attr.token_fd = obj->token_fd;
> > > if (obj->token_fd)
> > > create_attr.map_flags |= BPF_F_TOKEN_FD;
> > > + if (map->excl_prog_sha) {
> > > + create_attr.excl_prog_hash = map->excl_prog_sha;
> > > + create_attr.excl_prog_hash_size = map->excl_prog_sha_size;
> > > + }
> > >
> > > if (bpf_map__is_struct_ops(map)) {
> > > create_attr.btf_vmlinux_value_type_id = map->btf_vmlinux_value_type_id;
> > > @@ -7933,6 +7977,11 @@ static int bpf_object_prepare_progs(struct bpf_object *obj)
> > > err = bpf_object__sanitize_prog(obj, prog);
> > > if (err)
> > > return err;
> > > + /* Now that the instruction buffer is stable finalize the hash
> > > + */
> > > + err = bpf_program__compute_hash(&obj->programs[i]);
> > > + if (err)
> > > + return err;
> >
> > we'll do this unconditionally for any program?.. why?
> >
> > > }
> > > return 0;
> > > }
> > > @@ -8602,8 +8651,8 @@ static int bpf_object_prepare(struct bpf_object *obj, const char *target_btf_pat
> > > err = err ? : bpf_object_adjust_struct_ops_autoload(obj);
> > > err = err ? : bpf_object__relocate(obj, obj->btf_custom_path ? : target_btf_path);
> > > err = err ? : bpf_object__sanitize_and_load_btf(obj);
> > > - err = err ? : bpf_object__create_maps(obj);
> > > err = err ? : bpf_object_prepare_progs(obj);
> > > + err = err ? : bpf_object__create_maps(obj);
> > >
> > > if (err) {
> > > bpf_object_unpin(obj);
> > > @@ -10502,6 +10551,23 @@ int bpf_map__set_inner_map_fd(struct bpf_map *map, int fd)
> > > return 0;
> > > }
> > >
> > > +int bpf_map__make_exclusive(struct bpf_map *map, struct bpf_program *prog)
> > > +{
> > > + if (map_is_created(map)) {
> > > + pr_warn("%s must be called before creation\n", __func__);
> >
> > we don't really add __func__ for a long while now, please drop, we
> > have a consistent "map '%s': what the problem is" format
> >
> > but for checks like this we also just return -EBUSY or something like
> > that without error message, so I'd just drop the message altogether
> >
> > > + return libbpf_err(-EINVAL);
> > > + }
> > > +
> > > + if (prog->obj->state == OBJ_LOADED) {
> > > + pr_warn("%s must be called before the prog load\n", __func__);
> > > + return libbpf_err(-EINVAL);
> > > + }
> >
> > this is unnecessary, map_is_created() takes care of this
> >
> > > + map->excl_prog_sha = prog->hash;
> > > + map->excl_prog_sha_size = SHA256_DIGEST_LENGTH;
> >
> > this is a hack, I assume that's why you compute that hash for any
> > program all the time, right? Well, first, if this is called before
> > bpf_object_prepare(), it will silently do the wrong thing.
> >
> > But also I don't think we should calculate hash proactively, we could
> > do this lazily.
> >
> > > + return 0;
> > > +}
> > > +
> > > +
> > > static struct bpf_map *
> > > __bpf_map__iter(const struct bpf_map *m, const struct bpf_object *obj, int i)
> > > {
> > > diff --git a/tools/lib/bpf/libbpf.h b/tools/lib/bpf/libbpf.h
> > > index d39f19c8396d..b6ee9870523a 100644
> > > --- a/tools/lib/bpf/libbpf.h
> > > +++ b/tools/lib/bpf/libbpf.h
> > > @@ -1249,6 +1249,19 @@ LIBBPF_API int bpf_map__lookup_and_delete_elem(const struct bpf_map *map,
> > > */
> > > LIBBPF_API int bpf_map__get_next_key(const struct bpf_map *map,
> > > const void *cur_key, void *next_key, size_t key_sz);
> > > +/**
> > > + * @brief **bpf_map__make_exclusive()** makes the map exclusive to a single program.
> >
> > we should also probably error out if map was already marked as
> > exclusive to some other program
> >
> > > + * @param map BPF map to make exclusive.
> > > + * @param prog BPF program to be the exclusive user of the map.
> > > + * @return 0 on success; a negative error code otherwise.
> > > + *
> > > + * Once a map is made exclusive, only the specified program can access its
> > > + * contents. **bpf_map__make_exclusive** must be called before the objects are
> > > + * loaded.
> > > + */
> > > +LIBBPF_API int bpf_map__make_exclusive(struct bpf_map *map, struct bpf_program *prog);
> > > +
> > > +int bpf_map__make_exclusive(struct bpf_map *map, struct bpf_program *prog);
> > >
> > > struct bpf_xdp_set_link_opts {
> > > size_t sz;
> > > diff --git a/tools/lib/bpf/libbpf.map b/tools/lib/bpf/libbpf.map
> > > index 1205f9a4fe04..67b1ff4202a1 100644
> > > --- a/tools/lib/bpf/libbpf.map
> > > +++ b/tools/lib/bpf/libbpf.map
> > > @@ -444,3 +444,8 @@ LIBBPF_1.6.0 {
> > > btf__add_decl_attr;
> > > btf__add_type_attr;
> > > } LIBBPF_1.5.0;
> > > +
> > > +LIBBPF_1.7.0 {
> > > + global:
> > > + bpf_map__make_exclusive;
> > > +} LIBBPF_1.6.0;
> >
> > we are still in v1.6 dev phase, no need to add 1.7 just yet
> >
> >
> > > diff --git a/tools/lib/bpf/libbpf_version.h b/tools/lib/bpf/libbpf_version.h
> > > index 28c58fb17250..99331e317dee 100644
> > > --- a/tools/lib/bpf/libbpf_version.h
> > > +++ b/tools/lib/bpf/libbpf_version.h
> > > @@ -4,6 +4,6 @@
> > > #define __LIBBPF_VERSION_H
> > >
> > > #define LIBBPF_MAJOR_VERSION 1
> > > -#define LIBBPF_MINOR_VERSION 6
> > > +#define LIBBPF_MINOR_VERSION 7
> > >
> > > #endif /* __LIBBPF_VERSION_H */
> > > --
> > > 2.43.0
> > >
^ permalink raw reply [flat|nested] 79+ messages in thread
* Re: [PATCH 05/12] libbpf: Support exclusive map creation
2025-07-14 12:55 ` KP Singh
@ 2025-07-14 21:05 ` Andrii Nakryiko
0 siblings, 0 replies; 79+ messages in thread
From: Andrii Nakryiko @ 2025-07-14 21:05 UTC (permalink / raw)
To: KP Singh
Cc: bpf, linux-security-module, bboscaccy, paul, kys, ast, daniel,
andrii
On Mon, Jul 14, 2025 at 5:55 AM KP Singh <kpsingh@kernel.org> wrote:
>
> On Mon, Jul 14, 2025 at 2:29 PM KP Singh <kpsingh@kernel.org> wrote:
> >
> > On Fri, Jun 13, 2025 at 12:56 AM Andrii Nakryiko
> > <andrii.nakryiko@gmail.com> wrote:
> > >
> > > On Fri, Jun 6, 2025 at 4:29 PM KP Singh <kpsingh@kernel.org> wrote:
> > > >
> > > > Implement a convenient method i.e. bpf_map__make_exclusive which
> > > > calculates the hash for the program and registers it with the map for
> > > > creation as an exclusive map when the objects are loaded.
> > > >
> > > > The hash of the program must be computed after all the relocations are
> > > > done.
> > > >
> > > > Signed-off-by: KP Singh <kpsingh@kernel.org>
> > > > ---
> > > > tools/lib/bpf/bpf.c | 4 +-
> > > > tools/lib/bpf/bpf.h | 4 +-
> > > > tools/lib/bpf/libbpf.c | 68 +++++++++++++++++++++++++++++++++-
> > > > tools/lib/bpf/libbpf.h | 13 +++++++
> > > > tools/lib/bpf/libbpf.map | 5 +++
> > > > tools/lib/bpf/libbpf_version.h | 2 +-
> > > > 6 files changed, 92 insertions(+), 4 deletions(-)
> > > >
> > > > diff --git a/tools/lib/bpf/bpf.c b/tools/lib/bpf/bpf.c
> > > > index a9c3e33d0f8a..11fa2d64ccca 100644
> > > > --- a/tools/lib/bpf/bpf.c
> > > > +++ b/tools/lib/bpf/bpf.c
> > > > @@ -172,7 +172,7 @@ int bpf_map_create(enum bpf_map_type map_type,
> > > > __u32 max_entries,
> > > > const struct bpf_map_create_opts *opts)
> > > > {
> > > > - const size_t attr_sz = offsetofend(union bpf_attr, map_token_fd);
> > > > + const size_t attr_sz = offsetofend(union bpf_attr, excl_prog_hash);
> > > > union bpf_attr attr;
> > > > int fd;
> > > >
> > > > @@ -203,6 +203,8 @@ int bpf_map_create(enum bpf_map_type map_type,
> > > > attr.map_ifindex = OPTS_GET(opts, map_ifindex, 0);
> > > >
> > > > attr.map_token_fd = OPTS_GET(opts, token_fd, 0);
> > > > + attr.excl_prog_hash = ptr_to_u64(OPTS_GET(opts, excl_prog_hash, NULL));
> > > > + attr.excl_prog_hash_size = OPTS_GET(opts, excl_prog_hash_size, 0);
> > > >
> > > > fd = sys_bpf_fd(BPF_MAP_CREATE, &attr, attr_sz);
> > > > return libbpf_err_errno(fd);
> > > > diff --git a/tools/lib/bpf/bpf.h b/tools/lib/bpf/bpf.h
> > > > index 777627d33d25..a82b79c0c349 100644
> > > > --- a/tools/lib/bpf/bpf.h
> > > > +++ b/tools/lib/bpf/bpf.h
> > > > @@ -54,9 +54,11 @@ struct bpf_map_create_opts {
> > > > __s32 value_type_btf_obj_fd;
> > > >
> > > > __u32 token_fd;
> > > > + __u32 excl_prog_hash_size;
> > > > + const void *excl_prog_hash;
> > > > size_t :0;
> > > > };
> > > > -#define bpf_map_create_opts__last_field token_fd
> > > > +#define bpf_map_create_opts__last_field excl_prog_hash
> > > >
> > > > LIBBPF_API int bpf_map_create(enum bpf_map_type map_type,
> > > > const char *map_name,
> > > > diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
> > > > index 475038d04cb4..17de756973f4 100644
> > > > --- a/tools/lib/bpf/libbpf.c
> > > > +++ b/tools/lib/bpf/libbpf.c
> > > > @@ -499,6 +499,7 @@ struct bpf_program {
> > > > __u32 line_info_rec_size;
> > > > __u32 line_info_cnt;
> > > > __u32 prog_flags;
> > > > + __u8 hash[SHA256_DIGEST_LENGTH];
> > > > };
> > > >
> > > > struct bpf_struct_ops {
> > > > @@ -578,6 +579,8 @@ struct bpf_map {
> > > > bool autocreate;
> > > > bool autoattach;
> > > > __u64 map_extra;
> > > > + const void *excl_prog_sha;
> > > > + __u32 excl_prog_sha_size;
> > > > };
> > > >
> > > > enum extern_type {
> > > > @@ -4485,6 +4488,43 @@ bpf_object__section_to_libbpf_map_type(const struct bpf_object *obj, int shndx)
> > > > }
> > > > }
> > > >
> > > > +static int bpf_program__compute_hash(struct bpf_program *prog)
> > > > +{
> > > > + struct bpf_insn *purged;
> > > > + bool was_ld_map;
> > > > + int i, err;
> > > > +
> > > > + purged = calloc(1, BPF_INSN_SZ * prog->insns_cnt);
> > > > + if (!purged)
> > > > + return -ENOMEM;
> > > > +
> > > > + /* If relocations have been done, the map_fd needs to be
> > > > + * discarded for the digest calculation.
> > > > + */
> > >
> > > all this looks sketchy, let's think about some more robust approach
> > > here rather than randomly clearing some fields of some instructions...
> > >
> > > > + for (i = 0, was_ld_map = false; i < prog->insns_cnt; i++) {
> > > > + purged[i] = prog->insns[i];
> > > > + if (!was_ld_map &&
> > > > + purged[i].code == (BPF_LD | BPF_IMM | BPF_DW) &&
> > > > + (purged[i].src_reg == BPF_PSEUDO_MAP_FD ||
> > > > + purged[i].src_reg == BPF_PSEUDO_MAP_VALUE)) {
> > > > + was_ld_map = true;
> > > > + purged[i].imm = 0;
> > > > + } else if (was_ld_map && purged[i].code == 0 &&
> > > > + purged[i].dst_reg == 0 && purged[i].src_reg == 0 &&
> > > > + purged[i].off == 0) {
> > > > + was_ld_map = false;
> > > > + purged[i].imm = 0;
> > > > + } else {
> > > > + was_ld_map = false;
> > > > + }
> > > > + }
> > >
> > > this was_ld_map business is... unnecessary? Just access purged[i + 1]
> > > (checking i + 1 < prog->insns_cnt, of course), and i += 1. This
> > > stateful approach is an unnecessary complication, IMO
> >
> > Does this look better to you, the next instruction has to be the
> > second half of the double word right?
> >
> > for (int i = 0; i < prog->insns_cnt; i++) {
> > purged[i] = prog->insns[i];
> > if (purged[i].code == (BPF_LD | BPF_IMM | BPF_DW) &&
> > (purged[i].src_reg == BPF_PSEUDO_MAP_FD ||
> > purged[i].src_reg == BPF_PSEUDO_MAP_VALUE)) {
> > purged[i].imm = 0;
> > i++;
> > if (i >= prog->insns_cnt ||
> > prog->insns[i].code != 0 ||
> > prog->insns[i].dst_reg != 0 ||
> > prog->insns[i].src_reg != 0 ||
> > prog->insns[i].off != 0) {
> > return -EINVAL;
> > }
>
> I mean ofcourse
>
> err = -EINVAL;
> goto out;
>
> to free the buffer.
Yes, but I'd probably modify it a bit for conciseness:
struct bpf_insn *purged, *insn;
int i;

purged = calloc(..);
memcpy(purged, prog->insns, ...);

for (i = 0; i < prog->insns_cnt; i++) {
        insn = &purged[i];
        if (insn[0].code == (BPF_LD | BPF_IMM | BPF_DW) &&
            (insn[0].src_reg == BPF_PSEUDO_MAP_FD || ...)) {
                insn[0].imm = 0;
                if (i + 1 >= prog->insns_cnt) {
                        err = -EINVAL;
                        goto err;
                }
                insn[1].imm = 0;
                i++;
        }
}
(I'm not sure libbpf needs to check code,dst_reg,src_reg,off for
ldimm64, verifier will do it anyways, so I'd protect against
out-of-bounds access only)
I'd even consider just doing:
if (i + 1 < prog->insns_cnt &&
    insn[0].code == (BPF_LD | BPF_IMM | BPF_DW) ...) {
        insn[0].imm = 0;
        insn[1].imm = 0;
        i++;
}
i.e., don't even error out; the verifier will do that anyway later
because the program is malformed, and the hash won't even matter at that
point.
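FWIW, putting it all together, the whole helper could then shrink to
roughly this (untested sketch, reusing the names from your patch):

static int bpf_program__compute_hash(struct bpf_program *prog)
{
        struct bpf_insn *purged;
        int i, err;

        purged = calloc(prog->insns_cnt, BPF_INSN_SZ);
        if (!purged)
                return -ENOMEM;

        memcpy(purged, prog->insns, prog->insns_cnt * BPF_INSN_SZ);

        for (i = 0; i < prog->insns_cnt; i++) {
                /* discard map fds/addresses patched in by relocations */
                if (i + 1 < prog->insns_cnt &&
                    purged[i].code == (BPF_LD | BPF_IMM | BPF_DW) &&
                    (purged[i].src_reg == BPF_PSEUDO_MAP_FD ||
                     purged[i].src_reg == BPF_PSEUDO_MAP_VALUE)) {
                        purged[i].imm = 0;
                        purged[i + 1].imm = 0;
                        i++;
                }
        }

        err = libbpf_sha256(purged, prog->insns_cnt * BPF_INSN_SZ, prog->hash);
        free(purged);
        return err;
}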
[...]
^ permalink raw reply [flat|nested] 79+ messages in thread
* [PATCH 06/12] selftests/bpf: Add tests for exclusive maps
2025-06-06 23:29 [PATCH 00/12] Signed BPF programs KP Singh
` (4 preceding siblings ...)
2025-06-06 23:29 ` [PATCH 05/12] libbpf: Support exclusive map creation KP Singh
@ 2025-06-06 23:29 ` KP Singh
2025-06-06 23:29 ` [PATCH 07/12] bpf: Return hashes of maps in BPF_OBJ_GET_INFO_BY_FD KP Singh
` (7 subsequent siblings)
13 siblings, 0 replies; 79+ messages in thread
From: KP Singh @ 2025-06-06 23:29 UTC (permalink / raw)
To: bpf, linux-security-module
Cc: bboscaccy, paul, kys, ast, daniel, andrii, KP Singh
* Maps of maps currently cannot be exclusive.
* Inner maps cannot be exclusive.
* Check that access to an exclusive map is denied to another program.
Signed-off-by: KP Singh <kpsingh@kernel.org>
---
.../selftests/bpf/prog_tests/map_excl.c | 130 ++++++++++++++++++
tools/testing/selftests/bpf/progs/map_excl.c | 65 +++++++++
2 files changed, 195 insertions(+)
create mode 100644 tools/testing/selftests/bpf/prog_tests/map_excl.c
create mode 100644 tools/testing/selftests/bpf/progs/map_excl.c
diff --git a/tools/testing/selftests/bpf/prog_tests/map_excl.c b/tools/testing/selftests/bpf/prog_tests/map_excl.c
new file mode 100644
index 000000000000..2f6f81ef7ae2
--- /dev/null
+++ b/tools/testing/selftests/bpf/prog_tests/map_excl.c
@@ -0,0 +1,130 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (C) 2023. Huawei Technologies Co., Ltd */
+#define _GNU_SOURCE
+#include <unistd.h>
+#include <sys/syscall.h>
+#include <test_progs.h>
+#include <bpf/btf.h>
+
+#include "map_excl.skel.h"
+
+static void test_map_exclusive_inner(void)
+{
+ struct map_excl *skel;
+ int err;
+
+ skel = map_excl__open();
+ if (!ASSERT_OK_PTR(skel, "map_excl open"))
+ return;
+
+ err = bpf_map__make_exclusive(skel->maps.inner_map,
+ skel->progs.should_have_access);
+ if (!ASSERT_OK(err, "bpf_map__make_exclusive"))
+ goto out;
+
+ err = map_excl__load(skel);
+ ASSERT_EQ(err, -EOPNOTSUPP, "map_excl__load");
+
+out:
+ map_excl__destroy(skel);
+}
+
+static void test_map_exclusive_outer_array(void)
+{
+ struct map_excl *skel;
+ int err;
+
+ skel = map_excl__open();
+ if (!ASSERT_OK_PTR(skel, "map_excl open"))
+ return;
+
+ err = bpf_map__make_exclusive(skel->maps.outer_array_map,
+ skel->progs.should_have_access);
+ if (!ASSERT_OK(err, "bpf_map__make_exclusive"))
+ goto out;
+
+ bpf_program__set_autoload(skel->progs.should_have_access, true);
+ bpf_program__set_autoload(skel->progs.should_not_have_access, false);
+
+ err = map_excl__load(skel);
+ ASSERT_EQ(err, -EOPNOTSUPP, "exclusive maps of maps are not supported\n");
+out:
+ map_excl__destroy(skel);
+}
+
+static void test_map_exclusive_outer_htab(void)
+{
+ struct map_excl *skel;
+ int err;
+
+ skel = map_excl__open();
+ if (!ASSERT_OK_PTR(skel, "map_excl open"))
+ return;
+
+ err = bpf_map__make_exclusive(skel->maps.outer_htab_map,
+ skel->progs.should_have_access);
+ if (!ASSERT_OK(err, "bpf_map__make_exclusive"))
+ goto out;
+
+ bpf_program__set_autoload(skel->progs.should_have_access, true);
+ bpf_program__set_autoload(skel->progs.should_not_have_access, false);
+
+ err = map_excl__load(skel);
+ ASSERT_EQ(err, -EOPNOTSUPP, "exclusive maps of maps are not supported\n");
+
+out:
+ map_excl__destroy(skel);
+}
+
+static void test_map_excl_allowed(void)
+{
+ struct map_excl *skel = map_excl__open();
+ int err;
+
+ err = bpf_map__make_exclusive(skel->maps.excl_map, skel->progs.should_have_access);
+ if (!ASSERT_OK(err, "bpf_map__make_exclusive"))
+ goto out;
+
+ bpf_program__set_autoload(skel->progs.should_have_access, true);
+ bpf_program__set_autoload(skel->progs.should_not_have_access, false);
+
+ err = map_excl__load(skel);
+ ASSERT_OK(err, "map_excl__load");
+out:
+ map_excl__destroy(skel);
+}
+
+static void test_map_excl_denied(void)
+{
+ struct map_excl *skel = map_excl__open();
+ int err;
+
+ err = bpf_map__make_exclusive(skel->maps.excl_map, skel->progs.should_have_access);
+ if (!ASSERT_OK(err, "bpf_map__make_exclusive"))
+ goto out;
+
+ bpf_program__set_autoload(skel->progs.should_have_access, false);
+ bpf_program__set_autoload(skel->progs.should_not_have_access, true);
+
+ err = map_excl__load(skel);
+ ASSERT_EQ(err, -EACCES, "exclusive map access not denied\n");
+out:
+ map_excl__destroy(skel);
+
+}
+
+void test_map_excl(void)
+{
+ start_libbpf_log_capture();
+ if (test__start_subtest("map_excl_allowed"))
+ test_map_excl_allowed();
+ stop_libbpf_log_capture();
+ if (test__start_subtest("map_excl_denied"))
+ test_map_excl_denied();
+ if (test__start_subtest("map_exclusive_outer_array"))
+ test_map_exclusive_outer_array();
+ if (test__start_subtest("map_exclusive_outer_htab"))
+ test_map_exclusive_outer_htab();
+ if (test__start_subtest("map_exclusive_inner"))
+ test_map_exclusive_inner();
+}
diff --git a/tools/testing/selftests/bpf/progs/map_excl.c b/tools/testing/selftests/bpf/progs/map_excl.c
new file mode 100644
index 000000000000..9543aa3ab484
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/map_excl.c
@@ -0,0 +1,65 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (C) 2023. Huawei Technologies Co., Ltd */
+#include <linux/bpf.h>
+#include <time.h>
+#include <bpf/bpf_helpers.h>
+
+#include "bpf_misc.h"
+
+struct {
+ __uint(type, BPF_MAP_TYPE_ARRAY);
+ __type(key, __u32);
+ __type(value, __u32);
+ __uint(max_entries, 1);
+} excl_map SEC(".maps");
+
+struct inner_map_type {
+ __uint(type, BPF_MAP_TYPE_ARRAY);
+ __uint(key_size, 4);
+ __uint(value_size, 4);
+ __uint(max_entries, 1);
+} inner_map SEC(".maps");
+
+struct {
+ __uint(type, BPF_MAP_TYPE_ARRAY_OF_MAPS);
+ __type(key, int);
+ __type(value, int);
+ __uint(max_entries, 1);
+ __array(values, struct inner_map_type);
+} outer_array_map SEC(".maps") = {
+ .values = {
+ [0] = &inner_map,
+ },
+};
+
+struct {
+ __uint(type, BPF_MAP_TYPE_HASH_OF_MAPS);
+ __type(key, int);
+ __type(value, int);
+ __uint(max_entries, 1);
+ __array(values, struct inner_map_type);
+} outer_htab_map SEC(".maps") = {
+ .values = {
+ [0] = &inner_map,
+ },
+};
+
+char _license[] SEC("license") = "GPL";
+
+SEC("?fentry.s/" SYS_PREFIX "sys_getpgid")
+int should_have_access(void *ctx)
+{
+ int key = 0, value = 0xdeadbeef;
+
+ bpf_map_update_elem(&excl_map, &key, &value, 0);
+ return 0;
+}
+
+SEC("?fentry.s/" SYS_PREFIX "sys_getpgid")
+int should_not_have_access(void *ctx)
+{
+ int key = 0, value = 0xdeadbeef;
+
+ bpf_map_update_elem(&excl_map, &key, &value, 0);
+ return 0;
+}
--
2.43.0
^ permalink raw reply related [flat|nested] 79+ messages in thread
* [PATCH 07/12] bpf: Return hashes of maps in BPF_OBJ_GET_INFO_BY_FD
2025-06-06 23:29 [PATCH 00/12] Signed BPF programs KP Singh
` (5 preceding siblings ...)
2025-06-06 23:29 ` [PATCH 06/12] selftests/bpf: Add tests for exclusive maps KP Singh
@ 2025-06-06 23:29 ` KP Singh
2025-06-07 9:26 ` kernel test robot
` (2 more replies)
2025-06-06 23:29 ` [PATCH 08/12] bpf: Implement signature verification for BPF programs KP Singh
` (6 subsequent siblings)
13 siblings, 3 replies; 79+ messages in thread
From: KP Singh @ 2025-06-06 23:29 UTC (permalink / raw)
To: bpf, linux-security-module
Cc: bboscaccy, paul, kys, ast, daniel, andrii, KP Singh
Currently only array maps are supported, but the implementation can be
extended for other maps and objects. The hash is memoized only for
exclusive and frozen maps as their content is stable until the exclusive
program modifies the map.
This is required for BPF signing, enabling a trusted loader program to
verify a map's integrity. The loader retrieves the map's runtime hash
from the kernel and compares it against an expected hash computed at
build time.
Signed-off-by: KP Singh <kpsingh@kernel.org>
---
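A rough sketch of how a loader is expected to consume this from
userspace (map_fd and verify_expected_hash() here are just placeholders,
not part of the series):

        __u8 sha[32]; /* SHA256_DIGEST_SIZE */
        struct bpf_map_info info = {};
        __u32 info_len = sizeof(info);
        int err;

        /* the kernel now copies the info in first, so the hash pointer
         * and size must be set before the call; hash_size is updated to
         * SHA256_DIGEST_SIZE on success
         */
        info.hash = (__u64)(unsigned long)sha;
        info.hash_size = sizeof(sha);

        err = bpf_obj_get_info_by_fd(map_fd, &info, &info_len);
        if (!err)
                err = verify_expected_hash(sha, info.hash_size);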
include/linux/bpf.h | 3 +++
include/uapi/linux/bpf.h | 2 ++
kernel/bpf/arraymap.c | 13 ++++++++++++
kernel/bpf/syscall.c | 38 ++++++++++++++++++++++++++++++++++
tools/include/uapi/linux/bpf.h | 2 ++
5 files changed, 58 insertions(+)
diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index cb1bea99702a..35f1a633d87a 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -7,6 +7,7 @@
#include <uapi/linux/bpf.h>
#include <uapi/linux/filter.h>
+#include <crypto/sha2.h>
#include <linux/workqueue.h>
#include <linux/file.h>
#include <linux/percpu.h>
@@ -110,6 +111,7 @@ struct bpf_map_ops {
long (*map_pop_elem)(struct bpf_map *map, void *value);
long (*map_peek_elem)(struct bpf_map *map, void *value);
void *(*map_lookup_percpu_elem)(struct bpf_map *map, void *key, u32 cpu);
+ int (*map_get_hash)(struct bpf_map *map, u32 hash_buf_size, void *hash_buf);
/* funcs called by prog_array and perf_event_array map */
void *(*map_fd_get_ptr)(struct bpf_map *map, struct file *map_file,
@@ -262,6 +264,7 @@ struct bpf_list_node_kern {
} __attribute__((aligned(8)));
struct bpf_map {
+ u8 sha[SHA256_DIGEST_SIZE];
const struct bpf_map_ops *ops;
struct bpf_map *inner_map_meta;
#ifdef CONFIG_SECURITY
diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
index 6f2f4f3b3822..ffd9e11befc2 100644
--- a/include/uapi/linux/bpf.h
+++ b/include/uapi/linux/bpf.h
@@ -6630,6 +6630,8 @@ struct bpf_map_info {
__u32 btf_value_type_id;
__u32 btf_vmlinux_id;
__u64 map_extra;
+ __aligned_u64 hash;
+ __u32 hash_size;
} __attribute__((aligned(8)));
struct bpf_btf_info {
diff --git a/kernel/bpf/arraymap.c b/kernel/bpf/arraymap.c
index 8719aa821b63..1fb989db03a2 100644
--- a/kernel/bpf/arraymap.c
+++ b/kernel/bpf/arraymap.c
@@ -12,6 +12,7 @@
#include <uapi/linux/btf.h>
#include <linux/rcupdate_trace.h>
#include <linux/btf_ids.h>
+#include <crypto/sha256_base.h>
#include "map_in_map.h"
@@ -174,6 +175,17 @@ static void *array_map_lookup_elem(struct bpf_map *map, void *key)
return array->value + (u64)array->elem_size * (index & array->index_mask);
}
+static int array_map_get_hash(struct bpf_map *map, u32 hash_buf_size,
+ void *hash_buf)
+{
+ struct bpf_array *array = container_of(map, struct bpf_array, map);
+
+ bpf_sha256(array->value, (u64)array->elem_size * array->map.max_entries,
+ hash_buf);
+ memcpy(array->map.sha, hash_buf, sizeof(array->map.sha));
+ return 0;
+}
+
static int array_map_direct_value_addr(const struct bpf_map *map, u64 *imm,
u32 off)
{
@@ -805,6 +817,7 @@ const struct bpf_map_ops array_map_ops = {
.map_mem_usage = array_map_mem_usage,
.map_btf_id = &array_map_btf_ids[0],
.iter_seq_info = &iter_seq_info,
+ .map_get_hash = &array_map_get_hash,
};
const struct bpf_map_ops percpu_array_map_ops = {
diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c
index bef9edcfdb76..c81be07fa4fa 100644
--- a/kernel/bpf/syscall.c
+++ b/kernel/bpf/syscall.c
@@ -1,6 +1,7 @@
// SPDX-License-Identifier: GPL-2.0-only
/* Copyright (c) 2011-2014 PLUMgrid, http://plumgrid.com
*/
+#include <crypto/sha2.h>
#include <linux/bpf.h>
#include <linux/bpf-cgroup.h>
#include <linux/bpf_trace.h>
@@ -5027,6 +5028,9 @@ static int bpf_map_get_info_by_fd(struct file *file,
info_len = min_t(u32, sizeof(info), info_len);
memset(&info, 0, sizeof(info));
+ if (copy_from_user(&info, uinfo, info_len))
+ return -EFAULT;
+
info.type = map->map_type;
info.id = map->id;
info.key_size = map->key_size;
@@ -5051,6 +5055,40 @@ static int bpf_map_get_info_by_fd(struct file *file,
return err;
}
+ if (map->ops->map_get_hash && map->frozen && map->excl_prog_sha) {
+ err = map->ops->map_get_hash(map, SHA256_DIGEST_SIZE, &map->sha);
+ if (err != 0)
+ return err;
+ }
+
+ if (info.hash) {
+ char __user *uhash = u64_to_user_ptr(info.hash);
+
+ if (!map->ops->map_get_hash)
+ return -EINVAL;
+
+ if (info.hash_size < SHA256_DIGEST_SIZE)
+ return -EINVAL;
+
+ info.hash_size = SHA256_DIGEST_SIZE;
+
+ if (map->excl_prog_sha && map->frozen) {
+ if (copy_to_user(uhash, map->sha, SHA256_DIGEST_SIZE) !=
+ 0)
+ return -EFAULT;
+ } else {
+ u8 sha[SHA256_DIGEST_SIZE];
+
+ err = map->ops->map_get_hash(map, SHA256_DIGEST_SIZE,
+ sha);
+ if (err != 0)
+ return err;
+
+ if (copy_to_user(uhash, sha, SHA256_DIGEST_SIZE) != 0)
+ return -EFAULT;
+ }
+ }
+
if (copy_to_user(uinfo, &info, info_len) ||
put_user(info_len, &uattr->info.info_len))
return -EFAULT;
diff --git a/tools/include/uapi/linux/bpf.h b/tools/include/uapi/linux/bpf.h
index 6f2f4f3b3822..ffd9e11befc2 100644
--- a/tools/include/uapi/linux/bpf.h
+++ b/tools/include/uapi/linux/bpf.h
@@ -6630,6 +6630,8 @@ struct bpf_map_info {
__u32 btf_value_type_id;
__u32 btf_vmlinux_id;
__u64 map_extra;
+ __aligned_u64 hash;
+ __u32 hash_size;
} __attribute__((aligned(8)));
struct bpf_btf_info {
--
2.43.0
^ permalink raw reply related [flat|nested] 79+ messages in thread
* Re: [PATCH 07/12] bpf: Return hashes of maps in BPF_OBJ_GET_INFO_BY_FD
2025-06-06 23:29 ` [PATCH 07/12] bpf: Return hashes of maps in BPF_OBJ_GET_INFO_BY_FD KP Singh
@ 2025-06-07 9:26 ` kernel test robot
2025-06-08 13:11 ` kernel test robot
2025-06-09 21:30 ` Alexei Starovoitov
2 siblings, 0 replies; 79+ messages in thread
From: kernel test robot @ 2025-06-07 9:26 UTC (permalink / raw)
To: KP Singh, bpf, linux-security-module
Cc: oe-kbuild-all, bboscaccy, paul, kys, ast, daniel, andrii,
KP Singh
Hi KP,
kernel test robot noticed the following build errors:
[auto build test ERROR on bpf-next/net]
[also build test ERROR on bpf-next/master bpf/master linus/master next-20250606]
[cannot apply to v6.15]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch#_base_tree_information]
url: https://github.com/intel-lab-lkp/linux/commits/KP-Singh/bpf-Implement-an-internal-helper-for-SHA256-hashing/20250607-073052
base: https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next.git net
patch link: https://lore.kernel.org/r/20250606232914.317094-8-kpsingh%40kernel.org
patch subject: [PATCH 07/12] bpf: Return hashes of maps in BPF_OBJ_GET_INFO_BY_FD
config: x86_64-buildonly-randconfig-005-20250607 (https://download.01.org/0day-ci/archive/20250607/202506071738.5MZFjRuA-lkp@intel.com/config)
compiler: gcc-12 (Debian 12.2.0-14) 12.2.0
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20250607/202506071738.5MZFjRuA-lkp@intel.com/reproduce)
If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <lkp@intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202506071738.5MZFjRuA-lkp@intel.com/
All errors (new ones prefixed by >>):
>> kernel/bpf/arraymap.c:15:10: fatal error: crypto/sha256_base.h: No such file or directory
15 | #include <crypto/sha256_base.h>
| ^~~~~~~~~~~~~~~~~~~~~~
compilation terminated.
vim +15 kernel/bpf/arraymap.c
> 15 #include <crypto/sha256_base.h>
16
--
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki
^ permalink raw reply [flat|nested] 79+ messages in thread
* Re: [PATCH 07/12] bpf: Return hashes of maps in BPF_OBJ_GET_INFO_BY_FD
2025-06-06 23:29 ` [PATCH 07/12] bpf: Return hashes of maps in BPF_OBJ_GET_INFO_BY_FD KP Singh
2025-06-07 9:26 ` kernel test robot
@ 2025-06-08 13:11 ` kernel test robot
2025-06-09 21:30 ` Alexei Starovoitov
2 siblings, 0 replies; 79+ messages in thread
From: kernel test robot @ 2025-06-08 13:11 UTC (permalink / raw)
To: KP Singh, bpf, linux-security-module
Cc: oe-kbuild-all, bboscaccy, paul, kys, ast, daniel, andrii,
KP Singh
Hi KP,
kernel test robot noticed the following build warnings:
[auto build test WARNING on bpf-next/net]
[also build test WARNING on bpf-next/master bpf/master linus/master next-20250606]
[cannot apply to v6.15]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch#_base_tree_information]
url: https://github.com/intel-lab-lkp/linux/commits/KP-Singh/bpf-Implement-an-internal-helper-for-SHA256-hashing/20250607-073052
base: https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next.git net
patch link: https://lore.kernel.org/r/20250606232914.317094-8-kpsingh%40kernel.org
patch subject: [PATCH 07/12] bpf: Return hashes of maps in BPF_OBJ_GET_INFO_BY_FD
compiler: clang version 20.1.2 (https://github.com/llvm/llvm-project 58df0ef89dd64126512e4ee27b4ac3fd8ddf6247)
If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <lkp@intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202506082011.6Tejyd72-lkp@intel.com/
includecheck warnings: (new ones prefixed by >>)
>> include/linux/bpf.h: crypto/sha2.h is included more than once.
vim +10 include/linux/bpf.h
9
> 10 #include <crypto/sha2.h>
11 #include <linux/workqueue.h>
12 #include <linux/file.h>
13 #include <linux/percpu.h>
14 #include <linux/err.h>
15 #include <linux/rbtree_latch.h>
16 #include <linux/numa.h>
17 #include <linux/mm_types.h>
18 #include <linux/wait.h>
19 #include <linux/refcount.h>
20 #include <linux/mutex.h>
21 #include <linux/module.h>
22 #include <linux/kallsyms.h>
23 #include <linux/capability.h>
24 #include <linux/sched/mm.h>
25 #include <linux/slab.h>
26 #include <linux/percpu-refcount.h>
27 #include <linux/stddef.h>
28 #include <linux/bpfptr.h>
29 #include <linux/btf.h>
30 #include <linux/rcupdate_trace.h>
31 #include <linux/static_call.h>
32 #include <linux/memcontrol.h>
33 #include <linux/cfi.h>
34 #include <asm/rqspinlock.h>
> 35 #include <crypto/sha2.h>
36
--
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki
^ permalink raw reply [flat|nested] 79+ messages in thread
* Re: [PATCH 07/12] bpf: Return hashes of maps in BPF_OBJ_GET_INFO_BY_FD
2025-06-06 23:29 ` [PATCH 07/12] bpf: Return hashes of maps in BPF_OBJ_GET_INFO_BY_FD KP Singh
2025-06-07 9:26 ` kernel test robot
2025-06-08 13:11 ` kernel test robot
@ 2025-06-09 21:30 ` Alexei Starovoitov
2025-06-11 14:27 ` KP Singh
2 siblings, 1 reply; 79+ messages in thread
From: Alexei Starovoitov @ 2025-06-09 21:30 UTC (permalink / raw)
To: KP Singh
Cc: bpf, LSM List, Blaise Boscaccy, Paul Moore, K. Y. Srinivasan,
Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko
On Fri, Jun 6, 2025 at 4:29 PM KP Singh <kpsingh@kernel.org> wrote:
>
> Currently only array maps are supported, but the implementation can be
> extended for other maps and objects. The hash is memoized only for
> exclusive and frozen maps as their content is stable until the exclusive
> program modifies the map.
>
> This is required for BPF signing, enabling a trusted loader program to
> verify a map's integrity. The loader retrieves
> the map's runtime hash from the kernel and compares it against an
> expected hash computed at build time.
>
> Signed-off-by: KP Singh <kpsingh@kernel.org>
> ---
> include/linux/bpf.h | 3 +++
> include/uapi/linux/bpf.h | 2 ++
> kernel/bpf/arraymap.c | 13 ++++++++++++
> kernel/bpf/syscall.c | 38 ++++++++++++++++++++++++++++++++++
> tools/include/uapi/linux/bpf.h | 2 ++
> 5 files changed, 58 insertions(+)
>
> diff --git a/include/linux/bpf.h b/include/linux/bpf.h
> index cb1bea99702a..35f1a633d87a 100644
> --- a/include/linux/bpf.h
> +++ b/include/linux/bpf.h
> @@ -7,6 +7,7 @@
> #include <uapi/linux/bpf.h>
> #include <uapi/linux/filter.h>
>
> +#include <crypto/sha2.h>
> #include <linux/workqueue.h>
> #include <linux/file.h>
> #include <linux/percpu.h>
> @@ -110,6 +111,7 @@ struct bpf_map_ops {
> long (*map_pop_elem)(struct bpf_map *map, void *value);
> long (*map_peek_elem)(struct bpf_map *map, void *value);
> void *(*map_lookup_percpu_elem)(struct bpf_map *map, void *key, u32 cpu);
> + int (*map_get_hash)(struct bpf_map *map, u32 hash_buf_size, void *hash_buf);
>
> /* funcs called by prog_array and perf_event_array map */
> void *(*map_fd_get_ptr)(struct bpf_map *map, struct file *map_file,
> @@ -262,6 +264,7 @@ struct bpf_list_node_kern {
> } __attribute__((aligned(8)));
>
> struct bpf_map {
> + u8 sha[SHA256_DIGEST_SIZE];
> const struct bpf_map_ops *ops;
> struct bpf_map *inner_map_meta;
> #ifdef CONFIG_SECURITY
> diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
> index 6f2f4f3b3822..ffd9e11befc2 100644
> --- a/include/uapi/linux/bpf.h
> +++ b/include/uapi/linux/bpf.h
> @@ -6630,6 +6630,8 @@ struct bpf_map_info {
> __u32 btf_value_type_id;
> __u32 btf_vmlinux_id;
> __u64 map_extra;
> + __aligned_u64 hash;
> + __u32 hash_size;
> } __attribute__((aligned(8)));
>
> struct bpf_btf_info {
> diff --git a/kernel/bpf/arraymap.c b/kernel/bpf/arraymap.c
> index 8719aa821b63..1fb989db03a2 100644
> --- a/kernel/bpf/arraymap.c
> +++ b/kernel/bpf/arraymap.c
> @@ -12,6 +12,7 @@
> #include <uapi/linux/btf.h>
> #include <linux/rcupdate_trace.h>
> #include <linux/btf_ids.h>
> +#include <crypto/sha256_base.h>
>
> #include "map_in_map.h"
>
> @@ -174,6 +175,17 @@ static void *array_map_lookup_elem(struct bpf_map *map, void *key)
> return array->value + (u64)array->elem_size * (index & array->index_mask);
> }
>
> +static int array_map_get_hash(struct bpf_map *map, u32 hash_buf_size,
> + void *hash_buf)
> +{
> + struct bpf_array *array = container_of(map, struct bpf_array, map);
> +
> + bpf_sha256(array->value, (u64)array->elem_size * array->map.max_entries,
> + hash_buf);
> + memcpy(array->map.sha, hash_buf, sizeof(array->map.sha));
> + return 0;
> +}
> +
> static int array_map_direct_value_addr(const struct bpf_map *map, u64 *imm,
> u32 off)
> {
> @@ -805,6 +817,7 @@ const struct bpf_map_ops array_map_ops = {
> .map_mem_usage = array_map_mem_usage,
> .map_btf_id = &array_map_btf_ids[0],
> .iter_seq_info = &iter_seq_info,
> + .map_get_hash = &array_map_get_hash,
> };
>
> const struct bpf_map_ops percpu_array_map_ops = {
> diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c
> index bef9edcfdb76..c81be07fa4fa 100644
> --- a/kernel/bpf/syscall.c
> +++ b/kernel/bpf/syscall.c
> @@ -1,6 +1,7 @@
> // SPDX-License-Identifier: GPL-2.0-only
> /* Copyright (c) 2011-2014 PLUMgrid, http://plumgrid.com
> */
> +#include <crypto/sha2.h>
> #include <linux/bpf.h>
> #include <linux/bpf-cgroup.h>
> #include <linux/bpf_trace.h>
> @@ -5027,6 +5028,9 @@ static int bpf_map_get_info_by_fd(struct file *file,
> info_len = min_t(u32, sizeof(info), info_len);
>
> memset(&info, 0, sizeof(info));
> + if (copy_from_user(&info, uinfo, info_len))
> + return -EFAULT;
> +
> info.type = map->map_type;
> info.id = map->id;
> info.key_size = map->key_size;
> @@ -5051,6 +5055,40 @@ static int bpf_map_get_info_by_fd(struct file *file,
> return err;
> }
>
> + if (map->ops->map_get_hash && map->frozen && map->excl_prog_sha) {
> + err = map->ops->map_get_hash(map, SHA256_DIGEST_SIZE, &map->sha);
& in &map->sha looks suspicious. Should be just map->sha ?
> + if (err != 0)
> + return err;
> + }
> +
> + if (info.hash) {
> + char __user *uhash = u64_to_user_ptr(info.hash);
> +
> + if (!map->ops->map_get_hash)
> + return -EINVAL;
> +
> + if (info.hash_size < SHA256_DIGEST_SIZE)
Similar to prog let's == here?
> + return -EINVAL;
> +
> + info.hash_size = SHA256_DIGEST_SIZE;
> +
> + if (map->excl_prog_sha && map->frozen) {
> + if (copy_to_user(uhash, map->sha, SHA256_DIGEST_SIZE) !=
> + 0)
> + return -EFAULT;
I would drop above and keep below part only.
> + } else {
> + u8 sha[SHA256_DIGEST_SIZE];
> +
> + err = map->ops->map_get_hash(map, SHA256_DIGEST_SIZE,
> + sha);
Here the kernel can write into map->sha and then copy it to uhash.
I think the concern was to disallow 2nd map_get_hash on exclusive
and frozen map, right?
But I think that won't be an issue for signed lskel loader.
Since the map is frozen the user space cannot modify it.
Since the map is exclusive another bpf prog cannot modify it.
If user space calls map_get_hash 2nd time the sha will be
exactly the same until loader prog writes into the map.
So I see no harm generalizing this bit of code.
I don't have a particular use case in mind,
but it seems fine to allow user space to recompute sha
of exclusive and frozen map.
The loader will check the sha of its map as the very first operation,
so if user space did two map_get_hash() it just wasted cpu cycles.
If user space is calling map_get_hash() while loader prog
reads and writes into it the map->sha will change, but
it doesn't matter to the loader program anymore.
Also I wouldn't special case the !info.hash case for exclusive maps.
It seems cleaner to waste few bytes on stack in
skel_obj_get_info_by_fd() later in patch 9.
Let it point to valid u8 sha[] on stack.
The skel won't use it, but this way we can kernel behavior
consistent.
if info.hash != NULL -> compute sha, update map->sha, copy to user space.
^ permalink raw reply [flat|nested] 79+ messages in thread
* Re: [PATCH 07/12] bpf: Return hashes of maps in BPF_OBJ_GET_INFO_BY_FD
2025-06-09 21:30 ` Alexei Starovoitov
@ 2025-06-11 14:27 ` KP Singh
2025-06-11 15:04 ` Alexei Starovoitov
0 siblings, 1 reply; 79+ messages in thread
From: KP Singh @ 2025-06-11 14:27 UTC (permalink / raw)
To: Alexei Starovoitov
Cc: bpf, LSM List, Blaise Boscaccy, Paul Moore, K. Y. Srinivasan,
Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko
On Mon, Jun 9, 2025 at 11:30 PM Alexei Starovoitov
<alexei.starovoitov@gmail.com> wrote:
>
> On Fri, Jun 6, 2025 at 4:29 PM KP Singh <kpsingh@kernel.org> wrote:
[...]
> >
> > + if (map->ops->map_get_hash && map->frozen && map->excl_prog_sha) {
> > + err = map->ops->map_get_hash(map, SHA256_DIGEST_SIZE, &map->sha);
>
> & in &map->sha looks suspicious. Should be just map->sha ?
yep, fixed.
>
> > + if (err != 0)
> > + return err;
> > + }
> > +
> > + if (info.hash) {
> > + char __user *uhash = u64_to_user_ptr(info.hash);
> > +
> > + if (!map->ops->map_get_hash)
> > + return -EINVAL;
> > +
> > + if (info.hash_size < SHA256_DIGEST_SIZE)
>
> Similar to prog let's == here?
Thanks, yeah agreed.
>
> > + return -EINVAL;
> > +
> > + info.hash_size = SHA256_DIGEST_SIZE;
> > +
> > + if (map->excl_prog_sha && map->frozen) {
> > + if (copy_to_user(uhash, map->sha, SHA256_DIGEST_SIZE) !=
> > + 0)
> > + return -EFAULT;
>
> I would drop above and keep below part only.
>
> > + } else {
> > + u8 sha[SHA256_DIGEST_SIZE];
> > +
> > + err = map->ops->map_get_hash(map, SHA256_DIGEST_SIZE,
> > + sha);
>
> Here the kernel can write into map->sha and then copy it to uhash.
> I think the concern was to disallow 2nd map_get_hash on exclusive
> and frozen map, right?
> But I think that won't be an issue for signed lskel loader.
> Since the map is frozen the user space cannot modify it.
> Since the map is exclusive another bpf prog cannot modify it.
> If user space calls map_get_hash 2nd time the sha will be
> exactly the same until loader prog writes into the map.
> So I see no harm generalizing this bit of code.
> I don't have a particular use case in mind,
> but it seems fine to allow user space to recompute sha
> of exclusive and frozen map.
> The loader will check the sha of its map as the very first operation,
> so if user space did two map_get_hash() it just wasted cpu cycles.
> If user space is calling map_get_hash() while loader prog
> reads and writes into it the map->sha will change, but
> it doesn't matter to the loader program anymore.
>
> Also I wouldn't special case the !info.hash case for exclusive maps.
> It seems cleaner to waste few bytes on stack in
> skel_obj_get_info_by_fd() later in patch 9.
> Let it point to valid u8 sha[] on stack.
> The skel won't use it, but this way we can kernel behavior
> consistent.
> if info.hash != NULL -> compute sha, update map->sha, copy to user space.
Here's what I updated it to:
if (info.hash) {
char __user *uhash = u64_to_user_ptr(info.hash);
if (!map->ops->map_get_hash)
return -EINVAL;
if (info.hash_size != SHA256_DIGEST_SIZE)
return -EINVAL;
if (!map->excl_prog_sha || !map->frozen)
return -EINVAL;
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
I think we still need this check as we want the program to
have exclusive control over the map when the hash is being calculated
right?
err = map->ops->map_get_hash(map, SHA256_DIGEST_SIZE, map->sha);
if (err != 0)
return err;
if (copy_to_user(uhash, map->sha, SHA256_DIGEST_SIZE) != 0)
return -EFAULT;
} else if (info.hash_size) {
return -EINVAL;
}
- KP
^ permalink raw reply [flat|nested] 79+ messages in thread
* Re: [PATCH 07/12] bpf: Return hashes of maps in BPF_OBJ_GET_INFO_BY_FD
2025-06-11 14:27 ` KP Singh
@ 2025-06-11 15:04 ` Alexei Starovoitov
2025-06-11 16:05 ` KP Singh
0 siblings, 1 reply; 79+ messages in thread
From: Alexei Starovoitov @ 2025-06-11 15:04 UTC (permalink / raw)
To: KP Singh
Cc: bpf, LSM List, Blaise Boscaccy, Paul Moore, K. Y. Srinivasan,
Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko
On Wed, Jun 11, 2025 at 7:27 AM KP Singh <kpsingh@kernel.org> wrote:
>
> On Mon, Jun 9, 2025 at 11:30 PM Alexei Starovoitov
> <alexei.starovoitov@gmail.com> wrote:
> >
> > On Fri, Jun 6, 2025 at 4:29 PM KP Singh <kpsingh@kernel.org> wrote:
>
> [...]
>
> > >
> > > + if (map->ops->map_get_hash && map->frozen && map->excl_prog_sha) {
> > > + err = map->ops->map_get_hash(map, SHA256_DIGEST_SIZE, &map->sha);
> >
> > & in &map->sha looks suspicious. Should be just map->sha ?
>
> yep, fixed.
>
> >
> > > + if (err != 0)
> > > + return err;
> > > + }
> > > +
> > > + if (info.hash) {
> > > + char __user *uhash = u64_to_user_ptr(info.hash);
> > > +
> > > + if (!map->ops->map_get_hash)
> > > + return -EINVAL;
> > > +
> > > + if (info.hash_size < SHA256_DIGEST_SIZE)
> >
> > Similar to prog let's == here?
>
> Thanks, yeah agreed.
>
> >
> > > + return -EINVAL;
> > > +
> > > + info.hash_size = SHA256_DIGEST_SIZE;
> > > +
> > > + if (map->excl_prog_sha && map->frozen) {
> > > + if (copy_to_user(uhash, map->sha, SHA256_DIGEST_SIZE) !=
> > > + 0)
> > > + return -EFAULT;
> >
> > I would drop above and keep below part only.
> >
> > > + } else {
> > > + u8 sha[SHA256_DIGEST_SIZE];
> > > +
> > > + err = map->ops->map_get_hash(map, SHA256_DIGEST_SIZE,
> > > + sha);
> >
> > Here the kernel can write into map->sha and then copy it to uhash.
> > I think the concern was to disallow 2nd map_get_hash on exclusive
> > and frozen map, right?
> > But I think that won't be an issue for signed lskel loader.
> > Since the map is frozen the user space cannot modify it.
> > Since the map is exclusive another bpf prog cannot modify it.
> > If user space calls map_get_hash 2nd time the sha will be
> > exactly the same until loader prog writes into the map.
> > So I see no harm generalizing this bit of code.
> > I don't have a particular use case in mind,
> > but it seems fine to allow user space to recompute sha
> > of exclusive and frozen map.
> > The loader will check the sha of its map as the very first operation,
> > so if user space did two map_get_hash() it just wasted cpu cycles.
> > If user space is calling map_get_hash() while loader prog
> > reads and writes into it the map->sha will change, but
> > it doesn't matter to the loader program anymore.
> >
> > Also I wouldn't special case the !info.hash case for exclusive maps.
> > It seems cleaner to waste few bytes on stack in
> > skel_obj_get_info_by_fd() later in patch 9.
> > Let it point to valid u8 sha[] on stack.
> > The skel won't use it, but this way we can kernel behavior
> > consistent.
> > if info.hash != NULL -> compute sha, update map->sha, copy to user space.
>
> Here's what I updated it to:
>
> if (info.hash) {
> char __user *uhash = u64_to_user_ptr(info.hash);
>
> if (!map->ops->map_get_hash)
> return -EINVAL;
>
> if (info.hash_size != SHA256_DIGEST_SIZE)
> return -EINVAL;
>
> if (!map->excl_prog_sha || !map->frozen)
> return -EINVAL;
>
> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> I think we still need this check as we want the program to
> have exclusive control over the map when the hash is being calculated
> right?
Why add such a restriction?
Whether it's frozen or exclusive or both it still races with map_get_hash.
It's up to the user to make sure that the computed hash
will be meaningful.
I would allow for all maps.
The callback will work for arrays initially, but that
can be improved in the future.
> err = map->ops->map_get_hash(map, SHA256_DIGEST_SIZE, map->sha);
> if (err != 0)
> return err;
>
> if (copy_to_user(uhash, map->sha, SHA256_DIGEST_SIZE) != 0)
> return -EFAULT;
> } else if (info.hash_size) {
> return -EINVAL;
> }
yep.
^ permalink raw reply [flat|nested] 79+ messages in thread
* Re: [PATCH 07/12] bpf: Return hashes of maps in BPF_OBJ_GET_INFO_BY_FD
2025-06-11 15:04 ` Alexei Starovoitov
@ 2025-06-11 16:05 ` KP Singh
0 siblings, 0 replies; 79+ messages in thread
From: KP Singh @ 2025-06-11 16:05 UTC (permalink / raw)
To: Alexei Starovoitov
Cc: bpf, LSM List, Blaise Boscaccy, Paul Moore, K. Y. Srinivasan,
Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko
On Wed, Jun 11, 2025 at 5:04 PM Alexei Starovoitov
<alexei.starovoitov@gmail.com> wrote:
>
> On Wed, Jun 11, 2025 at 7:27 AM KP Singh <kpsingh@kernel.org> wrote:
> >
> > On Mon, Jun 9, 2025 at 11:30 PM Alexei Starovoitov
> > <alexei.starovoitov@gmail.com> wrote:
> > >
> > > On Fri, Jun 6, 2025 at 4:29 PM KP Singh <kpsingh@kernel.org> wrote:
> >
> > [...]
> >
> > > >
> > > > + if (map->ops->map_get_hash && map->frozen && map->excl_prog_sha) {
> > > > + err = map->ops->map_get_hash(map, SHA256_DIGEST_SIZE, &map->sha);
> > >
> > > & in &map->sha looks suspicious. Should be just map->sha ?
> >
> > yep, fixed.
> >
> > >
> > > > + if (err != 0)
> > > > + return err;
> > > > + }
> > > > +
> > > > + if (info.hash) {
> > > > + char __user *uhash = u64_to_user_ptr(info.hash);
> > > > +
> > > > + if (!map->ops->map_get_hash)
> > > > + return -EINVAL;
> > > > +
> > > > + if (info.hash_size < SHA256_DIGEST_SIZE)
> > >
> > > Similar to prog let's == here?
> >
> > Thanks, yeah agreed.
> >
> > >
> > > > + return -EINVAL;
> > > > +
> > > > + info.hash_size = SHA256_DIGEST_SIZE;
> > > > +
> > > > + if (map->excl_prog_sha && map->frozen) {
> > > > + if (copy_to_user(uhash, map->sha, SHA256_DIGEST_SIZE) !=
> > > > + 0)
> > > > + return -EFAULT;
> > >
> > > I would drop above and keep below part only.
> > >
> > > > + } else {
> > > > + u8 sha[SHA256_DIGEST_SIZE];
> > > > +
> > > > + err = map->ops->map_get_hash(map, SHA256_DIGEST_SIZE,
> > > > + sha);
> > >
> > > Here the kernel can write into map->sha and then copy it to uhash.
> > > I think the concern was to disallow 2nd map_get_hash on exclusive
> > > and frozen map, right?
> > > But I think that won't be an issue for signed lskel loader.
> > > Since the map is frozen the user space cannot modify it.
> > > Since the map is exclusive another bpf prog cannot modify it.
> > > If user space calls map_get_hash 2nd time the sha will be
> > > exactly the same until loader prog writes into the map.
> > > So I see no harm generalizing this bit of code.
> > > I don't have a particular use case in mind,
> > > but it seems fine to allow user space to recompute sha
> > > of exclusive and frozen map.
> > > The loader will check the sha of its map as the very first operation,
> > > so if user space did two map_get_hash() it just wasted cpu cycles.
> > > If user space is calling map_get_hash() while loader prog
> > > reads and writes into it the map->sha will change, but
> > > it doesn't matter to the loader program anymore.
> > >
> > > Also I wouldn't special case the !info.hash case for exclusive maps.
> > > It seems cleaner to waste few bytes on stack in
> > > skel_obj_get_info_by_fd() later in patch 9.
> > > Let it point to valid u8 sha[] on stack.
> > > The skel won't use it, but this way we can kernel behavior
> > > consistent.
> > > if info.hash != NULL -> compute sha, update map->sha, copy to user space.
> >
> > Here's what I updated it to:
> >
> > if (info.hash) {
> > char __user *uhash = u64_to_user_ptr(info.hash);
> >
> > if (!map->ops->map_get_hash)
> > return -EINVAL;
> >
> > if (info.hash_size != SHA256_DIGEST_SIZE)
> > return -EINVAL;
> >
> > if (!map->excl_prog_sha || !map->frozen)
> > return -EINVAL;
> >
> > ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> > I think we still need this check as we want the program to
> > have exclusive control over the map when the hash is being calculated
> > right?
>
> Why add such a restriction?
> Whether it's frozen or exclusive or both it still races with map_get_hash.
> It's up to the user to make sure that the computed hash
> will be meaningful.
Sure, yeah. I removed the check, they can use the hash in many ways,
even if racy.
- KP
> I would allow for all maps.
^ permalink raw reply [flat|nested] 79+ messages in thread
* [PATCH 08/12] bpf: Implement signature verification for BPF programs
2025-06-06 23:29 [PATCH 00/12] Signed BPF programs KP Singh
` (6 preceding siblings ...)
2025-06-06 23:29 ` [PATCH 07/12] bpf: Return hashes of maps in BPF_OBJ_GET_INFO_BY_FD KP Singh
@ 2025-06-06 23:29 ` KP Singh
2025-06-09 21:39 ` Alexei Starovoitov
2025-06-10 16:37 ` Blaise Boscaccy
2025-06-06 23:29 ` [PATCH 09/12] libbpf: Update light skeleton for signing KP Singh
` (5 subsequent siblings)
13 siblings, 2 replies; 79+ messages in thread
From: KP Singh @ 2025-06-06 23:29 UTC (permalink / raw)
To: bpf, linux-security-module
Cc: bboscaccy, paul, kys, ast, daniel, andrii, KP Singh
This patch extends the BPF_PROG_LOAD command by adding three new fields
to `union bpf_attr` in the user-space API:
- signature: A pointer to the signature blob.
- signature_size: The size of the signature blob.
- keyring_id: The serial number of a loaded kernel keyring (e.g.,
the user or session keyring) containing the trusted public keys.
When a BPF program is loaded with a signature, the kernel:
1. Retrieves the trusted keyring using the provided `keyring_id`.
2. Verifies the supplied signature against the BPF program's
instruction buffer.
3. If the signature is valid and was generated by a key in the trusted
keyring, the program load proceeds.
4. If no signature is provided, the load proceeds as before, allowing
for backward compatibility. LSMs can choose to restrict unsigned
programs and implement a security policy.
5. If signature verification fails for any reason,
the program is not loaded.
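To illustrate the attributes listed above, a user-space sketch (error handling
omitted; it assumes a uapi header carrying the new signature, signature_size
and keyring_id fields, and a PKCS#7 signature blob produced at build time)
might look like:

#include <string.h>
#include <unistd.h>
#include <sys/syscall.h>
#include <linux/bpf.h>
#include <linux/keyctl.h>

static int load_signed_prog(const struct bpf_insn *insns, __u32 insn_cnt,
			    const void *sig, __u32 sig_sz)
{
	union bpf_attr attr;

	memset(&attr, 0, sizeof(attr));
	attr.prog_type = BPF_PROG_TYPE_SYSCALL;
	attr.insns = (__u64)(unsigned long)insns;
	attr.insn_cnt = insn_cnt;
	attr.license = (__u64)(unsigned long)"Dual BSD/GPL";
	attr.signature = (__u64)(unsigned long)sig;
	attr.signature_size = sig_sz;
	/* keyring holding the trusted public keys */
	attr.keyring_id = KEY_SPEC_SESSION_KEYRING;

	return syscall(__NR_bpf, BPF_PROG_LOAD, &attr, sizeof(attr));
}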
Signed-off-by: KP Singh <kpsingh@kernel.org>
---
include/linux/bpf.h | 9 +++++++-
include/uapi/linux/bpf.h | 10 +++++++++
kernel/bpf/syscall.c | 39 +++++++++++++++++++++++++++++++++-
kernel/trace/bpf_trace.c | 6 ++++--
tools/include/uapi/linux/bpf.h | 10 +++++++++
tools/lib/bpf/bpf.c | 2 +-
6 files changed, 71 insertions(+), 5 deletions(-)
diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index 35f1a633d87a..32a41803d61c 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -2778,7 +2778,14 @@ bpf_jit_find_kfunc_model(const struct bpf_prog *prog,
int bpf_get_kfunc_addr(const struct bpf_prog *prog, u32 func_id,
u16 btf_fd_idx, u8 **func_addr);
-struct bpf_core_ctx {
+__bpf_kfunc struct bpf_key *bpf_lookup_user_key(u32 serial, u64 flags);
+__bpf_kfunc struct bpf_key *bpf_lookup_system_key(u64 id);
+__bpf_kfunc void bpf_key_put(struct bpf_key *bkey);
+__bpf_kfunc int bpf_verify_pkcs7_signature(struct bpf_dynptr *data_p,
+ struct bpf_dynptr *sig_p,
+ struct bpf_key *trusted_keyring);
+
+ struct bpf_core_ctx {
struct bpf_verifier_log *log;
const struct btf *btf;
};
diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
index ffd9e11befc2..5f7c82ebe10a 100644
--- a/include/uapi/linux/bpf.h
+++ b/include/uapi/linux/bpf.h
@@ -1589,6 +1589,16 @@ union bpf_attr {
* continuous.
*/
__u32 fd_array_cnt;
+ /* Pointer to a buffer containing the signature of the BPF
+ * program.
+ */
+ __aligned_u64 signature;
+ /* Size of the signature buffer in bytes. */
+ __u32 signature_size;
+ /* ID of the kernel keyring to be used for signature
+ * verification.
+ */
+ __u32 keyring_id;
};
struct { /* anonymous struct used by BPF_OBJ_* commands */
diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c
index c81be07fa4fa..6cd5ba42d946 100644
--- a/kernel/bpf/syscall.c
+++ b/kernel/bpf/syscall.c
@@ -2782,8 +2782,39 @@ static bool is_perfmon_prog_type(enum bpf_prog_type prog_type)
}
}
+static int bpf_prog_verify_signature(struct bpf_prog *prog, union bpf_attr *attr, bpfptr_t uattr)
+{
+ bpfptr_t usig = make_bpfptr(attr->signature, uattr.is_kernel);
+ struct bpf_dynptr_kern sig_ptr, insns_ptr;
+ struct bpf_key *key = NULL;
+ void *sig;
+ int err = 0;
+
+ key = bpf_lookup_user_key(attr->keyring_id, 0);
+ if (!key)
+ return -ENOKEY;
+
+ sig = kvmemdup_bpfptr(usig, attr->signature_size);
+ if (!sig) {
+ bpf_key_put(key);
+ return -ENOMEM;
+ }
+
+ bpf_dynptr_init(&sig_ptr, sig, BPF_DYNPTR_TYPE_LOCAL, 0,
+ attr->signature_size);
+ bpf_dynptr_init(&insns_ptr, prog->insnsi, BPF_DYNPTR_TYPE_LOCAL, 0,
+ prog->len * sizeof(struct bpf_insn));
+
+ err = bpf_verify_pkcs7_signature((struct bpf_dynptr *)&insns_ptr,
+ (struct bpf_dynptr *)&sig_ptr, key);
+
+ bpf_key_put(key);
+ kvfree(sig);
+ return err;
+}
+
/* last field in 'union bpf_attr' used by this command */
-#define BPF_PROG_LOAD_LAST_FIELD fd_array_cnt
+#define BPF_PROG_LOAD_LAST_FIELD keyring_id
static int bpf_prog_load(union bpf_attr *attr, bpfptr_t uattr, u32 uattr_size)
{
@@ -2947,6 +2978,12 @@ static int bpf_prog_load(union bpf_attr *attr, bpfptr_t uattr, u32 uattr_size)
/* eBPF programs must be GPL compatible to use GPL-ed functions */
prog->gpl_compatible = license_is_gpl_compatible(license) ? 1 : 0;
+ if (attr->signature) {
+ err = bpf_prog_verify_signature(prog, attr, uattr);
+ if (err)
+ goto free_prog;
+ }
+
prog->orig_prog = NULL;
prog->jited = 0;
diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
index 132c8be6f635..0cce39e1a9ee 100644
--- a/kernel/trace/bpf_trace.c
+++ b/kernel/trace/bpf_trace.c
@@ -1351,7 +1351,6 @@ __bpf_kfunc void bpf_key_put(struct bpf_key *bkey)
kfree(bkey);
}
-#ifdef CONFIG_SYSTEM_DATA_VERIFICATION
/**
* bpf_verify_pkcs7_signature - verify a PKCS#7 signature
* @data_p: data to verify
@@ -1367,6 +1366,7 @@ __bpf_kfunc int bpf_verify_pkcs7_signature(struct bpf_dynptr *data_p,
struct bpf_dynptr *sig_p,
struct bpf_key *trusted_keyring)
{
+#ifdef CONFIG_SYSTEM_DATA_VERIFICATION
struct bpf_dynptr_kern *data_ptr = (struct bpf_dynptr_kern *)data_p;
struct bpf_dynptr_kern *sig_ptr = (struct bpf_dynptr_kern *)sig_p;
const void *data, *sig;
@@ -1396,8 +1396,10 @@ __bpf_kfunc int bpf_verify_pkcs7_signature(struct bpf_dynptr *data_p,
trusted_keyring->key,
VERIFYING_UNSPECIFIED_SIGNATURE, NULL,
NULL);
-}
+#else
+ return -EOPNOTSUPP;
#endif /* CONFIG_SYSTEM_DATA_VERIFICATION */
+}
__bpf_kfunc_end_defs();
diff --git a/tools/include/uapi/linux/bpf.h b/tools/include/uapi/linux/bpf.h
index ffd9e11befc2..5f7c82ebe10a 100644
--- a/tools/include/uapi/linux/bpf.h
+++ b/tools/include/uapi/linux/bpf.h
@@ -1589,6 +1589,16 @@ union bpf_attr {
* continuous.
*/
__u32 fd_array_cnt;
+ /* Pointer to a buffer containing the signature of the BPF
+ * program.
+ */
+ __aligned_u64 signature;
+ /* Size of the signature buffer in bytes. */
+ __u32 signature_size;
+ /* ID of the kernel keyring to be used for signature
+ * verification.
+ */
+ __u32 keyring_id;
};
struct { /* anonymous struct used by BPF_OBJ_* commands */
diff --git a/tools/lib/bpf/bpf.c b/tools/lib/bpf/bpf.c
index 11fa2d64ccca..1a85cfa4282c 100644
--- a/tools/lib/bpf/bpf.c
+++ b/tools/lib/bpf/bpf.c
@@ -240,7 +240,7 @@ int bpf_prog_load(enum bpf_prog_type prog_type,
const struct bpf_insn *insns, size_t insn_cnt,
struct bpf_prog_load_opts *opts)
{
- const size_t attr_sz = offsetofend(union bpf_attr, fd_array_cnt);
+ const size_t attr_sz = offsetofend(union bpf_attr, keyring_id);
void *finfo = NULL, *linfo = NULL;
const char *func_info, *line_info;
__u32 log_size, log_level, attach_prog_fd, attach_btf_obj_fd;
--
2.43.0
^ permalink raw reply related [flat|nested] 79+ messages in thread
* Re: [PATCH 08/12] bpf: Implement signature verification for BPF programs
2025-06-06 23:29 ` [PATCH 08/12] bpf: Implement signature verification for BPF programs KP Singh
@ 2025-06-09 21:39 ` Alexei Starovoitov
2025-06-10 16:37 ` Blaise Boscaccy
1 sibling, 0 replies; 79+ messages in thread
From: Alexei Starovoitov @ 2025-06-09 21:39 UTC (permalink / raw)
To: KP Singh
Cc: bpf, LSM List, Blaise Boscaccy, Paul Moore, K. Y. Srinivasan,
Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko
On Fri, Jun 6, 2025 at 4:29 PM KP Singh <kpsingh@kernel.org> wrote:
>
> This patch extends the BPF_PROG_LOAD command by adding three new fields
> to `union bpf_attr` in the user-space API:
>
> - signature: A pointer to the signature blob.
> - signature_size: The size of the signature blob.
> - keyring_id: The serial number of a loaded kernel keyring (e.g.,
> the user or session keyring) containing the trusted public keys.
>
> When a BPF program is loaded with a signature, the kernel:
>
> 1. Retrieves the trusted keyring using the provided `keyring_id`.
> 2. Verifies the supplied signature against the BPF program's
> instruction buffer.
> 3. If the signature is valid and was generated by a key in the trusted
> keyring, the program load proceeds.
> 4. If no signature is provided, the load proceeds as before, allowing
> for backward compatibility. LSMs can chose to restrict unsigned
> programs and implement a security policy.
> 5. If signature verification fails for any reason,
> the program is not loaded.
>
> Signed-off-by: KP Singh <kpsingh@kernel.org>
> ---
> include/linux/bpf.h | 9 +++++++-
> include/uapi/linux/bpf.h | 10 +++++++++
> kernel/bpf/syscall.c | 39 +++++++++++++++++++++++++++++++++-
> kernel/trace/bpf_trace.c | 6 ++++--
> tools/include/uapi/linux/bpf.h | 10 +++++++++
> tools/lib/bpf/bpf.c | 2 +-
> 6 files changed, 71 insertions(+), 5 deletions(-)
>
> diff --git a/include/linux/bpf.h b/include/linux/bpf.h
> index 35f1a633d87a..32a41803d61c 100644
> --- a/include/linux/bpf.h
> +++ b/include/linux/bpf.h
> @@ -2778,7 +2778,14 @@ bpf_jit_find_kfunc_model(const struct bpf_prog *prog,
> int bpf_get_kfunc_addr(const struct bpf_prog *prog, u32 func_id,
> u16 btf_fd_idx, u8 **func_addr);
>
> -struct bpf_core_ctx {
> +__bpf_kfunc struct bpf_key *bpf_lookup_user_key(u32 serial, u64 flags);
No need for __bpf_kfunc attribute in prototypes.
It's only meaningful in definition.
> +__bpf_kfunc struct bpf_key *bpf_lookup_system_key(u64 id);
> +__bpf_kfunc void bpf_key_put(struct bpf_key *bkey);
> +__bpf_kfunc int bpf_verify_pkcs7_signature(struct bpf_dynptr *data_p,
> + struct bpf_dynptr *sig_p,
> + struct bpf_key *trusted_keyring);
> +
We probably need to move them to kernel/bpf/helpers.c first.
Since kernel/trace/bpf_trace.c depends on:
config BPF_EVENTS
depends on BPF_SYSCALL
depends on (KPROBE_EVENTS || UPROBE_EVENTS) && PERF_EVENTS
They will still be guarded by CONFIG_KEYS, of course.
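i.e. something along these lines (a rough sketch of the suggested shape, not
the final placement):

/* include/linux/bpf.h: plain prototypes, no __bpf_kfunc, under CONFIG_KEYS */
#ifdef CONFIG_KEYS
struct bpf_key *bpf_lookup_user_key(u32 serial, u64 flags);
struct bpf_key *bpf_lookup_system_key(u64 id);
void bpf_key_put(struct bpf_key *bkey);
int bpf_verify_pkcs7_signature(struct bpf_dynptr *data_p,
			       struct bpf_dynptr *sig_p,
			       struct bpf_key *trusted_keyring);
#endif /* CONFIG_KEYS */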
> + struct bpf_core_ctx {
drop extra tab.
> struct bpf_verifier_log *log;
> const struct btf *btf;
> };
^ permalink raw reply [flat|nested] 79+ messages in thread
* Re: [PATCH 08/12] bpf: Implement signature verification for BPF programs
2025-06-06 23:29 ` [PATCH 08/12] bpf: Implement signature verification for BPF programs KP Singh
2025-06-09 21:39 ` Alexei Starovoitov
@ 2025-06-10 16:37 ` Blaise Boscaccy
1 sibling, 0 replies; 79+ messages in thread
From: Blaise Boscaccy @ 2025-06-10 16:37 UTC (permalink / raw)
To: KP Singh, bpf, linux-security-module
Cc: paul, kys, ast, daniel, andrii, KP Singh
KP Singh <kpsingh@kernel.org> writes:
> This patch extends the BPF_PROG_LOAD command by adding three new fields
> to `union bpf_attr` in the user-space API:
>
> - signature: A pointer to the signature blob.
> - signature_size: The size of the signature blob.
> - keyring_id: The serial number of a loaded kernel keyring (e.g.,
> the user or session keyring) containing the trusted public keys.
>
> When a BPF program is loaded with a signature, the kernel:
>
> 1. Retrieves the trusted keyring using the provided `keyring_id`.
> 2. Verifies the supplied signature against the BPF program's
> instruction buffer.
> 3. If the signature is valid and was generated by a key in the trusted
> keyring, the program load proceeds.
> 4. If no signature is provided, the load proceeds as before, allowing
> for backward compatibility. LSMs can chose to restrict unsigned
> programs and implement a security policy.
> 5. If signature verification fails for any reason,
> the program is not loaded.
>
> Signed-off-by: KP Singh <kpsingh@kernel.org>
> ---
> include/linux/bpf.h | 9 +++++++-
> include/uapi/linux/bpf.h | 10 +++++++++
> kernel/bpf/syscall.c | 39 +++++++++++++++++++++++++++++++++-
> kernel/trace/bpf_trace.c | 6 ++++--
> tools/include/uapi/linux/bpf.h | 10 +++++++++
> tools/lib/bpf/bpf.c | 2 +-
> 6 files changed, 71 insertions(+), 5 deletions(-)
>
> diff --git a/include/linux/bpf.h b/include/linux/bpf.h
> index 35f1a633d87a..32a41803d61c 100644
> --- a/include/linux/bpf.h
> +++ b/include/linux/bpf.h
> @@ -2778,7 +2778,14 @@ bpf_jit_find_kfunc_model(const struct bpf_prog *prog,
> int bpf_get_kfunc_addr(const struct bpf_prog *prog, u32 func_id,
> u16 btf_fd_idx, u8 **func_addr);
>
> -struct bpf_core_ctx {
> +__bpf_kfunc struct bpf_key *bpf_lookup_user_key(u32 serial, u64 flags);
> +__bpf_kfunc struct bpf_key *bpf_lookup_system_key(u64 id);
> +__bpf_kfunc void bpf_key_put(struct bpf_key *bkey);
> +__bpf_kfunc int bpf_verify_pkcs7_signature(struct bpf_dynptr *data_p,
> + struct bpf_dynptr *sig_p,
> + struct bpf_key *trusted_keyring);
> +
> + struct bpf_core_ctx {
> struct bpf_verifier_log *log;
> const struct btf *btf;
> };
> diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
> index ffd9e11befc2..5f7c82ebe10a 100644
> --- a/include/uapi/linux/bpf.h
> +++ b/include/uapi/linux/bpf.h
> @@ -1589,6 +1589,16 @@ union bpf_attr {
> * continuous.
> */
> __u32 fd_array_cnt;
> + /* Pointer to a buffer containing the signature of the BPF
> + * program.
> + */
> + __aligned_u64 signature;
> + /* Size of the signature buffer in bytes. */
> + __u32 signature_size;
> + /* ID of the kernel keyring to be used for signature
> + * verification.
> + */
> + __u32 keyring_id;
> };
>
> struct { /* anonymous struct used by BPF_OBJ_* commands */
> diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c
> index c81be07fa4fa..6cd5ba42d946 100644
> --- a/kernel/bpf/syscall.c
> +++ b/kernel/bpf/syscall.c
> @@ -2782,8 +2782,39 @@ static bool is_perfmon_prog_type(enum bpf_prog_type prog_type)
> }
> }
>
> +static int bpf_prog_verify_signature(struct bpf_prog *prog, union bpf_attr *attr, bpfptr_t uattr)
> +{
> + bpfptr_t usig = make_bpfptr(attr->signature, uattr.is_kernel);
> + struct bpf_dynptr_kern sig_ptr, insns_ptr;
> + struct bpf_key *key = NULL;
> + void *sig;
> + int err = 0;
> +
> + key = bpf_lookup_user_key(attr->keyring_id, 0);
> + if (!key)
> + return -ENOKEY;
> +
> + sig = kvmemdup_bpfptr(usig, attr->signature_size);
> + if (!sig) {
> + bpf_key_put(key);
> + return -ENOMEM;
> + }
> +
> + bpf_dynptr_init(&sig_ptr, sig, BPF_DYNPTR_TYPE_LOCAL, 0,
> + attr->signature_size);
> + bpf_dynptr_init(&insns_ptr, prog->insnsi, BPF_DYNPTR_TYPE_LOCAL, 0,
> + prog->len * sizeof(struct bpf_insn));
> +
> + err = bpf_verify_pkcs7_signature((struct bpf_dynptr *)&insns_ptr,
> + (struct bpf_dynptr *)&sig_ptr, key);
> +
> + bpf_key_put(key);
> + kvfree(sig);
> + return err;
> +}
> +
> /* last field in 'union bpf_attr' used by this command */
> -#define BPF_PROG_LOAD_LAST_FIELD fd_array_cnt
> +#define BPF_PROG_LOAD_LAST_FIELD keyring_id
>
> static int bpf_prog_load(union bpf_attr *attr, bpfptr_t uattr, u32 uattr_size)
> {
> @@ -2947,6 +2978,12 @@ static int bpf_prog_load(union bpf_attr *attr, bpfptr_t uattr, u32 uattr_size)
> /* eBPF programs must be GPL compatible to use GPL-ed functions */
> prog->gpl_compatible = license_is_gpl_compatible(license) ? 1 : 0;
>
> + if (attr->signature) {
> + err = bpf_prog_verify_signature(prog, attr, uattr);
> + if (err)
> + goto free_prog;
> + }
> +
> prog->orig_prog = NULL;
> prog->jited = 0;
>
> diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
> index 132c8be6f635..0cce39e1a9ee 100644
> --- a/kernel/trace/bpf_trace.c
> +++ b/kernel/trace/bpf_trace.c
> @@ -1351,7 +1351,6 @@ __bpf_kfunc void bpf_key_put(struct bpf_key *bkey)
> kfree(bkey);
> }
>
> -#ifdef CONFIG_SYSTEM_DATA_VERIFICATION
> /**
> * bpf_verify_pkcs7_signature - verify a PKCS#7 signature
> * @data_p: data to verify
> @@ -1367,6 +1366,7 @@ __bpf_kfunc int bpf_verify_pkcs7_signature(struct bpf_dynptr *data_p,
> struct bpf_dynptr *sig_p,
> struct bpf_key *trusted_keyring)
> {
> +#ifdef CONFIG_SYSTEM_DATA_VERIFICATION
> struct bpf_dynptr_kern *data_ptr = (struct bpf_dynptr_kern *)data_p;
> struct bpf_dynptr_kern *sig_ptr = (struct bpf_dynptr_kern *)sig_p;
> const void *data, *sig;
> @@ -1396,8 +1396,10 @@ __bpf_kfunc int bpf_verify_pkcs7_signature(struct bpf_dynptr *data_p,
> trusted_keyring->key,
> VERIFYING_UNSPECIFIED_SIGNATURE, NULL,
> NULL);
> -}
The usage for this is no longer unspecified. VERIFYING_BPF_SIGNATURE or
similar would add clarity here.
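For illustration, a dedicated usage (hypothetical name, not part of this
series) could look roughly like:

/* include/linux/verification.h */
enum key_being_used_for {
	/* ... existing usages ... */
	VERIFYING_UNSPECIFIED_SIGNATURE,
	VERIFYING_BPF_SIGNATURE,	/* hypothetical: BPF program signatures */
	NR__KEY_BEING_USED_FOR
};
/* (the key_being_used_for[] string table would need a matching entry) */

/* and the call above would then pass it instead of UNSPECIFIED: */
	return verify_pkcs7_signature(data, data_len, sig, sig_len,
				      trusted_keyring->key,
				      VERIFYING_BPF_SIGNATURE, NULL, NULL);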
> +#else
> + return -EOPNOTSUPP;
> #endif /* CONFIG_SYSTEM_DATA_VERIFICATION */
> +}
>
> __bpf_kfunc_end_defs();
>
> diff --git a/tools/include/uapi/linux/bpf.h b/tools/include/uapi/linux/bpf.h
> index ffd9e11befc2..5f7c82ebe10a 100644
> --- a/tools/include/uapi/linux/bpf.h
> +++ b/tools/include/uapi/linux/bpf.h
> @@ -1589,6 +1589,16 @@ union bpf_attr {
> * continuous.
> */
> __u32 fd_array_cnt;
> + /* Pointer to a buffer containing the signature of the BPF
> + * program.
> + */
> + __aligned_u64 signature;
> + /* Size of the signature buffer in bytes. */
> + __u32 signature_size;
> + /* ID of the kernel keyring to be used for signature
> + * verification.
> + */
> + __u32 keyring_id;
> };
>
> struct { /* anonymous struct used by BPF_OBJ_* commands */
> diff --git a/tools/lib/bpf/bpf.c b/tools/lib/bpf/bpf.c
> index 11fa2d64ccca..1a85cfa4282c 100644
> --- a/tools/lib/bpf/bpf.c
> +++ b/tools/lib/bpf/bpf.c
> @@ -240,7 +240,7 @@ int bpf_prog_load(enum bpf_prog_type prog_type,
> const struct bpf_insn *insns, size_t insn_cnt,
> struct bpf_prog_load_opts *opts)
> {
> - const size_t attr_sz = offsetofend(union bpf_attr, fd_array_cnt);
> + const size_t attr_sz = offsetofend(union bpf_attr, keyring_id);
> void *finfo = NULL, *linfo = NULL;
> const char *func_info, *line_info;
> __u32 log_size, log_level, attach_prog_fd, attach_btf_obj_fd;
> --
> 2.43.0
^ permalink raw reply [flat|nested] 79+ messages in thread
* [PATCH 09/12] libbpf: Update light skeleton for signing
2025-06-06 23:29 [PATCH 00/12] Signed BPF programs KP Singh
` (7 preceding siblings ...)
2025-06-06 23:29 ` [PATCH 08/12] bpf: Implement signature verification for BPF programs KP Singh
@ 2025-06-06 23:29 ` KP Singh
2025-06-09 21:41 ` Alexei Starovoitov
2025-06-06 23:29 ` [PATCH 10/12] libbpf: Embed and verify the metadata hash in the loader KP Singh
` (4 subsequent siblings)
13 siblings, 1 reply; 79+ messages in thread
From: KP Singh @ 2025-06-06 23:29 UTC (permalink / raw)
To: bpf, linux-security-module
Cc: bboscaccy, paul, kys, ast, daniel, andrii, KP Singh
* The metadata map is created as an exclusive map (with an
excl_prog_hash). This restricts map access exclusively to the signed
loader program, preventing tampering by other processes.
* The map is then frozen, making it read-only from userspace.
* BPF_OBJ_GET_INFO_BY_FD instructs the kernel to compute the hash of the
metadata map (H') and store it in bpf_map->sha.
* The loader is then loaded with the signature which is then verified by
the kernel.
The skeleton currently uses the session keyring
(KEY_SPEC_SESSION_KEYRING) by default, but this can
be overridden by the user of the skeleton.
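A rough sketch of the resulting call into the light skeleton runtime follows
(values are placeholders; in practice bpftool emits them into the generated
skeleton, and the types come from skel_internal.h as changed below):

static inline int run_signed_loader(struct bpf_loader_ctx *ctx,
				    const void *meta_blob, __u32 meta_sz,
				    const void *loader_insns, __u32 insns_sz,
				    void *sig_blob, __u32 sig_sz,
				    void *loader_prog_sha256)
{
	struct bpf_load_and_run_opts opts = {
		.ctx = ctx,
		.data = meta_blob,
		.data_sz = meta_sz,
		.insns = loader_insns,
		.insns_sz = insns_sz,
		.signature = sig_blob,
		.signature_sz = sig_sz,
		.excl_prog_hash = loader_prog_sha256,	/* ties the map to the loader */
		.excl_prog_hash_sz = 32,		/* SHA256_DIGEST_SIZE */
		.keyring_id = KEY_SPEC_SESSION_KEYRING,	/* default, can be overridden */
	};

	return bpf_load_and_run(&opts);
}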
Signed-off-by: KP Singh <kpsingh@kernel.org>
---
tools/lib/bpf/skel_internal.h | 57 +++++++++++++++++++++++++++++++++--
1 file changed, 54 insertions(+), 3 deletions(-)
diff --git a/tools/lib/bpf/skel_internal.h b/tools/lib/bpf/skel_internal.h
index 4d5fa079b5d6..25502925ff36 100644
--- a/tools/lib/bpf/skel_internal.h
+++ b/tools/lib/bpf/skel_internal.h
@@ -13,6 +13,7 @@
#include <unistd.h>
#include <sys/syscall.h>
#include <sys/mman.h>
+#include <linux/keyctl.h>
#include <stdlib.h>
#include "bpf.h"
#endif
@@ -64,6 +65,11 @@ struct bpf_load_and_run_opts {
__u32 data_sz;
__u32 insns_sz;
const char *errstr;
+ void *signature;
+ __u32 signature_sz;
+ __u32 keyring_id;
+ void * excl_prog_hash;
+ __u32 excl_prog_hash_sz;
};
long kern_sys_bpf(__u32 cmd, void *attr, __u32 attr_size);
@@ -218,16 +224,21 @@ static inline int skel_closenz(int fd)
static inline int skel_map_create(enum bpf_map_type map_type,
const char *map_name,
+ const void *excl_prog_hash,
+ __u32 excl_prog_hash_sz,
__u32 key_size,
__u32 value_size,
__u32 max_entries)
{
- const size_t attr_sz = offsetofend(union bpf_attr, map_extra);
+ const size_t attr_sz = offsetofend(union bpf_attr, excl_prog_hash);
union bpf_attr attr;
memset(&attr, 0, attr_sz);
attr.map_type = map_type;
+ attr.excl_prog_hash = (unsigned long) excl_prog_hash;
+ attr.excl_prog_hash_size = excl_prog_hash_sz;
+
strncpy(attr.map_name, map_name, sizeof(attr.map_name));
attr.key_size = key_size;
attr.value_size = value_size;
@@ -300,6 +311,26 @@ static inline int skel_link_create(int prog_fd, int target_fd,
return skel_sys_bpf(BPF_LINK_CREATE, &attr, attr_sz);
}
+static inline int skel_obj_get_info_by_fd(int fd)
+{
+ const size_t attr_sz = offsetofend(union bpf_attr, info);
+ union bpf_attr attr;
+
+ memset(&attr, 0, attr_sz);
+ attr.info.bpf_fd = fd;
+ return skel_sys_bpf(BPF_OBJ_GET_INFO_BY_FD, &attr, attr_sz);
+}
+
+static inline int skel_map_freeze(int fd)
+{
+ const size_t attr_sz = offsetofend(union bpf_attr, map_fd);
+ union bpf_attr attr;
+
+ memset(&attr, 0, attr_sz);
+ attr.map_fd = fd;
+
+ return skel_sys_bpf(BPF_MAP_FREEZE, &attr, attr_sz);
+}
#ifdef __KERNEL__
#define set_err
#else
@@ -308,12 +339,15 @@ static inline int skel_link_create(int prog_fd, int target_fd,
static inline int bpf_load_and_run(struct bpf_load_and_run_opts *opts)
{
- const size_t prog_load_attr_sz = offsetofend(union bpf_attr, fd_array);
+ const size_t prog_load_attr_sz = offsetofend(union bpf_attr, keyring_id);
const size_t test_run_attr_sz = offsetofend(union bpf_attr, test);
int map_fd = -1, prog_fd = -1, key = 0, err;
union bpf_attr attr;
- err = map_fd = skel_map_create(BPF_MAP_TYPE_ARRAY, "__loader.map", 4, opts->data_sz, 1);
+ err = map_fd = skel_map_create(BPF_MAP_TYPE_ARRAY, "__loader.map",
+ opts->excl_prog_hash,
+ opts->excl_prog_hash_sz, 4,
+ opts->data_sz, 1);
if (map_fd < 0) {
opts->errstr = "failed to create loader map";
set_err;
@@ -327,10 +361,27 @@ static inline int bpf_load_and_run(struct bpf_load_and_run_opts *opts)
goto out;
}
+ err = skel_map_freeze(map_fd);
+ if (err < 0) {
+ opts->errstr = "failed to freeze map";
+ set_err;
+ goto out;
+ }
+
+ err = skel_obj_get_info_by_fd(map_fd);
+ if (err < 0) {
+ opts->errstr = "failed to fetch obj info";
+ set_err;
+ goto out;
+ }
+
memset(&attr, 0, prog_load_attr_sz);
attr.prog_type = BPF_PROG_TYPE_SYSCALL;
attr.insns = (long) opts->insns;
attr.insn_cnt = opts->insns_sz / sizeof(struct bpf_insn);
+ attr.signature = (long) opts->signature;
+ attr.signature_size = opts->signature_sz;
+ attr.keyring_id = opts->keyring_id;
attr.license = (long) "Dual BSD/GPL";
memcpy(attr.prog_name, "__loader.prog", sizeof("__loader.prog"));
attr.fd_array = (long) &map_fd;
--
2.43.0
^ permalink raw reply related [flat|nested] 79+ messages in thread
* Re: [PATCH 09/12] libbpf: Update light skeleton for signing
2025-06-06 23:29 ` [PATCH 09/12] libbpf: Update light skeleton for signing KP Singh
@ 2025-06-09 21:41 ` Alexei Starovoitov
0 siblings, 0 replies; 79+ messages in thread
From: Alexei Starovoitov @ 2025-06-09 21:41 UTC (permalink / raw)
To: KP Singh
Cc: bpf, LSM List, Blaise Boscaccy, Paul Moore, K. Y. Srinivasan,
Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko
On Fri, Jun 6, 2025 at 4:29 PM KP Singh <kpsingh@kernel.org> wrote:
>
> * The metadata map is created with as an exclusive map (with an
> excl_prog_hash) This restricts map access exclusively to the signed
> loader program, preventing tampering by other processes.
>
> * The map is then frozen, making it read-only from userspace.
>
> * BPF_OBJ_GET_INFO_BY_ID instructs the kernel to compute the hash of the
> metadata map (H') and store it in bpf_map->sha.
>
> * The loader is then loaded with the signature which is then verified by
> the kernel.
>
> The sekeleton currently uses the session keyring
> (KEY_SPEC_SESSION_KEYRING) by default but this can
> be overridden by the user of the skeleton.
>
> Signed-off-by: KP Singh <kpsingh@kernel.org>
> ---
> tools/lib/bpf/skel_internal.h | 57 +++++++++++++++++++++++++++++++++--
> 1 file changed, 54 insertions(+), 3 deletions(-)
>
> diff --git a/tools/lib/bpf/skel_internal.h b/tools/lib/bpf/skel_internal.h
> index 4d5fa079b5d6..25502925ff36 100644
> --- a/tools/lib/bpf/skel_internal.h
> +++ b/tools/lib/bpf/skel_internal.h
> @@ -13,6 +13,7 @@
> #include <unistd.h>
> #include <sys/syscall.h>
> #include <sys/mman.h>
> +#include <linux/keyctl.h>
> #include <stdlib.h>
> #include "bpf.h"
> #endif
> @@ -64,6 +65,11 @@ struct bpf_load_and_run_opts {
> __u32 data_sz;
> __u32 insns_sz;
> const char *errstr;
> + void *signature;
> + __u32 signature_sz;
> + __u32 keyring_id;
> + void * excl_prog_hash;
> + __u32 excl_prog_hash_sz;
> };
>
> long kern_sys_bpf(__u32 cmd, void *attr, __u32 attr_size);
> @@ -218,16 +224,21 @@ static inline int skel_closenz(int fd)
>
> static inline int skel_map_create(enum bpf_map_type map_type,
> const char *map_name,
> + const void *excl_prog_hash,
> + __u32 excl_prog_hash_sz,
> __u32 key_size,
> __u32 value_size,
> __u32 max_entries)
A bit odd to insert new args in the middle. Add them to the end.
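i.e. something like (sketch of the suggested argument order only):

static inline int skel_map_create(enum bpf_map_type map_type,
				  const char *map_name,
				  __u32 key_size,
				  __u32 value_size,
				  __u32 max_entries,
				  const void *excl_prog_hash,
				  __u32 excl_prog_hash_sz);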
^ permalink raw reply [flat|nested] 79+ messages in thread
* [PATCH 10/12] libbpf: Embed and verify the metadata hash in the loader
2025-06-06 23:29 [PATCH 00/12] Signed BPF programs KP Singh
` (8 preceding siblings ...)
2025-06-06 23:29 ` [PATCH 09/12] libbpf: Update light skeleton for signing KP Singh
@ 2025-06-06 23:29 ` KP Singh
2025-06-10 0:08 ` Alexei Starovoitov
` (2 more replies)
2025-06-06 23:29 ` [PATCH 11/12] bpftool: Add support for signing BPF programs KP Singh
` (3 subsequent siblings)
13 siblings, 3 replies; 79+ messages in thread
From: KP Singh @ 2025-06-06 23:29 UTC (permalink / raw)
To: bpf, linux-security-module
Cc: bboscaccy, paul, kys, ast, daniel, andrii, KP Singh
To fulfill the BPF signing contract, represented as Sig(I_loader ||
H_meta), the generated trusted loader program must verify the integrity
of the metadata. This signature cryptographically binds the loader's
instructions (I_loader) to a hash of the metadata (H_meta).
The verification process is embedded directly into the loader program.
Upon execution, the loader loads the runtime hash from struct bpf_map
i.e. BPF_PSEUDO_MAP_IDX and compares this runtime hash against an
expected hash value that has been hardcoded directly by
bpf_obj__gen_loader.
The load from the bpf_map can be improved by calling
BPF_OBJ_GET_INFO_BY_FD from the kernel context once BPF_OBJ_GET_INFO_BY_FD
has been updated to support being called from the kernel context.
The following instructions are generated:
ld_imm64 r1, const_ptr_to_map // insn[0].src_reg == BPF_PSEUDO_MAP_IDX
r2 = *(u64 *)(r1 + 0);
ld_imm64 r3, sha256_of_map_part1 // constant precomputed by
bpftool (part of H_meta)
if r2 != r3 goto out;
r2 = *(u64 *)(r1 + 8);
ld_imm64 r3, sha256_of_map_part2 // (part of H_meta)
if r2 != r3 goto out;
r2 = *(u64 *)(r1 + 16);
ld_imm64 r3, sha256_of_map_part3 // (part of H_meta)
if r2 != r3 goto out;
r2 = *(u64 *)(r1 + 24);
ld_imm64 r3, sha256_of_map_part4 // (part of H_meta)
if r2 != r3 goto out;
...
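For reference, the sha256_of_map_partN constants are simply the 32-byte
SHA256 digest of the metadata viewed as four native-endian 64-bit words, the
same view gen_loader.c uses when patching the ld_imm64 instructions (sketch,
not part of the patch; h_meta_digest is a placeholder for the 32-byte digest):

	__u64 part[4];

	memcpy(part, h_meta_digest, sizeof(part));
	/* part[i] then lands in the i-th ld_imm64 above:
	 *   insn[0].imm = (__u32)part[i];  insn[1].imm = part[i] >> 32;
	 */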
Signed-off-by: KP Singh <kpsingh@kernel.org>
---
tools/lib/bpf/bpf_gen_internal.h | 2 ++
tools/lib/bpf/gen_loader.c | 52 ++++++++++++++++++++++++++++++++
tools/lib/bpf/libbpf.h | 3 +-
3 files changed, 56 insertions(+), 1 deletion(-)
diff --git a/tools/lib/bpf/bpf_gen_internal.h b/tools/lib/bpf/bpf_gen_internal.h
index 6ff963a491d9..49af4260b8e6 100644
--- a/tools/lib/bpf/bpf_gen_internal.h
+++ b/tools/lib/bpf/bpf_gen_internal.h
@@ -4,6 +4,7 @@
#define __BPF_GEN_INTERNAL_H
#include "bpf.h"
+#include "libbpf_internal.h"
struct ksym_relo_desc {
const char *name;
@@ -50,6 +51,7 @@ struct bpf_gen {
__u32 nr_ksyms;
int fd_array;
int nr_fd_array;
+ int hash_insn_offset[SHA256_DWORD_SIZE];
};
void bpf_gen__init(struct bpf_gen *gen, int log_level, int nr_progs, int nr_maps);
diff --git a/tools/lib/bpf/gen_loader.c b/tools/lib/bpf/gen_loader.c
index 113ae4abd345..3d672c09e948 100644
--- a/tools/lib/bpf/gen_loader.c
+++ b/tools/lib/bpf/gen_loader.c
@@ -110,6 +110,7 @@ static void emit2(struct bpf_gen *gen, struct bpf_insn insn1, struct bpf_insn in
static int add_data(struct bpf_gen *gen, const void *data, __u32 size);
static void emit_sys_close_blob(struct bpf_gen *gen, int blob_off);
+static void bpf_gen__signature_match(struct bpf_gen *gen);
void bpf_gen__init(struct bpf_gen *gen, int log_level, int nr_progs, int nr_maps)
{
@@ -152,6 +153,8 @@ void bpf_gen__init(struct bpf_gen *gen, int log_level, int nr_progs, int nr_maps
/* R7 contains the error code from sys_bpf. Copy it into R0 and exit. */
emit(gen, BPF_MOV64_REG(BPF_REG_0, BPF_REG_7));
emit(gen, BPF_EXIT_INSN());
+ if (gen->opts->gen_hash)
+ bpf_gen__signature_match(gen);
}
static int add_data(struct bpf_gen *gen, const void *data, __u32 size)
@@ -368,6 +371,25 @@ static void emit_sys_close_blob(struct bpf_gen *gen, int blob_off)
__emit_sys_close(gen);
}
+static int compute_sha_udpate_offsets(struct bpf_gen *gen)
+{
+ __u64 sha[SHA256_DWORD_SIZE];
+ int i, err;
+
+ err = libbpf_sha256(gen->data_start, gen->data_cur - gen->data_start, sha);
+ if (err < 0) {
+ pr_warn("sha256 computation of the metadata failed");
+ return err;
+ }
+ for (i = 0; i < SHA256_DWORD_SIZE; i++) {
+ struct bpf_insn *insn =
+ (struct bpf_insn *)(gen->insn_start + gen->hash_insn_offset[i]);
+ insn[0].imm = (__u32)sha[i];
+ insn[1].imm = sha[i] >> 32;
+ }
+ return 0;
+}
+
int bpf_gen__finish(struct bpf_gen *gen, int nr_progs, int nr_maps)
{
int i;
@@ -394,6 +416,12 @@ int bpf_gen__finish(struct bpf_gen *gen, int nr_progs, int nr_maps)
blob_fd_array_off(gen, i));
emit(gen, BPF_MOV64_IMM(BPF_REG_0, 0));
emit(gen, BPF_EXIT_INSN());
+ if (gen->opts->gen_hash) {
+ gen->error = compute_sha_udpate_offsets(gen);
+ if (gen->error)
+ return gen->error;
+ }
+
pr_debug("gen: finish %s\n", errstr(gen->error));
if (!gen->error) {
struct gen_loader_opts *opts = gen->opts;
@@ -557,6 +585,30 @@ void bpf_gen__map_create(struct bpf_gen *gen,
emit_sys_close_stack(gen, stack_off(inner_map_fd));
}
+static void bpf_gen__signature_match(struct bpf_gen *gen)
+{
+ __s64 off = -(gen->insn_cur - gen->insn_start - gen->cleanup_label) / 8 - 1;
+ int i;
+
+ for (i = 0; i < SHA256_DWORD_SIZE; i++) {
+ emit2(gen, BPF_LD_IMM64_RAW_FULL(BPF_REG_1, BPF_PSEUDO_MAP_IDX,
+ 0, 0, 0, 0));
+ emit(gen, BPF_LDX_MEM(BPF_DW, BPF_REG_2, BPF_REG_1, i * sizeof(__u64)));
+ gen->hash_insn_offset[i] = gen->insn_cur - gen->insn_start;
+ emit2(gen,
+ BPF_LD_IMM64_RAW_FULL(BPF_REG_3, 0, 0, 0, 0, 0));
+
+ if (is_simm16(off)) {
+ emit(gen, BPF_MOV64_IMM(BPF_REG_7, -EINVAL));
+ emit(gen,
+ BPF_JMP_REG(BPF_JNE, BPF_REG_2, BPF_REG_3, off));
+ } else {
+ gen->error = -ERANGE;
+ emit(gen, BPF_JMP_IMM(BPF_JA, 0, 0, -1));
+ }
+ }
+}
+
void bpf_gen__record_attach_target(struct bpf_gen *gen, const char *attach_name,
enum bpf_attach_type type)
{
diff --git a/tools/lib/bpf/libbpf.h b/tools/lib/bpf/libbpf.h
index b6ee9870523a..084372fa54f4 100644
--- a/tools/lib/bpf/libbpf.h
+++ b/tools/lib/bpf/libbpf.h
@@ -1803,9 +1803,10 @@ struct gen_loader_opts {
const char *insns;
__u32 data_sz;
__u32 insns_sz;
+ bool gen_hash;
};
-#define gen_loader_opts__last_field insns_sz
+#define gen_loader_opts__last_field gen_hash
LIBBPF_API int bpf_object__gen_loader(struct bpf_object *obj,
struct gen_loader_opts *opts);
--
2.43.0
^ permalink raw reply related [flat|nested] 79+ messages in thread
* Re: [PATCH 10/12] libbpf: Embed and verify the metadata hash in the loader
2025-06-06 23:29 ` [PATCH 10/12] libbpf: Embed and verify the metadata hash in the loader KP Singh
@ 2025-06-10 0:08 ` Alexei Starovoitov
2025-06-10 16:51 ` Blaise Boscaccy
2025-06-12 22:56 ` Andrii Nakryiko
2 siblings, 0 replies; 79+ messages in thread
From: Alexei Starovoitov @ 2025-06-10 0:08 UTC (permalink / raw)
To: KP Singh
Cc: bpf, LSM List, Blaise Boscaccy, Paul Moore, K. Y. Srinivasan,
Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko
On Fri, Jun 6, 2025 at 4:29 PM KP Singh <kpsingh@kernel.org> wrote:
>
> To fulfill the BPF signing contract, represented as Sig(I_loader ||
> H_meta), the generated trusted loader program must verify the integrity
> of the metadata. This signature cryptographically binds the loader's
> instructions (I_loader) to a hash of the metadata (H_meta).
>
> The verification process is embedded directly into the loader program.
> Upon execution, the loader loads the runtime hash from struct bpf_map
> i.e. BPF_PSEUDO_MAP_IDX and compares this runtime hash against an
> expected hash value that has been hardcoded directly by
> bpf_obj__gen_loader.
>
> The load from bpf_map can be improved by calling
> BPF_OBJ_GET_INFO_BY_FD from the kernel context after BPF_OBJ_GET_INFO_BY_FD
> has been updated for being called from the kernel context.
>
> The following instructions are generated:
>
> ld_imm64 r1, const_ptr_to_map // insn[0].src_reg == BPF_PSEUDO_MAP_IDX
> r2 = *(u64 *)(r1 + 0);
> ld_imm64 r3, sha256_of_map_part1 // constant precomputed by
> bpftool (part of H_meta)
> if r2 != r3 goto out;
>
> r2 = *(u64 *)(r1 + 8);
> ld_imm64 r3, sha256_of_map_part2 // (part of H_meta)
> if r2 != r3 goto out;
>
> r2 = *(u64 *)(r1 + 16);
> ld_imm64 r3, sha256_of_map_part3 // (part of H_meta)
> if r2 != r3 goto out;
>
> r2 = *(u64 *)(r1 + 24);
> ld_imm64 r3, sha256_of_map_part4 // (part of H_meta)
> if r2 != r3 goto out;
> ...
>
> Signed-off-by: KP Singh <kpsingh@kernel.org>
> ---
> tools/lib/bpf/bpf_gen_internal.h | 2 ++
> tools/lib/bpf/gen_loader.c | 52 ++++++++++++++++++++++++++++++++
> tools/lib/bpf/libbpf.h | 3 +-
> 3 files changed, 56 insertions(+), 1 deletion(-)
>
> diff --git a/tools/lib/bpf/bpf_gen_internal.h b/tools/lib/bpf/bpf_gen_internal.h
> index 6ff963a491d9..49af4260b8e6 100644
> --- a/tools/lib/bpf/bpf_gen_internal.h
> +++ b/tools/lib/bpf/bpf_gen_internal.h
> @@ -4,6 +4,7 @@
> #define __BPF_GEN_INTERNAL_H
>
> #include "bpf.h"
> +#include "libbpf_internal.h"
>
> struct ksym_relo_desc {
> const char *name;
> @@ -50,6 +51,7 @@ struct bpf_gen {
> __u32 nr_ksyms;
> int fd_array;
> int nr_fd_array;
> + int hash_insn_offset[SHA256_DWORD_SIZE];
> };
>
> void bpf_gen__init(struct bpf_gen *gen, int log_level, int nr_progs, int nr_maps);
> diff --git a/tools/lib/bpf/gen_loader.c b/tools/lib/bpf/gen_loader.c
> index 113ae4abd345..3d672c09e948 100644
> --- a/tools/lib/bpf/gen_loader.c
> +++ b/tools/lib/bpf/gen_loader.c
> @@ -110,6 +110,7 @@ static void emit2(struct bpf_gen *gen, struct bpf_insn insn1, struct bpf_insn in
>
> static int add_data(struct bpf_gen *gen, const void *data, __u32 size);
> static void emit_sys_close_blob(struct bpf_gen *gen, int blob_off);
> +static void bpf_gen__signature_match(struct bpf_gen *gen);
>
> void bpf_gen__init(struct bpf_gen *gen, int log_level, int nr_progs, int nr_maps)
> {
> @@ -152,6 +153,8 @@ void bpf_gen__init(struct bpf_gen *gen, int log_level, int nr_progs, int nr_maps
> /* R7 contains the error code from sys_bpf. Copy it into R0 and exit. */
> emit(gen, BPF_MOV64_REG(BPF_REG_0, BPF_REG_7));
> emit(gen, BPF_EXIT_INSN());
> + if (gen->opts->gen_hash)
> + bpf_gen__signature_match(gen);
> }
>
> static int add_data(struct bpf_gen *gen, const void *data, __u32 size)
> @@ -368,6 +371,25 @@ static void emit_sys_close_blob(struct bpf_gen *gen, int blob_off)
> __emit_sys_close(gen);
> }
>
> +static int compute_sha_udpate_offsets(struct bpf_gen *gen)
> +{
> + __u64 sha[SHA256_DWORD_SIZE];
> + int i, err;
> +
> + err = libbpf_sha256(gen->data_start, gen->data_cur - gen->data_start, sha);
> + if (err < 0) {
> + pr_warn("sha256 computation of the metadata failed");
> + return err;
> + }
> + for (i = 0; i < SHA256_DWORD_SIZE; i++) {
> + struct bpf_insn *insn =
> + (struct bpf_insn *)(gen->insn_start + gen->hash_insn_offset[i]);
Is there a reason to use offset instead of pointers?
Instead of
int hash_insn_offset[SHA256_DWORD_SIZE];
it could be
struct bpf_insn *hash_insn[SHA256_DWORD_SIZE];
> + insn[0].imm = (__u32)sha[i];
> + insn[1].imm = sha[i] >> 32;
Then above will be gen->hash_insn[i][0].imm ?
> + }
> + return 0;
> +}
> +
> int bpf_gen__finish(struct bpf_gen *gen, int nr_progs, int nr_maps)
> {
> int i;
> @@ -394,6 +416,12 @@ int bpf_gen__finish(struct bpf_gen *gen, int nr_progs, int nr_maps)
> blob_fd_array_off(gen, i));
> emit(gen, BPF_MOV64_IMM(BPF_REG_0, 0));
> emit(gen, BPF_EXIT_INSN());
> + if (gen->opts->gen_hash) {
> + gen->error = compute_sha_udpate_offsets(gen);
> + if (gen->error)
> + return gen->error;
> + }
> +
> pr_debug("gen: finish %s\n", errstr(gen->error));
> if (!gen->error) {
> struct gen_loader_opts *opts = gen->opts;
> @@ -557,6 +585,30 @@ void bpf_gen__map_create(struct bpf_gen *gen,
> emit_sys_close_stack(gen, stack_off(inner_map_fd));
> }
>
> +static void bpf_gen__signature_match(struct bpf_gen *gen)
> +{
> + __s64 off = -(gen->insn_cur - gen->insn_start - gen->cleanup_label) / 8 - 1;
> + int i;
> +
> + for (i = 0; i < SHA256_DWORD_SIZE; i++) {
> + emit2(gen, BPF_LD_IMM64_RAW_FULL(BPF_REG_1, BPF_PSEUDO_MAP_IDX,
> + 0, 0, 0, 0));
> + emit(gen, BPF_LDX_MEM(BPF_DW, BPF_REG_2, BPF_REG_1, i * sizeof(__u64)));
> + gen->hash_insn_offset[i] = gen->insn_cur - gen->insn_start;
and this will be
gen->hash_insn[i] = gen->insn_cur;
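For illustration, a minimal sketch of that pointer-based variant (the
hash_insn field is hypothetical, and it assumes the instruction buffer is
not reallocated or moved between recording the pointer and patching the
immediates):

struct bpf_gen {
	/* ... existing fields ... */
	/* pointers into the insn buffer instead of byte offsets */
	struct bpf_insn *hash_insn[SHA256_DWORD_SIZE];
};

/* when emitting the ld_imm64 placeholder for dword i of the hash */
gen->hash_insn[i] = (struct bpf_insn *)gen->insn_cur;
emit2(gen, BPF_LD_IMM64_RAW_FULL(BPF_REG_3, 0, 0, 0, 0, 0));

/* later, once the metadata digest has been computed */
gen->hash_insn[i][0].imm = (__u32)sha[i];
gen->hash_insn[i][1].imm = sha[i] >> 32;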
^ permalink raw reply [flat|nested] 79+ messages in thread
* Re: [PATCH 10/12] libbpf: Embed and verify the metadata hash in the loader
2025-06-06 23:29 ` [PATCH 10/12] libbpf: Embed and verify the metadata hash in the loader KP Singh
2025-06-10 0:08 ` Alexei Starovoitov
@ 2025-06-10 16:51 ` Blaise Boscaccy
2025-06-10 17:43 ` KP Singh
2025-06-12 22:56 ` Andrii Nakryiko
2 siblings, 1 reply; 79+ messages in thread
From: Blaise Boscaccy @ 2025-06-10 16:51 UTC (permalink / raw)
To: KP Singh, bpf, linux-security-module
Cc: paul, kys, ast, daniel, andrii, KP Singh
KP Singh <kpsingh@kernel.org> writes:
> To fulfill the BPF signing contract, represented as Sig(I_loader ||
> H_meta), the generated trusted loader program must verify the integrity
> of the metadata. This signature cryptographically binds the loader's
> instructions (I_loader) to a hash of the metadata (H_meta).
>
> The verification process is embedded directly into the loader program.
> Upon execution, the loader loads the runtime hash from struct bpf_map
> i.e. BPF_PSEUDO_MAP_IDX and compares this runtime hash against an
> expected hash value that has been hardcoded directly by
> bpf_obj__gen_loader.
>
> The load from bpf_map can be improved by calling
> BPF_OBJ_GET_INFO_BY_FD from the kernel context after BPF_OBJ_GET_INFO_BY_FD
> has been updated for being called from the kernel context.
>
> The following instructions are generated:
>
> ld_imm64 r1, const_ptr_to_map // insn[0].src_reg == BPF_PSEUDO_MAP_IDX
> r2 = *(u64 *)(r1 + 0);
> ld_imm64 r3, sha256_of_map_part1 // constant precomputed by
> bpftool (part of H_meta)
> if r2 != r3 goto out;
>
> r2 = *(u64 *)(r1 + 8);
> ld_imm64 r3, sha256_of_map_part2 // (part of H_meta)
> if r2 != r3 goto out;
>
> r2 = *(u64 *)(r1 + 16);
> ld_imm64 r3, sha256_of_map_part3 // (part of H_meta)
> if r2 != r3 goto out;
>
> r2 = *(u64 *)(r1 + 24);
> ld_imm64 r3, sha256_of_map_part4 // (part of H_meta)
> if r2 != r3 goto out;
> ...
>
> Signed-off-by: KP Singh <kpsingh@kernel.org>
> ---
> tools/lib/bpf/bpf_gen_internal.h | 2 ++
> tools/lib/bpf/gen_loader.c | 52 ++++++++++++++++++++++++++++++++
> tools/lib/bpf/libbpf.h | 3 +-
> 3 files changed, 56 insertions(+), 1 deletion(-)
>
> diff --git a/tools/lib/bpf/bpf_gen_internal.h b/tools/lib/bpf/bpf_gen_internal.h
> index 6ff963a491d9..49af4260b8e6 100644
> --- a/tools/lib/bpf/bpf_gen_internal.h
> +++ b/tools/lib/bpf/bpf_gen_internal.h
> @@ -4,6 +4,7 @@
> #define __BPF_GEN_INTERNAL_H
>
> #include "bpf.h"
> +#include "libbpf_internal.h"
>
> struct ksym_relo_desc {
> const char *name;
> @@ -50,6 +51,7 @@ struct bpf_gen {
> __u32 nr_ksyms;
> int fd_array;
> int nr_fd_array;
> + int hash_insn_offset[SHA256_DWORD_SIZE];
> };
>
> void bpf_gen__init(struct bpf_gen *gen, int log_level, int nr_progs, int nr_maps);
> diff --git a/tools/lib/bpf/gen_loader.c b/tools/lib/bpf/gen_loader.c
> index 113ae4abd345..3d672c09e948 100644
> --- a/tools/lib/bpf/gen_loader.c
> +++ b/tools/lib/bpf/gen_loader.c
> @@ -110,6 +110,7 @@ static void emit2(struct bpf_gen *gen, struct bpf_insn insn1, struct bpf_insn in
>
> static int add_data(struct bpf_gen *gen, const void *data, __u32 size);
> static void emit_sys_close_blob(struct bpf_gen *gen, int blob_off);
> +static void bpf_gen__signature_match(struct bpf_gen *gen);
>
> void bpf_gen__init(struct bpf_gen *gen, int log_level, int nr_progs, int nr_maps)
> {
> @@ -152,6 +153,8 @@ void bpf_gen__init(struct bpf_gen *gen, int log_level, int nr_progs, int nr_maps
> /* R7 contains the error code from sys_bpf. Copy it into R0 and exit. */
> emit(gen, BPF_MOV64_REG(BPF_REG_0, BPF_REG_7));
> emit(gen, BPF_EXIT_INSN());
> + if (gen->opts->gen_hash)
> + bpf_gen__signature_match(gen);
> }
>
> static int add_data(struct bpf_gen *gen, const void *data, __u32 size)
> @@ -368,6 +371,25 @@ static void emit_sys_close_blob(struct bpf_gen *gen, int blob_off)
> __emit_sys_close(gen);
> }
>
> +static int compute_sha_udpate_offsets(struct bpf_gen *gen)
> +{
> + __u64 sha[SHA256_DWORD_SIZE];
> + int i, err;
> +
> + err = libbpf_sha256(gen->data_start, gen->data_cur - gen->data_start, sha);
> + if (err < 0) {
> + pr_warn("sha256 computation of the metadata failed");
> + return err;
> + }
> + for (i = 0; i < SHA256_DWORD_SIZE; i++) {
> + struct bpf_insn *insn =
> + (struct bpf_insn *)(gen->insn_start + gen->hash_insn_offset[i]);
> + insn[0].imm = (__u32)sha[i];
> + insn[1].imm = sha[i] >> 32;
> + }
> + return 0;
> +}
> +
> int bpf_gen__finish(struct bpf_gen *gen, int nr_progs, int nr_maps)
> {
> int i;
> @@ -394,6 +416,12 @@ int bpf_gen__finish(struct bpf_gen *gen, int nr_progs, int nr_maps)
> blob_fd_array_off(gen, i));
> emit(gen, BPF_MOV64_IMM(BPF_REG_0, 0));
> emit(gen, BPF_EXIT_INSN());
> + if (gen->opts->gen_hash) {
> + gen->error = compute_sha_udpate_offsets(gen);
> + if (gen->error)
> + return gen->error;
> + }
> +
> pr_debug("gen: finish %s\n", errstr(gen->error));
> if (!gen->error) {
> struct gen_loader_opts *opts = gen->opts;
> @@ -557,6 +585,30 @@ void bpf_gen__map_create(struct bpf_gen *gen,
> emit_sys_close_stack(gen, stack_off(inner_map_fd));
> }
>
> +static void bpf_gen__signature_match(struct bpf_gen *gen)
> +{
> + __s64 off = -(gen->insn_cur - gen->insn_start - gen->cleanup_label) / 8 - 1;
> + int i;
> +
> + for (i = 0; i < SHA256_DWORD_SIZE; i++) {
> + emit2(gen, BPF_LD_IMM64_RAW_FULL(BPF_REG_1, BPF_PSEUDO_MAP_IDX,
> + 0, 0, 0, 0));
> + emit(gen, BPF_LDX_MEM(BPF_DW, BPF_REG_2, BPF_REG_1, i * sizeof(__u64)));
> + gen->hash_insn_offset[i] = gen->insn_cur - gen->insn_start;
> + emit2(gen,
> + BPF_LD_IMM64_RAW_FULL(BPF_REG_3, 0, 0, 0, 0, 0));
> +
> + if (is_simm16(off)) {
> + emit(gen, BPF_MOV64_IMM(BPF_REG_7, -EINVAL));
> + emit(gen,
> + BPF_JMP_REG(BPF_JNE, BPF_REG_2, BPF_REG_3, off));
> + } else {
> + gen->error = -ERANGE;
> + emit(gen, BPF_JMP_IMM(BPF_JA, 0, 0, -1));
> + }
> + }
> +}
> +
The above code gets generated per-program and exists out-of-tree in a
very unreadable format in its final form. I have general objections to
being forced to "trust" out-of-tree code, when it's demonstrably trivial
to perform this check in-kernel, without impeding any of the other
stated use cases. There is no possible audit log nor LSM hook for these
operations. There is no way to know that this check was ever performed.
Further, this check ends up happening in an entirely different syscall,
the LSM layer and the end user may both see invalid programs successfully
being loaded into the kernel, that may fail mysteriously later.
Also, this patch seems to rely on hacking into struct internals and
magic binary layouts.
-blaise
> void bpf_gen__record_attach_target(struct bpf_gen *gen, const char *attach_name,
> enum bpf_attach_type type)
> {
> diff --git a/tools/lib/bpf/libbpf.h b/tools/lib/bpf/libbpf.h
> index b6ee9870523a..084372fa54f4 100644
> --- a/tools/lib/bpf/libbpf.h
> +++ b/tools/lib/bpf/libbpf.h
> @@ -1803,9 +1803,10 @@ struct gen_loader_opts {
> const char *insns;
> __u32 data_sz;
> __u32 insns_sz;
> + bool gen_hash;
> };
>
> -#define gen_loader_opts__last_field insns_sz
> +#define gen_loader_opts__last_field gen_hash
> LIBBPF_API int bpf_object__gen_loader(struct bpf_object *obj,
> struct gen_loader_opts *opts);
>
> --
> 2.43.0
^ permalink raw reply [flat|nested] 79+ messages in thread
* Re: [PATCH 10/12] libbpf: Embed and verify the metadata hash in the loader
2025-06-10 16:51 ` Blaise Boscaccy
@ 2025-06-10 17:43 ` KP Singh
2025-06-10 18:15 ` Blaise Boscaccy
0 siblings, 1 reply; 79+ messages in thread
From: KP Singh @ 2025-06-10 17:43 UTC (permalink / raw)
To: Blaise Boscaccy
Cc: bpf, linux-security-module, paul, kys, ast, daniel, andrii
On Tue, Jun 10, 2025 at 6:51 PM Blaise Boscaccy
<bboscaccy@linux.microsoft.com> wrote:
>
> KP Singh <kpsingh@kernel.org> writes:
>
> > To fulfill the BPF signing contract, represented as Sig(I_loader ||
> > H_meta), the generated trusted loader program must verify the integrity
> > of the metadata. This signature cryptographically binds the loader's
> > instructions (I_loader) to a hash of the metadata (H_meta).
> >
> > The verification process is embedded directly into the loader program.
> > Upon execution, the loader loads the runtime hash from struct bpf_map
> > i.e. BPF_PSEUDO_MAP_IDX and compares this runtime hash against an
> > expected hash value that has been hardcoded directly by
> > bpf_obj__gen_loader.
> >
> > The load from bpf_map can be improved by calling
> > BPF_OBJ_GET_INFO_BY_FD from the kernel context after BPF_OBJ_GET_INFO_BY_FD
> > has been updated for being called from the kernel context.
> >
> > The following instructions are generated:
> >
> > ld_imm64 r1, const_ptr_to_map // insn[0].src_reg == BPF_PSEUDO_MAP_IDX
> > r2 = *(u64 *)(r1 + 0);
> > ld_imm64 r3, sha256_of_map_part1 // constant precomputed by
> > bpftool (part of H_meta)
> > if r2 != r3 goto out;
> >
> > r2 = *(u64 *)(r1 + 8);
> > ld_imm64 r3, sha256_of_map_part2 // (part of H_meta)
> > if r2 != r3 goto out;
> >
> > r2 = *(u64 *)(r1 + 16);
> > ld_imm64 r3, sha256_of_map_part3 // (part of H_meta)
> > if r2 != r3 goto out;
> >
> > r2 = *(u64 *)(r1 + 24);
> > ld_imm64 r3, sha256_of_map_part4 // (part of H_meta)
> > if r2 != r3 goto out;
> > ...
> >
> > Signed-off-by: KP Singh <kpsingh@kernel.org>
> > ---
> > tools/lib/bpf/bpf_gen_internal.h | 2 ++
> > tools/lib/bpf/gen_loader.c | 52 ++++++++++++++++++++++++++++++++
> > tools/lib/bpf/libbpf.h | 3 +-
> > 3 files changed, 56 insertions(+), 1 deletion(-)
> >
> > diff --git a/tools/lib/bpf/bpf_gen_internal.h b/tools/lib/bpf/bpf_gen_internal.h
> > index 6ff963a491d9..49af4260b8e6 100644
> > --- a/tools/lib/bpf/bpf_gen_internal.h
> > +++ b/tools/lib/bpf/bpf_gen_internal.h
> > @@ -4,6 +4,7 @@
> > #define __BPF_GEN_INTERNAL_H
> >
> > #include "bpf.h"
> > +#include "libbpf_internal.h"
> >
> > struct ksym_relo_desc {
> > const char *name;
> > @@ -50,6 +51,7 @@ struct bpf_gen {
> > __u32 nr_ksyms;
> > int fd_array;
> > int nr_fd_array;
> > + int hash_insn_offset[SHA256_DWORD_SIZE];
> > };
> >
> > void bpf_gen__init(struct bpf_gen *gen, int log_level, int nr_progs, int nr_maps);
> > diff --git a/tools/lib/bpf/gen_loader.c b/tools/lib/bpf/gen_loader.c
> > index 113ae4abd345..3d672c09e948 100644
> > --- a/tools/lib/bpf/gen_loader.c
> > +++ b/tools/lib/bpf/gen_loader.c
> > @@ -110,6 +110,7 @@ static void emit2(struct bpf_gen *gen, struct bpf_insn insn1, struct bpf_insn in
> >
> > static int add_data(struct bpf_gen *gen, const void *data, __u32 size);
> > static void emit_sys_close_blob(struct bpf_gen *gen, int blob_off);
> > +static void bpf_gen__signature_match(struct bpf_gen *gen);
> >
> > void bpf_gen__init(struct bpf_gen *gen, int log_level, int nr_progs, int nr_maps)
> > {
> > @@ -152,6 +153,8 @@ void bpf_gen__init(struct bpf_gen *gen, int log_level, int nr_progs, int nr_maps
> > /* R7 contains the error code from sys_bpf. Copy it into R0 and exit. */
> > emit(gen, BPF_MOV64_REG(BPF_REG_0, BPF_REG_7));
> > emit(gen, BPF_EXIT_INSN());
> > + if (gen->opts->gen_hash)
> > + bpf_gen__signature_match(gen);
> > }
> >
> > static int add_data(struct bpf_gen *gen, const void *data, __u32 size)
> > @@ -368,6 +371,25 @@ static void emit_sys_close_blob(struct bpf_gen *gen, int blob_off)
> > __emit_sys_close(gen);
> > }
> >
> > +static int compute_sha_udpate_offsets(struct bpf_gen *gen)
> > +{
> > + __u64 sha[SHA256_DWORD_SIZE];
> > + int i, err;
> > +
> > + err = libbpf_sha256(gen->data_start, gen->data_cur - gen->data_start, sha);
> > + if (err < 0) {
> > + pr_warn("sha256 computation of the metadata failed");
> > + return err;
> > + }
> > + for (i = 0; i < SHA256_DWORD_SIZE; i++) {
> > + struct bpf_insn *insn =
> > + (struct bpf_insn *)(gen->insn_start + gen->hash_insn_offset[i]);
> > + insn[0].imm = (__u32)sha[i];
> > + insn[1].imm = sha[i] >> 32;
> > + }
> > + return 0;
> > +}
> > +
> > int bpf_gen__finish(struct bpf_gen *gen, int nr_progs, int nr_maps)
> > {
> > int i;
> > @@ -394,6 +416,12 @@ int bpf_gen__finish(struct bpf_gen *gen, int nr_progs, int nr_maps)
> > blob_fd_array_off(gen, i));
> > emit(gen, BPF_MOV64_IMM(BPF_REG_0, 0));
> > emit(gen, BPF_EXIT_INSN());
> > + if (gen->opts->gen_hash) {
> > + gen->error = compute_sha_udpate_offsets(gen);
> > + if (gen->error)
> > + return gen->error;
> > + }
> > +
> > pr_debug("gen: finish %s\n", errstr(gen->error));
> > if (!gen->error) {
> > struct gen_loader_opts *opts = gen->opts;
> > @@ -557,6 +585,30 @@ void bpf_gen__map_create(struct bpf_gen *gen,
> > emit_sys_close_stack(gen, stack_off(inner_map_fd));
> > }
> >
> > +static void bpf_gen__signature_match(struct bpf_gen *gen)
> > +{
> > + __s64 off = -(gen->insn_cur - gen->insn_start - gen->cleanup_label) / 8 - 1;
> > + int i;
> > +
> > + for (i = 0; i < SHA256_DWORD_SIZE; i++) {
> > + emit2(gen, BPF_LD_IMM64_RAW_FULL(BPF_REG_1, BPF_PSEUDO_MAP_IDX,
> > + 0, 0, 0, 0));
> > + emit(gen, BPF_LDX_MEM(BPF_DW, BPF_REG_2, BPF_REG_1, i * sizeof(__u64)));
> > + gen->hash_insn_offset[i] = gen->insn_cur - gen->insn_start;
> > + emit2(gen,
> > + BPF_LD_IMM64_RAW_FULL(BPF_REG_3, 0, 0, 0, 0, 0));
> > +
> > + if (is_simm16(off)) {
> > + emit(gen, BPF_MOV64_IMM(BPF_REG_7, -EINVAL));
> > + emit(gen,
> > + BPF_JMP_REG(BPF_JNE, BPF_REG_2, BPF_REG_3, off));
> > + } else {
> > + gen->error = -ERANGE;
> > + emit(gen, BPF_JMP_IMM(BPF_JA, 0, 0, -1));
> > + }
> > + }
> > +}
> > +
>
> The above code gets generated per-program and exists out-of-tree in a
> very unreadable format in its final form. I have general objections to
> being forced to "trust" out-of-tree code, when it's demonstrably trivial
This is not out of tree. It's very much within the kernel tree.
> to perform this check in-kernel, without impeding any of the other
> stated use cases. There is no possible audit log nor LSM hook for these
> operations. There is no way to know that this check was ever performed.
>
> Further, this check ends up happening in an entirely different syscall,
> the LSM layer and the end user may both see invalid programs successfully
> being loaded into the kernel, that may fail mysteriously later.
>
> Also, this patch seems to rely on hacking into struct internals and
> magic binary layouts.
These magical binary layouts are BPF programs, as I mentioned, if you
don't like this you (i.e an advanced user like Microsoft) can
implement your own trusted loader in whatever format you like. We are
not forcing you.
If you really want to do it in the kernel, you can do it out of tree
and maintain these patches (that's what "out of tree" actually means),
this is not a direction the BPF maintainers are interested in as it
does not meet the broader community's use-cases. We don’t want an
unnecessary extension to the UAPI when some BPF programs do have
stable instructions already (e.g. network) and some that can
potentially have someday.
RE The struct internals will be replaced by calling BPF_OBJ_GET_INFO
directly from the loader program as I mentioned in the commit.
- KP
>
> -blaise
>
> > void bpf_gen__record_attach_target(struct bpf_gen *gen, const char *attach_name,
> > enum bpf_attach_type type)
> > {
> > diff --git a/tools/lib/bpf/libbpf.h b/tools/lib/bpf/libbpf.h
> > index b6ee9870523a..084372fa54f4 100644
> > --- a/tools/lib/bpf/libbpf.h
> > +++ b/tools/lib/bpf/libbpf.h
> > @@ -1803,9 +1803,10 @@ struct gen_loader_opts {
> > const char *insns;
> > __u32 data_sz;
> > __u32 insns_sz;
> > + bool gen_hash;
> > };
> >
> > -#define gen_loader_opts__last_field insns_sz
> > +#define gen_loader_opts__last_field gen_hash
> > LIBBPF_API int bpf_object__gen_loader(struct bpf_object *obj,
> > struct gen_loader_opts *opts);
> >
> > --
> > 2.43.0
^ permalink raw reply [flat|nested] 79+ messages in thread
* Re: [PATCH 10/12] libbpf: Embed and verify the metadata hash in the loader
2025-06-10 17:43 ` KP Singh
@ 2025-06-10 18:15 ` Blaise Boscaccy
2025-06-10 19:47 ` KP Singh
2025-06-10 20:56 ` KP Singh
0 siblings, 2 replies; 79+ messages in thread
From: Blaise Boscaccy @ 2025-06-10 18:15 UTC (permalink / raw)
To: KP Singh; +Cc: bpf, linux-security-module, paul, kys, ast, daniel, andrii
KP Singh <kpsingh@kernel.org> writes:
[...]
>>
>> The above code gets generated per-program and exists out-of-tree in a
>> very unreadable format in its final form. I have general objections to
>> being forced to "trust" out-of-tree code, when it's demonstrably trivial
>
> This is not out of tree. It's very much within the kernel tree.
No, it's not.
Running something like
bpftool gen skeleton -S -k <private_key> -i <identity_cert>
fentry_test.bpf.o
will yield a header file fentry_test.h or whatever. That header file
contains a customized and one-off version of the templated code in this
patch. That header file and the resultant loader it gets compiled into
exist out-of-tree.
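For context, such a generated header is consumed by the application roughly
like this (function names follow bpftool's usual <object>__* skeleton
convention, shown here for fentry_test and purely illustrative):

#include "fentry_test.h"	/* the generated, signed light skeleton */

int run_fentry_test(void)
{
	struct fentry_test *skel;
	int err;

	skel = fentry_test__open_and_load();	/* executes the embedded loader program */
	if (!skel)
		return -1;

	err = fentry_test__attach(skel);
	fentry_test__destroy(skel);
	return err;
}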
>
>> to perform this check in-kernel, without impeding any of the other
>> stated use cases. There is no possible audit log nor LSM hook for these
>> operations. There is no way to know that this check was ever performed.
>>
>> Further, this check ends up happening in an entirely different syscall,
>> the LSM layer and the end user may both see invalid programs successfully
>> being loaded into the kernel, that may fail mysteriously later.
>>
>> Also, this patch seems to rely on hacking into struct internals and
>> magic binary layouts.
>
> These magical binary layouts are BPF programs, as I mentioned, if you
> don't like this you (i.e an advanced user like Microsoft) can
> implement your own trusted loader in whatever format you like. We are
> not forcing you.
>
> If you really want to do it in the kernel, you can do it out of tree
> and maintain these patches (that's what "out of tree" actually means),
> this is not a direction the BPF maintainers are interested in as it
> does not meet the broader community's use-cases. We don’t want an
> unnecessary extension to the UAPI when some BPF programs do have
> stable instructions already (e.g. network) and some that can
> potentially have someday.
>
Yes, you are forcing us. Saying we are only allowed to use "trusted"
loaders, and that no one is allowed to have any in-kernel, in-tree code
that inspects user inputs or target programs directly is very
non-consensual on my end. This is a design mandate, being forced upon
other people, by you, with no concrete reasons, other than vague statements
around UAPI design, need or necessity.
-blaise
> RE The struct internals will be replaced by calling BPF_OBJ_GET_INFO
> directly from the loader program as I mentioned in the commit.
>
>
> - KP
>
>
>>
>> -blaise
>>
>> > void bpf_gen__record_attach_target(struct bpf_gen *gen, const char *attach_name,
>> > enum bpf_attach_type type)
>> > {
>> > diff --git a/tools/lib/bpf/libbpf.h b/tools/lib/bpf/libbpf.h
>> > index b6ee9870523a..084372fa54f4 100644
>> > --- a/tools/lib/bpf/libbpf.h
>> > +++ b/tools/lib/bpf/libbpf.h
>> > @@ -1803,9 +1803,10 @@ struct gen_loader_opts {
>> > const char *insns;
>> > __u32 data_sz;
>> > __u32 insns_sz;
>> > + bool gen_hash;
>> > };
>> >
>> > -#define gen_loader_opts__last_field insns_sz
>> > +#define gen_loader_opts__last_field gen_hash
>> > LIBBPF_API int bpf_object__gen_loader(struct bpf_object *obj,
>> > struct gen_loader_opts *opts);
>> >
>> > --
>> > 2.43.0
^ permalink raw reply [flat|nested] 79+ messages in thread
* Re: [PATCH 10/12] libbpf: Embed and verify the metadata hash in the loader
2025-06-10 18:15 ` Blaise Boscaccy
@ 2025-06-10 19:47 ` KP Singh
2025-06-10 21:24 ` James Bottomley
2025-06-10 20:56 ` KP Singh
1 sibling, 1 reply; 79+ messages in thread
From: KP Singh @ 2025-06-10 19:47 UTC (permalink / raw)
To: Blaise Boscaccy
Cc: bpf, linux-security-module, paul, kys, ast, daniel, andrii
On Tue, Jun 10, 2025 at 8:16 PM Blaise Boscaccy
<bboscaccy@linux.microsoft.com> wrote:
>
[...]
> >
> >> to perform this check in-kernel, without impeding any of the other
> >> stated use cases. There is no possible audit log nor LSM hook for these
> >> operations. There is no way to know that this check was ever performed.
> >>
> >> Further, this check ends up happening in an entirely different syscall,
> >> the LSM layer and the end user may both see invalid programs successfully
> >> being loaded into the kernel, that may fail mysteriously later.
> >>
> >> Also, this patch seems to rely on hacking into struct internals and
> >> magic binary layouts.
> >
> > These magical binary layouts are BPF programs, as I mentioned, if you
> > don't like this you (i.e an advanced user like Microsoft) can
> > implement your own trusted loader in whatever format you like. We are
> > not forcing you.
> >
> > If you really want to do it in the kernel, you can do it out of tree
> > and maintain these patches (that's what "out of tree" actually means),
> > this is not a direction the BPF maintainers are interested in as it
> > does not meet the broader community's use-cases. We don’t want an
> > unnecessary extension to the UAPI when some BPF programs do have
> > stable instructions already (e.g. network) and some that can
> > potentially have someday.
> >
>
> Yes, you are forcing us. Saying we are only allowed to use "trusted"
> loaders, and that no one is allowed to have any in-kernel, in-tree code
It's been repeatedly mentioned that trusted loaders (whether kernel or
BPF programs) are the only way because a large number of BPF use-cases
dynamically generate BPF programs. So whatever we build needs to work
for everyone and not just your specific use-case or your affinity to
an implementation.
> that inspects user inputs or target programs directly is very
> non-consensual on my end. This is a design mandate, being forced upon
> other people, by you, with no concrete reasons, other than vague statements
> around UAPI design, need or necessity.
>
> -blaise
>
> > RE The struct internals will be replaced by calling BPF_OBJ_GET_INFO
> > directly from the loader program as I mentioned in the commit.
> >
> >
> > - KP
> >
> >
> >>
> >> -blaise
> >>
> >> > void bpf_gen__record_attach_target(struct bpf_gen *gen, const char *attach_name,
> >> > enum bpf_attach_type type)
> >> > {
> >> > diff --git a/tools/lib/bpf/libbpf.h b/tools/lib/bpf/libbpf.h
> >> > index b6ee9870523a..084372fa54f4 100644
> >> > --- a/tools/lib/bpf/libbpf.h
> >> > +++ b/tools/lib/bpf/libbpf.h
> >> > @@ -1803,9 +1803,10 @@ struct gen_loader_opts {
> >> > const char *insns;
> >> > __u32 data_sz;
> >> > __u32 insns_sz;
> >> > + bool gen_hash;
> >> > };
> >> >
> >> > -#define gen_loader_opts__last_field insns_sz
> >> > +#define gen_loader_opts__last_field gen_hash
> >> > LIBBPF_API int bpf_object__gen_loader(struct bpf_object *obj,
> >> > struct gen_loader_opts *opts);
> >> >
> >> > --
> >> > 2.43.0
^ permalink raw reply [flat|nested] 79+ messages in thread
* Re: [PATCH 10/12] libbpf: Embed and verify the metadata hash in the loader
2025-06-10 19:47 ` KP Singh
@ 2025-06-10 21:24 ` James Bottomley
2025-06-10 22:31 ` Paul Moore
2025-06-10 22:35 ` KP Singh
0 siblings, 2 replies; 79+ messages in thread
From: James Bottomley @ 2025-06-10 21:24 UTC (permalink / raw)
To: KP Singh, Blaise Boscaccy
Cc: bpf, linux-security-module, paul, kys, ast, daniel, andrii
On Tue, 2025-06-10 at 21:47 +0200, KP Singh wrote:
> It's been repeatedly mentioned that trusted loaders (whether kernel
> or BPF programs) are the only way because a large number of BPF
> use-cases dynamically generate BPF programs.
You keep asserting this, but it isn't supported by patches already
proposed. Specifically, there already exists a patch set:
https://lore.kernel.org/all/20250528215037.2081066-1-bboscaccy@linux.microsoft.com/
that supports both signed trusted loaders and exact hash chain
verification of loaders plus program maps. The core kernel code that
does it is only about 10 lines and looks to me like it could easily be
added to your current patch set. This means BPF signing could support
both dynamically generated and end to end integrity use cases with the
signer being in the position of deciding what they want and no loss of
generality for either use case.
> So whatever we build needs to work for everyone and not just your
> specific use-case or your affinity to an implementation.
The linked patch supports both your trusted loader use case and the
exact hash chain verification one the security people want. Your
current patch only seems to support your use case, which seems a little
bit counter to the quote above. However, it also seems that
reconciling both patch sets to give everyone what they want is easily
within reach so I think that's what we should all work towards.
Regards,
James
^ permalink raw reply [flat|nested] 79+ messages in thread
* Re: [PATCH 10/12] libbpf: Embed and verify the metadata hash in the loader
2025-06-10 21:24 ` James Bottomley
@ 2025-06-10 22:31 ` Paul Moore
2025-06-10 22:35 ` KP Singh
1 sibling, 0 replies; 79+ messages in thread
From: Paul Moore @ 2025-06-10 22:31 UTC (permalink / raw)
To: James Bottomley
Cc: KP Singh, Blaise Boscaccy, bpf, linux-security-module, kys, ast,
daniel, andrii
On Tue, Jun 10, 2025 at 5:24 PM James Bottomley
<James.Bottomley@hansenpartnership.com> wrote:
> On Tue, 2025-06-10 at 21:47 +0200, KP Singh wrote:
> > It's been repeatedly mentioned that trusted loaders (whether kernel
> > or BPF programs) are the only way because a large number of BPF
> > use-cases dynamically generate BPF programs.
>
> You keep asserting this, but it isn't supported by patches already
> proposed. Specifically, there already exists a patch set:
>
> https://lore.kernel.org/all/20250528215037.2081066-1-bboscaccy@linux.microsoft.com/
>
> that supports both signed trusted loaders and exact hash chain
> verification of loaders plus program maps. The core kernel code that
> does it is only about 10 lines and looks to me like it could easily be
> added to your current patch set. This means BPF signing could support
> both dynamically generated and end to end integrity use cases with the
> signer being in the position of deciding what they want and no loss of
> generality for either use case.
>
> > So whatever we build needs to work for everyone and not just your
> > specific use-case or your affinity to an implementation.
>
> The linked patch supports both your trusted loader use case and the
> exact hash chain verification one the security people want. Your
> current patch only seems to support your use case, which seems a little
> bit counter to the quote above. However, it also seems that
> reconciling both patch sets to give everyone what they want is easily
> within reach so I think that's what we should all work towards.
I agree with James, I see no reason why the two schemes could not
coexist in the kernel; support both and let the user/admin/distro
decide which is appropriate for their needs through policy.
I'm sure Blaise would be willing to build on top of KP's patchset if
that really is a sticking point.
Finally, I just wanted to bring some attention to my last comment on
Blaise's latest patchset as the needs mentioned there seem to have
been ignored in this patchset.
https://lore.kernel.org/linux-security-module/CAHC9VhQT=ymqssa9ymXtvssHTdVH_64T8Mpb0Mh8oxRD0Guo_Q@mail.gmail.com/
--
paul-moore.com
^ permalink raw reply [flat|nested] 79+ messages in thread
* Re: [PATCH 10/12] libbpf: Embed and verify the metadata hash in the loader
2025-06-10 21:24 ` James Bottomley
2025-06-10 22:31 ` Paul Moore
@ 2025-06-10 22:35 ` KP Singh
2025-06-11 11:59 ` James Bottomley
1 sibling, 1 reply; 79+ messages in thread
From: KP Singh @ 2025-06-10 22:35 UTC (permalink / raw)
To: James Bottomley
Cc: Blaise Boscaccy, bpf, linux-security-module, paul, kys, ast,
daniel, andrii
On Tue, Jun 10, 2025 at 11:24 PM James Bottomley
<James.Bottomley@hansenpartnership.com> wrote:
>
> On Tue, 2025-06-10 at 21:47 +0200, KP Singh wrote:
> > It's been repeatedly mentioned that trusted loaders (whether kernel
> > or BPF programs) are the only way because a large number of BPF
> > use-cases dynamically generate BPF programs.
>
> You keep asserting this, but it isn't supported by patches already
This is supported for sure. But it's not what the patches are
providing a reference implementation for. The patches provide a stand
alone reference implementation using in-kernel / BPF loaders but you
can surely implement this (see below):
> proposed. Specifically, there already exists a patch set:
>
> https://lore.kernel.org/all/20250528215037.2081066-1-bboscaccy@linux.microsoft.com/
The patch-set takes a very narrow view by adding additional UAPI and
ties us into an implementation. Whereas the current approach keeps the
UAPI clean while still meeting all the use-cases and keeps the
implementation flexible should it need to change. (no tie into the
hash chain approach, if we are able to move to stable BPF instruction
buffers in the future).
Blaise's patches also do not handle the trusted user-space loader
space and the "signature_maps" are not relevant to dynamic generation
or simple BPF programs like networking, see below.
>
> that supports both signed trusted loaders and exact hash chain
> verification of loaders plus program maps. The core kernel code that
I have mentioned in various replies how the current design ends
up working for dynamic loaders. Here it is once again:
* The dynamic userspace loader is trusted; it's either compiled in
with libbpf statically or libbpf is also a trusted library.
* The BPF program is generated and all the relocations are performed
at runtime, after which the BPF instruction buffer becomes stable and
can be signed, which obviates the need for the loader program even for
programs that have runtime relocations. And of course, some BPF
programs don't have runtime relocations at all (e.g. some networking
programs).
* The program is then signed with a derived credential at runtime; the
signature is passed in attr.signature and verified by the kernel (rough
sketch below).
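To make that flow concrete, a rough userspace sketch, purely illustrative:
compute_sha256() and sign_with_derived_key() are placeholder helpers, and
the signature fields on bpf_attr are the ones proposed in this series rather
than any final UAPI, so this does not compile against current headers.

#include <unistd.h>
#include <sys/syscall.h>
#include <linux/bpf.h>

static int load_signed_prog(struct bpf_insn *insns, __u32 insn_cnt)
{
	__u8 digest[32], sig[4096];
	__u32 sig_len;
	union bpf_attr attr = {};

	/* the instruction buffer is stable here; hash it and sign the
	 * digest with the runtime-derived credential (placeholder helpers)
	 */
	compute_sha256(insns, insn_cnt * sizeof(*insns), digest);
	sig_len = sign_with_derived_key(digest, sizeof(digest), sig, sizeof(sig));

	attr.prog_type = BPF_PROG_TYPE_XDP;
	attr.insn_cnt  = insn_cnt;
	attr.insns     = (__u64)(unsigned long)insns;
	attr.license   = (__u64)(unsigned long)"GPL";
	/* signature attributes as proposed in this series, names not final */
	attr.signature      = (__u64)(unsigned long)sig;
	attr.signature_size = sig_len;

	return syscall(__NR_bpf, BPF_PROG_LOAD, &attr, sizeof(attr));
}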
> does it is only about 10 lines and looks to me like it could easily be
> added to your current patch set. This means BPF signing could support
I still don't understand the actual reasons for you needing this to
happen in the kernel.
Here's a summary of the reasons that have been thrown around:
Supply chain attacks
================
I got vague answers about supply chain attacks. If one cannot trust
the build environment that builds the BPF programs, has signing keys,
generates and signs the loader, or that builds libbpf / kernel, then I
think one has other issues.
PS: You can also contribute code into LLVM / clang to generate loader
programs directly from a BPF object.
The loader code is hard to understand
=============================
So is the BPF JIT that lives in the kernel, and I am sure there are
engineers who understand BPF assembly and JITs. Please remember that the
user who uses BPF is different from the user who implements signing
for BPF users; the latter (e.g. a distro, hyperscaler, etc.) needs
to be advanced and aware of BPF internals.
"having visibility" in the LSM code
==========================
To implement what specific security policy? There are security
policies and controls, required by BPF use-cases, that need to be defined,
and one would expect the LSM experts to help here; none of
them require in-kernel verification. Here they are:
* LSM controls to reject programs that are not signed
* LSM controls that establish trust in userspace binaries and libraries.
* LSM policies that allow these components to load programs
signed at runtime using a derived credential.
* LSM policies that allow certain signed BPF programs to be loaded
without requiring elevated privileges, i.e. CAP_BPF.
Auditing
======
You can surely propose a follow-up to my patches that adds audit
logging to the loader by calling the audit code from a BPF
kfunc, so this can be extended for auditing.
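Purely as an illustration of that direction, a sketch of what such a kfunc
could look like (the function, its name, and where it gets registered are
hypothetical, not part of this series):

/* hypothetical kfunc the generated loader could call after the hash check */
__bpf_kfunc void bpf_audit_signature_result(int result)
{
	audit_log(audit_context(), GFP_KERNEL, AUDIT_BPF,
		  "bpf-loader signature check result=%d", result);
}

BTF_KFUNCS_START(bpf_loader_audit_ids)
BTF_ID_FLAGS(func, bpf_audit_signature_result)
BTF_KFUNCS_END(bpf_loader_audit_ids)

static const struct btf_kfunc_id_set bpf_loader_audit_set = {
	.owner = THIS_MODULE,
	.set   = &bpf_loader_audit_ids,
};

/* registered for the loader's program type, e.g.:
 * register_btf_kfunc_id_set(BPF_PROG_TYPE_SYSCALL, &bpf_loader_audit_set);
 */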
At this point, I am happy to discuss the actual security policy work
that is needed. For the discussion around the UAPI and in-kernel
verification, I rest it in the hands of the BPF maintainers.
- KP
> signer being in the position of deciding what they want and no loss of
> generality for either use case.
> > So whatever we build needs to work for everyone and not just your
> > specific use-case or your affinity to an implementation.
>
> The linked patch supports both your trusted loader use case and the
> exact hash chain verification one the security people want. Your
> current patch only seems to support your use case, which seems a little
> bit counter to the quote above. However, it also seems that
> reconciling both patch sets to give everyone what they want is easily
> within reach so I think that's what we should all work towards.
>
> Regards,
>
> James
>
^ permalink raw reply [flat|nested] 79+ messages in thread
* Re: [PATCH 10/12] libbpf: Embed and verify the metadata hash in the loader
2025-06-10 22:35 ` KP Singh
@ 2025-06-11 11:59 ` James Bottomley
2025-06-11 12:33 ` KP Singh
0 siblings, 1 reply; 79+ messages in thread
From: James Bottomley @ 2025-06-11 11:59 UTC (permalink / raw)
To: KP Singh
Cc: Blaise Boscaccy, bpf, linux-security-module, paul, kys, ast,
daniel, andrii
On Wed, 2025-06-11 at 00:35 +0200, KP Singh wrote:
> On Tue, Jun 10, 2025 at 11:24 PM James Bottomley
> <James.Bottomley@hansenpartnership.com> wrote:
> >
> > On Tue, 2025-06-10 at 21:47 +0200, KP Singh wrote:
> > > It's been repeatedly mentioned that trusted loaders (whether
> > > kernel or BPF programs) are the only way because a large number
> > > of BPF use-cases dynamically generate BPF programs.
> >
> > You keep asserting this, but it isn't supported by patches already
>
> This is supported for sure. But it's not what the patches are
> providing a reference implementation for. The patches provide a stand
> alone reference implementation using in-kernel / BPF loaders but you
> can surely implement this (see below):
>
> > proposed. Specifically, there already exists a patch set:
> >
> > https://lore.kernel.org/all/20250528215037.2081066-1-bboscaccy@linux.microsoft.com/
>
> The patch-set takes a very narrow view by adding additional UAPI and
> ties us into an implementation.
What do you mean by this? When kernel people say UAPI, they think of
the contract between the kernel and userspace. So for both patch sets
the additional attr. entries which user space adds and the kernel
parses for the signature would conventionally be thought to extend the
UAPI.
Additionally, the content of the signature (what it's over) is a UAPI
contract. When adding to the kernel UAPI we don't look not to change
it, we look to change it in a way that is extensible. It strikes me
that actually only the linked patch does this because the UAPI addition
for your signature scheme doesn't seem to be that extensible.
> Whereas the current approach keeps the UAPI clean while still
> meeting all the use-cases and keeps the implementation flexible
> should it need to change. (no tie into the hash chain approach, if we
> are able to move to stable BPF instruction buffers in the future).
>
> Blaise's patches also do not handle the trusted user-space loader
> space and the "signature_maps" are not relevant to dynamic generation
> or simple BPF programs like networking, see below.
OK, is this just a technical misreading? I missed the fact that it
supported both schemes on first reading as well. If you look in this
patch:
https://lore.kernel.org/all/20250528215037.2081066-2-bboscaccy@linux.microsoft.com/
It's this addition in bpf_check_signature():
> + if (!attr->signature_maps_size) {
> + sha256((u8 *)prog->insnsi, prog->len * sizeof(struct bpf_insn), (u8 *)&hash);
> + err = verify_pkcs7_signature(hash, sizeof(hash), signature, attr->signature_size,
> + VERIFY_USE_SECONDARY_KEYRING,
> + VERIFYING_EBPF_SIGNATURE,
> + NULL, NULL);
> + } else {
> + used_maps = kmalloc_array(attr->signature_maps_size,
> + sizeof(*used_maps), GFP_KERNEL);
> [...]
The first leg of the if is your use case: a zero map size means the
signature is a single hash of the loader only. The else clause
encompasses a hash chain over the maps as well. This means the signer
can choose which scheme they want.
I'll skip responding to the rest since it seems to be assuming that
Blaise's patch excludes your use case (which the above should
demonstrate it doesn't) and we'd be talking past each other.
Regards,
James
^ permalink raw reply [flat|nested] 79+ messages in thread
* Re: [PATCH 10/12] libbpf: Embed and verify the metadata hash in the loader
2025-06-11 11:59 ` James Bottomley
@ 2025-06-11 12:33 ` KP Singh
2025-06-11 13:12 ` James Bottomley
2025-06-11 13:18 ` James Bottomley
0 siblings, 2 replies; 79+ messages in thread
From: KP Singh @ 2025-06-11 12:33 UTC (permalink / raw)
To: James Bottomley
Cc: Blaise Boscaccy, bpf, linux-security-module, paul, kys, ast,
daniel, andrii
On Wed, Jun 11, 2025 at 1:59 PM James Bottomley
<James.Bottomley@hansenpartnership.com> wrote:
>
> On Wed, 2025-06-11 at 00:35 +0200, KP Singh wrote:
> > On Tue, Jun 10, 2025 at 11:24 PM James Bottomley
> > <James.Bottomley@hansenpartnership.com> wrote:
> > >
> > > On Tue, 2025-06-10 at 21:47 +0200, KP Singh wrote:
> > > > It's been repeatedly mentioned that trusted loaders (whether
> > > > kernel or BPF programs) are the only way because a large number
> > > > of BPF use-cases dynamically generate BPF programs.
> > >
> > > You keep asserting this, but it isn't supported by patches already
> >
> > This is supported for sure. But it's not what the patches are
> > providing a reference implementation for. The patches provide a stand
> > alone reference implementation using in-kernel / BPF loaders but you
> > can surely implement this (see below):
> >
> > > proposed. Specifically, there already exists a patch set:
> > >
> > > https://lore.kernel.org/all/20250528215037.2081066-1-bboscaccy@linux.microsoft.com/
> >
> > The patch-set takes a very narrow view by adding additional UAPI and
> > ties us into an implementation.
>
> What do you mean by this? When kernel people say UAPI, they think of
> the contract between the kernel and userspace. So for both patch sets
> the additional attr. entries which user space adds and the kernel
> parses for the signature would conventionally be thought to extend the
> UAPI.
>
> Additionally, the content of the signature (what it's over) is a UAPI
> contract. When adding to the kernel UAPI we don't look not to change
> it, we look to change it in a way that is extensible. It strikes me
> that actually only the linked patch does this because the UAPI addition
> for your signature scheme doesn't seem to be that extensible.
James, I am adding fewer attributes; it's always extensible. Adding
more UAPI than strictly needed is what's not flexible.
The attributes I proposed remain valid in a world where the BPF
instruction set is stable at compile time, for trusted user space
loaders (applications like Cilium) that can already have a stable
instruction buffer, the attributes Blaise proposed do not.
I believe we have discussed this enough. Let's have the BPF maintainers decide.
>
> > Whereas the current approach keeps the UAPI clean while still
> > meeting all the use-cases and keeps the implementation flexible
> > should it need to change. (no tie into the hash chain approach, if we
> > are able to move to stable BPF instruction buffers in the future).
> >
> > Blaise's patches also do not handle the trusted user-space loader
> > space and the "signature_maps" are not relevant to dynamic generation
> > or simple BPF programs like networking, see below.
>
> OK, is this just a technical misreading? I missed the fact that it
> supported both schemes on first reading as well. If you look in this
> patch:
>
> https://lore.kernel.org/all/20250528215037.2081066-2-bboscaccy@linux.microsoft.com/
>
> It's this addition in bpf_check_signature():
>
> > + if (!attr->signature_maps_size) {
> > + sha256((u8 *)prog->insnsi, prog->len * sizeof(struct bpf_insn), (u8 *)&hash);
> > + err = verify_pkcs7_signature(hash, sizeof(hash), signature, attr->signature_size,
> > + VERIFY_USE_SECONDARY_KEYRING,
> > + VERIFYING_EBPF_SIGNATURE,
> > + NULL, NULL);
> > + } else {
> > + used_maps = kmalloc_array(attr->signature_maps_size,
> > + sizeof(*used_maps), GFP_KERNEL);
> > [...]
>
> The first leg of the if is your use case: a zero map size means the
> signature is a single hash of the loader only. The else clause
> encompasses a hash chain over the maps as well. This means the signer
> can choose which scheme they want.
I have read and understood the code, there is no technical misalignment.
I am talking about a trusted user space loader. You seem to confuse
the trusted BPF loader program as userspace, no this is not userspace,
it runs in the kernel context.
- KP
>
> I'll skip responding to the rest since it seems to be assuming that
> Blaise's patch excludes your use case (which the above should
> demonstrate it doesn't) and we'd be talking past each other.
>
> Regards,
>
> James
>
^ permalink raw reply [flat|nested] 79+ messages in thread
* Re: [PATCH 10/12] libbpf: Embed and verify the metadata hash in the loader
2025-06-11 12:33 ` KP Singh
@ 2025-06-11 13:12 ` James Bottomley
2025-06-11 13:24 ` KP Singh
2025-06-11 13:18 ` James Bottomley
1 sibling, 1 reply; 79+ messages in thread
From: James Bottomley @ 2025-06-11 13:12 UTC (permalink / raw)
To: KP Singh
Cc: Blaise Boscaccy, bpf, linux-security-module, paul, kys, ast,
daniel, andrii
On Wed, 2025-06-11 at 14:33 +0200, KP Singh wrote:
> On Wed, Jun 11, 2025 at 1:59 PM James Bottomley
> <James.Bottomley@hansenpartnership.com> wrote:
> >
> > On Wed, 2025-06-11 at 00:35 +0200, KP Singh wrote:
> > > On Tue, Jun 10, 2025 at 11:24 PM James Bottomley
> > > <James.Bottomley@hansenpartnership.com> wrote:
> > > >
> > > > On Tue, 2025-06-10 at 21:47 +0200, KP Singh wrote:
> > > > > It's been repeatedly mentioned that trusted loaders (whether
> > > > > kernel or BPF programs) are the only way because a large
> > > > > number
> > > > > of BPF use-cases dynamically generate BPF programs.
> > > >
> > > > You keep asserting this, but it isn't supported by patches
> > > > already
> > >
> > > This is supported for sure. But it's not what the patches are
> > > providing a reference implementation for. The patches provide a
> > > stand alone reference implementation using in-kernel / BPF
> > > loaders but you can surely implement this (see below):
> > >
> > > > proposed. Specifically, there already exists a patch set:
> > > >
> > > > https://lore.kernel.org/all/20250528215037.2081066-1-bboscaccy@linux.microsoft.com/
> > >
> > > The patch-set takes a very narrow view by adding additional UAPI
> > > and ties us into an implementation.
> >
> > What do you mean by this? When kernel people say UAPI, they think
> > of the contract between the kernel and userspace. So for both
> > patch sets the additional attr. entries which user space adds and
> > the kernel parses for the signature would conventionally be thought
> > to extend the UAPI.
> >
> > Additionally, the content of the signature (what it's over) is a
> > UAPI contract. When adding to the kernel UAPI we don't look not to
> > change it, we look to change it in a way that is extensible. It
> > strikes me that actually only the linked patch does this because
> > the UAPI addition for your signature scheme doesn't seem to be that
> > extensible.
>
> James, I am adding fewer attributes; it's always extensible. Adding
> more UAPI than strictly needed is what's not flexible.
To repeat: the object should be extensibility not minimization. If an
API is extensible it doesn't tie you to a specific implementation
regardless of how many arguments it adds. The attr structure uses the
standard kernel way of doing this: it can grow but may never lose
elements and features added at the end are always optional so an older
kernel that doesn't see them can still process everything it does
understand.
> The attributes I proposed remain valid in a world where the BPF
> instruction set is stable at compile time, for trusted user space
> loaders (applications like Cilium) that can already have a stable
> instruction buffer, the attributes Blaise proposed do not.
I don't follow. For stable compilation (I'm more familiar with the way
systemd does this but I presume cilium does the same: by constructing
ebpf byte code on the fly that doesn't require relocation and then
inserting it directly) you simply program the loader to do the
restrictions (about insertion point and the like) and sign it, correct?
That's covered in the linked patch in the !attr->signature_maps_size
case, so what Blaise proposed most definitely does do this.
> I believe we have discussed this enough. Let's have the BPF
> maintainers decide.
But this is obviously an important point otherwise you wouldn't be
arguing about it. If pure minimization were all that's required then
it's easy to do since we're using pkcs7 signatures, the signature can
contain a data structure with authenticatedAttributes that are
validated by the signature, so I could do the Blaise patch with fewer
attr elements than you simply by moving the maps and their count into
the authenticatedAttributes element of the pkcs7 signature. I could
also do the same with your keyring_id and, bonus, it would be integrity
validated. Then each of you adds the same number of UAPI attr's so
there's no argument about who adds fewer attributes.
Regards,
James
^ permalink raw reply [flat|nested] 79+ messages in thread
* Re: [PATCH 10/12] libbpf: Embed and verify the metadata hash in the loader
2025-06-11 13:12 ` James Bottomley
@ 2025-06-11 13:24 ` KP Singh
0 siblings, 0 replies; 79+ messages in thread
From: KP Singh @ 2025-06-11 13:24 UTC (permalink / raw)
To: James Bottomley
Cc: Blaise Boscaccy, bpf, linux-security-module, paul, kys, ast,
daniel, andrii
On Wed, Jun 11, 2025 at 3:12 PM James Bottomley
<James.Bottomley@hansenpartnership.com> wrote:
>
> On Wed, 2025-06-11 at 14:33 +0200, KP Singh wrote:
> > On Wed, Jun 11, 2025 at 1:59 PM James Bottomley
> > <James.Bottomley@hansenpartnership.com> wrote:
> > >
> > > On Wed, 2025-06-11 at 00:35 +0200, KP Singh wrote:
> > > > On Tue, Jun 10, 2025 at 11:24 PM James Bottomley
> > > > <James.Bottomley@hansenpartnership.com> wrote:
> > > > >
> > > > > On Tue, 2025-06-10 at 21:47 +0200, KP Singh wrote:
> > > > > > It's been repeatedly mentioned that trusted loaders (whether
> > > > > > kernel or BPF programs) are the only way because a large
> > > > > > number
> > > > > > of BPF use-cases dynamically generate BPF programs.
> > > > >
> > > > > You keep asserting this, but it isn't supported by patches
> > > > > already
> > > >
> > > > This is supported for sure. But it's not what the patches are
> > > > providing a reference implementation for. The patches provide a
> > > > stand alone reference implementation using in-kernel / BPF
> > > > loaders but you can surely implement this (see below):
> > > >
> > > > > proposed. Specifically, there already exists a patch set:
> > > > >
> > > > > https://lore.kernel.org/all/20250528215037.2081066-1-bboscaccy@linux.microsoft.com/
> > > >
> > > > The patch-set takes a very narrow view by adding additional UAPI
> > > > and ties us into an implementation.
> > >
> > > What do you mean by this? When kernel people say UAPI, they think
> > > of the contract between the kernel and userspace. So for both
> > > patch sets the additional attr. entries which user space adds and
> > > the kernel parses for the signature would conventionally be thought
> > > to extend the UAPI.
> > >
> > > Additionally, the content of the signature (what it's over) is a
> > > UAPI contract. When adding to the kernel UAPI we don't look not to
> > > change it, we look to change it in a way that is extensible. It
> > > strikes me that actually only the linked patch does this because
> > > the UAPI addition for your signature scheme doesn't seem to be that
> > > extensible.
> >
> > James, I am adding fewer attributes; it's always extensible. Adding
> > more UAPI than strictly needed is what's not flexible.
>
> To repeat: the object should be extensibility not minimization. If an
> API is extensible it doesn't tie you to a specific implementation
> regardless of how many arguments it adds. The attr structure uses the
> standard kernel way of doing this: it can grow but may never lose
> elements and features added at the end are always optional so an older
> kernel that doesn't see them can still process everything it does
> understand.
>
> > The attributes I proposed remain valid in a world where the BPF
> > instruction set is stable at compile time, for trusted user space
> > loaders (applications like Cilium) that can already have a stable
> > instruction buffer, the attributes Blaise proposed do not.
>
> I don't follow. For stable compilation (I'm more familiar with the way
> systemd does this but I presume cilium does the same: by constructing
> ebpf byte code on the fly that doesn't require relocation and then
> inserting it directly) you simply program the loader to do the
> restrictions (about insertion point and the like) and sign it, correct?
There is no loader program if the instruction buffer is stable.
> That's covered in the linked patch in the !attr->signature_maps_size
> case, so what Blaise proposed most definitely does do this.
>
> > I believe we have discussed this enough. Let's have the BPF
> > maintainers decide.
>
> But this is obviously an important point otherwise you wouldn't be
> arguing about it. If pure minimization were all that's required then
> it's easy to do since we're using pkcs7 signatures, the signature can
> contain a data structure with authenticatedAttributes that are
> validated by the signature, so I could do the Blaise patch with fewer
> attr elements than you simply by moving the maps and their count into
> the athenticatedAttributes element of the pkcs7 signature. I could
Can we discuss this as a follow up as Paul proposed? I have limited
bandwidth to work on this, so this only delays what I think is a solid
baseline implementation.
- KP
> also do the same with your keyring_id and, bonus, it would be integrity
> validated. Then each of you adds the same number of UAPI attr's so
> there's no argument about who adds fewer attributes.
>
> Regards,
>
> James
>
^ permalink raw reply [flat|nested] 79+ messages in thread
* Re: [PATCH 10/12] libbpf: Embed and verify the metadata hash in the loader
2025-06-11 12:33 ` KP Singh
2025-06-11 13:12 ` James Bottomley
@ 2025-06-11 13:18 ` James Bottomley
2025-06-11 13:41 ` KP Singh
1 sibling, 1 reply; 79+ messages in thread
From: James Bottomley @ 2025-06-11 13:18 UTC (permalink / raw)
To: KP Singh
Cc: Blaise Boscaccy, bpf, linux-security-module, paul, kys, ast,
daniel, andrii
On Wed, 2025-06-11 at 14:33 +0200, KP Singh wrote:
> [...]
> I have read and understood the code, there is no technical
> misalignment.
>
> I am talking about a trusted user space loader. You seem to confuse
> the trusted BPF loader program as userspace, no this is not
> userspace, it runs in the kernel context.
So your criticism isn't that it doesn't cover your use case from the
signature point of view but that it didn't include a loader for it?
The linked patch was a sketch of how to verify signatures not a full
implementation. The pieces like what the loader looks like and which
keyring gets used are implementation details which can be filled in
later by combining the patch series with review and discussion. It's
not a requirement that one person codes everyone's use case before they
get theirs in, it's usually a collaborative effort ... I mean, why
would you want Microsoft coding up the loader? If they don't have a
use case for it they don't have much incentive to test it thoroughly
whereas you do.
Regards,
James
^ permalink raw reply [flat|nested] 79+ messages in thread
* Re: [PATCH 10/12] libbpf: Embed and verify the metadata hash in the loader
2025-06-11 13:18 ` James Bottomley
@ 2025-06-11 13:41 ` KP Singh
2025-06-11 14:43 ` James Bottomley
0 siblings, 1 reply; 79+ messages in thread
From: KP Singh @ 2025-06-11 13:41 UTC (permalink / raw)
To: James Bottomley
Cc: Blaise Boscaccy, bpf, linux-security-module, paul, kys, ast,
daniel, andrii
On Wed, Jun 11, 2025 at 3:18 PM James Bottomley
<James.Bottomley@hansenpartnership.com> wrote:
>
> On Wed, 2025-06-11 at 14:33 +0200, KP Singh wrote:
> > [...]
> > I have read and understood the code, there is no technical
> > misalignment.
> >
> > I am talking about a trusted user space loader. You seem to confuse
> > the trusted BPF loader program as userspace, no this is not
> > userspace, it runs in the kernel context.
>
> So your criticism isn't that it doesn't cover your use case from the
> signature point of view but that it didn't include a loader for it?
>
> The linked patch was a sketch of how to verify signatures not a full
It was a non-functional sketch that did not address much of the
feedback that was given; that's not how collaboration works.
> implementation. The pieces like what the loader looks like and which
> keyring gets used are implementation details which can be filled in
> later by combining the patch series with review and discussion. It's
> not a requirement that one person codes everyone's use case before they
> get theirs in, it's usually a collaborative effort ... I mean, why
Yeah, it's surely a collaborative effort, but the collaboration has
been aggressive and tied to a specific implementation (at least from
some folks). Rather than working with the feedback received, it has
turned into accusations of mandating and forcing. If the intent is to really
collaborate, let's land this base implementation and discuss further.
I am not willing to add additional stuff into this base
implementation.
> would you want Microsoft coding up the loader? If they don't have a
> use case for it they don't have much incentive to test it thoroughly
> whereas you do.
It seems that your incentives are purely aligned with Microsoft and
not those of the BPF community at large (this is also visible from the
patches and the engagement). FWIW, there is no urgency for my employer
to have signed BPF programs, yet I am working on this purely to help
you and the community.
- KP
>
> Regards,
>
> James
>
^ permalink raw reply [flat|nested] 79+ messages in thread
* Re: [PATCH 10/12] libbpf: Embed and verify the metadata hash in the loader
2025-06-11 13:41 ` KP Singh
@ 2025-06-11 14:43 ` James Bottomley
2025-06-11 14:45 ` KP Singh
0 siblings, 1 reply; 79+ messages in thread
From: James Bottomley @ 2025-06-11 14:43 UTC (permalink / raw)
To: KP Singh
Cc: Blaise Boscaccy, bpf, linux-security-module, paul, kys, ast,
daniel, andrii
On Wed, 2025-06-11 at 15:41 +0200, KP Singh wrote:
> On Wed, Jun 11, 2025 at 3:18 PM James Bottomley
> <James.Bottomley@hansenpartnership.com> wrote:
> >
> > On Wed, 2025-06-11 at 14:33 +0200, KP Singh wrote:
> > > [...]
> > > I have read and understood the code, there is no technical
> > > misalignment.
> > >
> > > I am talking about a trusted user space loader. You seem to
> > > confuse the trusted BPF loader program as userspace, no this is
> > > not userspace, it runs in the kernel context.
> >
> > So your criticism isn't that it doesn't cover your use case from
> > the signature point of view but that it didn't include a loader for
> > it?
> >
> > The linked patch was a sketch of how to verify signatures not a
> > full
>
> It was a non functional sketch that did not address much of the
> feedback that was given, that's not how collaboration works.
It was somewhat functional for the security use case but could be
extended for yours; showing that it provably allowed both was the point of
the sketch. The feedback it addressed was your desire for a signed trusted
loader.
> > implementation. The pieces like what the loader looks like and
> > which keyring gets used are implementation details which can be
> > filled in later by combining the patch series with review and
> > discussion. It's not a requirement that one person codes
> > everyone's use case before they get theirs in, it's usually a
> > collaborative effort ... I mean, why
>
> Yeah, it's surely a collaborative effort, but the collaboration has
> been aggressive and tied to a specific implementation (at least from
> some folks). Rather than working with the feedback received it has
> been accusational of mandating and forcing.
I don't see how that squares with producing a sketch that supports your
use case ... clearly feedback has been incorporated.
> If the intent is to really collaborate, let's land this base
> implementation and discuss further. I am not willing to add
> additional stuff into this base implementation.
Just so I'm clear: your definition of collaboration means Blaise takes
feedback from you at all times but you don't take it from him until
you've got your use case upstream? This might be OK if the two use
cases were thousands of lines apart, but, as I've said before, it seems
to be less than a hundred lines which doesn't seem to be a huge
integration burden given the size of the patch set you're trying to
land. I'm sure Blaise would be willing to produce the patch set that
adds the incremental over what you've published here to demonstrate its
smallness.
> > would you want Microsoft coding up the loader? If they don't have
> > a use case for it they don't have much incentive to test it
> > thoroughly whereas you do.
>
> It seems that your incentives are purely aligned with Microsoft and
> not that of the BPF community at large (this is also visible from the
> patches and the engagement).
No, as has been stated many times before, there are other companies
besides Microsoft that want supply-chain integrity for BPF code, which
the data block hash chaining you proposed but didn't implement provides
perfectly.
I shouldn't have to remind people that open source is about scratching
your own itch and thus you can determine a company's investments in
open source by its goals. I've even given a few talks about this:
https://archive.fosdem.org/2020/schedule/event/selfish_contributor/
As a Linux community we're usually good at creaming off additional
things around the edges, such as integrating the two approaches, which
I'm sure Microsoft will be happy to invest Blaise's time on, but I'm
equally sure they won't invest huge amounts in testing the trusted
loader until they have a use case for it.
> FWIW, There is no urgency for my employer to have signed BPF
> programs, yet I am working on this purely to help you and the
> community.
From what I heard Google is using signed BPF internally but has no
urgency to get it upstream. However, in my experience Google always
has a lack of upstream urgency until they run into a backporting
problem, so I'm sure they'll give you credit for avoiding potential
future backport issues.
Regards,
James
^ permalink raw reply [flat|nested] 79+ messages in thread
* Re: [PATCH 10/12] libbpf: Embed and verify the metadata hash in the loader
2025-06-11 14:43 ` James Bottomley
@ 2025-06-11 14:45 ` KP Singh
0 siblings, 0 replies; 79+ messages in thread
From: KP Singh @ 2025-06-11 14:45 UTC (permalink / raw)
To: James Bottomley
Cc: Blaise Boscaccy, bpf, linux-security-module, paul, kys, ast,
daniel, andrii
On Wed, Jun 11, 2025 at 4:43 PM James Bottomley
<James.Bottomley@hansenpartnership.com> wrote:
>
> On Wed, 2025-06-11 at 15:41 +0200, KP Singh wrote:
> > On Wed, Jun 11, 2025 at 3:18 PM James Bottomley
> > <James.Bottomley@hansenpartnership.com> wrote:
> > >
> > > On Wed, 2025-06-11 at 14:33 +0200, KP Singh wrote:
> > > > [...]
> > > > I have read and understood the code, there is no technical
> > > > misalignment.
> > > >
> > > > I am talking about a trusted user space loader. You seem to
> > > > confuse the trusted BPF loader program as userspace, no this is
> > > > not userspace, it runs in the kernel context.
> > >
> > > So your criticism isn't that it doesn't cover your use case from
> > > the signature point of view but that it didn't include a loader for
> > > it?
> > >
> > > The linked patch was a sketch of how to verify signatures not a
> > > full
> >
> > It was a non functional sketch that did not address much of the
> > feedback that was given, that's not how collaboration works.
>
> It was somewhat functional for the security use case but could be
> extended for yours and provably allowed both was the point of the
> sketch. The feedback it addressed was your desire for a signed trusted
> loader.
>
> > > implementation. The pieces like what the loader looks like and
> > > which keyring gets used are implementation details which can be
> > > filled in later by combining the patch series with review and
> > > discussion. It's not a requirement that one person codes
> > > everyone's use case before they get theirs in, it's usually a
> > > collaborative effort ... I mean, why
> >
> > Yeah, it's surely a collaborative effort, but the collaboration has
> > been aggressive and tied to a specific implementation (at least from
> > some folks). Rather than working with the feedback received it has
> > been accusational of mandating and forcing.
>
> I don't see how that squares with producing a sketch that supports your
> use case ... clearly feedback has been incorporated.
>
> > If the intent is to really collaborate, let's land this base
> > implementation and discuss further. I am not willing to add
> > additional stuff into this base implementation.
>
> Just so I'm clear: your definition of collaboration means Blaise takes
> feedback from you at all times but you don't take it from him until
> you've got your use case upstream? This might be OK if the two use
> cases were thousands of lines apart, but, as I've said before, it seems
> to be less than a hundred lines which doesn't seem to be a huge
> integration burden given the size of the patch set you're trying to
> land. I'm sure Blaise would be willing to produce the patch set that
> adds the incremental over what you've published here to demonstrate its
> smallness.
>
> > > would you want Microsoft coding up the loader? If they don't have
> > > a use case for it they don't have much incentive to test it
> > > thoroughly whereas you do.
> >
> > It seems that your incentives are purely aligned with Microsoft and
> > not that of the BPF community at large (this is also visible from the
> > patches and the engagement).
>
> No, as has been stated many times before there are other companies than
> Microsoft who want supply chain integrity for BPF code which the data
> block hash chaining you proposed but didn't implement does perfectly.
> I shouldn't have to remind people that open source is about scratching
> your own itch and thus you can determine a company's investments in
> open source by its goals. I've even given a few talks about this:
>
> https://archive.fosdem.org/2020/schedule/event/selfish_contributor/
>
> As a Linux community we're usually good at creaming off additional
> things around the edges, such as integrating the two approaches, which
> I'm sure Microsoft will be happy to invest Blaise's time on, but I'm
> equally sure they won't invest huge amounts in testing the trusted
> loader until they have a use case for it.
>
> > FWIW, There is no urgency for my employer to have signed BPF
> > programs, yet I am working on this purely to help you and the
> > community.
>
> From what I heard Google is using signed BPF internally but has no
> urgency to get it upstream. However, in my experience Google always
This is incorrect.
> has a lack of upstream urgency until they run into a backporting
> problem, so I'm sure they'll give you credit for avoiding potential
Also not correct, please stop assuming things.
> future backport issues.
>
> Regards,
>
> James
>
^ permalink raw reply [flat|nested] 79+ messages in thread
* Re: [PATCH 10/12] libbpf: Embed and verify the metadata hash in the loader
2025-06-10 18:15 ` Blaise Boscaccy
2025-06-10 19:47 ` KP Singh
@ 2025-06-10 20:56 ` KP Singh
1 sibling, 0 replies; 79+ messages in thread
From: KP Singh @ 2025-06-10 20:56 UTC (permalink / raw)
To: Blaise Boscaccy
Cc: bpf, linux-security-module, paul, kys, ast, daniel, andrii
On Tue, Jun 10, 2025 at 8:16 PM Blaise Boscaccy
<bboscaccy@linux.microsoft.com> wrote:
>
> KP Singh <kpsingh@kernel.org> writes:
>
> [...]
>
> >>
> >> The above code gets generated per-program and exists out-of-tree in a
> >> very unreadable format in its final form. I have general objections to
> >> being forced to "trust" out-of-tree code, when it's demonstrably trivial
> >
> > This is not out of tree. It's very much within the kernel tree.
>
> No, it's not.
>
> Running something like
>
> bpftool gen skeleton -S -k <private_key> -i <identity_cert>
> fentry_test.bpf.o
>
> will yield a header file fentry_test.h or whatever. That header file
> contains a customized and one-off version of the templated code in this
> patch. That header file and the resultant loader it gets compiled into
> exist out-of-tree.
Please read the cover letter and the patches: bpf_object__gen_loader
generates the loader program that you sign. This is not in bpftool but
in libbpf, which is core to using BPF, and it is not your only option.
Here are the many options this gives for the various use-cases the
community has, and these are also available to you:
* You can use bpftool
* You can choose to not use bpftool, avoid this generated header file,
and use libbpf directly, i.e. bpf_object__gen_loader (a rough sketch of
this follows below).
* You can choose to not use bpf_object__gen_loader and write your own
loader program that has more logging around the check and calls audit
(which we can expose via BPF kfuncs).
* You can choose to have a trusted user-space loader that can load
unsigned BPF programs.
* You can choose to have a trusted loader that uses a derived
credential to sign the BPF program after the instruction buffer is
stable (then you don't need the loader meta program).
You can also choose to continue arguing for your specific
implementation without providing any constructive collaboration. But
that won't help anyone.
- KP
>
> >
> >> to perform this check in-kernel, without impeding any of the other
> >> stated use cases. There is no possible audit log nor LSM hook for these
> >> operations. There is no way to know that this check was ever performed.
> >>
> >> Further, this check ends up happening in an entirely different syscall,
> >> the LSM layer and the end user may both see invalid programs successfully
> >> being loaded into the kernel, that may fail mysteriously later.
> >>
> >> Also, this patch seems to rely on hacking into struct internals and
> >> magic binary layouts.
> >
> > These magical binary layouts are BPF programs, as I mentioned, if you
> > don't like this you (i.e an advanced user like Microsoft) can
> > implement your own trusted loader in whatever format you like. We are
> > not forcing you.
> >
> > If you really want to do it in the kernel, you can do it out of tree
> > and maintain these patches (that's what "out of tree" actually means),
> > this is not a direction the BPF maintainers are interested in as it
> > does not meet the broader community's use-cases. We don’t want an
> > unnecessary extension to the UAPI when some BPF programs do have
> > stable instructions already (e.g. network) and some that can
> > potentially have someday.
> >
>
> Yes, you are forcing us. Saying we are only allowed to use "trusted"
> loaders, and that no one is allowed to have any in-kernel, in-tree code
> that inspects user inputs or target programs directly is very
> non-consensual on my end. This is a design mandate, being forced upon
> other people, by you, with no concrete reasons, other than vague statements
> around UAPI design, need or necessity.
>
> -blaise
>
> > RE The struct internals will be replaced by calling BPF_OBJ_GET_INFO
> > directly from the loader program as I mentioned in the commit.”
> >
> >
> > - KP
> >
> >
> >>
> >> -blaise
> >>
> >> > void bpf_gen__record_attach_target(struct bpf_gen *gen, const char *attach_name,
> >> > enum bpf_attach_type type)
> >> > {
> >> > diff --git a/tools/lib/bpf/libbpf.h b/tools/lib/bpf/libbpf.h
> >> > index b6ee9870523a..084372fa54f4 100644
> >> > --- a/tools/lib/bpf/libbpf.h
> >> > +++ b/tools/lib/bpf/libbpf.h
> >> > @@ -1803,9 +1803,10 @@ struct gen_loader_opts {
> >> > const char *insns;
> >> > __u32 data_sz;
> >> > __u32 insns_sz;
> >> > + bool gen_hash;
> >> > };
> >> >
> >> > -#define gen_loader_opts__last_field insns_sz
> >> > +#define gen_loader_opts__last_field gen_hash
> >> > LIBBPF_API int bpf_object__gen_loader(struct bpf_object *obj,
> >> > struct gen_loader_opts *opts);
> >> >
> >> > --
> >> > 2.43.0
^ permalink raw reply [flat|nested] 79+ messages in thread
* Re: [PATCH 10/12] libbpf: Embed and verify the metadata hash in the loader
2025-06-06 23:29 ` [PATCH 10/12] libbpf: Embed and verify the metadata hash in the loader KP Singh
2025-06-10 0:08 ` Alexei Starovoitov
2025-06-10 16:51 ` Blaise Boscaccy
@ 2025-06-12 22:56 ` Andrii Nakryiko
2 siblings, 0 replies; 79+ messages in thread
From: Andrii Nakryiko @ 2025-06-12 22:56 UTC (permalink / raw)
To: KP Singh
Cc: bpf, linux-security-module, bboscaccy, paul, kys, ast, daniel,
andrii
On Fri, Jun 6, 2025 at 4:29 PM KP Singh <kpsingh@kernel.org> wrote:
>
> To fulfill the BPF signing contract, represented as Sig(I_loader ||
> H_meta), the generated trusted loader program must verify the integrity
> of the metadata. This signature cryptographically binds the loader's
> instructions (I_loader) to a hash of the metadata (H_meta).
>
> The verification process is embedded directly into the loader program.
> Upon execution, the loader loads the runtime hash from struct bpf_map
> i.e. BPF_PSEUDO_MAP_IDX and compares this runtime hash against an
> expected hash value that has been hardcoded directly by
> bpf_obj__gen_loader.
>
> The load from bpf_map can be improved by calling
> BPF_OBJ_GET_INFO_BY_FD from the kernel context after BPF_OBJ_GET_INFO_BY_FD
> has been updated for being called from the kernel context.
>
> The following instructions are generated:
>
> ld_imm64 r1, const_ptr_to_map // insn[0].src_reg == BPF_PSEUDO_MAP_IDX
> r2 = *(u64 *)(r1 + 0);
> ld_imm64 r3, sha256_of_map_part1 // constant precomputed by
> bpftool (part of H_meta)
> if r2 != r3 goto out;
>
> r2 = *(u64 *)(r1 + 8);
> ld_imm64 r3, sha256_of_map_part2 // (part of H_meta)
> if r2 != r3 goto out;
>
> r2 = *(u64 *)(r1 + 16);
> ld_imm64 r3, sha256_of_map_part3 // (part of H_meta)
> if r2 != r3 goto out;
>
> r2 = *(u64 *)(r1 + 24);
> ld_imm64 r3, sha256_of_map_part4 // (part of H_meta)
> if r2 != r3 goto out;
> ...
>
> Signed-off-by: KP Singh <kpsingh@kernel.org>
> ---
> tools/lib/bpf/bpf_gen_internal.h | 2 ++
> tools/lib/bpf/gen_loader.c | 52 ++++++++++++++++++++++++++++++++
> tools/lib/bpf/libbpf.h | 3 +-
> 3 files changed, 56 insertions(+), 1 deletion(-)
>
> diff --git a/tools/lib/bpf/bpf_gen_internal.h b/tools/lib/bpf/bpf_gen_internal.h
> index 6ff963a491d9..49af4260b8e6 100644
> --- a/tools/lib/bpf/bpf_gen_internal.h
> +++ b/tools/lib/bpf/bpf_gen_internal.h
> @@ -4,6 +4,7 @@
> #define __BPF_GEN_INTERNAL_H
>
> #include "bpf.h"
> +#include "libbpf_internal.h"
>
> struct ksym_relo_desc {
> const char *name;
> @@ -50,6 +51,7 @@ struct bpf_gen {
> __u32 nr_ksyms;
> int fd_array;
> int nr_fd_array;
> + int hash_insn_offset[SHA256_DWORD_SIZE];
> };
>
> void bpf_gen__init(struct bpf_gen *gen, int log_level, int nr_progs, int nr_maps);
> diff --git a/tools/lib/bpf/gen_loader.c b/tools/lib/bpf/gen_loader.c
> index 113ae4abd345..3d672c09e948 100644
> --- a/tools/lib/bpf/gen_loader.c
> +++ b/tools/lib/bpf/gen_loader.c
> @@ -110,6 +110,7 @@ static void emit2(struct bpf_gen *gen, struct bpf_insn insn1, struct bpf_insn in
>
> static int add_data(struct bpf_gen *gen, const void *data, __u32 size);
> static void emit_sys_close_blob(struct bpf_gen *gen, int blob_off);
> +static void bpf_gen__signature_match(struct bpf_gen *gen);
>
> void bpf_gen__init(struct bpf_gen *gen, int log_level, int nr_progs, int nr_maps)
> {
> @@ -152,6 +153,8 @@ void bpf_gen__init(struct bpf_gen *gen, int log_level, int nr_progs, int nr_maps
> /* R7 contains the error code from sys_bpf. Copy it into R0 and exit. */
> emit(gen, BPF_MOV64_REG(BPF_REG_0, BPF_REG_7));
> emit(gen, BPF_EXIT_INSN());
> + if (gen->opts->gen_hash)
use OPTS_GET instead of directly accessing a field that might not be there?
> + bpf_gen__signature_match(gen);
> }
>
> static int add_data(struct bpf_gen *gen, const void *data, __u32 size)
> @@ -368,6 +371,25 @@ static void emit_sys_close_blob(struct bpf_gen *gen, int blob_off)
> __emit_sys_close(gen);
> }
>
> +static int compute_sha_udpate_offsets(struct bpf_gen *gen)
> +{
> + __u64 sha[SHA256_DWORD_SIZE];
> + int i, err;
> +
> + err = libbpf_sha256(gen->data_start, gen->data_cur - gen->data_start, sha);
> + if (err < 0) {
> + pr_warn("sha256 computation of the metadata failed");
> + return err;
> + }
> + for (i = 0; i < SHA256_DWORD_SIZE; i++) {
> + struct bpf_insn *insn =
> + (struct bpf_insn *)(gen->insn_start + gen->hash_insn_offset[i]);
> + insn[0].imm = (__u32)sha[i];
> + insn[1].imm = sha[i] >> 32;
> + }
> + return 0;
> +}
> +
> int bpf_gen__finish(struct bpf_gen *gen, int nr_progs, int nr_maps)
> {
> int i;
> @@ -394,6 +416,12 @@ int bpf_gen__finish(struct bpf_gen *gen, int nr_progs, int nr_maps)
> blob_fd_array_off(gen, i));
> emit(gen, BPF_MOV64_IMM(BPF_REG_0, 0));
> emit(gen, BPF_EXIT_INSN());
> + if (gen->opts->gen_hash) {
ditto, OPTS_GET
> + gen->error = compute_sha_udpate_offsets(gen);
> + if (gen->error)
> + return gen->error;
> + }
> +
> pr_debug("gen: finish %s\n", errstr(gen->error));
> if (!gen->error) {
> struct gen_loader_opts *opts = gen->opts;
> @@ -557,6 +585,30 @@ void bpf_gen__map_create(struct bpf_gen *gen,
> emit_sys_close_stack(gen, stack_off(inner_map_fd));
> }
>
> +static void bpf_gen__signature_match(struct bpf_gen *gen)
> +{
> + __s64 off = -(gen->insn_cur - gen->insn_start - gen->cleanup_label) / 8 - 1;
> + int i;
> +
> + for (i = 0; i < SHA256_DWORD_SIZE; i++) {
> + emit2(gen, BPF_LD_IMM64_RAW_FULL(BPF_REG_1, BPF_PSEUDO_MAP_IDX,
> + 0, 0, 0, 0));
> + emit(gen, BPF_LDX_MEM(BPF_DW, BPF_REG_2, BPF_REG_1, i * sizeof(__u64)));
> + gen->hash_insn_offset[i] = gen->insn_cur - gen->insn_start;
> + emit2(gen,
> + BPF_LD_IMM64_RAW_FULL(BPF_REG_3, 0, 0, 0, 0, 0));
nit: doesn't fit on a single line? same below
> +
> + if (is_simm16(off)) {
> + emit(gen, BPF_MOV64_IMM(BPF_REG_7, -EINVAL));
> + emit(gen,
> + BPF_JMP_REG(BPF_JNE, BPF_REG_2, BPF_REG_3, off));
> + } else {
> + gen->error = -ERANGE;
> + emit(gen, BPF_JMP_IMM(BPF_JA, 0, 0, -1));
> + }
> + }
> +}
> +
> void bpf_gen__record_attach_target(struct bpf_gen *gen, const char *attach_name,
> enum bpf_attach_type type)
> {
> diff --git a/tools/lib/bpf/libbpf.h b/tools/lib/bpf/libbpf.h
> index b6ee9870523a..084372fa54f4 100644
> --- a/tools/lib/bpf/libbpf.h
> +++ b/tools/lib/bpf/libbpf.h
> @@ -1803,9 +1803,10 @@ struct gen_loader_opts {
> const char *insns;
> __u32 data_sz;
> __u32 insns_sz;
> + bool gen_hash;
> };
>
> -#define gen_loader_opts__last_field insns_sz
> +#define gen_loader_opts__last_field gen_hash
> LIBBPF_API int bpf_object__gen_loader(struct bpf_object *obj,
> struct gen_loader_opts *opts);
>
> --
> 2.43.0
>
^ permalink raw reply [flat|nested] 79+ messages in thread
* [PATCH 11/12] bpftool: Add support for signing BPF programs
2025-06-06 23:29 [PATCH 00/12] Signed BPF programs KP Singh
` (9 preceding siblings ...)
2025-06-06 23:29 ` [PATCH 10/12] libbpf: Embed and verify the metadata hash in the loader KP Singh
@ 2025-06-06 23:29 ` KP Singh
2025-06-08 14:03 ` James Bottomley
2025-06-06 23:29 ` [PATCH 12/12] selftests/bpf: Enable signature verification for all lskel tests KP Singh
` (2 subsequent siblings)
13 siblings, 1 reply; 79+ messages in thread
From: KP Singh @ 2025-06-06 23:29 UTC (permalink / raw)
To: bpf, linux-security-module
Cc: bboscaccy, paul, kys, ast, daniel, andrii, KP Singh
Add two modes of operation:
* For prog load, allow signing a program immediately before loading. This
is essential for command-line testing and administration.
bpftool prog load -S -k <private_key> -i <identity_cert> fentry_test.bpf.o
* For gen skeleton, embed a pre-generated signature into the C skeleton
file. This supports the use of signed programs in compiled applications.
bpftool gen skeleton -S -k <private_key> -i <identity_cert> fentry_test.bpf.o
Generation of the loader program and its metadata map is implemented in
libbpf (bpf_object__gen_loader). bpftool generates a skeleton that loads
the program and automates the required steps: freezing the map, creating
an exclusive map, loading, and running. Users can use standard libbpf
APIs directly or integrate loader program generation into their own
toolchains.
Signed-off-by: KP Singh <kpsingh@kernel.org>
---
.../bpf/bpftool/Documentation/bpftool-gen.rst | 12 +
.../bpftool/Documentation/bpftool-prog.rst | 12 +
tools/bpf/bpftool/Makefile | 6 +-
tools/bpf/bpftool/cgroup.c | 5 +-
tools/bpf/bpftool/gen.c | 58 ++++-
tools/bpf/bpftool/main.c | 21 +-
tools/bpf/bpftool/main.h | 11 +
tools/bpf/bpftool/prog.c | 25 +++
tools/bpf/bpftool/sign.c | 211 ++++++++++++++++++
9 files changed, 352 insertions(+), 9 deletions(-)
create mode 100644 tools/bpf/bpftool/sign.c
diff --git a/tools/bpf/bpftool/Documentation/bpftool-gen.rst b/tools/bpf/bpftool/Documentation/bpftool-gen.rst
index ca860fd97d8d..2997313003b1 100644
--- a/tools/bpf/bpftool/Documentation/bpftool-gen.rst
+++ b/tools/bpf/bpftool/Documentation/bpftool-gen.rst
@@ -185,6 +185,18 @@ OPTIONS
For skeletons, generate a "light" skeleton (also known as "loader"
skeleton). A light skeleton contains a loader eBPF program. It does not use
the majority of the libbpf infrastructure, and does not need libelf.
+-S, --sign
+ For skeletons, generate a signed skeleton. This option must be used with
+ **-k** and **-i**. Using this flag implicitly enables **--use-loader**.
+ See the "Signed Skeletons" section in the description of the
+ **gen skeleton** command for more details.
+
+-k <private_key.pem>
+ Path to the private key file in PEM format, required for signing.
+
+-i <certificate.x509>
+ Path to the X.509 certificate file in PEM or DER format, required for
+ signing.
EXAMPLES
========
diff --git a/tools/bpf/bpftool/Documentation/bpftool-prog.rst b/tools/bpf/bpftool/Documentation/bpftool-prog.rst
index d6304e01afe0..dbfc7a496569 100644
--- a/tools/bpf/bpftool/Documentation/bpftool-prog.rst
+++ b/tools/bpf/bpftool/Documentation/bpftool-prog.rst
@@ -235,6 +235,18 @@ OPTIONS
creating the maps, and loading the programs (see **bpftool prog tracelog**
as a way to dump those messages).
+-S, --sign
+ Enable signing of the BPF program before loading. This option must be
+ used with **-k** and **-i**. Using this flag implicitly enables
+ **--use-loader**.
+
+-k <private_key.pem>
+ Path to the private key file in PEM format, required when signing.
+
+-i <certificate.x509>
+ Path to the X.509 certificate file in PEM or DER format, required when
+ signing.
+
EXAMPLES
========
**# bpftool prog show**
diff --git a/tools/bpf/bpftool/Makefile b/tools/bpf/bpftool/Makefile
index 9e9a5f006cd2..586d1b2595d1 100644
--- a/tools/bpf/bpftool/Makefile
+++ b/tools/bpf/bpftool/Makefile
@@ -130,8 +130,8 @@ include $(FEATURES_DUMP)
endif
endif
-LIBS = $(LIBBPF) -lelf -lz
-LIBS_BOOTSTRAP = $(LIBBPF_BOOTSTRAP) -lelf -lz
+LIBS = $(LIBBPF) -lelf -lz -lcrypto
+LIBS_BOOTSTRAP = $(LIBBPF_BOOTSTRAP) -lelf -lz -lcrypto
ifeq ($(feature-libelf-zstd),1)
LIBS += -lzstd
@@ -194,7 +194,7 @@ endif
BPFTOOL_BOOTSTRAP := $(BOOTSTRAP_OUTPUT)bpftool
-BOOTSTRAP_OBJS = $(addprefix $(BOOTSTRAP_OUTPUT),main.o common.o json_writer.o gen.o btf.o)
+BOOTSTRAP_OBJS = $(addprefix $(BOOTSTRAP_OUTPUT),main.o common.o json_writer.o gen.o btf.o sign.o)
$(BOOTSTRAP_OBJS): $(LIBBPF_BOOTSTRAP)
OBJS = $(patsubst %.c,$(OUTPUT)%.o,$(SRCS)) $(OUTPUT)disasm.o
diff --git a/tools/bpf/bpftool/cgroup.c b/tools/bpf/bpftool/cgroup.c
index 944ebe21a216..90c9aa297806 100644
--- a/tools/bpf/bpftool/cgroup.c
+++ b/tools/bpf/bpftool/cgroup.c
@@ -1,7 +1,10 @@
// SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
// Copyright (C) 2017 Facebook
// Author: Roman Gushchin <guro@fb.com>
-
+#undef GCC_VERSION
+#ifndef _GNU_SOURCE
+#define _GNU_SOURCE
+#endif
#define _XOPEN_SOURCE 500
#include <errno.h>
#include <fcntl.h>
diff --git a/tools/bpf/bpftool/gen.c b/tools/bpf/bpftool/gen.c
index 67a60114368f..f4a211daa729 100644
--- a/tools/bpf/bpftool/gen.c
+++ b/tools/bpf/bpftool/gen.c
@@ -688,10 +688,17 @@ static void codegen_destroy(struct bpf_object *obj, const char *obj_name)
static int gen_trace(struct bpf_object *obj, const char *obj_name, const char *header_guard)
{
DECLARE_LIBBPF_OPTS(gen_loader_opts, opts);
+ struct bpf_load_and_run_opts sopts = {};
+ char sig_buf[MAX_SIG_SIZE];
+ __u8 prog_sha[SHA256_DIGEST_LENGTH];
struct bpf_map *map;
+
char ident[256];
int err = 0;
+ if (sign_progs)
+ opts.gen_hash = true;
+
err = bpf_object__gen_loader(obj, &opts);
if (err)
return err;
@@ -701,6 +708,7 @@ static int gen_trace(struct bpf_object *obj, const char *obj_name, const char *h
p_err("failed to load object file");
goto out;
}
+
/* If there was no error during load then gen_loader_opts
* are populated with the loader program.
*/
@@ -780,14 +788,56 @@ static int gen_trace(struct bpf_object *obj, const char *obj_name, const char *h
print_hex(opts.insns, opts.insns_sz);
codegen("\
\n\
- \"; \n\
- \n\
+ \";\n");
+
+ if (sign_progs) {
+ sopts.insns = opts.insns;
+ sopts.insns_sz = opts.insns_sz;
+ sopts.excl_prog_hash = prog_sha;
+ sopts.excl_prog_hash_sz = sizeof(prog_sha);
+ sopts.signature = sig_buf;
+ sopts.signature_sz = MAX_SIG_SIZE;
+ sopts.keyring_id = KEY_SPEC_SESSION_KEYRING;
+
+ err = bpftool_prog_sign(&sopts);
+ if (err < 0)
+ return err;
+
+ codegen("\
+ \n\
+ static const char opts_sig[] __attribute__((__aligned__(8))) = \"\\\n\
+ ");
+ print_hex((const void *)sig_buf, sopts.signature_sz);
+ codegen("\
+ \n\
+ \";\n");
+
+ codegen("\
+ \n\
+ static const char opts_excl_hash[] __attribute__((__aligned__(8))) = \"\\\n\
+ ");
+ print_hex((const void *)prog_sha, sizeof(prog_sha));
+ codegen("\
+ \n\
+ \";\n");
+
+ codegen("\
+ \n\
+ opts.signature = (void *)opts_sig; \n\
+ opts.signature_sz = sizeof(opts_sig) - 1; \n\
+ opts.excl_prog_hash = (void *)opts_excl_hash; \n\
+ opts.excl_prog_hash_sz = sizeof(opts_excl_hash) - 1; \n\
+ opts.keyring_id = KEY_SPEC_SESSION_KEYRING; \n\
+ ");
+ }
+
+ codegen("\
+ \n\
opts.ctx = (struct bpf_loader_ctx *)skel; \n\
opts.data_sz = sizeof(opts_data) - 1; \n\
opts.data = (void *)opts_data; \n\
- opts.insns_sz = sizeof(opts_insn) - 1; \n\
opts.insns = (void *)opts_insn; \n\
- \n\
+ opts.insns_sz = sizeof(opts_insn) - 1; \n\
err = bpf_load_and_run(&opts); \n\
if (err < 0) \n\
return err; \n\
diff --git a/tools/bpf/bpftool/main.c b/tools/bpf/bpftool/main.c
index cd5963cb6058..e69a8fa2c99b 100644
--- a/tools/bpf/bpftool/main.c
+++ b/tools/bpf/bpftool/main.c
@@ -33,6 +33,9 @@ bool relaxed_maps;
bool use_loader;
struct btf *base_btf;
struct hashmap *refs_table;
+bool sign_progs;
+const char *private_key_path;
+const char *cert_path;
static void __noreturn clean_and_exit(int i)
{
@@ -447,6 +450,7 @@ int main(int argc, char **argv)
{ "nomount", no_argument, NULL, 'n' },
{ "debug", no_argument, NULL, 'd' },
{ "use-loader", no_argument, NULL, 'L' },
+ { "sign", required_argument, NULL, 'S'},
{ "base-btf", required_argument, NULL, 'B' },
{ 0 }
};
@@ -473,7 +477,7 @@ int main(int argc, char **argv)
bin_name = "bpftool";
opterr = 0;
- while ((opt = getopt_long(argc, argv, "VhpjfLmndB:l",
+ while ((opt = getopt_long(argc, argv, "VhpjfLmndSi:k:B:l",
options, NULL)) >= 0) {
switch (opt) {
case 'V':
@@ -519,6 +523,16 @@ int main(int argc, char **argv)
case 'L':
use_loader = true;
break;
+ case 'S':
+ sign_progs = true;
+ use_loader = true;
+ break;
+ case 'k':
+ private_key_path = optarg;
+ break;
+ case 'i':
+ cert_path = optarg;
+ break;
default:
p_err("unrecognized option '%s'", argv[optind - 1]);
if (json_output)
@@ -536,6 +550,11 @@ int main(int argc, char **argv)
if (version_requested)
return do_version(argc, argv);
+ if (sign_progs && (private_key_path == NULL || cert_path == NULL)) {
+ p_err("-i <identity_x509_cert> and -k <private> key must be supplied with -S for signing");
+ return -EINVAL;
+ }
+
ret = cmd_select(commands, argc, argv, do_help);
if (json_output)
diff --git a/tools/bpf/bpftool/main.h b/tools/bpf/bpftool/main.h
index 9eb764fe4cc8..3832322ad7c3 100644
--- a/tools/bpf/bpftool/main.h
+++ b/tools/bpf/bpftool/main.h
@@ -6,9 +6,14 @@
/* BFD and kernel.h both define GCC_VERSION, differently */
#undef GCC_VERSION
+#ifndef _GNU_SOURCE
+#define _GNU_SOURCE
+#endif
#include <stdbool.h>
#include <stdio.h>
+#include <errno.h>
#include <stdlib.h>
+#include <bpf/skel_internal.h>
#include <linux/bpf.h>
#include <linux/compiler.h>
#include <linux/kernel.h>
@@ -51,6 +56,7 @@ static inline void *u64_to_ptr(__u64 ptr)
})
#define ERR_MAX_LEN 1024
+#define MAX_SIG_SIZE 4096
#define BPF_TAG_FMT "%02hhx%02hhx%02hhx%02hhx%02hhx%02hhx%02hhx%02hhx"
@@ -84,6 +90,9 @@ extern bool relaxed_maps;
extern bool use_loader;
extern struct btf *base_btf;
extern struct hashmap *refs_table;
+extern bool sign_progs;
+extern const char *private_key_path;
+extern const char *cert_path;
void __printf(1, 2) p_err(const char *fmt, ...);
void __printf(1, 2) p_info(const char *fmt, ...);
@@ -271,4 +280,6 @@ int pathname_concat(char *buf, int buf_sz, const char *path,
/* print netfilter bpf_link info */
void netfilter_dump_plain(const struct bpf_link_info *info);
void netfilter_dump_json(const struct bpf_link_info *info, json_writer_t *wtr);
+int bpftool_prog_sign(struct bpf_load_and_run_opts *opts);
+__u32 register_session_key(const char *key_der_path);
#endif
diff --git a/tools/bpf/bpftool/prog.c b/tools/bpf/bpftool/prog.c
index f010295350be..e1dbbca91e34 100644
--- a/tools/bpf/bpftool/prog.c
+++ b/tools/bpf/bpftool/prog.c
@@ -23,6 +23,7 @@
#include <linux/err.h>
#include <linux/perf_event.h>
#include <linux/sizes.h>
+#include <linux/keyctl.h>
#include <bpf/bpf.h>
#include <bpf/btf.h>
@@ -1875,6 +1876,8 @@ static int try_loader(struct gen_loader_opts *gen)
{
struct bpf_load_and_run_opts opts = {};
struct bpf_loader_ctx *ctx;
+ char sig_buf[MAX_SIG_SIZE];
+ __u8 prog_sha[SHA256_DIGEST_LENGTH];
int ctx_sz = sizeof(*ctx) + 64 * max(sizeof(struct bpf_map_desc),
sizeof(struct bpf_prog_desc));
int log_buf_sz = (1u << 24) - 1;
@@ -1898,6 +1901,24 @@ static int try_loader(struct gen_loader_opts *gen)
opts.insns = gen->insns;
opts.insns_sz = gen->insns_sz;
fds_before = count_open_fds();
+
+ if (sign_progs) {
+ opts.excl_prog_hash = prog_sha;
+ opts.excl_prog_hash_sz = sizeof(prog_sha);
+ opts.signature = sig_buf;
+ opts.signature_sz = MAX_SIG_SIZE;
+ opts.keyring_id = KEY_SPEC_SESSION_KEYRING;
+
+ err = bpftool_prog_sign(&opts);
+ if (err < 0)
+ return err;
+
+ err = register_session_key(cert_path);
+ if (err < 0) {
+ p_err("failed to add session key");
+ goto out;
+ }
+ }
err = bpf_load_and_run(&opts);
fd_delta = count_open_fds() - fds_before;
if (err < 0 || verifier_logs) {
@@ -1906,6 +1927,7 @@ static int try_loader(struct gen_loader_opts *gen)
fprintf(stderr, "loader prog leaked %d FDs\n",
fd_delta);
}
+out:
free(log_buf);
return err;
}
@@ -1933,6 +1955,9 @@ static int do_loader(int argc, char **argv)
goto err_close_obj;
}
+ if (sign_progs)
+ gen.gen_hash = true;
+
err = bpf_object__gen_loader(obj, &gen);
if (err)
goto err_close_obj;
diff --git a/tools/bpf/bpftool/sign.c b/tools/bpf/bpftool/sign.c
new file mode 100644
index 000000000000..dde391da6b05
--- /dev/null
+++ b/tools/bpf/bpftool/sign.c
@@ -0,0 +1,211 @@
+// SPDX-License-Identifier: GPL-2.0
+
+/*
+ * Copyright (C) 2022 Google LLC.
+ */
+#define _GNU_SOURCE
+#include <stdio.h>
+#include <stdlib.h>
+#include <stdint.h>
+#include <stdbool.h>
+#include <string.h>
+#include <string.h>
+#include <getopt.h>
+#include <err.h>
+#include <openssl/opensslv.h>
+#include <openssl/bio.h>
+#include <openssl/evp.h>
+#include <openssl/pem.h>
+#include <openssl/err.h>
+#include <openssl/cms.h>
+#include <linux/keyctl.h>
+#include <errno.h>
+
+#include <bpf/skel_internal.h>
+
+#include "main.h"
+
+
+#define OPEN_SSL_ERR_BUF_LEN 256
+
+static void display_openssl_errors(int l)
+{
+ char buf[OPEN_SSL_ERR_BUF_LEN];
+ const char *file;
+ const char *data;
+ unsigned long e;
+ int flags;
+ int line;
+
+ while ((e = ERR_get_error_all(&file, &line, NULL, &data, &flags))) {
+ ERR_error_string_n(e, buf, sizeof(buf));
+ if (data && (flags & ERR_TXT_STRING)) {
+ p_err("OpenSSL %s: %s:%d: %s\n", buf, file, line, data);
+ } else {
+ p_err("OpenSSL %s: %s:%d\n", buf, file, line);
+ }
+ }
+}
+
+#define DISPLAY_OSSL_ERR(cond) \
+ do { \
+ bool __cond = (cond); \
+ if (__cond && ERR_peek_error()) \
+ display_openssl_errors(__LINE__);\
+ } while (0)
+
+static EVP_PKEY *read_private_key(const char *pkey_path)
+{
+ EVP_PKEY *private_key = NULL;
+ BIO *b;
+
+ b = BIO_new_file(pkey_path, "rb");
+ private_key = PEM_read_bio_PrivateKey(b, NULL, NULL, NULL);
+ BIO_free(b);
+ DISPLAY_OSSL_ERR(!private_key);
+ return private_key;
+}
+
+static X509 *read_x509(const char *x509_name)
+{
+ unsigned char buf[2];
+ X509 *x509 = NULL;
+ BIO *b;
+ int n;
+
+ b = BIO_new_file(x509_name, "rb");
+ if (!b)
+ goto cleanup;
+
+ /* Look at the first two bytes of the file to determine the encoding */
+ n = BIO_read(b, buf, 2);
+ if (n != 2)
+ goto cleanup;
+
+ if (BIO_reset(b) != 0)
+ goto cleanup;
+
+ if (buf[0] == 0x30 && buf[1] >= 0x81 && buf[1] <= 0x84)
+ /* Assume raw DER encoded X.509 */
+ x509 = d2i_X509_bio(b, NULL);
+ else
+ /* Assume PEM encoded X.509 */
+ x509 = PEM_read_bio_X509(b, NULL, NULL, NULL);
+
+cleanup:
+ BIO_free(b);
+ DISPLAY_OSSL_ERR(!x509);
+ return x509;
+}
+
+__u32 register_session_key(const char *key_der_path)
+{
+ unsigned char *der_buf = NULL;
+ X509 *x509 = NULL;
+ int key_id = -1;
+ int der_len;
+
+ if (!key_der_path)
+ return key_id;
+ x509 = read_x509(key_der_path);
+ if (!x509)
+ goto cleanup;
+ der_len = i2d_X509(x509, &der_buf);
+ if (der_len < 0)
+ goto cleanup;
+ key_id = syscall(__NR_add_key, "asymmetric", key_der_path, der_buf,
+ (size_t)der_len, KEY_SPEC_SESSION_KEYRING);
+cleanup:
+ X509_free(x509);
+ OPENSSL_free(der_buf);
+ DISPLAY_OSSL_ERR(key_id == -1);
+ return key_id;
+}
+
+int bpftool_prog_sign(struct bpf_load_and_run_opts *opts)
+{
+ BIO *bd_in = NULL, *bd_out = NULL;
+ EVP_PKEY *private_key = NULL;
+ CMS_ContentInfo *cms = NULL;
+ long actual_sig_len = 0;
+ X509 *x509 = NULL;
+ int err = 0;
+
+ bd_in = BIO_new_mem_buf(opts->insns, opts->insns_sz);
+ if (!bd_in) {
+ err = -ENOMEM;
+ goto cleanup;
+ }
+
+ private_key = read_private_key(private_key_path);
+ if (!private_key) {
+ err = -EINVAL;
+ goto cleanup;
+ }
+
+ x509 = read_x509(cert_path);
+ if (!x509) {
+ err = -EINVAL;
+ goto cleanup;
+ }
+
+ cms = CMS_sign(NULL, NULL, NULL, NULL,
+ CMS_NOCERTS | CMS_PARTIAL | CMS_BINARY | CMS_DETACHED |
+ CMS_STREAM);
+ if (!cms) {
+ err = -EINVAL;
+ goto cleanup;
+ }
+
+ if (!CMS_add1_signer(cms, x509, private_key, EVP_sha256(),
+ CMS_NOCERTS | CMS_BINARY | CMS_NOSMIMECAP |
+ CMS_USE_KEYID | CMS_NOATTR)) {
+ err = -EINVAL;
+ goto cleanup;
+ }
+
+ if (CMS_final(cms, bd_in, NULL, CMS_NOCERTS | CMS_BINARY) != 1) {
+ err = -EIO;
+ goto cleanup;
+ }
+
+ EVP_Digest(opts->insns, opts->insns_sz, opts->excl_prog_hash,
+ &opts->excl_prog_hash_sz, EVP_sha256(), NULL);
+
+ bd_out = BIO_new(BIO_s_mem());
+ if (!bd_out) {
+ err = -ENOMEM;
+ goto cleanup;
+ }
+
+ if (!i2d_CMS_bio_stream(bd_out, cms, NULL, 0)) {
+ err = -EIO;
+ goto cleanup;
+ }
+
+ actual_sig_len = BIO_get_mem_data(bd_out, NULL);
+ if (actual_sig_len <= 0) {
+ err = -EIO;
+ goto cleanup;
+ }
+
+ if ((size_t)actual_sig_len > opts->signature_sz) {
+ err = -ENOSPC;
+ goto cleanup;
+ }
+
+ if (BIO_read(bd_out, opts->signature, actual_sig_len) != actual_sig_len) {
+ err = -EIO;
+ goto cleanup;
+ }
+
+ opts->signature_sz = actual_sig_len;
+cleanup:
+ BIO_free(bd_out);
+ CMS_ContentInfo_free(cms);
+ X509_free(x509);
+ EVP_PKEY_free(private_key);
+ BIO_free(bd_in);
+ DISPLAY_OSSL_ERR(err < 0);
+ return err;
+}
--
2.43.0
^ permalink raw reply related [flat|nested] 79+ messages in thread
* Re: [PATCH 11/12] bpftool: Add support for signing BPF programs
2025-06-06 23:29 ` [PATCH 11/12] bpftool: Add support for signing BPF programs KP Singh
@ 2025-06-08 14:03 ` James Bottomley
2025-06-10 8:50 ` KP Singh
0 siblings, 1 reply; 79+ messages in thread
From: James Bottomley @ 2025-06-08 14:03 UTC (permalink / raw)
To: KP Singh, bpf, linux-security-module
Cc: bboscaccy, paul, kys, ast, daniel, andrii, keyrings
[+keyrings]
On Sat, 2025-06-07 at 01:29 +0200, KP Singh wrote:
[...]
> diff --git a/tools/bpf/bpftool/prog.c b/tools/bpf/bpftool/prog.c
> index f010295350be..e1dbbca91e34 100644
> --- a/tools/bpf/bpftool/prog.c
> +++ b/tools/bpf/bpftool/prog.c
> @@ -23,6 +23,7 @@
> #include <linux/err.h>
> #include <linux/perf_event.h>
> #include <linux/sizes.h>
> +#include <linux/keyctl.h>
>
> #include <bpf/bpf.h>
> #include <bpf/btf.h>
> @@ -1875,6 +1876,8 @@ static int try_loader(struct gen_loader_opts
> *gen)
> {
> struct bpf_load_and_run_opts opts = {};
> struct bpf_loader_ctx *ctx;
> + char sig_buf[MAX_SIG_SIZE];
> + __u8 prog_sha[SHA256_DIGEST_LENGTH];
> int ctx_sz = sizeof(*ctx) + 64 * max(sizeof(struct
> bpf_map_desc),
> sizeof(struct
> bpf_prog_desc));
> int log_buf_sz = (1u << 24) - 1;
> @@ -1898,6 +1901,24 @@ static int try_loader(struct gen_loader_opts
> *gen)
> opts.insns = gen->insns;
> opts.insns_sz = gen->insns_sz;
> fds_before = count_open_fds();
> +
> + if (sign_progs) {
> + opts.excl_prog_hash = prog_sha;
> + opts.excl_prog_hash_sz = sizeof(prog_sha);
> + opts.signature = sig_buf;
> + opts.signature_sz = MAX_SIG_SIZE;
> + opts.keyring_id = KEY_SPEC_SESSION_KEYRING;
> +
This looks wrong on a couple of levels. Firstly, if you want system
level integrity you can't search the session keyring because any
process can join (subject to keyring permissions) and the owner, who is
presumably the one inserting the bpf program, can add any key they
like.
The other problem with this scheme is that the keyring_id itself has no
checked integrity, which means that even if a script was marked as
system-keyring-only, anyone can binary-edit the user-space program to
change it to their preferred keyring and it will still work. If you
want variable keyrings, they should surely be part of the validated
policy.
Regards,
James
^ permalink raw reply [flat|nested] 79+ messages in thread
* Re: [PATCH 11/12] bpftool: Add support for signing BPF programs
2025-06-08 14:03 ` James Bottomley
@ 2025-06-10 8:50 ` KP Singh
2025-06-10 15:56 ` James Bottomley
2025-06-10 16:34 ` Blaise Boscaccy
0 siblings, 2 replies; 79+ messages in thread
From: KP Singh @ 2025-06-10 8:50 UTC (permalink / raw)
To: James Bottomley
Cc: bpf, linux-security-module, bboscaccy, paul, kys, ast, daniel,
andrii, keyrings
On Sun, Jun 8, 2025 at 4:03 PM James Bottomley
<James.Bottomley@hansenpartnership.com> wrote:
>
> [+keyrings]
> On Sat, 2025-06-07 at 01:29 +0200, KP Singh wrote:
> [...]
> > diff --git a/tools/bpf/bpftool/prog.c b/tools/bpf/bpftool/prog.c
> > index f010295350be..e1dbbca91e34 100644
> > --- a/tools/bpf/bpftool/prog.c
> > +++ b/tools/bpf/bpftool/prog.c
> > @@ -23,6 +23,7 @@
> > #include <linux/err.h>
> > #include <linux/perf_event.h>
> > #include <linux/sizes.h>
> > +#include <linux/keyctl.h>
> >
> > #include <bpf/bpf.h>
> > #include <bpf/btf.h>
> > @@ -1875,6 +1876,8 @@ static int try_loader(struct gen_loader_opts
> > *gen)
> > {
> > struct bpf_load_and_run_opts opts = {};
> > struct bpf_loader_ctx *ctx;
> > + char sig_buf[MAX_SIG_SIZE];
> > + __u8 prog_sha[SHA256_DIGEST_LENGTH];
> > int ctx_sz = sizeof(*ctx) + 64 * max(sizeof(struct
> > bpf_map_desc),
> > sizeof(struct
> > bpf_prog_desc));
> > int log_buf_sz = (1u << 24) - 1;
> > @@ -1898,6 +1901,24 @@ static int try_loader(struct gen_loader_opts
> > *gen)
> > opts.insns = gen->insns;
> > opts.insns_sz = gen->insns_sz;
> > fds_before = count_open_fds();
> > +
> > + if (sign_progs) {
> > + opts.excl_prog_hash = prog_sha;
> > + opts.excl_prog_hash_sz = sizeof(prog_sha);
> > + opts.signature = sig_buf;
> > + opts.signature_sz = MAX_SIG_SIZE;
> > + opts.keyring_id = KEY_SPEC_SESSION_KEYRING;
> > +
>
> This looks wrong on a couple of levels. Firstly, if you want system
> level integrity you can't search the session keyring because any
> process can join (subject to keyring permissions) and the owner, who is
> presumably the one inserting the bpf program, can add any key they
> like.
>
Wanting system-level integrity is a security policy question, so this
is something that needs to be implemented at the security layer; the
LSM can deny the keys / keyring IDs it doesn't trust. Session keyrings
are certainly useful for delegated signing of dynamically generated BPF
programs.
> The other problem with this scheme is that the keyring_id itself has no
> checked integrity, which means that even if a script was marked as
If an attacker can modify a binary that has permissions to load BPF
programs and update the keyring ID, then we have other issues. So this
does not work in isolation; signed BPF programs do not really make
sense without trusted execution.
> system keyring only anyone can binary edit the user space program to
> change it to their preferred keyring and it will still work. If you
> want variable keyrings, they should surely be part of the validated
> policy.
The policy is what I expect to be implemented in the LSM layer. A
variable keyring ID is a critical part of the UAPI to create different
"rings of trust" e.g. LSM can enforce that network programs can be
loaded with a derived key, and have a different keyring for
unprivileged BPF programs.
This patch implements the signing support, not the security policy for it.
- KP
>
> Regards,
>
> James
>
^ permalink raw reply [flat|nested] 79+ messages in thread
* Re: [PATCH 11/12] bpftool: Add support for signing BPF programs
2025-06-10 8:50 ` KP Singh
@ 2025-06-10 15:56 ` James Bottomley
2025-06-10 16:41 ` KP Singh
2025-06-10 16:34 ` Blaise Boscaccy
1 sibling, 1 reply; 79+ messages in thread
From: James Bottomley @ 2025-06-10 15:56 UTC (permalink / raw)
To: KP Singh
Cc: bpf, linux-security-module, bboscaccy, paul, kys, ast, daniel,
andrii, keyrings
On Tue, 2025-06-10 at 10:50 +0200, KP Singh wrote:
> On Sun, Jun 8, 2025 at 4:03 PM James Bottomley
> <James.Bottomley@hansenpartnership.com> wrote:
> >
> > [+keyrings]
> > On Sat, 2025-06-07 at 01:29 +0200, KP Singh wrote:
> > [...]
> > > diff --git a/tools/bpf/bpftool/prog.c b/tools/bpf/bpftool/prog.c
> > > index f010295350be..e1dbbca91e34 100644
> > > --- a/tools/bpf/bpftool/prog.c
> > > +++ b/tools/bpf/bpftool/prog.c
> > > @@ -23,6 +23,7 @@
> > > #include <linux/err.h>
> > > #include <linux/perf_event.h>
> > > #include <linux/sizes.h>
> > > +#include <linux/keyctl.h>
> > >
> > > #include <bpf/bpf.h>
> > > #include <bpf/btf.h>
> > > @@ -1875,6 +1876,8 @@ static int try_loader(struct
> > > gen_loader_opts
> > > *gen)
> > > {
> > > struct bpf_load_and_run_opts opts = {};
> > > struct bpf_loader_ctx *ctx;
> > > + char sig_buf[MAX_SIG_SIZE];
> > > + __u8 prog_sha[SHA256_DIGEST_LENGTH];
> > > int ctx_sz = sizeof(*ctx) + 64 * max(sizeof(struct
> > > bpf_map_desc),
> > > sizeof(struct
> > > bpf_prog_desc));
> > > int log_buf_sz = (1u << 24) - 1;
> > > @@ -1898,6 +1901,24 @@ static int try_loader(struct
> > > gen_loader_opts
> > > *gen)
> > > opts.insns = gen->insns;
> > > opts.insns_sz = gen->insns_sz;
> > > fds_before = count_open_fds();
> > > +
> > > + if (sign_progs) {
> > > + opts.excl_prog_hash = prog_sha;
> > > + opts.excl_prog_hash_sz = sizeof(prog_sha);
> > > + opts.signature = sig_buf;
> > > + opts.signature_sz = MAX_SIG_SIZE;
> > > + opts.keyring_id = KEY_SPEC_SESSION_KEYRING;
> > > +
> >
> > This looks wrong on a couple of levels. Firstly, if you want
> > system level integrity you can't search the session keyring because
> > any process can join (subject to keyring permissions) and the
> > owner, who is presumably the one inserting the bpf program, can add
> > any key they like.
> >
>
> Wanting system level integrity is a security policy question, so this
> is something that needs to be implemented at the security layer, the
> LSM can deny the keys / keyring IDs they don't trust. Session
> keyrings are for sure useful for delegated signing of BPF programs
> when dynamically generated.
The problem is you're hard coding it at light skeleton creation time.
Plus there doesn't seem to be any ability to use the system keyrings
anyway as the kernel code only looks up the user keyrings. Since
actual key ids are volatile handles which change from boot to boot (so
can't be stored in anything durable) this can only be used for keyring
specifiers, so it would also make sense to check this is actually a
specifier (system keyring specifiers are positive and user specifiers
negative, so it's easy to check for the range).
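Purely as an illustration of that range check (not code from the
series; a real check would additionally allow whatever positive system
keyring specifiers the series ends up defining):

#include <stdbool.h>
#include <linux/keyctl.h>

/* Accept only durable KEY_SPEC_* specifiers; raw key serial numbers
 * are volatile per-boot handles and should be rejected.
 */
static bool keyring_id_is_specifier(long id)
{
        /* the user/session specifiers are the small negative constants
         * KEY_SPEC_THREAD_KEYRING (-1) .. KEY_SPEC_REQUESTOR_KEYRING (-8)
         */
        return id <= KEY_SPEC_THREAD_KEYRING &&
               id >= KEY_SPEC_REQUESTOR_KEYRING;
}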
> > The other problem with this scheme is that the keyring_id itself
> > has no checked integrity, which means that even if a script was
> > marked as
>
> If an attacker can modify a binary that has permissions to load BPF
> programs and update the keyring ID then we have other issues.
It's a classic supply chain attack (someone modifies the light skeleton
between the creator and the consumer); even Google is claiming SLSA
guarantees, so you can't just wave it away as "other issues".
> So, this does not work in independence, signed BPF programs do not
> really make sense without trusted execution).
The other patch set provided this ability using signed hash chains, so
absolutely there are signed bpf programmes that can work absent a
trusted user execution environment. It may not be what you want for
your use case (which is why the other patch set allowed for both), but
there are lots of integrity use cases out there wanting precisely this.
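(For reference, the hash chaining in question boils down to roughly the
following; this is an illustrative sketch rather than code from either
patch set, and the per-block granularity is an assumption.)

#include <stddef.h>
#include <openssl/evp.h>

/* Extend a running SHA-256 chain: next = SHA256(prev_digest || block).
 * Only the final digest then needs to be covered by the signature.
 */
static int chain_extend(unsigned char digest[32],
                        const void *blk, size_t blk_len)
{
        EVP_MD_CTX *ctx = EVP_MD_CTX_new();
        unsigned int len = 0;
        int ok;

        if (!ctx)
                return -1;
        ok = EVP_DigestInit_ex(ctx, EVP_sha256(), NULL) &&
             EVP_DigestUpdate(ctx, digest, 32) &&
             EVP_DigestUpdate(ctx, blk, blk_len) &&
             EVP_DigestFinal_ex(ctx, digest, &len);
        EVP_MD_CTX_free(ctx);
        return ok ? 0 : -1;
}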
> > system keyring only anyone can binary edit the user space program
> > to change it to their preferred keyring and it will still work. If
> > you want variable keyrings, they should surely be part of the
> > validated policy.
>
> The policy is what I expect to be implemented in the LSM layer. A
> variable keyring ID is a critical part of the UAPI to create
> different "rings of trust" e.g. LSM can enforce that network programs
> can be loaded with a derived key, and have a different keyring for
> unprivileged BPF programs.
You can't really have it both ways: either the keyring is part of the
LSM supplied policy in which case it doesn't make much sense to have it
in the durable attributes (and the LSM would have to set it before the
signature is verified) or it's part of the durable attribute embedded
security information and should be integrity protected.
I suppose we could compromise and say it should not be part of the
light skeleton durable attributes but should be set (or supplied by
policy) at BPF_PROG_LOAD time.
I should also note that when other systems use derived keys in
different keyrings, they usually have a specific named trusted keyring
(like _ima and .ima) which has policy enforced rules for adding keys.
Regards,
James
> This patch implements the signing support, not the security policy
> for it.
>
> - KP
^ permalink raw reply [flat|nested] 79+ messages in thread
* Re: [PATCH 11/12] bpftool: Add support for signing BPF programs
2025-06-10 15:56 ` James Bottomley
@ 2025-06-10 16:41 ` KP Singh
0 siblings, 0 replies; 79+ messages in thread
From: KP Singh @ 2025-06-10 16:41 UTC (permalink / raw)
To: James Bottomley
Cc: bpf, linux-security-module, bboscaccy, paul, kys, ast, daniel,
andrii, keyrings
On Tue, Jun 10, 2025 at 5:56 PM James Bottomley
<James.Bottomley@hansenpartnership.com> wrote:
>
> On Tue, 2025-06-10 at 10:50 +0200, KP Singh wrote:
> > On Sun, Jun 8, 2025 at 4:03 PM James Bottomley
> > <James.Bottomley@hansenpartnership.com> wrote:
> > >
> > > [+keyrings]
> > > On Sat, 2025-06-07 at 01:29 +0200, KP Singh wrote:
> > > [...]
> > > > diff --git a/tools/bpf/bpftool/prog.c b/tools/bpf/bpftool/prog.c
> > > > index f010295350be..e1dbbca91e34 100644
> > > > --- a/tools/bpf/bpftool/prog.c
> > > > +++ b/tools/bpf/bpftool/prog.c
> > > > @@ -23,6 +23,7 @@
> > > > #include <linux/err.h>
> > > > #include <linux/perf_event.h>
> > > > #include <linux/sizes.h>
> > > > +#include <linux/keyctl.h>
> > > >
> > > > #include <bpf/bpf.h>
> > > > #include <bpf/btf.h>
> > > > @@ -1875,6 +1876,8 @@ static int try_loader(struct
> > > > gen_loader_opts
> > > > *gen)
> > > > {
> > > > struct bpf_load_and_run_opts opts = {};
> > > > struct bpf_loader_ctx *ctx;
> > > > + char sig_buf[MAX_SIG_SIZE];
> > > > + __u8 prog_sha[SHA256_DIGEST_LENGTH];
> > > > int ctx_sz = sizeof(*ctx) + 64 * max(sizeof(struct
> > > > bpf_map_desc),
> > > > sizeof(struct
> > > > bpf_prog_desc));
> > > > int log_buf_sz = (1u << 24) - 1;
> > > > @@ -1898,6 +1901,24 @@ static int try_loader(struct
> > > > gen_loader_opts
> > > > *gen)
> > > > opts.insns = gen->insns;
> > > > opts.insns_sz = gen->insns_sz;
> > > > fds_before = count_open_fds();
> > > > +
> > > > + if (sign_progs) {
> > > > + opts.excl_prog_hash = prog_sha;
> > > > + opts.excl_prog_hash_sz = sizeof(prog_sha);
> > > > + opts.signature = sig_buf;
> > > > + opts.signature_sz = MAX_SIG_SIZE;
> > > > + opts.keyring_id = KEY_SPEC_SESSION_KEYRING;
> > > > +
> > >
> > > This looks wrong on a couple of levels. Firstly, if you want
> > > system level integrity you can't search the session keyring because
> > > any process can join (subject to keyring permissions) and the
> > > owner, who is presumably the one inserting the bpf program, can add
> > > any key they like.
> > >
> >
> > Wanting system level integrity is a security policy question, so this
> > is something that needs to be implemented at the security layer, the
> > LSM can deny the keys / keyring IDs they don't trust. Session
> > keyrings are for sure useful for delegated signing of BPF programs
> > when dynamically generated.
>
> The problem is you're hard coding it at light skeleton creation time.
> Plus there doesn't seem to be any ability to use the system keyrings
> anyway as the kernel code only looks up the user keyrings. Since
> actual key ids are volatile handles which change from boot to boot (so
> can't be stored in anything durable) this can only be used for keyring
> specifiers, so it would also make sense to check this is actually a
> specifier (system keyring specifiers are positive and user specifiers
> negative, so it's easy to check for the range).
>
> > > The other problem with this scheme is that the keyring_id itself
> > > has no checked integrity, which means that even if a script was
> > > marked as
> >
> > If an attacker can modify a binary that has permissions to load BPF
> > programs and update the keyring ID then we have other issues.
>
> It's a classic supply chain attack (someone modifies the light skeleton
> between the creator and the consumer), even Google is claiming SLSA
> guarantees, so you can't just wave it away as "other issues".
>
> > So, this does not work in independence, signed BPF programs do not
> > really make sense without trusted execution).
>
> The other patch set provided this ability using signed hash chains, so
> absolutely there are signed bpf programmes that can work absent a
> trusted user execution environment. It may not be what you want for
> your use case (which is why the other patch set allowed for both), but
> there are lots of integrity use cases out there wanting precisely this.
>
> > > system keyring only anyone can binary edit the user space program
> > > to change it to their preferred keyring and it will still work. If
> > > you want variable keyrings, they should surely be part of the
> > > validated policy.
> >
> > The policy is what I expect to be implemented in the LSM layer. A
> > variable keyring ID is a critical part of the UAPI to create
> > different "rings of trust" e.g. LSM can enforce that network programs
> > can be loaded with a derived key, and have a different keyring for
> > unprivileged BPF programs.
>
> You can't really have it both ways: either the keyring is part of the
> LSM supplied policy in which case it doesn't make much sense to have it
> in the durable attributes (and the LSM would have to set it before the
> signature is verified) or it's part of the durable attribute embedded
> security information and should be integrity protected.
>
> I suppose we could compromise and say it should not be part of the
> light skeleton durable attributes but should be set (or supplied by
> policy) at BPF_PROG_LOAD time.
Sure, this is expected; I added a default value there, but this can be removed.
>
> I should also note that when other systems use derived keys in
> different keyrings, they usually have a specific named trusted keyring
> (like _ima and .ima) which has policy enforced rules for adding keys.
We can potentially add a bpf keyring, but in general we don't want
every binary on the machine to use this derived key, only the binary
that's trusted to either load unsigned programs or use a derived key,
for which the session keyring is more apt.
- KP
>
> Regards,
>
> James
>
>
> > This patch implements the signing support, not the security policy
> > for it.
> >
> > - KP
>
^ permalink raw reply [flat|nested] 79+ messages in thread
* Re: [PATCH 11/12] bpftool: Add support for signing BPF programs
2025-06-10 8:50 ` KP Singh
2025-06-10 15:56 ` James Bottomley
@ 2025-06-10 16:34 ` Blaise Boscaccy
1 sibling, 0 replies; 79+ messages in thread
From: Blaise Boscaccy @ 2025-06-10 16:34 UTC (permalink / raw)
To: KP Singh, James Bottomley
Cc: bpf, linux-security-module, paul, kys, ast, daniel, andrii,
keyrings
KP Singh <kpsingh@kernel.org> writes:
> On Sun, Jun 8, 2025 at 4:03 PM James Bottomley
> <James.Bottomley@hansenpartnership.com> wrote:
>>
>> [+keyrings]
>> On Sat, 2025-06-07 at 01:29 +0200, KP Singh wrote:
>> [...]
>> > diff --git a/tools/bpf/bpftool/prog.c b/tools/bpf/bpftool/prog.c
>> > index f010295350be..e1dbbca91e34 100644
>> > --- a/tools/bpf/bpftool/prog.c
>> > +++ b/tools/bpf/bpftool/prog.c
>> > @@ -23,6 +23,7 @@
>> > #include <linux/err.h>
>> > #include <linux/perf_event.h>
>> > #include <linux/sizes.h>
>> > +#include <linux/keyctl.h>
>> >
>> > #include <bpf/bpf.h>
>> > #include <bpf/btf.h>
>> > @@ -1875,6 +1876,8 @@ static int try_loader(struct gen_loader_opts
>> > *gen)
>> > {
>> > struct bpf_load_and_run_opts opts = {};
>> > struct bpf_loader_ctx *ctx;
>> > + char sig_buf[MAX_SIG_SIZE];
>> > + __u8 prog_sha[SHA256_DIGEST_LENGTH];
>> > int ctx_sz = sizeof(*ctx) + 64 * max(sizeof(struct
>> > bpf_map_desc),
>> > sizeof(struct
>> > bpf_prog_desc));
>> > int log_buf_sz = (1u << 24) - 1;
>> > @@ -1898,6 +1901,24 @@ static int try_loader(struct gen_loader_opts
>> > *gen)
>> > opts.insns = gen->insns;
>> > opts.insns_sz = gen->insns_sz;
>> > fds_before = count_open_fds();
>> > +
>> > + if (sign_progs) {
>> > + opts.excl_prog_hash = prog_sha;
>> > + opts.excl_prog_hash_sz = sizeof(prog_sha);
>> > + opts.signature = sig_buf;
>> > + opts.signature_sz = MAX_SIG_SIZE;
>> > + opts.keyring_id = KEY_SPEC_SESSION_KEYRING;
>> > +
>>
>> This looks wrong on a couple of levels. Firstly, if you want system
>> level integrity you can't search the session keyring because any
>> process can join (subject to keyring permissions) and the owner, who is
>> presumably the one inserting the bpf program, can add any key they
>> like.
>>
>
> Wanting system level integrity is a security policy question, so this
> is something that needs to be implemented at the security layer, the
> LSM can deny the keys / keyring IDs they don't trust. Session
> keyrings are for sure useful for delegated signing of BPF programs
> when dynamically generated.
>
>> The other problem with this scheme is that the keyring_id itself has no
>> checked integrity, which means that even if a script was marked as
>
> If an attacker can modify a binary that has permissions to load BPF
> programs and update the keyring ID then we have other issues. So, this
> does not work in independence, signed BPF programs do not really make
> sense without trusted execution).
>
Untrusted userspace/root is precisely the issue I solved with previous
patchsets for this effort. Signed BPF programs absolutely work without
trusted execution.
-blaise
>> system keyring only anyone can binary edit the user space program to
>> change it to their preferred keyring and it will still work. If you
>> want variable keyrings, they should surely be part of the validated
>> policy.
>
> The policy is what I expect to be implemented in the LSM layer. A
> variable keyring ID is a critical part of the UAPI to create different
> "rings of trust", e.g. the LSM can enforce that network programs can be
> loaded with a derived key, and have a different keyring for
> unprivileged BPF programs.
>
> This patch implements the signing support, not the security policy for it.
>
> - KP
>
>>
>> Regards,
>>
>> James
>>
^ permalink raw reply [flat|nested] 79+ messages in thread
* [PATCH 12/12] selftests/bpf: Enable signature verification for all lskel tests
2025-06-06 23:29 [PATCH 00/12] Signed BPF programs KP Singh
` (10 preceding siblings ...)
2025-06-06 23:29 ` [PATCH 11/12] bpftool: Add support for signing BPF programs KP Singh
@ 2025-06-06 23:29 ` KP Singh
2025-06-10 0:45 ` Alexei Starovoitov
2025-06-10 16:39 ` Blaise Boscaccy
2025-06-09 8:20 ` [PATCH 00/12] Signed BPF programs Toke Høiland-Jørgensen
2025-07-08 15:15 ` Blaise Boscaccy
13 siblings, 2 replies; 79+ messages in thread
From: KP Singh @ 2025-06-06 23:29 UTC (permalink / raw)
To: bpf, linux-security-module
Cc: bboscaccy, paul, kys, ast, daniel, andrii, KP Singh
Convert the kernel's generated verification certificate into a C header
file using xxd, and update the main test runner to load this
certificate into the session keyring via the add_key() syscall before
executing any tests.
The kernel's module signing verification certificate is converted to a
header file and loaded as a session key, and all light skeleton tests
are updated to be signed.
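For reference, the generated verification_cert.h is plain xxd -i output along
these lines (byte values and length below are illustrative, not the real
certificate):
unsigned char test_progs_verification_cert[] = {
	0x30, 0x82, 0x03, 0x5c, 0x30, 0x82, 0x02, 0x44,
	/* ... DER-encoded X.509 certificate bytes ... */
};
unsigned int test_progs_verification_cert_len = 862;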
Signed-off-by: KP Singh <kpsingh@kernel.org>
---
tools/testing/selftests/bpf/.gitignore | 1 +
tools/testing/selftests/bpf/Makefile | 13 +++++++++++--
tools/testing/selftests/bpf/test_progs.c | 13 +++++++++++++
3 files changed, 25 insertions(+), 2 deletions(-)
diff --git a/tools/testing/selftests/bpf/.gitignore b/tools/testing/selftests/bpf/.gitignore
index e2a2c46c008b..5ab96f8ab1c9 100644
--- a/tools/testing/selftests/bpf/.gitignore
+++ b/tools/testing/selftests/bpf/.gitignore
@@ -45,3 +45,4 @@ xdp_redirect_multi
xdp_synproxy
xdp_hw_metadata
xdp_features
+verification_cert.h
diff --git a/tools/testing/selftests/bpf/Makefile b/tools/testing/selftests/bpf/Makefile
index cf5ed3bee573..778b54be7ef4 100644
--- a/tools/testing/selftests/bpf/Makefile
+++ b/tools/testing/selftests/bpf/Makefile
@@ -7,6 +7,7 @@ CXX ?= $(CROSS_COMPILE)g++
CURDIR := $(abspath .)
TOOLSDIR := $(abspath ../../..)
+CERTSDIR := $(abspath ../../../../certs)
LIBDIR := $(TOOLSDIR)/lib
BPFDIR := $(LIBDIR)/bpf
TOOLSINCDIR := $(TOOLSDIR)/include
@@ -534,7 +535,7 @@ HEADERS_FOR_BPF_OBJS := $(wildcard $(BPFDIR)/*.bpf.h) \
# $1 - test runner base binary name (e.g., test_progs)
# $2 - test runner extra "flavor" (e.g., no_alu32, cpuv4, bpf_gcc, etc)
define DEFINE_TEST_RUNNER
-
+LSKEL_SIGN := -S -k $(CERTSDIR)/signing_key.pem -i $(CERTSDIR)/signing_key.x509
TRUNNER_OUTPUT := $(OUTPUT)$(if $2,/)$2
TRUNNER_BINARY := $1$(if $2,-)$2
TRUNNER_TEST_OBJS := $$(patsubst %.c,$$(TRUNNER_OUTPUT)/%.test.o, \
@@ -601,7 +602,7 @@ $(TRUNNER_BPF_LSKELS): %.lskel.h: %.bpf.o $(BPFTOOL) | $(TRUNNER_OUTPUT)
$(Q)$$(BPFTOOL) gen object $$(<:.o=.llinked2.o) $$(<:.o=.llinked1.o)
$(Q)$$(BPFTOOL) gen object $$(<:.o=.llinked3.o) $$(<:.o=.llinked2.o)
$(Q)diff $$(<:.o=.llinked2.o) $$(<:.o=.llinked3.o)
- $(Q)$$(BPFTOOL) gen skeleton -L $$(<:.o=.llinked3.o) name $$(notdir $$(<:.bpf.o=_lskel)) > $$@
+ $(Q)$$(BPFTOOL) gen skeleton $(LSKEL_SIGN) $$(<:.o=.llinked3.o) name $$(notdir $$(<:.bpf.o=_lskel)) > $$@
$(Q)rm -f $$(<:.o=.llinked1.o) $$(<:.o=.llinked2.o) $$(<:.o=.llinked3.o)
$(LINKED_BPF_OBJS): %: $(TRUNNER_OUTPUT)/%
@@ -697,6 +698,13 @@ $(OUTPUT)/$(TRUNNER_BINARY): $(TRUNNER_TEST_OBJS) \
endef
+CERT_HEADER := verification_cert.h
+CERT_SOURCE := $(CERTSDIR)/signing_key.x509
+
+$(CERT_HEADER): $(CERT_SOURCE)
+ @echo "GEN-CERT-HEADER: $(CERT_HEADER) from $<"
+ $(Q)xxd -i -n test_progs_verification_cert $< > $@
+
# Define test_progs test runner.
TRUNNER_TESTS_DIR := prog_tests
TRUNNER_BPF_PROGS_DIR := progs
@@ -716,6 +724,7 @@ TRUNNER_EXTRA_SOURCES := test_progs.c \
disasm.c \
disasm_helpers.c \
json_writer.c \
+ $(CERT_HEADER) \
flow_dissector_load.h \
ip_check_defrag_frags.h
TRUNNER_EXTRA_FILES := $(OUTPUT)/urandom_read \
diff --git a/tools/testing/selftests/bpf/test_progs.c b/tools/testing/selftests/bpf/test_progs.c
index 309d9d4a8ace..02a85dda30e6 100644
--- a/tools/testing/selftests/bpf/test_progs.c
+++ b/tools/testing/selftests/bpf/test_progs.c
@@ -14,12 +14,14 @@
#include <netinet/in.h>
#include <sys/select.h>
#include <sys/socket.h>
+#include <linux/keyctl.h>
#include <sys/un.h>
#include <bpf/btf.h>
#include <time.h>
#include "json_writer.h"
#include "network_helpers.h"
+#include "verification_cert.h"
/* backtrace() and backtrace_symbols_fd() are glibc specific,
* use header file when glibc is available and provide stub
@@ -1928,6 +1930,13 @@ static void free_test_states(void)
}
}
+static __u32 register_session_key(const char *key_data, size_t key_data_size)
+{
+ return syscall(__NR_add_key, "asymmetric", "libbpf_session_key",
+ (const void *)key_data, key_data_size,
+ KEY_SPEC_SESSION_KEYRING);
+}
+
int main(int argc, char **argv)
{
static const struct argp argp = {
@@ -1961,6 +1970,10 @@ int main(int argc, char **argv)
/* Use libbpf 1.0 API mode */
libbpf_set_strict_mode(LIBBPF_STRICT_ALL);
libbpf_set_print(libbpf_print_fn);
+ err = register_session_key((const char *)test_progs_verification_cert,
+ test_progs_verification_cert_len);
+ if (err < 0)
+ return err;
traffic_monitor_set_print(traffic_monitor_print_fn);
--
2.43.0
^ permalink raw reply related [flat|nested] 79+ messages in thread
* Re: [PATCH 12/12] selftests/bpf: Enable signature verification for all lskel tests
2025-06-06 23:29 ` [PATCH 12/12] selftests/bpf: Enable signature verification for all lskel tests KP Singh
@ 2025-06-10 0:45 ` Alexei Starovoitov
2025-06-10 16:39 ` Blaise Boscaccy
1 sibling, 0 replies; 79+ messages in thread
From: Alexei Starovoitov @ 2025-06-10 0:45 UTC (permalink / raw)
To: KP Singh
Cc: bpf, LSM List, Blaise Boscaccy, Paul Moore, K. Y. Srinivasan,
Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko
On Fri, Jun 6, 2025 at 4:29 PM KP Singh <kpsingh@kernel.org> wrote:
>
> Convert the kernel's generated verification certificate into a C header
> file using xxd. Finally, update the main test runner to load this
> certificate into the session keyring via the add_key() syscall before
> executing any tests.
>
> The kernel's module signing verification certificate is converted to a
> headerfile and loaded as a session key and all light skeleton tests are
> updated to be signed.
>
> Signed-off-by: KP Singh <kpsingh@kernel.org>
> ---
> tools/testing/selftests/bpf/.gitignore | 1 +
> tools/testing/selftests/bpf/Makefile | 13 +++++++++++--
> tools/testing/selftests/bpf/test_progs.c | 13 +++++++++++++
> 3 files changed, 25 insertions(+), 2 deletions(-)
>
> diff --git a/tools/testing/selftests/bpf/.gitignore b/tools/testing/selftests/bpf/.gitignore
> index e2a2c46c008b..5ab96f8ab1c9 100644
> --- a/tools/testing/selftests/bpf/.gitignore
> +++ b/tools/testing/selftests/bpf/.gitignore
> @@ -45,3 +45,4 @@ xdp_redirect_multi
> xdp_synproxy
> xdp_hw_metadata
> xdp_features
> +verification_cert.h
> diff --git a/tools/testing/selftests/bpf/Makefile b/tools/testing/selftests/bpf/Makefile
> index cf5ed3bee573..778b54be7ef4 100644
> --- a/tools/testing/selftests/bpf/Makefile
> +++ b/tools/testing/selftests/bpf/Makefile
> @@ -7,6 +7,7 @@ CXX ?= $(CROSS_COMPILE)g++
>
> CURDIR := $(abspath .)
> TOOLSDIR := $(abspath ../../..)
> +CERTSDIR := $(abspath ../../../../certs)
> LIBDIR := $(TOOLSDIR)/lib
> BPFDIR := $(LIBDIR)/bpf
> TOOLSINCDIR := $(TOOLSDIR)/include
> @@ -534,7 +535,7 @@ HEADERS_FOR_BPF_OBJS := $(wildcard $(BPFDIR)/*.bpf.h) \
> # $1 - test runner base binary name (e.g., test_progs)
> # $2 - test runner extra "flavor" (e.g., no_alu32, cpuv4, bpf_gcc, etc)
> define DEFINE_TEST_RUNNER
> -
> +LSKEL_SIGN := -S -k $(CERTSDIR)/signing_key.pem -i $(CERTSDIR)/signing_key.x509
Can we do a fallback for setups without CONFIG_MODULE_SIG ?
Reuse setup() helper from verify_sig_setup.sh ?
Doesn't have to be right away. It can be a follow up.
^ permalink raw reply [flat|nested] 79+ messages in thread
* Re: [PATCH 12/12] selftests/bpf: Enable signature verification for all lskel tests
2025-06-06 23:29 ` [PATCH 12/12] selftests/bpf: Enable signature verification for all lskel tests KP Singh
2025-06-10 0:45 ` Alexei Starovoitov
@ 2025-06-10 16:39 ` Blaise Boscaccy
2025-06-10 16:42 ` KP Singh
1 sibling, 1 reply; 79+ messages in thread
From: Blaise Boscaccy @ 2025-06-10 16:39 UTC (permalink / raw)
To: KP Singh, bpf, linux-security-module
Cc: paul, kys, ast, daniel, andrii, KP Singh
KP Singh <kpsingh@kernel.org> writes:
> Convert the kernel's generated verification certificate into a C header
> file using xxd. Finally, update the main test runner to load this
> certificate into the session keyring via the add_key() syscall before
> executing any tests.
>
> The kernel's module signing verification certificate is converted to a
> headerfile and loaded as a session key and all light skeleton tests are
> updated to be signed.
>
> Signed-off-by: KP Singh <kpsingh@kernel.org>
> ---
> tools/testing/selftests/bpf/.gitignore | 1 +
> tools/testing/selftests/bpf/Makefile | 13 +++++++++++--
> tools/testing/selftests/bpf/test_progs.c | 13 +++++++++++++
> 3 files changed, 25 insertions(+), 2 deletions(-)
>
> diff --git a/tools/testing/selftests/bpf/.gitignore b/tools/testing/selftests/bpf/.gitignore
> index e2a2c46c008b..5ab96f8ab1c9 100644
> --- a/tools/testing/selftests/bpf/.gitignore
> +++ b/tools/testing/selftests/bpf/.gitignore
> @@ -45,3 +45,4 @@ xdp_redirect_multi
> xdp_synproxy
> xdp_hw_metadata
> xdp_features
> +verification_cert.h
> diff --git a/tools/testing/selftests/bpf/Makefile b/tools/testing/selftests/bpf/Makefile
> index cf5ed3bee573..778b54be7ef4 100644
> --- a/tools/testing/selftests/bpf/Makefile
> +++ b/tools/testing/selftests/bpf/Makefile
> @@ -7,6 +7,7 @@ CXX ?= $(CROSS_COMPILE)g++
>
> CURDIR := $(abspath .)
> TOOLSDIR := $(abspath ../../..)
> +CERTSDIR := $(abspath ../../../../certs)
> LIBDIR := $(TOOLSDIR)/lib
> BPFDIR := $(LIBDIR)/bpf
> TOOLSINCDIR := $(TOOLSDIR)/include
> @@ -534,7 +535,7 @@ HEADERS_FOR_BPF_OBJS := $(wildcard $(BPFDIR)/*.bpf.h) \
> # $1 - test runner base binary name (e.g., test_progs)
> # $2 - test runner extra "flavor" (e.g., no_alu32, cpuv4, bpf_gcc, etc)
> define DEFINE_TEST_RUNNER
> -
> +LSKEL_SIGN := -S -k $(CERTSDIR)/signing_key.pem -i $(CERTSDIR)/signing_key.x509
> TRUNNER_OUTPUT := $(OUTPUT)$(if $2,/)$2
> TRUNNER_BINARY := $1$(if $2,-)$2
> TRUNNER_TEST_OBJS := $$(patsubst %.c,$$(TRUNNER_OUTPUT)/%.test.o, \
> @@ -601,7 +602,7 @@ $(TRUNNER_BPF_LSKELS): %.lskel.h: %.bpf.o $(BPFTOOL) | $(TRUNNER_OUTPUT)
> $(Q)$$(BPFTOOL) gen object $$(<:.o=.llinked2.o) $$(<:.o=.llinked1.o)
> $(Q)$$(BPFTOOL) gen object $$(<:.o=.llinked3.o) $$(<:.o=.llinked2.o)
> $(Q)diff $$(<:.o=.llinked2.o) $$(<:.o=.llinked3.o)
> - $(Q)$$(BPFTOOL) gen skeleton -L $$(<:.o=.llinked3.o) name $$(notdir $$(<:.bpf.o=_lskel)) > $$@
> + $(Q)$$(BPFTOOL) gen skeleton $(LSKEL_SIGN) $$(<:.o=.llinked3.o) name $$(notdir $$(<:.bpf.o=_lskel)) > $$@
> $(Q)rm -f $$(<:.o=.llinked1.o) $$(<:.o=.llinked2.o) $$(<:.o=.llinked3.o)
>
> $(LINKED_BPF_OBJS): %: $(TRUNNER_OUTPUT)/%
> @@ -697,6 +698,13 @@ $(OUTPUT)/$(TRUNNER_BINARY): $(TRUNNER_TEST_OBJS) \
>
> endef
>
> +CERT_HEADER := verification_cert.h
> +CERT_SOURCE := $(CERTSDIR)/signing_key.x509
> +
> +$(CERT_HEADER): $(CERT_SOURCE)
> + @echo "GEN-CERT-HEADER: $(CERT_HEADER) from $<"
> + $(Q)xxd -i -n test_progs_verification_cert $< > $@
> +
> # Define test_progs test runner.
> TRUNNER_TESTS_DIR := prog_tests
> TRUNNER_BPF_PROGS_DIR := progs
> @@ -716,6 +724,7 @@ TRUNNER_EXTRA_SOURCES := test_progs.c \
> disasm.c \
> disasm_helpers.c \
> json_writer.c \
> + $(CERT_HEADER) \
> flow_dissector_load.h \
> ip_check_defrag_frags.h
> TRUNNER_EXTRA_FILES := $(OUTPUT)/urandom_read \
> diff --git a/tools/testing/selftests/bpf/test_progs.c b/tools/testing/selftests/bpf/test_progs.c
> index 309d9d4a8ace..02a85dda30e6 100644
> --- a/tools/testing/selftests/bpf/test_progs.c
> +++ b/tools/testing/selftests/bpf/test_progs.c
> @@ -14,12 +14,14 @@
> #include <netinet/in.h>
> #include <sys/select.h>
> #include <sys/socket.h>
> +#include <linux/keyctl.h>
> #include <sys/un.h>
> #include <bpf/btf.h>
> #include <time.h>
> #include "json_writer.h"
>
> #include "network_helpers.h"
> +#include "verification_cert.h"
>
> /* backtrace() and backtrace_symbols_fd() are glibc specific,
> * use header file when glibc is available and provide stub
> @@ -1928,6 +1930,13 @@ static void free_test_states(void)
> }
> }
>
> +static __u32 register_session_key(const char *key_data, size_t key_data_size)
> +{
> + return syscall(__NR_add_key, "asymmetric", "libbpf_session_key",
> + (const void *)key_data, key_data_size,
> + KEY_SPEC_SESSION_KEYRING);
> +}
> +
> int main(int argc, char **argv)
> {
> static const struct argp argp = {
> @@ -1961,6 +1970,10 @@ int main(int argc, char **argv)
> /* Use libbpf 1.0 API mode */
> libbpf_set_strict_mode(LIBBPF_STRICT_ALL);
> libbpf_set_print(libbpf_print_fn);
> + err = register_session_key((const char *)test_progs_verification_cert,
> + test_progs_verification_cert_len);
> + if (err < 0)
> + return err;
>
> traffic_monitor_set_print(traffic_monitor_print_fn);
>
> --
> 2.43.0
There aren't any test cases showing the "trusted" loader doing any sort
of enforcement of blocking invalid programs or maps.
-blaise
^ permalink raw reply [flat|nested] 79+ messages in thread
* Re: [PATCH 12/12] selftests/bpf: Enable signature verification for all lskel tests
2025-06-10 16:39 ` Blaise Boscaccy
@ 2025-06-10 16:42 ` KP Singh
0 siblings, 0 replies; 79+ messages in thread
From: KP Singh @ 2025-06-10 16:42 UTC (permalink / raw)
To: Blaise Boscaccy
Cc: bpf, linux-security-module, paul, kys, ast, daniel, andrii
On Tue, Jun 10, 2025 at 6:39 PM Blaise Boscaccy
<bboscaccy@linux.microsoft.com> wrote:
>
> KP Singh <kpsingh@kernel.org> writes:
>
> > Convert the kernel's generated verification certificate into a C header
> > file using xxd. Finally, update the main test runner to load this
> > certificate into the session keyring via the add_key() syscall before
> > executing any tests.
> >
> > The kernel's module signing verification certificate is converted to a
> > headerfile and loaded as a session key and all light skeleton tests are
> > updated to be signed.
> >
> > Signed-off-by: KP Singh <kpsingh@kernel.org>
> > ---
> > tools/testing/selftests/bpf/.gitignore | 1 +
> > tools/testing/selftests/bpf/Makefile | 13 +++++++++++--
> > tools/testing/selftests/bpf/test_progs.c | 13 +++++++++++++
> > 3 files changed, 25 insertions(+), 2 deletions(-)
> >
> > diff --git a/tools/testing/selftests/bpf/.gitignore b/tools/testing/selftests/bpf/.gitignore
> > index e2a2c46c008b..5ab96f8ab1c9 100644
> > --- a/tools/testing/selftests/bpf/.gitignore
> > +++ b/tools/testing/selftests/bpf/.gitignore
> > @@ -45,3 +45,4 @@ xdp_redirect_multi
> > xdp_synproxy
> > xdp_hw_metadata
> > xdp_features
> > +verification_cert.h
> > diff --git a/tools/testing/selftests/bpf/Makefile b/tools/testing/selftests/bpf/Makefile
> > index cf5ed3bee573..778b54be7ef4 100644
> > --- a/tools/testing/selftests/bpf/Makefile
> > +++ b/tools/testing/selftests/bpf/Makefile
> > @@ -7,6 +7,7 @@ CXX ?= $(CROSS_COMPILE)g++
> >
> > CURDIR := $(abspath .)
> > TOOLSDIR := $(abspath ../../..)
> > +CERTSDIR := $(abspath ../../../../certs)
> > LIBDIR := $(TOOLSDIR)/lib
> > BPFDIR := $(LIBDIR)/bpf
> > TOOLSINCDIR := $(TOOLSDIR)/include
> > @@ -534,7 +535,7 @@ HEADERS_FOR_BPF_OBJS := $(wildcard $(BPFDIR)/*.bpf.h) \
> > # $1 - test runner base binary name (e.g., test_progs)
> > # $2 - test runner extra "flavor" (e.g., no_alu32, cpuv4, bpf_gcc, etc)
> > define DEFINE_TEST_RUNNER
> > -
> > +LSKEL_SIGN := -S -k $(CERTSDIR)/signing_key.pem -i $(CERTSDIR)/signing_key.x509
> > TRUNNER_OUTPUT := $(OUTPUT)$(if $2,/)$2
> > TRUNNER_BINARY := $1$(if $2,-)$2
> > TRUNNER_TEST_OBJS := $$(patsubst %.c,$$(TRUNNER_OUTPUT)/%.test.o, \
> > @@ -601,7 +602,7 @@ $(TRUNNER_BPF_LSKELS): %.lskel.h: %.bpf.o $(BPFTOOL) | $(TRUNNER_OUTPUT)
> > $(Q)$$(BPFTOOL) gen object $$(<:.o=.llinked2.o) $$(<:.o=.llinked1.o)
> > $(Q)$$(BPFTOOL) gen object $$(<:.o=.llinked3.o) $$(<:.o=.llinked2.o)
> > $(Q)diff $$(<:.o=.llinked2.o) $$(<:.o=.llinked3.o)
> > - $(Q)$$(BPFTOOL) gen skeleton -L $$(<:.o=.llinked3.o) name $$(notdir $$(<:.bpf.o=_lskel)) > $$@
> > + $(Q)$$(BPFTOOL) gen skeleton $(LSKEL_SIGN) $$(<:.o=.llinked3.o) name $$(notdir $$(<:.bpf.o=_lskel)) > $$@
> > $(Q)rm -f $$(<:.o=.llinked1.o) $$(<:.o=.llinked2.o) $$(<:.o=.llinked3.o)
> >
> > $(LINKED_BPF_OBJS): %: $(TRUNNER_OUTPUT)/%
> > @@ -697,6 +698,13 @@ $(OUTPUT)/$(TRUNNER_BINARY): $(TRUNNER_TEST_OBJS) \
> >
> > endef
> >
> > +CERT_HEADER := verification_cert.h
> > +CERT_SOURCE := $(CERTSDIR)/signing_key.x509
> > +
> > +$(CERT_HEADER): $(CERT_SOURCE)
> > + @echo "GEN-CERT-HEADER: $(CERT_HEADER) from $<"
> > + $(Q)xxd -i -n test_progs_verification_cert $< > $@
> > +
> > # Define test_progs test runner.
> > TRUNNER_TESTS_DIR := prog_tests
> > TRUNNER_BPF_PROGS_DIR := progs
> > @@ -716,6 +724,7 @@ TRUNNER_EXTRA_SOURCES := test_progs.c \
> > disasm.c \
> > disasm_helpers.c \
> > json_writer.c \
> > + $(CERT_HEADER) \
> > flow_dissector_load.h \
> > ip_check_defrag_frags.h
> > TRUNNER_EXTRA_FILES := $(OUTPUT)/urandom_read \
> > diff --git a/tools/testing/selftests/bpf/test_progs.c b/tools/testing/selftests/bpf/test_progs.c
> > index 309d9d4a8ace..02a85dda30e6 100644
> > --- a/tools/testing/selftests/bpf/test_progs.c
> > +++ b/tools/testing/selftests/bpf/test_progs.c
> > @@ -14,12 +14,14 @@
> > #include <netinet/in.h>
> > #include <sys/select.h>
> > #include <sys/socket.h>
> > +#include <linux/keyctl.h>
> > #include <sys/un.h>
> > #include <bpf/btf.h>
> > #include <time.h>
> > #include "json_writer.h"
> >
> > #include "network_helpers.h"
> > +#include "verification_cert.h"
> >
> > /* backtrace() and backtrace_symbols_fd() are glibc specific,
> > * use header file when glibc is available and provide stub
> > @@ -1928,6 +1930,13 @@ static void free_test_states(void)
> > }
> > }
> >
> > +static __u32 register_session_key(const char *key_data, size_t key_data_size)
> > +{
> > + return syscall(__NR_add_key, "asymmetric", "libbpf_session_key",
> > + (const void *)key_data, key_data_size,
> > + KEY_SPEC_SESSION_KEYRING);
> > +}
> > +
> > int main(int argc, char **argv)
> > {
> > static const struct argp argp = {
> > @@ -1961,6 +1970,10 @@ int main(int argc, char **argv)
> > /* Use libbpf 1.0 API mode */
> > libbpf_set_strict_mode(LIBBPF_STRICT_ALL);
> > libbpf_set_print(libbpf_print_fn);
> > + err = register_session_key((const char *)test_progs_verification_cert,
> > + test_progs_verification_cert_len);
> > + if (err < 0)
> > + return err;
> >
> > traffic_monitor_set_print(traffic_monitor_print_fn);
> >
> > --
> > 2.43.0
>
>
> There aren't any test cases showing the "trusted" loader doing any sort
> of enforcement of blocking invalid programs or maps.
Sure, we can add some more test cases.
>
> -blaise
^ permalink raw reply [flat|nested] 79+ messages in thread
* Re: [PATCH 00/12] Signed BPF programs
2025-06-06 23:29 [PATCH 00/12] Signed BPF programs KP Singh
` (11 preceding siblings ...)
2025-06-06 23:29 ` [PATCH 12/12] selftests/bpf: Enable signature verification for all lskel tests KP Singh
@ 2025-06-09 8:20 ` Toke Høiland-Jørgensen
2025-06-09 11:40 ` KP Singh
2025-07-08 15:15 ` Blaise Boscaccy
13 siblings, 1 reply; 79+ messages in thread
From: Toke Høiland-Jørgensen @ 2025-06-09 8:20 UTC (permalink / raw)
To: KP Singh, bpf, linux-security-module
Cc: bboscaccy, paul, kys, ast, daniel, andrii
> Given that many use-cases (e.g. Cilium) generate trusted BPF programs,
> trusted loaders are an inevitability and a requirement for signing support, and
> entrusting loader programs will be a fundamental requirement for any security
> policy.
So I've been following this discussion a bit on the sidelines, and have
a question related to this:
From your description a loader would have embedded hashes for a concrete
BPF program, which doesn't really work for dynamically generated
programs. So how would a "trusted loader" work for dynamically generated
programs?
-Toke
^ permalink raw reply [flat|nested] 79+ messages in thread
* Re: [PATCH 00/12] Signed BPF programs
2025-06-09 8:20 ` [PATCH 00/12] Signed BPF programs Toke Høiland-Jørgensen
@ 2025-06-09 11:40 ` KP Singh
2025-06-10 9:45 ` Toke Høiland-Jørgensen
0 siblings, 1 reply; 79+ messages in thread
From: KP Singh @ 2025-06-09 11:40 UTC (permalink / raw)
To: Toke Høiland-Jørgensen
Cc: bpf, linux-security-module, bboscaccy, paul, kys, ast, daniel,
andrii
On Mon, Jun 9, 2025 at 10:20 AM Toke Høiland-Jørgensen <toke@kernel.org> wrote:
>
>
> > Given that many use-cases (e.g. Cilium) generate trusted BPF programs,
> > trusted loaders are an inevitability and a requirement for signing support, and
> > entrusting loader programs will be a fundamental requirement for any security
> > policy.
>
> So I've been following this discussion a bit on the sidelines, and have
> a question related to this:
>
> From your description a loader would have embedded hashes for a concrete
> BPF program, which doesn't really work for dynamically generated
> programs. So how would a "trusted loader" work for dynamically generated
> programs?
The trusted loader for dynamically generated programs would be the
binary that loads the BPF program. So a security policy will need to
allow certain trusted binaries (signed with a different key) to load
unsigned BPF programs for Cilium.
For a stronger policy, the generators can use a derived key and
identity (e.g. from the Kubernetes / machine / TLS certificate) and
then sign their programs using this certificate. The LSM policy then
allows verification with a trusted build key and, for certain binaries,
with the delegated credentials.
>
> -Toke
^ permalink raw reply [flat|nested] 79+ messages in thread
* Re: [PATCH 00/12] Signed BPF programs
2025-06-09 11:40 ` KP Singh
@ 2025-06-10 9:45 ` Toke Høiland-Jørgensen
2025-06-10 11:18 ` KP Singh
0 siblings, 1 reply; 79+ messages in thread
From: Toke Høiland-Jørgensen @ 2025-06-10 9:45 UTC (permalink / raw)
To: KP Singh
Cc: bpf, linux-security-module, bboscaccy, paul, kys, ast, daniel,
andrii
KP Singh <kpsingh@kernel.org> writes:
> On Mon, Jun 9, 2025 at 10:20 AM Toke Høiland-Jørgensen <toke@kernel.org> wrote:
>>
>>
>> > Given that many use-cases (e.g. Cilium) generate trusted BPF programs,
>> > trusted loaders are an inevitability and a requirement for signing support, and
>> > entrusting loader programs will be a fundamental requirement for any security
>> > policy.
>>
>> So I've been following this discussion a bit on the sidelines, and have
>> a question related to this:
>>
>> From your description a loader would have embedded hashes for a concrete
>> BPF program, which doesn't really work for dynamically generated
>> programs. So how would a "trusted loader" work for dynamically generated
>> programs?
>
> The trusted loader for dynamically generated programs would be the
> binary that loads the BPF program. So a security policy will need to
> allow certain trusted binaries (signed with a different key) to load
> unsigned BPF programs for cilium.
OK, so this refers to a policy along the line of: "Only allow signed BPF
program except for this particular userspace binary that is allowed to
load anything"?
> For a stronger policy, the generators can use a derived key and
> identity (e.g from the Kubernetes / machine / TLS certificate) and
> then sign their programs using this certificate. The LSM policy then
> allows verification with a trusted build key and for certain binaries,
> with the delegated credentials.
And this means "add a separate trusted key on the kernel side that the
userspace binary signs things with before passing it to the kernel"?
In which case, how does that tie into the original statement I quoted at
the top of this email? The "trusted loaders are an inevitability" bit? I
was assuming that the "trusted loaders" in that sentence referred to the
light-skeleton loader program, but from your reply I'm not thinking
maybe it just means "some userspace binaries need to be exempt from any
signing requirement"? Or am I missing something?
-Toke
^ permalink raw reply [flat|nested] 79+ messages in thread
* Re: [PATCH 00/12] Signed BPF programs
2025-06-10 9:45 ` Toke Høiland-Jørgensen
@ 2025-06-10 11:18 ` KP Singh
2025-06-10 11:58 ` Toke Høiland-Jørgensen
0 siblings, 1 reply; 79+ messages in thread
From: KP Singh @ 2025-06-10 11:18 UTC (permalink / raw)
To: Toke Høiland-Jørgensen
Cc: bpf, linux-security-module, bboscaccy, paul, kys, ast, daniel,
andrii
On Tue, Jun 10, 2025 at 11:45 AM Toke Høiland-Jørgensen <toke@kernel.org> wrote:
>
> KP Singh <kpsingh@kernel.org> writes:
>
> > On Mon, Jun 9, 2025 at 10:20 AM Toke Høiland-Jørgensen <toke@kernel.org> wrote:
> >>
> >>
> >> > Given that many use-cases (e.g. Cilium) generate trusted BPF programs,
> >> > trusted loaders are an inevitability and a requirement for signing support, and
> >> > entrusting loader programs will be a fundamental requirement for any security
> >> > policy.
> >>
> >> So I've been following this discussion a bit on the sidelines, and have
> >> a question related to this:
> >>
> >> From your description a loader would have embedded hashes for a concrete
> >> BPF program, which doesn't really work for dynamically generated
> >> programs. So how would a "trusted loader" work for dynamically generated
> >> programs?
> >
> > The trusted loader for dynamically generated programs would be the
> > binary that loads the BPF program. So a security policy will need to
> > allow certain trusted binaries (signed with a different key) to load
> > unsigned BPF programs for cilium.
>
> OK, so this refers to a policy along the line of: "Only allow signed BPF
> program except for this particular userspace binary that is allowed to
> load anything"?
>
> > For a stronger policy, the generators can use a derived key and
> > identity (e.g from the Kubernetes / machine / TLS certificate) and
> > then sign their programs using this certificate. The LSM policy then
> > allows verification with a trusted build key and for certain binaries,
> > with the delegated credentials.
>
> And this means "add a separate trusted key on the kernel side that the
> userspace binary signs things with before passing it to the kernel"?
>
> In which case, how does that tie into the original statement I quoted at
> the top of this email? The "trusted loaders are an inevitability" bit? I
> was assuming that the "trusted loaders" in that sentence referred to the
> light-skeleton loader program, but from your reply I'm not thinking
No trusted loaders are exactly what they mean, trusted blobs of code
that can load BPF programs, these can be loader programs in light
skeletons or trusted user-space binaries.
- KP
> maybe it just means "some userspace binaries need to be exempt from any
> signing requirement"? Or am I missing something?
>
> -Toke
^ permalink raw reply [flat|nested] 79+ messages in thread
* Re: [PATCH 00/12] Signed BPF programs
2025-06-10 11:18 ` KP Singh
@ 2025-06-10 11:58 ` Toke Høiland-Jørgensen
2025-06-10 12:26 ` KP Singh
0 siblings, 1 reply; 79+ messages in thread
From: Toke Høiland-Jørgensen @ 2025-06-10 11:58 UTC (permalink / raw)
To: KP Singh
Cc: bpf, linux-security-module, bboscaccy, paul, kys, ast, daniel,
andrii
KP Singh <kpsingh@kernel.org> writes:
> On Tue, Jun 10, 2025 at 11:45 AM Toke Høiland-Jørgensen <toke@kernel.org> wrote:
>>
>> KP Singh <kpsingh@kernel.org> writes:
>>
>> > On Mon, Jun 9, 2025 at 10:20 AM Toke Høiland-Jørgensen <toke@kernel.org> wrote:
>> >>
>> >>
>> >> > Given that many use-cases (e.g. Cilium) generate trusted BPF programs,
>> >> > trusted loaders are an inevitability and a requirement for signing support, and
>> >> > entrusting loader programs will be a fundamental requirement for any security
>> >> > policy.
>> >>
>> >> So I've been following this discussion a bit on the sidelines, and have
>> >> a question related to this:
>> >>
>> >> From your description a loader would have embedded hashes for a concrete
>> >> BPF program, which doesn't really work for dynamically generated
>> >> programs. So how would a "trusted loader" work for dynamically generated
>> >> programs?
>> >
>> > The trusted loader for dynamically generated programs would be the
>> > binary that loads the BPF program. So a security policy will need to
>> > allow certain trusted binaries (signed with a different key) to load
>> > unsigned BPF programs for cilium.
>>
>> OK, so this refers to a policy along the line of: "Only allow signed BPF
>> program except for this particular userspace binary that is allowed to
>> load anything"?
>>
>> > For a stronger policy, the generators can use a derived key and
>> > identity (e.g from the Kubernetes / machine / TLS certificate) and
>> > then sign their programs using this certificate. The LSM policy then
>> > allows verification with a trusted build key and for certain binaries,
>> > with the delegated credentials.
>>
>> And this means "add a separate trusted key on the kernel side that the
>> userspace binary signs things with before passing it to the kernel"?
>>
>> In which case, how does that tie into the original statement I quoted at
>> the top of this email? The "trusted loaders are an inevitability" bit? I
>> was assuming that the "trusted loaders" in that sentence referred to the
>> light-skeleton loader program, but from your reply I'm now thinking
>
> No, trusted loaders are exactly what they mean: trusted blobs of code
> that can load BPF programs; these can be loader programs in light
> skeletons or trusted user-space binaries.
Right, but this patch series has no mechanism for establishing a
userspace loader binary as trusted (right?). The paragraph I quoted
makes it sound like these are related, and I was trying to figure out
what the relation was. But it sounds like the answer is that they are
not?
-Toke
^ permalink raw reply [flat|nested] 79+ messages in thread
* Re: [PATCH 00/12] Signed BPF programs
2025-06-10 11:58 ` Toke Høiland-Jørgensen
@ 2025-06-10 12:26 ` KP Singh
2025-06-10 14:25 ` Toke Høiland-Jørgensen
0 siblings, 1 reply; 79+ messages in thread
From: KP Singh @ 2025-06-10 12:26 UTC (permalink / raw)
To: Toke Høiland-Jørgensen
Cc: bpf, linux-security-module, bboscaccy, paul, kys, ast, daniel,
andrii
>
> Right, but this patch series has no mechanism for establishing a
> userspace loader binary as trusted (right?). The paragraph I quoted
> makes it sound like these are related, and I was trying to figure out
> what the relation was. But it sounds like the answer is that they are
> not?
>
The relation here is that, no matter what we do, the kernel cannot be
the only trusted blob on the system, and this was aimed at answering
questions people had earlier when I proposed the design. This patch
series does add signing support, which allows us to add the following
kind of policy; it does not directly add any user-space support.
bprm_committed_creds: check the signature of the binary and, if it
verifies with a separate key, add a blob that allows:
* unsigned bpf programs
* programs signed with a derived key
security_bpf:
* Check for the right attributes for signing.
* Restrict which program types can be loaded.
(additional key hooks for restricting which keys are allowed to verify
programs).
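To make that concrete, here is a rough sketch of what such a policy could look
like as a BPF LSM program. This is hypothetical and simplified, not part of
this series: it assumes an lsm/bprm_committed_creds program (not shown) marks
trusted tasks in the task storage map after verifying the binary, and that
vmlinux.h reflects the signature_size field this series adds to bpf_attr.
// SPDX-License-Identifier: GPL-2.0
/* Hypothetical policy sketch, not part of this series. */
#include "vmlinux.h"
#include <errno.h>
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>
char LICENSE[] SEC("license") = "GPL";
/* Set to 1 for tasks whose binary verified against the separate key
 * (populated from an lsm/bprm_committed_creds program, not shown).
 */
struct {
	__uint(type, BPF_MAP_TYPE_TASK_STORAGE);
	__uint(map_flags, BPF_F_NO_PREALLOC);
	__type(key, int);
	__type(value, int);
} trusted_tasks SEC(".maps");
SEC("lsm/bpf")
int BPF_PROG(restrict_prog_load, int cmd, union bpf_attr *attr, unsigned int size)
{
	int *trusted;
	if (cmd != BPF_PROG_LOAD)
		return 0;
	trusted = bpf_task_storage_get(&trusted_tasks,
				       bpf_get_current_task_btf(), 0, 0);
	if (trusted && *trusted)
		return 0;	/* trusted binaries may load unsigned programs */
	/* everyone else must present a signature (field from this series) */
	if (!attr->signature_size)
		return -EPERM;
	return 0;
}
The same checks could equally be implemented in an in-kernel LSM; the sketch
only shows where the new UAPI fields surface to the policy layer.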
^ permalink raw reply [flat|nested] 79+ messages in thread
* Re: [PATCH 00/12] Signed BPF programs
2025-06-10 12:26 ` KP Singh
@ 2025-06-10 14:25 ` Toke Høiland-Jørgensen
0 siblings, 0 replies; 79+ messages in thread
From: Toke Høiland-Jørgensen @ 2025-06-10 14:25 UTC (permalink / raw)
To: KP Singh
Cc: bpf, linux-security-module, bboscaccy, paul, kys, ast, daniel,
andrii
KP Singh <kpsingh@kernel.org> writes:
>>
>> Right, but this patch series has no mechanism for establishing a
>> userspace loader binary as trusted (right?). The paragraph I quoted
>> makes it sound like these are related, and I was trying to figure out
>> what the relation was. But it sounds like the answer is that they are
>> not?
>>
>
> The relation here is that, no matter what we do, the kernel cannot be
> the only trusted blob on the system, and this was aimed at answering
> questions people had earlier when I proposed the design. This patch
> series does add signing support, which allows us to add the following
> kind of policy; it does not directly add any user-space support.
>
> bprm_committed_creds: check the signature of the binary and, if it
> verifies with a separate key, add a blob that allows:
>
> * unsigned bpf programs
> * programs signed with a derived key
>
> security_bpf:
>
> * Check for the right attributes for signing.
> * Restrict which program types can be loaded.
>
> (additional key hooks for restricting which keys are allowed to verify
> programs).
Right, gotcha - thanks for clarifying! :)
-Toke
^ permalink raw reply [flat|nested] 79+ messages in thread
* Re: [PATCH 00/12] Signed BPF programs
2025-06-06 23:29 [PATCH 00/12] Signed BPF programs KP Singh
` (12 preceding siblings ...)
2025-06-09 8:20 ` [PATCH 00/12] Signed BPF programs Toke Høiland-Jørgensen
@ 2025-07-08 15:15 ` Blaise Boscaccy
2025-07-10 14:49 ` KP Singh
13 siblings, 1 reply; 79+ messages in thread
From: Blaise Boscaccy @ 2025-07-08 15:15 UTC (permalink / raw)
To: KP Singh, bpf, linux-security-module; +Cc: paul, kys, ast, daniel, andrii
KP Singh <kpsingh@kernel.org> writes:
> BPF signing has gone through multiple discussions in various conferences with the
> kernel and BPF community, and the following patch series is a culmination
> of the current state of the discussion on signed BPF programs. Once signing is
> implemented, the next focus would be to implement the right security policies
> for all BPF use-cases (dynamically generated bpf programs, simple non-CO-RE
> programs).
>
> Signing also paves the way for allowing unprivileged users to
> load vetted BPF programs and helps in adhering to the principle of least
> privilege by avoiding unnecessary elevation of privileges to CAP_BPF and
> CAP_SYS_ADMIN (of course, with the appropriate security policy active).
>
> An early version of this design was proposed in [1]:
>
> # General Idea: Trusted Hash Chain
>
> The key idea of the design is to use a signing algorithm that allows
> us to integrity-protect a number of future payloads, including their
> order, by creating a chain of trust.
>
> Consider that Alice needs to send messages M_1, M_2, ..., M_n to Bob.
> We define blocks of data such that:
>
> B_n = M_n || H(termination_marker)
>
> (Each block contains its corresponding message and the hash of the
> *next* block in the chain.)
>
> B_{n-1} = M_{n-1} || H(B_n)
> B_{n-2} = M_{n-2} || H(B_{n-1})
>
> ...
>
> B_2 = M_2 || H(B_3)
> B_1 = M_1 || H(B_2)
>
> Alice does the following (e.g., on a build system where all payloads
> are available):
>
> * Assembles the blocks B_1, B_2, ..., B_n.
> * Calculates H(B_1) and signs it, yielding Sig(H(B_1)).
>
> Alice sends the following to Bob:
>
> M_1, H(B_2), Sig(H(B_1))
>
> Bob receives this payload and does the following:
>
> * Reconstructs B_1 as B_1' using the received M_1 and H(B_2)
> (i.e., B_1' = M_1 || H(B_2)).
> * Recomputes H(B_1') and verifies the signature against the
> received Sig(H(B_1)).
> * If the signature verifies, it establishes the integrity of M_1
> and H(B_2) (and transitively, the integrity of the entire chain). Bob
> now stores the verified H(B_2) until it receives the next message.
> * When Bob receives M_2 (and H(B_3) if n > 2), it reconstructs
> B_2' (e.g., B_2' = M_2 || H(B_3), or if n=2, B_2' = M_2 ||
> H(termination_marker)). Bob then computes H(B_2') and compares it
> against the stored H(B_2) that was verified in the previous step.
>
> This process continues until the last block is received and verified.
>
> Now, applying this to the BPF signing use-case, we simplify to two messages:
>
> M_1 = I_loader (the instructions of the loader program)
> M_2 = M_metadata (the metadata for the loader program, passed in a
> map, which includes the programs to be loaded and other context)
>
> For this specific BPF case, we will directly sign a composite of the
> first message and the hash of the second. Let H_meta = H(M_metadata).
> The block to be signed is effectively:
>
> B_signed = I_loader || H_meta
>
> The signature generated is Sig(B_signed).
>
> The process then follows a similar pattern to the Alice and Bob model,
> where the kernel (Bob) verifies I_loader and H_meta using the
> signature. Then, the trusted I_loader is responsible for verifying
> M_metadata against the trusted H_meta.
>
> From an implementation standpoint:
>
> # Build
>
> bpftool (or some other tool in a trusted build environment) knows
> about the metadata (M_metadata) and the loader program (I_loader). It
> first calculates H_meta = H(M_metadata). Then it constructs the object
> to be signed and computes the signature:
>
> Sig(I_loader || H_meta)
>
> # Loader
>
> The loader program and the metadata are a hermetic representation of the source
> of the eBPF program, its maps and context. The loader program is generated by
> libbpf as a part of a standard API i.e. bpf_object__gen_loader.
>
> ## Supply chain
>
> While users can use light skeletons as a convenient way to use signing
> support, they can also integrate the loader program generation provided by
> libbpf (bpf_object__gen_loader) directly into their own trusted toolchains.
>
> libbpf, which has access to the program's instruction buffer, is a key part of
> the TCB of the build environment.
>
> An advanced threat model that does not intend to depend on libbpf (or any provenant
> userspace BPF libraries) due to supply chain risks, despite it being developed
> in the kernel source tree and by the kernel community, will require reimplementing a
> lot of the core BPF userspace support (like instruction relocation and map handling).
>
> Such an advanced user would also need to integrate the generation of the loader
> into their toolchain.
>
> Given that many use-cases (e.g. Cilium) generate trusted BPF programs,
> trusted loaders are an inevitability and a requirement for signing support, and
> entrusting loader programs will be a fundamental requirement for any security
> policy.
>
> The initial instructions of the loader program verify the SHA256 hash
> of the metadata (M_metadata) that will be passed in a map. These instructions
> effectively embed the precomputed H_meta as immediate values.
>
> ld_imm64 r1, const_ptr_to_map // insn[0].src_reg == BPF_PSEUDO_MAP_IDX
> r2 = *(u64 *)(r1 + 0);
> ld_imm64 r3, sha256_of_map_part1 // precomputed by bpf_object__gen_load/libbpf (H_meta_1)
> if r2 != r3 goto out;
>
> r2 = *(u64 *)(r1 + 8);
> ld_imm64 r3, sha256_of_map_part2 // precomputed by bpf_object__gen_load/libbpf (H_meta_2)
> if r2 != r3 goto out;
>
> r2 = *(u64 *)(r1 + 16);
> ld_imm64 r3, sha256_of_map_part3 // precomputed by bpf_object__gen_load/libbpf (H_meta_3)
> if r2 != r3 goto out;
>
> r2 = *(u64 *)(r1 + 24);
> ld_imm64 r3, sha256_of_map_part4 // precomputed by bpf_object__gen_load/libbpf (H_meta_4)
> if r2 != r3 goto out;
> ...
>
> This implicitly makes the payload equivalent to the signed block (B_signed)
>
> I_loader || H_meta
>
> bpftool then generates the signature of this I_loader payload (which
> now contains the expected H_meta) using a key and an identity:
>
> This signature is stored in bpf_attr, which is extended as follows for
> the BPF_PROG_LOAD command:
>
> __aligned_u64 signature;
> __u32 signature_size;
> __u32 keyring_id;
>
> The reason for a simpler UAPI is that it's more future proof, e.g. with more
> stable instruction buffers or loader programs being generated directly by the compilers.
> A simple API also allows simple programs (e.g. for networking) that don't need
> loader programs to use signing directly.
>
> # Extending OBJ_GET_INFO_BY_FD for hashes
>
> OBJ_GET_INFO_BY_FD is used to get information about BPF objects (maps, programs, links), and
> returning the hash of a map is a natural extension of the UAPI as it can be
> helpful for debugging, fingerprinting, etc.
>
> Currently, it's only implemented for BPF_MAP_TYPE_ARRAY. It can be trivially
> extended for BPF programs to return the complete SHA256 along with the tag.
>
> The SHA is stored in struct bpf_map for exclusive and frozen maps
>
> struct bpf_map {
> + u64 sha[4];
> const struct bpf_map_ops *ops;
> struct bpf_map *inner_map_meta;
> };
>
> ## Exclusive BPF maps
>
> Exclusivity ensures that the map can only be used by a future BPF
> program whose SHA256 hash matches sha256_of_future_prog.
>
> First, bpf_prog_calc_tag() is updated to compute the SHA256 instead of
> SHA1, and this hash is stored in struct bpf_prog_aux:
>
> @@ -1588,6 +1588,7 @@ struct bpf_prog_aux {
> int cgroup_atype; /* enum cgroup_bpf_attach_type */
> struct bpf_map *cgroup_storage[MAX_BPF_CGROUP_STORAGE_TYPE];
> char name[BPF_OBJ_NAME_LEN];
> + u64 sha[4];
> u64 (*bpf_exception_cb)(u64 cookie, u64 sp, u64 bp, u64, u64);
> // ...
> };
>
> An exclusive map is created by passing an excl_prog_hash
> (and excl_prog_hash_size) in the BPF_MAP_CREATE command.
> When a BPF program is subsequently loaded and it attempts to use this map,
> the kernel will compare the program's own SHA256 hash against the one
> registered with the map; if they match, the map will be added to prog->used_maps[].
>
> The program load will fail if the hashes do not match or if the map is
> already in use by another (non-matching) exclusive program.
>
> Exclusive maps ensure that no other BPF program can compromise the integrity of
> the map after the signature verification.
>
> NOTE: Exclusive maps cannot be added as inner maps.
>
> # Light Skeleton Sequence (Userspace Example)
>
> err = map_fd = skel_map_create(BPF_MAP_TYPE_ARRAY, "__loader.map",
> opts->excl_prog_hash,
> opts->excl_prog_hash_sz, 4,
> opts->data_sz, 1);
> err = skel_map_update_elem(map_fd, &key, opts->data, 0);
>
> err = skel_map_freeze(map_fd);
>
> // Kernel computes the hash of the map.
> err = skel_obj_get_info_by_fd(map_fd);
>
> memset(&attr, 0, prog_load_attr_sz);
> attr.prog_type = BPF_PROG_TYPE_SYSCALL;
> attr.insns = (long) opts->insns;
> attr.insn_cnt = opts->insns_sz / sizeof(struct bpf_insn);
> attr.signature = (long) opts->signature;
> attr.signature_size = opts->signature_sz;
> attr.keyring_id = opts->keyring_id;
> attr.license = (long) "Dual BSD/GPL";
>
> The kernel will:
>
> * Compute the hash of the provided I_loader bytecode.
> * Verify the signature against this computed hash.
> * Check if the metadata map (now exclusive) is intended for this
> program's hash.
>
> The signature check happens in BPF_PROG_LOAD before the security_bpf_prog
> LSM hook.
>
> This ensures that the loaded loader program (I_loader), including the
> embedded expected hash of the metadata (H_meta), is trusted.
> Since the loader program is now trusted, it can be entrusted to verify
> the actual metadata (M_metadata) read from the (now exclusive and
> frozen) map against the embedded (and trusted) H_meta. There is no
> Time-of-Check-Time-of-Use (TOCTOU) vulnerability here because:
>
> * The signature covers the I_loader and its embedded H_meta.
> * The metadata map M_metadata is frozen before the loader program is loaded
> and associated with it.
> * The map is made exclusive to the specific (signed and verified)
> loader program.
>
> [1] https://lore.kernel.org/bpf/CACYkzJ6VQUExfyt0=-FmXz46GHJh3d=FXh5j4KfexcEFbHV-vg@mail.gmail.com/#t
>
Can we expect to see a v2 of this patchset sometime soon? We are
planning on submitting follow-up patchsets that build on this effort.
-blaise
>
> KP Singh (12):
> bpf: Implement an internal helper for SHA256 hashing
> bpf: Update the bpf_prog_calc_tag to use SHA256
> bpf: Implement exclusive map creation
> libbpf: Implement SHA256 internal helper
> libbpf: Support exclusive map creation
> selftests/bpf: Add tests for exclusive maps
> bpf: Return hashes of maps in BPF_OBJ_GET_INFO_BY_FD
> bpf: Implement signature verification for BPF programs
> libbpf: Update light skeleton for signing
> libbpf: Embed and verify the metadata hash in the loader
> bpftool: Add support for signing BPF programs
> selftests/bpf: Enable signature verification for all lskel tests
>
> include/linux/bpf.h | 22 +-
> include/linux/filter.h | 6 -
> include/uapi/linux/bpf.h | 15 +-
> kernel/bpf/arraymap.c | 17 ++
> kernel/bpf/core.c | 88 ++++----
> kernel/bpf/hashtab.c | 15 +-
> kernel/bpf/syscall.c | 112 +++++++++-
> kernel/bpf/verifier.c | 7 +
> kernel/trace/bpf_trace.c | 6 +-
> .../bpf/bpftool/Documentation/bpftool-gen.rst | 12 +
> .../bpftool/Documentation/bpftool-prog.rst | 12 +
> tools/bpf/bpftool/Makefile | 6 +-
> tools/bpf/bpftool/cgroup.c | 5 +-
> tools/bpf/bpftool/gen.c | 58 ++++-
> tools/bpf/bpftool/main.c | 21 +-
> tools/bpf/bpftool/main.h | 11 +
> tools/bpf/bpftool/prog.c | 25 +++
> tools/bpf/bpftool/sign.c | 211 ++++++++++++++++++
> tools/include/uapi/linux/bpf.h | 15 +-
> tools/lib/bpf/bpf.c | 6 +-
> tools/lib/bpf/bpf.h | 4 +-
> tools/lib/bpf/bpf_gen_internal.h | 2 +
> tools/lib/bpf/gen_loader.c | 52 +++++
> tools/lib/bpf/libbpf.c | 125 ++++++++++-
> tools/lib/bpf/libbpf.h | 16 +-
> tools/lib/bpf/libbpf.map | 5 +
> tools/lib/bpf/libbpf_internal.h | 9 +
> tools/lib/bpf/libbpf_version.h | 2 +-
> tools/lib/bpf/skel_internal.h | 57 ++++-
> tools/testing/selftests/bpf/.gitignore | 1 +
> tools/testing/selftests/bpf/Makefile | 13 +-
> .../selftests/bpf/prog_tests/map_excl.c | 130 +++++++++++
> tools/testing/selftests/bpf/progs/map_excl.c | 65 ++++++
> tools/testing/selftests/bpf/test_progs.c | 13 ++
> 34 files changed, 1079 insertions(+), 85 deletions(-)
> create mode 100644 tools/bpf/bpftool/sign.c
> create mode 100644 tools/testing/selftests/bpf/prog_tests/map_excl.c
> create mode 100644 tools/testing/selftests/bpf/progs/map_excl.c
>
> --
> 2.43.0
^ permalink raw reply [flat|nested] 79+ messages in thread
* Re: [PATCH 00/12] Signed BPF programs
2025-07-08 15:15 ` Blaise Boscaccy
@ 2025-07-10 14:49 ` KP Singh
0 siblings, 0 replies; 79+ messages in thread
From: KP Singh @ 2025-07-10 14:49 UTC (permalink / raw)
To: Blaise Boscaccy
Cc: bpf, linux-security-module, paul, kys, ast, daniel, andrii
> >
> > This ensures that the loaded loader program (I_loader), including the
> > embedded expected hash of the metadata (H_meta), is trusted.
> > Since the loader program is now trusted, it can be entrusted to verify
> > the actual metadata (M_metadata) read from the (now exclusive and
> > frozen) map against the embedded (and trusted) H_meta. There is no
> > Time-of-Check-Time-of-Use (TOCTOU) vulnerability here because:
> >
> > * The signature covers the I_loader and its embedded H_meta.
> > * The metadata map M_metadata is frozen before the loader program is loaded
> > and associated with it.
> > * The map is made exclusive to the specific (signed and verified)
> > loader program.
> >
> > [1] https://lore.kernel.org/bpf/CACYkzJ6VQUExfyt0=-FmXz46GHJh3d=FXh5j4KfexcEFbHV-vg@mail.gmail.com/#t
> >
>
> Can we expect to see a v2 of this patchset sometime soon? We are
> planning on submitting follow-up patchsets that build on this effort.
>
I have been on PTO due to personal stuff, will try to send this in the
coming week or two.
- KP
^ permalink raw reply [flat|nested] 79+ messages in thread