* [PATCH bpf-next 01/11] bpf: make bpf_d_path() helper use probe-read semantics
2024-02-20 9:27 [PATCH bpf-next 00/11] bpf: probe-read bpf_d_path() and add new acquire/release BPF kfuncs Matt Bobrowski
@ 2024-02-20 9:27 ` Matt Bobrowski
2024-02-20 9:48 ` Christian Brauner
2024-02-20 9:27 ` [PATCH bpf-next 02/11] bpf/selftests: adjust selftests for BPF helper bpf_d_path() Matt Bobrowski
` (9 subsequent siblings)
10 siblings, 1 reply; 16+ messages in thread
From: Matt Bobrowski @ 2024-02-20 9:27 UTC (permalink / raw)
To: bpf; +Cc: ast, andrii, kpsingh, jannh, jolsa, daniel, brauner,
linux-fsdevel
There have now been several reported instances [0, 1, 2] where usage
of the BPF helper bpf_d_path() has led to some form of memory
corruption.
The fundamental reason why we repeatedly see bpf_d_path() being
susceptible to such memory corruption issues is that it only
enforces ARG_PTR_TO_BTF_ID constraints on its struct path
argument. This essentially means that it only requires an in-kernel
pointer of type struct path to be provided to it. Whether the struct
path is fully intact when calling bpf_d_path() depends on the
underlying context and on where and when the supplied struct path
was obtained. It's certainly possible to call bpf_d_path() and
subsequently d_path() from contexts where the supplied struct path
has already started being torn down by __fput() and such. An example
of this is perfectly illustrated in [0].
Moving forward, we simply cannot enforce KF_TRUSTED_ARGS semantics
onto struct path of bpf_d_path(), as this approach would presumably
lead to some pretty wide scale and highly undesirable BPF program
breakage. To avoid breaking any pre-existing BPF program that is
dependent on bpf_d_path(), I propose that we take a different path and
re-implement an incredibly minimalistic and bare bone version of
d_path() which is entirely backed by kernel probe-read semantics. IOW,
a version of d_path() that is backed by
copy_from_kernel_nofault(). This ensures that any reads performed
against the supplied struct path to bpf_d_path() which may end up
faulting for whatever reason end up being gracefully handled and fixed
up.
The caveat with such an approach is that we can't fully uphold all of
d_path()'s path resolution capabilities. Resolving a path which is
comprised of a dentry that makes use of dynamic names via
d_op->d_dname() isn't possible, as we can't enforce probe-read
semantics onto indirect function calls performed via d_op as they're
implementation dependent. For such cases, we just return
-EOPNOTSUPP. This might be a little surprising to some users,
especially those that are interested in resolving paths that involve
a dentry that resides on some non-mountable pseudo-filesystem such as
pipefs/sockfs/nsfs, but it's arguably better than enforcing
KF_TRUSTED_ARGS onto bpf_d_path() and causing an unnecessary
shemozzle for users. Additionally, we don't make use of all the
locking semantics, or handle all the error cases, that d_path()
naturally would. This is fine however, as we're only looking to
provide users with a rather acceptable version of a reconstructed
path, whilst they eventually migrate over to the trusted
bpf_path_d_path() BPF kfunc variant.
Note that the selftests that go with this change to bpf_d_path() have
been purposely split out into a completely separate patch. This is so
that the reviewers' attention is not torn by noise and can remain
focused on reviewing the implementation details contained within this
patch.
[0] https://lore.kernel.org/bpf/CAG48ez0ppjcT=QxU-jtCUfb5xQb3mLr=5FcwddF_VKfEBPs_Dg@mail.gmail.com/
[1] https://lore.kernel.org/bpf/20230606181714.532998-1-jolsa@kernel.org/
[2] https://lore.kernel.org/bpf/20220219113744.1852259-1-memxor@gmail.com/
Signed-off-by: Matt Bobrowski <mattbobrowski@google.com>
---
fs/Makefile | 6 +-
fs/probe_read_d_path.c | 150 ++++++++++++++++++++++++++++++
include/linux/probe_read_d_path.h | 13 +++
kernel/trace/bpf_trace.c | 13 ++-
4 files changed, 172 insertions(+), 10 deletions(-)
create mode 100644 fs/probe_read_d_path.c
create mode 100644 include/linux/probe_read_d_path.h
diff --git a/fs/Makefile b/fs/Makefile
index c09016257f05..945c9c84d35d 100644
--- a/fs/Makefile
+++ b/fs/Makefile
@@ -4,7 +4,7 @@
#
# 14 Sep 2000, Christoph Hellwig <hch@infradead.org>
# Rewritten to use lists instead of if-statements.
-#
+#
obj-y := open.o read_write.o file_table.o super.o \
@@ -12,7 +12,7 @@ obj-y := open.o read_write.o file_table.o super.o \
ioctl.o readdir.o select.o dcache.o inode.o \
attr.o bad_inode.o file.o filesystems.o namespace.o \
seq_file.o xattr.o libfs.o fs-writeback.o \
- pnode.o splice.o sync.o utimes.o d_path.o \
+ pnode.o splice.o sync.o utimes.o d_path.o probe_read_d_path.o \
stack.o fs_struct.o statfs.o fs_pin.o nsfs.o \
fs_types.o fs_context.o fs_parser.o fsopen.o init.o \
kernel_read_file.o mnt_idmapping.o remap_range.o
@@ -58,7 +58,7 @@ obj-$(CONFIG_CONFIGFS_FS) += configfs/
obj-y += devpts/
obj-$(CONFIG_DLM) += dlm/
-
+
# Do not add any filesystems before this line
obj-$(CONFIG_NETFS_SUPPORT) += netfs/
obj-$(CONFIG_REISERFS_FS) += reiserfs/
diff --git a/fs/probe_read_d_path.c b/fs/probe_read_d_path.c
new file mode 100644
index 000000000000..8d0db902f836
--- /dev/null
+++ b/fs/probe_read_d_path.c
@@ -0,0 +1,150 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (c) 2024 Google LLC.
+ */
+
+#include "asm/ptrace.h"
+#include <linux/container_of.h>
+#include <linux/dcache.h>
+#include <linux/fs_struct.h>
+#include <linux/uaccess.h>
+#include <linux/path.h>
+#include <linux/probe_read_d_path.h>
+
+#include "mount.h"
+
+#define PROBE_READ(src) \
+ ({ \
+ typeof(src) __r; \
+ if (copy_from_kernel_nofault((void *)(&__r), (&src), \
+ sizeof((__r)))) \
+ memset((void *)(&__r), 0, sizeof((__r))); \
+ __r; \
+ })
+
+static inline bool probe_read_d_unlinked(const struct dentry *dentry)
+{
+ return !PROBE_READ(dentry->d_hash.pprev) &&
+ !(dentry == PROBE_READ(dentry->d_parent));
+}
+
+static long probe_read_prepend(const char *s, int len, char *buf, int *buflen)
+{
+ /*
+ * The supplied len that is to be copied into the buffer will result in
+ * an overflow. The true implementation of d_path() already returns an
+ * error for such overflow cases, so the semantics with regards to the
+ * bpf_d_path() helper returning the same error value for overflow cases
+ * remain the same.
+ */
+ if (len > *buflen)
+ return -ENAMETOOLONG;
+
+ /*
+ * The supplied string fits completely into the remaining buffer
+ * space. Attempt to make the copy.
+ */
+ *buflen -= len;
+ buf += *buflen;
+ return copy_from_kernel_nofault(buf, s, len);
+}
+
+static bool use_dname(const struct path *path)
+{
+ const struct dentry_operations *d_op;
+ char *(*d_dname)(struct dentry *, char *, int);
+
+ d_op = PROBE_READ(path->dentry->d_op);
+ d_dname = PROBE_READ(d_op->d_dname);
+
+ return d_op && d_dname &&
+ (!(path->dentry == PROBE_READ(path->dentry->d_parent)) ||
+ path->dentry != PROBE_READ(path->mnt->mnt_root));
+}
+
+char *probe_read_d_path(const struct path *path, char *buf, int buflen)
+{
+ int len;
+ long err;
+ struct path root;
+ struct mount *mnt;
+ struct dentry *dentry;
+
+ dentry = path->dentry;
+ mnt = container_of(path->mnt, struct mount, mnt);
+
+ /*
+ * We cannot back dentry->d_op->d_dname() with probe-read semantics, so
+ * just return an error to the caller when the supplied path contains a
+ * dentry component that makes use of a dynamic name.
+ */
+ if (use_dname(path))
+ return ERR_PTR(-EOPNOTSUPP);
+
+ err = probe_read_prepend("\0", 1, buf, &buflen);
+ if (err)
+ return ERR_PTR(err);
+
+ if (probe_read_d_unlinked(dentry)) {
+ err = probe_read_prepend(" (deleted)", 10, buf, &buflen);
+ if (err)
+ return ERR_PTR(err);
+ }
+
+ len = buflen;
+ root = PROBE_READ(current->fs->root);
+ while (dentry != root.dentry || &mnt->mnt != root.mnt) {
+ struct dentry *parent;
+ if (dentry == PROBE_READ(mnt->mnt.mnt_root)) {
+ struct mount *m;
+
+ m = PROBE_READ(mnt->mnt_parent);
+ if (mnt != m) {
+ dentry = PROBE_READ(mnt->mnt_mountpoint);
+ mnt = m;
+ continue;
+ }
+
+ /*
+ * If we've reached the global root, then there's
+ * nothing we can really do but bail.
+ */
+ break;
+ }
+
+ parent = PROBE_READ(dentry->d_parent);
+ if (dentry == parent) {
+ /*
+ * Escaped? We return an ECANCELED error here to signify
+ * that we've prematurely terminated pathname
+ * reconstruction. We've potentially hit a root dentry
+ * that isn't associated with any roots from the mounted
+ * filesystems that we've jumped through, so it's not
+ * clear where we are in the VFS exactly.
+ */
+ err = -ECANCELED;
+ break;
+ }
+
+ err = probe_read_prepend(dentry->d_name.name,
+ PROBE_READ(dentry->d_name.len), buf,
+ &buflen);
+ if (err)
+ break;
+
+ err = probe_read_prepend("/", 1, buf, &buflen);
+ if (err)
+ break;
+ dentry = parent;
+ }
+
+ if (err)
+ return ERR_PTR(err);
+
+ if (len == buflen) {
+ err = probe_read_prepend("/", 1, buf, &buflen);
+ if (err)
+ return ERR_PTR(err);
+ }
+ return buf + buflen;
+}
diff --git a/include/linux/probe_read_d_path.h b/include/linux/probe_read_d_path.h
new file mode 100644
index 000000000000..9b3908746657
--- /dev/null
+++ b/include/linux/probe_read_d_path.h
@@ -0,0 +1,13 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (c) 2024 Google LLC.
+ */
+
+#ifndef _LINUX_PROBE_READ_D_PATH_H
+#define _LINUX_PROBE_READ_D_PATH_H
+
+#include <linux/path.h>
+
+extern char *probe_read_d_path(const struct path *path, char *buf, int buflen);
+
+#endif /* _LINUX_PROBE_READ_D_PATH_H */
diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
index 241ddf5e3895..12dbd9cef1fa 100644
--- a/kernel/trace/bpf_trace.c
+++ b/kernel/trace/bpf_trace.c
@@ -25,6 +25,7 @@
#include <linux/verification.h>
#include <linux/namei.h>
#include <linux/fileattr.h>
+#include <linux/probe_read_d_path.h>
#include <net/bpf_sk_storage.h>
@@ -923,14 +924,12 @@ BPF_CALL_3(bpf_d_path, struct path *, path, char *, buf, u32, sz)
if (len < 0)
return len;
- p = d_path(&copy, buf, sz);
- if (IS_ERR(p)) {
- len = PTR_ERR(p);
- } else {
- len = buf + sz - p;
- memmove(buf, p, len);
- }
+ p = probe_read_d_path(&copy, buf, sz);
+ if (IS_ERR(p))
+ return PTR_ERR(p);
+ len = buf + sz - p;
+ memmove(buf, p, len);
return len;
}
--
2.44.0.rc0.258.g7320e95886-goog
/M
^ permalink raw reply related [flat|nested] 16+ messages in thread

* Re: [PATCH bpf-next 01/11] bpf: make bpf_d_path() helper use probe-read semantics
2024-02-20 9:27 ` [PATCH bpf-next 01/11] bpf: make bpf_d_path() helper use probe-read semantics Matt Bobrowski
@ 2024-02-20 9:48 ` Christian Brauner
2024-02-20 13:22 ` Matt Bobrowski
0 siblings, 1 reply; 16+ messages in thread
From: Christian Brauner @ 2024-02-20 9:48 UTC (permalink / raw)
To: Matt Bobrowski
Cc: bpf, ast, andrii, kpsingh, jannh, jolsa, daniel, linux-fsdevel,
Linus Torvalds
On Tue, Feb 20, 2024 at 09:27:23AM +0000, Matt Bobrowski wrote:
> [...]
NAK. We're not going to add a semi-functional reimplementation of
d_path() for bpf. This relied on VFS internals and guarantees that were
never given. Restrict it to KF_TRUSTED_ARGS as it was suggested when
this originally came up or fix it another way. But we're not adding a
bunch of kfuncs to even more sensitive VFS machinery and then build a
d_path() clone just so we can retroactively justify broken behavior.
> [...]
^ permalink raw reply [flat|nested] 16+ messages in thread

* Re: [PATCH bpf-next 01/11] bpf: make bpf_d_path() helper use probe-read semantics
2024-02-20 9:48 ` Christian Brauner
@ 2024-02-20 13:22 ` Matt Bobrowski
2024-02-21 7:55 ` Christian Brauner
0 siblings, 1 reply; 16+ messages in thread
From: Matt Bobrowski @ 2024-02-20 13:22 UTC (permalink / raw)
To: Christian Brauner
Cc: bpf, ast, andrii, kpsingh, jannh, jolsa, daniel, linux-fsdevel,
Linus Torvalds
On Tue, Feb 20, 2024 at 10:48:10AM +0100, Christian Brauner wrote:
> On Tue, Feb 20, 2024 at 09:27:23AM +0000, Matt Bobrowski wrote:
> > [...]
>
> NAK. We're not going to add a semi-functional reimplementation of
> d_path() for bpf. This relied on VFS internals and guarantees that were
> never given. Restrict it to KF_TRUSTED_ARGS as it was suggested when
> this originally came up or fix it another way. But we're not adding a
> bunch of kfuncs to even more sensitive VFS machinery and then build a
> d_path() clone just so we can retroactively justify broken behavior.
OK, I agree, having a semi-functional re-implementation of d_path() is
indeed suboptimal. However, also understand that slapping the
KF_TRUSTED_ARGS constraint onto the pre-existing BPF helper
bpf_d_path() would outright break a lot of BPF programs out there, so
I can't see how that would be an acceptable approach moving forward
here either.
Let's say that we decided to leave the pre-existing bpf_d_path()
implementation as is, accepting that it is fundamentally susceptible
to memory corruption issues, are you saying that you're also not for
adding the KF_TRUSTED_ARGS d_path() variant as I've done here
[0]? Or, is it the other supporting reference counting based BPF
kfuncs [1, 2] that have irked you, such that you aren't supportive of
either?
[0] https://lore.kernel.org/bpf/20240220-erstochen-notwehr-755dbd0a02b3@brauner/T/#m542b86991b257cf9612406f1cc4d5692bcb75da8
[1] https://lore.kernel.org/bpf/20240220-erstochen-notwehr-755dbd0a02b3@brauner/T/#mc2aaadbe17490aeb1dde09071629b0b2a87d7436
[2] https://lore.kernel.org/bpf/20240220-erstochen-notwehr-755dbd0a02b3@brauner/T/#m07fa7a0c03af530d2ab3c4ef25c377b1d6ef17f8
/M
^ permalink raw reply [flat|nested] 16+ messages in thread
* Re: [PATCH bpf-next 01/11] bpf: make bpf_d_path() helper use probe-read semantics
2024-02-20 13:22 ` Matt Bobrowski
@ 2024-02-21 7:55 ` Christian Brauner
2024-02-21 13:38 ` Matt Bobrowski
0 siblings, 1 reply; 16+ messages in thread
From: Christian Brauner @ 2024-02-21 7:55 UTC (permalink / raw)
To: Matt Bobrowski
Cc: bpf, ast, andrii, kpsingh, jannh, jolsa, daniel, linux-fsdevel,
Linus Torvalds
On Tue, Feb 20, 2024 at 01:22:14PM +0000, Matt Bobrowski wrote:
> On Tue, Feb 20, 2024 at 10:48:10AM +0100, Christian Brauner wrote:
> > On Tue, Feb 20, 2024 at 09:27:23AM +0000, Matt Bobrowski wrote:
> > > [...]
> >
> > NAK. We're not going to add a semi-functional reimplementation of
> > d_path() for bpf. This relied on VFS internals and guarantees that were
> > never given. Restrict it to KF_TRUSTED_ARGS as it was suggested when
> > this originally came up or fix it another way. But we're not adding a
> > bunch of kfuncs to even more sensitive VFS machinery and then build a
> > d_path() clone just so we can retroactively justify broken behavior.
>
> OK, I agree, having a semi-functional re-implementation of d_path() is
> indeed suboptimal. However, also understand that slapping the
The ugliness of the duplicated code made me start my mail with NAK. It
would've been enough to just say no.
> KF_TRUSTED_ARGS constraint onto the pre-existing BPF helper
> bpf_d_path() would outright break a lot of BPF programs out there, so
> > I can't see how that would be an acceptable approach moving forward
> here either.
>
> Let's say that we decided to leave the pre-existing bpf_d_path()
> > implementation as is, accepting that it is fundamentally susceptible
> to memory corruption issues, are you saying that you're also not for
> adding the KF_TRUSTED_ARGS d_path() variant as I've done so here
No, that's fine and was the initial proposal anyway. You're already
using the existing d_path() anyway in that bpf_d_path() thing. So
exposing another variant with KF_TRUSTED_ARGS restriction is fine. But
not hacking up a custom d_path() variant.
> [0]. Or, is it the other supporting reference counting based BPF
> kfuncs [1, 2] that have irked you and aren't supportive of either?
Yes, because you're exposing fs_root, fs_pwd, path_put() and fdput(),
get_task_exe_file(), and get_mm_exe_file(). I don't see any of that
being turned into kfuncs.
^ permalink raw reply [flat|nested] 16+ messages in thread
* Re: [PATCH bpf-next 01/11] bpf: make bpf_d_path() helper use probe-read semantics
2024-02-21 7:55 ` Christian Brauner
@ 2024-02-21 13:38 ` Matt Bobrowski
0 siblings, 0 replies; 16+ messages in thread
From: Matt Bobrowski @ 2024-02-21 13:38 UTC (permalink / raw)
To: Christian Brauner
Cc: bpf, ast, andrii, kpsingh, jannh, jolsa, daniel, linux-fsdevel,
Linus Torvalds
Hey Christian,
On Wed, Feb 21, 2024 at 08:55:25AM +0100, Christian Brauner wrote:
> On Tue, Feb 20, 2024 at 01:22:14PM +0000, Matt Bobrowski wrote:
> > On Tue, Feb 20, 2024 at 10:48:10AM +0100, Christian Brauner wrote:
> > > On Tue, Feb 20, 2024 at 09:27:23AM +0000, Matt Bobrowski wrote:
> > > > There has now been several reported instances [0, 1, 2] where the
> > > > usage of the BPF helper bpf_d_path() has led to some form of memory
> > > > corruption issue.
> > > >
> > > > The fundamental reason behind why we repeatedly see bpf_d_path() being
> > > > susceptible to such memory corruption issues is because it only
> > > > enforces ARG_PTR_TO_BTF_ID constraints onto its struct path
> > > > argument. This essentially means that it only requires an in-kernel
> > > > pointer of type struct path to be provided to it. Whether the struct
> > > > path is fully intact when bpf_d_path() is called depends on the
> > > > underlying context, and on where and when the supplied struct path
> > > > was obtained. It's certainly possible to call
> > > > bpf_d_path() and subsequently d_path() from contexts where the
> > > > supplied struct path to bpf_d_path() has already started being torn
> > > > down by __fput() and such. An example of this is perfectly illustrated
> > > > in [0].
> > > >
> > > > Moving forward, we simply cannot enforce KF_TRUSTED_ARGS semantics
> > > > onto struct path of bpf_d_path(), as this approach would presumably
> > > > lead to some pretty wide scale and highly undesirable BPF program
> > > > breakage. To avoid breaking any pre-existing BPF program that is
> > > > dependent on bpf_d_path(), I propose that we take a different path and
> > > > re-implement an incredibly minimalistic and bare bone version of
> > > > d_path() which is entirely backed by kernel probe-read semantics. IOW,
> > > > a version of d_path() that is backed by
> > > > copy_from_kernel_nofault(). This ensures that any reads performed
> > > > against the supplied struct path to bpf_d_path() which may end up
> > > > faulting for whatever reason end up being gracefully handled and fixed
> > > > up.
> > > >
> > > > The caveat with such an approach is that we can't fully uphold all of
> > > > d_path()'s path resolution capabilities. Resolving a path which is
> > > > comprised of dentries that make use of dynamic names via
> > > > d_op->d_dname() isn't possible, as we can't enforce probe-read
> > > > semantics onto indirect function calls performed via d_op; they're
> > > > implementation dependent. For such cases, we just return
> > > > -EOPNOTSUPP. This might be a
> > > > little surprising to some users, especially those that are interested
> > > > in resolving paths that involve a dentry that resides on some
> > > > non-mountable pseudo-filesystem, being pipefs/sockfs/nsfs, but it's
> > > > arguably better than enforcing KF_TRUSTED_ARGS onto bpf_d_path() and
> > > > causing an unnecessary shemozzle for users. Additionally, we don't
> > >
> > > NAK. We're not going to add a semi-functional reimplementation of
> > > d_path() for bpf. This relied on VFS internals and guarantees that were
> > > never given. Restrict it to KF_TRUSTED_ARGS as it was suggested when
> > > this originally came up or fix it another way. But we're not adding a
> > > bunch of kfuncs to even more sensitive VFS machinery and then build a
> > > d_path() clone just so we can retroactively justify broken behavior.
> >
> > OK, I agree, having a semi-functional re-implementation of d_path() is
> > indeed suboptimal. However, also understand that slapping the
>
> The ugliness of the duplicated code made me start my mail with NAK. It
> would've been enough to just say no.
>
> > KF_TRUSTED_ARGS constraint onto the pre-existing BPF helper
> > bpf_d_path() would outright break a lot of BPF programs out there, so
> > I can't see how that would be an acceptable approach moving forward
> > here either.
> >
> > Let's say that we decided to leave the pre-existing bpf_d_path()
> > implementation as is, accepting that it is fundamentally susceptible
> > to memory corruption issues, are you saying that you're also not for
> > adding the KF_TRUSTED_ARGS d_path() variant as I've done so here
>
> No, that's fine and was the initial proposal anyway. You're already
> using the existing d_path() anyway in that bpf_d_path() thing. So
> exposing another variant with KF_TRUSTED_ARGS restriction is fine. But
> not hacking up a custom d_path() variant.
OK, thank you for clarifying. Perhaps we should just make a remark in
the form of a comment against bpf_d_path() stating that this BPF
helper is considered unsafe and users should look to migrate to the
newly added KF_TRUSTED_ARGS variant if at all possible.
> > [0]. Or, is it the other supporting reference counting based BPF
> > kfuncs [1, 2] that have irked you and aren't supportive of either?
>
> Yes, because you're exposing fs_root, fs_pwd, path_put() and fdput(),
> get_task_exe_file(), get_mm_exe_file(). None of that I see being turned
> into kfuncs.
Hm, OK, but do know that BPF kfuncs do not make any promises around
being a stable interface; they never have and never will. Therefore,
it's not like introducing this kind of dependency on such APIs from
BPF kfuncs would hinder you from fundamentally modifying them moving
forward?
Additionally, given that these new acquire/release based BPF kfuncs
which rely on APIs like get_fs_root() and path_put() are in fact
restricted to BPF LSM programs, usage of such BPF kfuncs from the
context of a BPF LSM program would rather be analogous to a
pre-existing LSM module calling get_fs_root() and path_put()
explicitly within one of its implemented hooks, no? IOW, once a BPF
LSM program is loaded and JITed, what's the fundamental difference
between a baked in LSM module hook implementation which calls
get_fs_root() and a BPF LSM program which calls
bpf_get_task_fs_root()? They're both being used in a perfectly
reasonable and sane like-for-like context, so what's the issue with
exposing such APIs as BPF kfuncs if they're being used appropriately?
It really doesn't make sense to provide independent reference counting
implementations just for BPF if there's some pre-existing
infrastructure in the kernel that does it the right way.
Also note that without such new reference counting BPF kfuncs which
I've proposed within this patch series the KF_TRUSTED_ARGS variant of
bpf_d_path() that we've agreed on becomes somewhat difficult to use in
practice. It'd essentially only be usable from LSM hooks that pass in
a struct path via the context parameter, whilst in reality it's
considered rather typical to also pass a struct path like
&current->mm->exe_file->f_path and &current->fs->pwd to bpf_d_path()
and friends from within the implementation of an LSM hook. Such
struct path objects nested some levels deep aren't considered as being
trusted and therefore cannot be passed to a BPF kfunc that enforces
the KF_TRUSTED_ARGS constraint. The only way to acquire trust on a
pointer after performing such a struct walk is by grabbing a reference
on it, and hence why this KF_TRUSTED_ARGS change to d_path() and these
new BPF kfuncs go hand in hand.
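To make that struct walk problem concrete, here's a minimal sketch of what a BPF LSM program would look like under this proposal. The kfunc names follow this series, but the name of the KF_TRUSTED_ARGS d_path() variant and the exact signatures are assumptions on my part:

```c
SEC("lsm.s/file_open")
int BPF_PROG(log_exe_path)
{
	struct task_struct *task = bpf_get_current_task_btf();
	struct file *exe;
	char buf[256];

	/* task->mm->exe_file->f_path is reached via a struct walk and is
	 * therefore untrusted; a KF_TRUSTED_ARGS d_path() variant would
	 * reject it. Acquiring a reference first yields a trusted pointer.
	 */
	exe = bpf_get_task_exe_file(task); /* KF_ACQUIRE, may return NULL */
	if (!exe)
		return 0;

	bpf_path_d_path(&exe->f_path, buf, sizeof(buf)); /* assumed name */

	bpf_put_file(exe); /* KF_RELEASE */
	return 0;
}
```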
Apologies about all the questions and comments here, but I'm really
just trying to understand why there's so much push back with regard
to adding these reference counting BPF kfuncs?
/M
^ permalink raw reply [flat|nested] 16+ messages in thread
* [PATCH bpf-next 02/11] bpf/selftests: adjust selftests for BPF helper bpf_d_path()
2024-02-20 9:27 [PATCH bpf-next 00/11] bpf: probe-read bpf_d_path() and add new acquire/release BPF kfuncs Matt Bobrowski
2024-02-20 9:27 ` [PATCH bpf-next 01/11] bpf: make bpf_d_path() helper use probe-read semantics Matt Bobrowski
@ 2024-02-20 9:27 ` Matt Bobrowski
2024-02-20 9:27 ` [PATCH bpf-next 03/11] bpf: rename fs_kfunc_set_ids to lsm_kfunc_set_ids Matt Bobrowski
` (8 subsequent siblings)
10 siblings, 0 replies; 16+ messages in thread
From: Matt Bobrowski @ 2024-02-20 9:27 UTC (permalink / raw)
To: bpf; +Cc: ast, andrii, kpsingh, jannh, jolsa, daniel, brauner,
linux-fsdevel
The BPF helper bpf_d_path() has been modified such that it makes use
of probe-read semantics. In turn, its behaviour has slightly changed
under certain circumstances, so the backing test suite needs to be
adjusted to account for this.
The probe-read based d_path() implementation cannot handle dentries
that have their name backed by d_op->d_dname(). For paths containing
such dentries, the probe-read based implementation returns
-EOPNOTSUPP, so the test suite has been updated to assert this
behaviour.
Signed-off-by: Matt Bobrowski <mattbobrowski@google.com>
---
.../testing/selftests/bpf/prog_tests/d_path.c | 84 +++++++++++++++----
.../testing/selftests/bpf/progs/test_d_path.c | 2 +-
2 files changed, 68 insertions(+), 18 deletions(-)
diff --git a/tools/testing/selftests/bpf/prog_tests/d_path.c b/tools/testing/selftests/bpf/prog_tests/d_path.c
index ccc768592e66..d77ae1b1e6ba 100644
--- a/tools/testing/selftests/bpf/prog_tests/d_path.c
+++ b/tools/testing/selftests/bpf/prog_tests/d_path.c
@@ -6,7 +6,7 @@
#include <sys/syscall.h>
#define MAX_PATH_LEN 128
-#define MAX_FILES 7
+#define MAX_FILES 8
#include "test_d_path.skel.h"
#include "test_d_path_check_rdonly_mem.skel.h"
@@ -25,9 +25,15 @@
static int duration;
+struct want {
+ bool err;
+ long err_code;
+ char path[MAX_PATH_LEN];
+};
+
static struct {
__u32 cnt;
- char paths[MAX_FILES][MAX_PATH_LEN];
+ struct want want[MAX_FILES];
} src;
static int set_pathname(int fd, pid_t pid)
@@ -35,12 +41,12 @@ static int set_pathname(int fd, pid_t pid)
char buf[MAX_PATH_LEN];
snprintf(buf, MAX_PATH_LEN, "/proc/%d/fd/%d", pid, fd);
- return readlink(buf, src.paths[src.cnt++], MAX_PATH_LEN);
+ return readlink(buf, src.want[src.cnt++].path, MAX_PATH_LEN);
}
static int trigger_fstat_events(pid_t pid)
{
- int sockfd = -1, procfd = -1, devfd = -1;
+ int sockfd = -1, procfd = -1, devfd = -1, mntnsfd = -1;
int localfd = -1, indicatorfd = -1;
int pipefd[2] = { -1, -1 };
struct stat fileStat;
@@ -49,10 +55,15 @@ static int trigger_fstat_events(pid_t pid)
/* unmountable pseudo-filesystems */
if (CHECK(pipe(pipefd) < 0, "trigger", "pipe failed\n"))
return ret;
- /* unmountable pseudo-filesystems */
+
sockfd = socket(AF_INET, SOCK_STREAM, 0);
if (CHECK(sockfd < 0, "trigger", "socket failed\n"))
goto out_close;
+
+ mntnsfd = open("/proc/self/ns/mnt", O_RDONLY);
+ if (CHECK(mntnsfd < 0, "trigger", "mntnsfd failed"))
+ goto out_close;
+
/* mountable pseudo-filesystems */
procfd = open("/proc/self/comm", O_RDONLY);
if (CHECK(procfd < 0, "trigger", "open /proc/self/comm failed\n"))
@@ -69,15 +80,35 @@ static int trigger_fstat_events(pid_t pid)
if (CHECK(indicatorfd < 0, "trigger", "open /tmp/ failed\n"))
goto out_close;
+ /*
+ * With bpf_d_path() being backed by probe-read semantics, we cannot
+ * safely resolve paths that are comprised of dentries that make use of
+ * dynamic names. We expect to return -EOPNOTSUPP for such paths.
+ */
+ src.want[src.cnt].err = true;
+ src.want[src.cnt].err_code = -EOPNOTSUPP;
ret = set_pathname(pipefd[0], pid);
if (CHECK(ret < 0, "trigger", "set_pathname failed for pipe[0]\n"))
goto out_close;
+
+ src.want[src.cnt].err = true;
+ src.want[src.cnt].err_code = -EOPNOTSUPP;
ret = set_pathname(pipefd[1], pid);
if (CHECK(ret < 0, "trigger", "set_pathname failed for pipe[1]\n"))
goto out_close;
+
+ src.want[src.cnt].err = true;
+ src.want[src.cnt].err_code = -EOPNOTSUPP;
ret = set_pathname(sockfd, pid);
if (CHECK(ret < 0, "trigger", "set_pathname failed for socket\n"))
goto out_close;
+
+ src.want[src.cnt].err = true;
+ src.want[src.cnt].err_code = -EOPNOTSUPP;
+ ret = set_pathname(mntnsfd, pid);
+ if (CHECK(ret < 0, "trigger", "set_pathname failed for mntnsfd\n"))
+ goto out_close;
+
ret = set_pathname(procfd, pid);
if (CHECK(ret < 0, "trigger", "set_pathname failed for proc\n"))
goto out_close;
@@ -95,6 +126,7 @@ static int trigger_fstat_events(pid_t pid)
fstat(pipefd[0], &fileStat);
fstat(pipefd[1], &fileStat);
fstat(sockfd, &fileStat);
+ fstat(mntnsfd, &fileStat);
fstat(procfd, &fileStat);
fstat(devfd, &fileStat);
fstat(localfd, &fileStat);
@@ -109,6 +141,7 @@ static int trigger_fstat_events(pid_t pid)
close(pipefd[0]);
close(pipefd[1]);
close(sockfd);
+ close(mntnsfd);
close(procfd);
close(devfd);
close(localfd);
@@ -150,24 +183,41 @@ static void test_d_path_basic(void)
goto cleanup;
for (int i = 0; i < MAX_FILES; i++) {
- CHECK(strncmp(src.paths[i], bss->paths_stat[i], MAX_PATH_LEN),
- "check",
- "failed to get stat path[%d]: %s vs %s\n",
- i, src.paths[i], bss->paths_stat[i]);
- CHECK(strncmp(src.paths[i], bss->paths_close[i], MAX_PATH_LEN),
- "check",
- "failed to get close path[%d]: %s vs %s\n",
- i, src.paths[i], bss->paths_close[i]);
+ struct want want = src.want[i];
+
+ /*
+ * Assert that we get the correct error code from bpf_d_path()
+ * when the underlying path contains a dentry that is backed by
+ * a dynamic name.
+ */
+ if (want.err) {
+ CHECK(want.err_code != bss->rets_stat[i], "check",
+ "failed to match stat return[%d]: got=%d, want=%ld [%s]\n",
+ i, bss->rets_stat[i], want.err_code,
+ bss->paths_stat[i]);
+ CHECK(want.err_code != bss->rets_close[i], "check",
+ "failed to match close return[%d]: got=%d, want=%ld [%s]\n",
+ i, bss->rets_close[i], want.err_code,
+ bss->paths_close[i]);
+ continue;
+ }
+
+ CHECK(strncmp(want.path, bss->paths_stat[i], MAX_PATH_LEN),
+ "check", "failed to get stat path[%d]: %s vs %s\n", i,
+ want.path, bss->paths_stat[i]);
+ CHECK(strncmp(want.path, bss->paths_close[i], MAX_PATH_LEN),
+ "check", "failed to get close path[%d]: %s vs %s\n", i,
+ want.path, bss->paths_close[i]);
/* The d_path helper returns size plus NUL char, hence + 1 */
CHECK(bss->rets_stat[i] != strlen(bss->paths_stat[i]) + 1,
"check",
- "failed to match stat return [%d]: %d vs %zd [%s]\n",
- i, bss->rets_stat[i], strlen(bss->paths_stat[i]) + 1,
+ "failed to match stat return [%d]: %d vs %zd [%s]\n", i,
+ bss->rets_stat[i], strlen(bss->paths_stat[i]) + 1,
bss->paths_stat[i]);
CHECK(bss->rets_close[i] != strlen(bss->paths_stat[i]) + 1,
"check",
- "failed to match stat return [%d]: %d vs %zd [%s]\n",
- i, bss->rets_close[i], strlen(bss->paths_close[i]) + 1,
+ "failed to match stat return [%d]: %d vs %zd [%s]\n", i,
+ bss->rets_close[i], strlen(bss->paths_close[i]) + 1,
bss->paths_stat[i]);
}
diff --git a/tools/testing/selftests/bpf/progs/test_d_path.c b/tools/testing/selftests/bpf/progs/test_d_path.c
index 84e1f883f97b..fc2754f166ec 100644
--- a/tools/testing/selftests/bpf/progs/test_d_path.c
+++ b/tools/testing/selftests/bpf/progs/test_d_path.c
@@ -5,7 +5,7 @@
#include <bpf/bpf_tracing.h>
#define MAX_PATH_LEN 128
-#define MAX_FILES 7
+#define MAX_FILES 8
pid_t my_pid = 0;
__u32 cnt_stat = 0;
--
2.44.0.rc0.258.g7320e95886-goog
/M
^ permalink raw reply related [flat|nested] 16+ messages in thread
* [PATCH bpf-next 03/11] bpf: rename fs_kfunc_set_ids to lsm_kfunc_set_ids
2024-02-20 9:27 [PATCH bpf-next 00/11] bpf: probe-read bpf_d_path() and add new acquire/release BPF kfuncs Matt Bobrowski
2024-02-20 9:27 ` [PATCH bpf-next 01/11] bpf: make bpf_d_path() helper use probe-read semantics Matt Bobrowski
2024-02-20 9:27 ` [PATCH bpf-next 02/11] bpf/selftests: adjust selftests for BPF helper bpf_d_path() Matt Bobrowski
@ 2024-02-20 9:27 ` Matt Bobrowski
2024-02-20 9:27 ` [PATCH bpf-next 04/11] bpf: add new acquire/release BPF kfuncs for mm_struct Matt Bobrowski
` (7 subsequent siblings)
10 siblings, 0 replies; 16+ messages in thread
From: Matt Bobrowski @ 2024-02-20 9:27 UTC (permalink / raw)
To: bpf; +Cc: ast, andrii, kpsingh, jannh, jolsa, daniel, brauner,
linux-fsdevel
fs_kfunc_set_ids is rather specific to a single kfunc at the
moment. Rename it to something a little more generic such that other
future kfuncs restricted to BPF LSM programs can reside in the same
set and make use of the same filter.
Signed-off-by: Matt Bobrowski <mattbobrowski@google.com>
---
kernel/trace/bpf_trace.c | 26 ++++++++++++++------------
1 file changed, 14 insertions(+), 12 deletions(-)
diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
index 12dbd9cef1fa..c45c8d42316c 100644
--- a/kernel/trace/bpf_trace.c
+++ b/kernel/trace/bpf_trace.c
@@ -1434,7 +1434,7 @@ static int __init bpf_key_sig_kfuncs_init(void)
late_initcall(bpf_key_sig_kfuncs_init);
#endif /* CONFIG_KEYS */
-/* filesystem kfuncs */
+/* A set of kfuncs that may only be called from BPF LSM programs. */
__bpf_kfunc_start_defs();
/**
@@ -1474,31 +1474,33 @@ __bpf_kfunc int bpf_get_file_xattr(struct file *file, const char *name__str,
__bpf_kfunc_end_defs();
-BTF_KFUNCS_START(fs_kfunc_set_ids)
+BTF_KFUNCS_START(lsm_kfunc_set_ids)
BTF_ID_FLAGS(func, bpf_get_file_xattr, KF_SLEEPABLE | KF_TRUSTED_ARGS)
-BTF_KFUNCS_END(fs_kfunc_set_ids)
+BTF_KFUNCS_END(lsm_kfunc_set_ids)
-static int bpf_get_file_xattr_filter(const struct bpf_prog *prog, u32 kfunc_id)
+static int bpf_lsm_kfunc_filter(const struct bpf_prog *prog, u32 kfunc_id)
{
- if (!btf_id_set8_contains(&fs_kfunc_set_ids, kfunc_id))
+ if (!btf_id_set8_contains(&lsm_kfunc_set_ids, kfunc_id))
return 0;
- /* Only allow to attach from LSM hooks, to avoid recursion */
+ /* To avoid recursion, only permit kfuncs included within
+ * lsm_kfunc_set_ids to be called from BPF LSM programs.
+ */
return prog->type != BPF_PROG_TYPE_LSM ? -EACCES : 0;
}
-static const struct btf_kfunc_id_set bpf_fs_kfunc_set = {
+static const struct btf_kfunc_id_set bpf_lsm_kfunc_set = {
.owner = THIS_MODULE,
- .set = &fs_kfunc_set_ids,
- .filter = bpf_get_file_xattr_filter,
+ .set = &lsm_kfunc_set_ids,
+ .filter = bpf_lsm_kfunc_filter,
};
-static int __init bpf_fs_kfuncs_init(void)
+static int __init bpf_lsm_kfuncs_init(void)
{
- return register_btf_kfunc_id_set(BPF_PROG_TYPE_LSM, &bpf_fs_kfunc_set);
+ return register_btf_kfunc_id_set(BPF_PROG_TYPE_LSM, &bpf_lsm_kfunc_set);
}
-late_initcall(bpf_fs_kfuncs_init);
+late_initcall(bpf_lsm_kfuncs_init);
static const struct bpf_func_proto *
bpf_tracing_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog)
--
2.44.0.rc0.258.g7320e95886-goog
/M
^ permalink raw reply related [flat|nested] 16+ messages in thread
* [PATCH bpf-next 04/11] bpf: add new acquire/release BPF kfuncs for mm_struct
2024-02-20 9:27 [PATCH bpf-next 00/11] bpf: probe-read bpf_d_path() and add new acquire/release BPF kfuncs Matt Bobrowski
` (2 preceding siblings ...)
2024-02-20 9:27 ` [PATCH bpf-next 03/11] bpf: rename fs_kfunc_set_ids to lsm_kfunc_set_ids Matt Bobrowski
@ 2024-02-20 9:27 ` Matt Bobrowski
2024-02-20 9:27 ` [PATCH bpf-next 05/11] bpf/selftests: add selftests for mm_struct acquire/release BPF kfuncs Matt Bobrowski
` (6 subsequent siblings)
10 siblings, 0 replies; 16+ messages in thread
From: Matt Bobrowski @ 2024-02-20 9:27 UTC (permalink / raw)
To: bpf; +Cc: ast, andrii, kpsingh, jannh, jolsa, daniel, brauner,
linux-fsdevel
A BPF LSM program at times will introspect a mm_struct that is
associated with a task_struct. In order to perform this reliably, we
need to introduce a new set of BPF kfuncs that have the ability to
acquire and release references on a mm_struct.
The following BPF kfuncs have been added in order to support this
capability:
struct mm_struct *bpf_task_mm_grab(struct task_struct *task);
void bpf_mm_drop(struct mm_struct *mm);
These newly added mm_struct based BPF kfuncs are simple wrappers
around the mmgrab() and mmdrop() in-kernel helpers. Both mmgrab() and
mmdrop() are used in favour of their somewhat similar counterparts
mmget() and mmput() as they're considered to be the more lightweight
variants in comparison, i.e. they don't pin the associated address
space.
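As a rough illustration (a sketch only, mirroring the selftests added
later in this series), a BPF LSM program would pair these kfuncs as
follows:

```c
SEC("lsm.s/task_alloc")
int BPF_PROG(inspect_mm, struct task_struct *task)
{
	struct mm_struct *mm;

	/* mmgrab() under the hood: pins the mm_struct itself, but not
	 * the address space behind it.
	 */
	mm = bpf_task_mm_grab(task);
	if (!mm)
		return 0; /* e.g. task is flagged PF_KTHREAD */

	/* ... introspect mm fields here ... */

	bpf_mm_drop(mm); /* mmdrop() under the hood */
	return 0;
}
```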
Signed-off-by: Matt Bobrowski <mattbobrowski@google.com>
---
kernel/trace/bpf_trace.c | 43 ++++++++++++++++++++++++++++++++++++++++
1 file changed, 43 insertions(+)
diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
index c45c8d42316c..d1d29452dd0c 100644
--- a/kernel/trace/bpf_trace.c
+++ b/kernel/trace/bpf_trace.c
@@ -1472,10 +1472,53 @@ __bpf_kfunc int bpf_get_file_xattr(struct file *file, const char *name__str,
return __vfs_getxattr(dentry, dentry->d_inode, name__str, value, value_len);
}
+/**
+ * bpf_task_mm_grab - get a reference on the mm_struct associated with the
+ * supplied task_struct
+ * @task: task_struct of which the mm_struct is to be referenced
+ *
+ * Grab a reference on the mm_struct associated with the supplied *task*. This
+ * kfunc will return NULL for threads that do not possess a valid mm_struct, for
+ * example those that are flagged as PF_KTHREAD. A reference on a mm_struct
+ * pointer acquired by this kfunc must be released using bpf_mm_drop().
+ *
+ * This helper only pins the underlying mm_struct and not necessarily the
+ * address space that is associated with the referenced mm_struct that is
+ * returned from this kfunc. This kfunc internally calls mmgrab().
+ *
+ * Return: A referenced pointer to the mm_struct associated with the supplied
+ * *task*, or NULL.
+ */
+__bpf_kfunc struct mm_struct *bpf_task_mm_grab(struct task_struct *task)
+{
+ struct mm_struct *mm;
+
+ task_lock(task);
+ mm = task->mm;
+ if (likely(mm))
+ mmgrab(mm);
+ task_unlock(task);
+
+ return mm;
+}
+
+/**
+ * bpf_mm_drop - put the reference on the supplied mm_struct
+ * @mm: mm_struct of which to put the reference on
+ *
+ * Put the reference on the supplied *mm*. This kfunc internally calls mmdrop().
+ */
+__bpf_kfunc void bpf_mm_drop(struct mm_struct *mm)
+{
+ mmdrop(mm);
+}
+
__bpf_kfunc_end_defs();
BTF_KFUNCS_START(lsm_kfunc_set_ids)
BTF_ID_FLAGS(func, bpf_get_file_xattr, KF_SLEEPABLE | KF_TRUSTED_ARGS)
+BTF_ID_FLAGS(func, bpf_task_mm_grab, KF_ACQUIRE | KF_TRUSTED_ARGS | KF_RET_NULL);
+BTF_ID_FLAGS(func, bpf_mm_drop, KF_RELEASE);
BTF_KFUNCS_END(lsm_kfunc_set_ids)
static int bpf_lsm_kfunc_filter(const struct bpf_prog *prog, u32 kfunc_id)
--
2.44.0.rc0.258.g7320e95886-goog
/M
^ permalink raw reply related [flat|nested] 16+ messages in thread
* [PATCH bpf-next 05/11] bpf/selftests: add selftests for mm_struct acquire/release BPF kfuncs
2024-02-20 9:27 [PATCH bpf-next 00/11] bpf: probe-read bpf_d_path() and add new acquire/release BPF kfuncs Matt Bobrowski
` (3 preceding siblings ...)
2024-02-20 9:27 ` [PATCH bpf-next 04/11] bpf: add new acquire/release BPF kfuncs for mm_struct Matt Bobrowski
@ 2024-02-20 9:27 ` Matt Bobrowski
2024-02-20 9:28 ` [PATCH bpf-next 06/11] bpf: add new acquire/release based BPF kfuncs for exe_file Matt Bobrowski
` (5 subsequent siblings)
10 siblings, 0 replies; 16+ messages in thread
From: Matt Bobrowski @ 2024-02-20 9:27 UTC (permalink / raw)
To: bpf; +Cc: ast, andrii, kpsingh, jannh, jolsa, daniel, brauner,
linux-fsdevel
Add a new mm_kfunc test suite that is responsible for verifying the
behaviour of the newly added mm_struct based BPF kfuncs. As of now,
these selftests cover the operability of the following:
struct mm_struct *bpf_task_mm_grab(struct task_struct *task);
void bpf_mm_drop(struct mm_struct *mm);
Signed-off-by: Matt Bobrowski <mattbobrowski@google.com>
---
.../selftests/bpf/prog_tests/mm_kfunc.c | 48 ++++++++
.../selftests/bpf/progs/mm_kfunc_common.h | 19 ++++
.../selftests/bpf/progs/mm_kfunc_failure.c | 103 ++++++++++++++++++
.../selftests/bpf/progs/mm_kfunc_success.c | 30 +++++
4 files changed, 200 insertions(+)
create mode 100644 tools/testing/selftests/bpf/prog_tests/mm_kfunc.c
create mode 100644 tools/testing/selftests/bpf/progs/mm_kfunc_common.h
create mode 100644 tools/testing/selftests/bpf/progs/mm_kfunc_failure.c
create mode 100644 tools/testing/selftests/bpf/progs/mm_kfunc_success.c
diff --git a/tools/testing/selftests/bpf/prog_tests/mm_kfunc.c b/tools/testing/selftests/bpf/prog_tests/mm_kfunc.c
new file mode 100644
index 000000000000..aece5c25486d
--- /dev/null
+++ b/tools/testing/selftests/bpf/prog_tests/mm_kfunc.c
@@ -0,0 +1,48 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) 2024 Google LLC. */
+
+#define _GNU_SOURCE
+#include <test_progs.h>
+
+#include "mm_kfunc_failure.skel.h"
+#include "mm_kfunc_success.skel.h"
+
+static void run_test(const char *prog_name)
+{
+ struct bpf_link *link;
+ struct bpf_program *prog;
+ struct mm_kfunc_success *skel;
+
+ skel = mm_kfunc_success__open_and_load();
+ if (!ASSERT_OK_PTR(skel, "mm_kfunc_success__open_and_load"))
+ return;
+
+ link = NULL;
+ prog = bpf_object__find_program_by_name(skel->obj, prog_name);
+ if (!ASSERT_OK_PTR(prog, "bpf_object__find_program_by_name"))
+ goto cleanup;
+
+ link = bpf_program__attach(prog);
+ ASSERT_OK_PTR(link, "bpf_program__attach");
+cleanup:
+ bpf_link__destroy(link);
+ mm_kfunc_success__destroy(skel);
+}
+
+static const char * const success_tests[] = {
+ "task_mm_grab_drop_from_argument",
+ "task_mm_acquire_release_from_current",
+};
+
+void test_mm_kfunc(void)
+{
+ int i = 0;
+
+ for (; i < ARRAY_SIZE(success_tests); i++) {
+ if (!test__start_subtest(success_tests[i]))
+ continue;
+ run_test(success_tests[i]);
+ }
+
+ RUN_TESTS(mm_kfunc_failure);
+}
diff --git a/tools/testing/selftests/bpf/progs/mm_kfunc_common.h b/tools/testing/selftests/bpf/progs/mm_kfunc_common.h
new file mode 100644
index 000000000000..043d74d4148b
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/mm_kfunc_common.h
@@ -0,0 +1,19 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) 2024 Google LLC. */
+
+#ifndef _MM_KFUNC_COMMON_H
+#define _MM_KFUNC_COMMON_H
+
+#include <vmlinux.h>
+#include <errno.h>
+#include <bpf/bpf_helpers.h>
+#include <bpf/bpf_tracing.h>
+
+#include "bpf_misc.h"
+
+struct mm_struct *bpf_task_mm_grab(struct task_struct *task) __ksym;
+void bpf_mm_drop(struct mm_struct *mm) __ksym;
+
+char _license[] SEC("license") = "GPL";
+
+#endif /* _MM_KFUNC_COMMON_H */
diff --git a/tools/testing/selftests/bpf/progs/mm_kfunc_failure.c b/tools/testing/selftests/bpf/progs/mm_kfunc_failure.c
new file mode 100644
index 000000000000..d818dfcab20e
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/mm_kfunc_failure.c
@@ -0,0 +1,103 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) 2024 Google LLC. */
+
+#include "mm_kfunc_common.h"
+
+SEC("lsm.s/file_open")
+__failure __msg("Possibly NULL pointer passed to trusted arg0")
+int BPF_PROG(task_mm_grab_null_kfunc)
+{
+ struct mm_struct *acquired;
+
+ /* Can't pass a NULL pointer to bpf_task_mm_grab(). */
+ acquired = bpf_task_mm_grab(NULL);
+ if (!acquired)
+ return 0;
+ bpf_mm_drop(acquired);
+
+ return 0;
+}
+
+SEC("lsm/task_free")
+__failure __msg("R1 must be referenced or trusted")
+int BPF_PROG(task_mm_grab_from_lsm_task_free_kfunc, struct task_struct *task)
+{
+ struct mm_struct *acquired;
+
+ /* The task_struct supplied to this LSM hook isn't trusted. */
+ acquired = bpf_task_mm_grab(task);
+ if (!acquired)
+ return 0;
+ bpf_mm_drop(acquired);
+
+ return 0;
+}
+
+SEC("lsm.s/task_alloc")
+__failure __msg("arg#0 pointer type STRUCT task_struct must point")
+int BPF_PROG(task_mm_grab_fp_kfunc, struct task_struct *task, u64 clone_flags)
+{
+ struct task_struct *fp;
+ struct mm_struct *acquired;
+
+ fp = (struct task_struct *)&clone_flags;
+ /* Can't pass random frame pointer to bpf_task_mm_grab(). */
+ acquired = bpf_task_mm_grab(fp);
+ if (!acquired)
+ return 0;
+ bpf_mm_drop(acquired);
+
+ return 0;
+}
+
+SEC("lsm.s/task_alloc")
+__failure __msg("Unreleased reference")
+int BPF_PROG(task_mm_grab_unreleased_kfunc, struct task_struct *task)
+{
+ struct mm_struct *acquired;
+
+ acquired = bpf_task_mm_grab(task);
+ __sink(acquired);
+
+ /* Acquired but never released. */
+ return 0;
+}
+
+SEC("lsm.s/task_alloc")
+__failure __msg("R1 must be referenced or trusted")
+int BPF_PROG(task_mm_drop_untrusted_kfunc, struct task_struct *task)
+{
+ struct mm_struct *acquired;
+
+ /* task->mm from struct task_struct yields an untrusted pointer. */
+ acquired = task->mm;
+ if (!acquired)
+ return 0;
+ bpf_mm_drop(acquired);
+
+ return 0;
+}
+
+SEC("lsm/vm_enough_memory")
+__failure __msg("release kernel function bpf_mm_drop expects")
+int BPF_PROG(mm_drop_unacquired_kfunc, struct mm_struct *mm)
+{
+ /* Can't release an unacquired pointer. */
+ bpf_mm_drop(mm);
+
+ return 0;
+}
+
+SEC("lsm/vm_enough_memory")
+__failure __msg("arg#0 pointer type STRUCT mm_struct must point")
+int BPF_PROG(mm_drop_fp_kfunc, struct mm_struct *mm, long pages)
+{
+ struct mm_struct *fp;
+
+ fp = (struct mm_struct *)&pages;
+
+ /* Can't release random frame pointer. */
+ bpf_mm_drop(fp);
+
+ return 0;
+}
diff --git a/tools/testing/selftests/bpf/progs/mm_kfunc_success.c b/tools/testing/selftests/bpf/progs/mm_kfunc_success.c
new file mode 100644
index 000000000000..5400abd2ee2d
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/mm_kfunc_success.c
@@ -0,0 +1,30 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) 2024 Google LLC. */
+
+#include "mm_kfunc_common.h"
+
+SEC("lsm.s/task_alloc")
+int BPF_PROG(task_mm_grab_drop_from_argument, struct task_struct *task)
+{
+ struct mm_struct *acquired;
+
+ acquired = bpf_task_mm_grab(task);
+ if (!acquired)
+ return 0;
+ bpf_mm_drop(acquired);
+
+ return 0;
+}
+
+SEC("lsm.s/file_open")
+int BPF_PROG(task_mm_acquire_release_from_current)
+{
+ struct mm_struct *acquired;
+
+ acquired = bpf_task_mm_grab(bpf_get_current_task_btf());
+ if (!acquired)
+ return 0;
+ bpf_mm_drop(acquired);
+
+ return 0;
+}
--
2.44.0.rc0.258.g7320e95886-goog
/M
^ permalink raw reply related [flat|nested] 16+ messages in thread
* [PATCH bpf-next 06/11] bpf: add new acquire/release based BPF kfuncs for exe_file
2024-02-20 9:27 [PATCH bpf-next 00/11] bpf: probe-read bpf_d_path() and add new acquire/release BPF kfuncs Matt Bobrowski
` (4 preceding siblings ...)
2024-02-20 9:27 ` [PATCH bpf-next 05/11] bpf/selftests: add selftests for mm_struct acquire/release BPF kfuncs Matt Bobrowski
@ 2024-02-20 9:28 ` Matt Bobrowski
2024-02-20 9:28 ` [PATCH bpf-next 07/11] bpf/selftests: add selftests for exe_file acquire/release BPF kfuncs Matt Bobrowski
` (4 subsequent siblings)
10 siblings, 0 replies; 16+ messages in thread
From: Matt Bobrowski @ 2024-02-20 9:28 UTC (permalink / raw)
To: bpf; +Cc: ast, andrii, kpsingh, jannh, jolsa, daniel, brauner,
linux-fsdevel
It's common for BPF LSM programs to perform the following struct walk
current->mm->exe_file and subsequently operate on fields of the
backing struct file. At times, some of these operations involve
passing a exe_file's field on to BPF helpers and such
i.e. bpf_d_path(&current->mm->exe_file->f_path). However, doing this
isn't necessarily always reliable, as the backing struct file that
exe_file is pointing to may be in the midst of being torn down, and
handing anything contained within this file to BPF helpers and such
can lead to memory corruption issues [0].
To alleviate possibly operating on semi-torn down instances of
current->mm->exe_file, we introduce a set of BPF kfuncs that will allow
a BPF LSM program to reliably acquire a reference on a
current->mm->exe_file. The following new BPF kfuncs have been added:
struct file *bpf_get_task_exe_file(struct task_struct *task);
struct file *bpf_get_mm_exe_file(struct mm_struct *mm);
void bpf_put_file(struct file *f);
Internally, these new BPF kfuncs simply call the pre-existing
in-kernel functions get_task_exe_file(), get_mm_exe_file(), and fput()
accordingly. Note that we explicitly do not rely on the use of very
low-level in-kernel functions like get_file_rcu() and
get_file_active() to acquire a reference on current->mm->exe_file and
such. This is super subtle code and we probably want to avoid exposing
any such subtleties to BPF in the form of BPF kfuncs. Additionally,
the usage of a double pointer i.e. struct file **, isn't something
that the BPF verifier currently recognizes nor has any intention to
recognize for the foreseeable future.
[0] https://lore.kernel.org/bpf/CAG48ez0ppjcT=QxU-jtCUfb5xQb3mLr=5FcwddF_VKfEBPs_Dg@mail.gmail.com/
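A sketch of the intended usage from a BPF LSM program, composing these
with the mm_struct kfuncs added earlier in this series (names follow
this series; exact signatures are assumptions):

```c
SEC("lsm.s/file_open")
int BPF_PROG(exe_file_from_mm)
{
	struct task_struct *task = bpf_get_current_task_btf();
	struct mm_struct *mm;
	struct file *exe;

	mm = bpf_task_mm_grab(task);
	if (!mm)
		return 0;

	/* A fully counted reference: safe even if the task is in the
	 * midst of tearing down its exe_file.
	 */
	exe = bpf_get_mm_exe_file(mm);
	if (exe) {
		/* exe->f_path may now be handed on to helpers safely */
		bpf_put_file(exe);
	}
	bpf_mm_drop(mm);
	return 0;
}
```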
Signed-off-by: Matt Bobrowski <mattbobrowski@google.com>
---
kernel/trace/bpf_trace.c | 49 ++++++++++++++++++++++++++++++++++++++++
1 file changed, 49 insertions(+)
diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
index d1d29452dd0c..fbb252ad1d40 100644
--- a/kernel/trace/bpf_trace.c
+++ b/kernel/trace/bpf_trace.c
@@ -1513,12 +1513,61 @@ __bpf_kfunc void bpf_mm_drop(struct mm_struct *mm)
mmdrop(mm);
}
+/**
+ * bpf_get_task_exe_file - get a reference on a mm's exe_file for the supplied
+ * task_struct
+ * @task: task_struct of which the mm's exe_file to get a reference on
+ *
+ * Get a reference on a mm's exe_file that is associated with the supplied
+ * *task*. A reference on a file pointer acquired using this kfunc must be
+ * released using bpf_put_file().
+ *
+ * Return: A referenced file pointer to the supplied *task* mm's exe_file, or
+ * NULL.
+ */
+__bpf_kfunc struct file *bpf_get_task_exe_file(struct task_struct *task)
+{
+ return get_task_exe_file(task);
+}
+
+/**
+ * bpf_get_mm_exe_file - get a reference on the exe_file for the supplied
+ * mm_struct.
+ * @mm: mm_struct whose exe_file to get a reference on
+ *
+ * Get a reference on the supplied *mm* exe_file. A reference on a file pointer
+ * acquired using this kfunc must be released using bpf_put_file().
+ *
+ * Return: A referenced file pointer to the exe_file for the supplied *mm*, or
+ * NULL.
+ */
+__bpf_kfunc struct file *bpf_get_mm_exe_file(struct mm_struct *mm)
+{
+ return get_mm_exe_file(mm);
+}
+
+/**
+ * bpf_put_file - put the reference on the supplied file
+ * @f: file to put a reference on
+ *
+ * Put a reference on the supplied *f*.
+ */
+__bpf_kfunc void bpf_put_file(struct file *f)
+{
+ fput(f);
+}
+
__bpf_kfunc_end_defs();
BTF_KFUNCS_START(lsm_kfunc_set_ids)
BTF_ID_FLAGS(func, bpf_get_file_xattr, KF_SLEEPABLE | KF_TRUSTED_ARGS)
BTF_ID_FLAGS(func, bpf_task_mm_grab, KF_ACQUIRE | KF_TRUSTED_ARGS | KF_RET_NULL);
BTF_ID_FLAGS(func, bpf_mm_drop, KF_RELEASE);
+BTF_ID_FLAGS(func, bpf_get_task_exe_file,
+ KF_ACQUIRE | KF_TRUSTED_ARGS | KF_RET_NULL)
+BTF_ID_FLAGS(func, bpf_get_mm_exe_file,
+ KF_ACQUIRE | KF_TRUSTED_ARGS | KF_RET_NULL)
+BTF_ID_FLAGS(func, bpf_put_file, KF_RELEASE | KF_SLEEPABLE)
BTF_KFUNCS_END(lsm_kfunc_set_ids)
static int bpf_lsm_kfunc_filter(const struct bpf_prog *prog, u32 kfunc_id)
--
2.44.0.rc0.258.g7320e95886-goog
/M
* [PATCH bpf-next 07/11] bpf/selftests: add selftests for exe_file acquire/release BPF kfuncs
2024-02-20 9:27 [PATCH bpf-next 00/11] bpf: probe-read bpf_d_path() and add new acquire/release BPF kfuncs Matt Bobrowski
` (5 preceding siblings ...)
2024-02-20 9:28 ` [PATCH bpf-next 06/11] bpf: add new acquire/release based BPF kfuncs for exe_file Matt Bobrowski
@ 2024-02-20 9:28 ` Matt Bobrowski
2024-02-20 9:28 ` [PATCH bpf-next 08/11] bpf: add acquire/release based BPF kfuncs for fs_struct's paths Matt Bobrowski
` (3 subsequent siblings)
10 siblings, 0 replies; 16+ messages in thread
From: Matt Bobrowski @ 2024-02-20 9:28 UTC (permalink / raw)
To: bpf; +Cc: ast, andrii, kpsingh, jannh, jolsa, daniel, brauner,
linux-fsdevel
Add a new exe_file_kfunc test suite that is responsible for verifying
the behaviour of the newly added exe_file based BPF kfuncs.
For now, this new exe_file_kfunc test suite covers the following BPF
kfuncs:
struct file *bpf_get_task_exe_file(struct task_struct *task);
struct file *bpf_get_mm_exe_file(struct mm_struct *mm);
void bpf_put_file(struct file *f);
Signed-off-by: Matt Bobrowski <mattbobrowski@google.com>
---
.../selftests/bpf/prog_tests/exe_file_kfunc.c | 49 +++++
.../bpf/progs/exe_file_kfunc_common.h | 23 +++
.../bpf/progs/exe_file_kfunc_failure.c | 181 ++++++++++++++++++
.../bpf/progs/exe_file_kfunc_success.c | 52 +++++
4 files changed, 305 insertions(+)
create mode 100644 tools/testing/selftests/bpf/prog_tests/exe_file_kfunc.c
create mode 100644 tools/testing/selftests/bpf/progs/exe_file_kfunc_common.h
create mode 100644 tools/testing/selftests/bpf/progs/exe_file_kfunc_failure.c
create mode 100644 tools/testing/selftests/bpf/progs/exe_file_kfunc_success.c
diff --git a/tools/testing/selftests/bpf/prog_tests/exe_file_kfunc.c b/tools/testing/selftests/bpf/prog_tests/exe_file_kfunc.c
new file mode 100644
index 000000000000..5900c1d4e820
--- /dev/null
+++ b/tools/testing/selftests/bpf/prog_tests/exe_file_kfunc.c
@@ -0,0 +1,49 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) 2024 Google LLC. */
+
+#define _GNU_SOURCE
+#include <test_progs.h>
+
+#include "exe_file_kfunc_failure.skel.h"
+#include "exe_file_kfunc_success.skel.h"
+
+static void run_test(const char *prog_name)
+{
+ struct bpf_link *link;
+ struct bpf_program *prog;
+ struct exe_file_kfunc_success *skel;
+
+ skel = exe_file_kfunc_success__open_and_load();
+ if (!ASSERT_OK_PTR(skel, "exe_file_kfunc_success__open_and_load"))
+ return;
+
+ link = NULL;
+ prog = bpf_object__find_program_by_name(skel->obj, prog_name);
+ if (!ASSERT_OK_PTR(prog, "bpf_object__find_program_by_name"))
+ goto cleanup;
+
+ link = bpf_program__attach(prog);
+ ASSERT_OK_PTR(link, "bpf_program__attach");
+cleanup:
+ bpf_link__destroy(link);
+ exe_file_kfunc_success__destroy(skel);
+}
+
+static const char * const success_tests[] = {
+ "get_task_exe_file_and_put_kfunc_from_current",
+ "get_task_exe_file_and_put_kfunc_from_argument",
+ "get_mm_exe_file_and_put_kfunc_from_current",
+};
+
+void test_exe_file_kfunc(void)
+{
+ int i = 0;
+
+ for (; i < ARRAY_SIZE(success_tests); i++) {
+ if (!test__start_subtest(success_tests[i]))
+ continue;
+ run_test(success_tests[i]);
+ }
+
+ RUN_TESTS(exe_file_kfunc_failure);
+}
diff --git a/tools/testing/selftests/bpf/progs/exe_file_kfunc_common.h b/tools/testing/selftests/bpf/progs/exe_file_kfunc_common.h
new file mode 100644
index 000000000000..6623bcc130c3
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/exe_file_kfunc_common.h
@@ -0,0 +1,23 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) 2024 Google LLC. */
+
+#ifndef _FILE_KFUNC_COMMON_H
+#define _FILE_KFUNC_COMMON_H
+
+#include <vmlinux.h>
+#include <errno.h>
+#include <bpf/bpf_helpers.h>
+#include <bpf/bpf_tracing.h>
+
+#include "bpf_misc.h"
+
+struct mm_struct *bpf_task_mm_grab(struct task_struct *task) __ksym;
+void bpf_mm_drop(struct mm_struct *mm) __ksym;
+
+struct file *bpf_get_task_exe_file(struct task_struct *task) __ksym;
+struct file *bpf_get_mm_exe_file(struct mm_struct *mm) __ksym;
+void bpf_put_file(struct file *f) __ksym;
+
+char _license[] SEC("license") = "GPL";
+
+#endif /* _FILE_KFUNC_COMMON_H */
diff --git a/tools/testing/selftests/bpf/progs/exe_file_kfunc_failure.c b/tools/testing/selftests/bpf/progs/exe_file_kfunc_failure.c
new file mode 100644
index 000000000000..8a4464481531
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/exe_file_kfunc_failure.c
@@ -0,0 +1,181 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) 2024 Google LLC. */
+
+#include "exe_file_kfunc_common.h"
+
+SEC("lsm.s/file_open")
+__failure __msg("Possibly NULL pointer passed to trusted arg0")
+int BPF_PROG(get_task_exe_file_kfunc_null)
+{
+ struct file *acquired;
+
+ /* Can't pass a NULL pointer to bpf_get_task_exe_file(). */
+ acquired = bpf_get_task_exe_file(NULL);
+ bpf_put_file(acquired);
+
+ return 0;
+}
+
+SEC("lsm.s/file_open")
+__failure __msg("Possibly NULL pointer passed to trusted arg0")
+int BPF_PROG(get_mm_exe_file_kfunc_null)
+{
+ struct file *acquired;
+
+ /* Can't pass a NULL pointer to bpf_get_mm_exe_file(). */
+ acquired = bpf_get_mm_exe_file(NULL);
+ bpf_put_file(acquired);
+
+ return 0;
+}
+
+SEC("lsm.s/inode_getxattr")
+__failure __msg("arg#0 pointer type STRUCT task_struct must point to scalar, or struct with scalar")
+int BPF_PROG(get_task_exe_file_kfunc_fp)
+{
+ u64 x;
+ struct file *acquired;
+ struct task_struct *fp;
+
+ fp = (struct task_struct *)&x;
+ /* Can't pass random frame pointer to bpf_get_task_exe_file(). */
+ acquired = bpf_get_task_exe_file(fp);
+ bpf_put_file(acquired);
+
+ return 0;
+}
+
+SEC("lsm.s/inode_getxattr")
+__failure __msg("arg#0 pointer type STRUCT mm_struct must point to scalar, or struct with scalar")
+int BPF_PROG(get_mm_exe_file_kfunc_fp)
+{
+ int x;
+ struct file *acquired;
+ struct mm_struct *fp;
+
+ fp = (struct mm_struct *)&x;
+ /* Can't pass random frame pointer to bpf_get_mm_exe_file(). */
+ acquired = bpf_get_mm_exe_file(fp);
+ if (!acquired)
+ return 0;
+ bpf_put_file(acquired);
+
+ return 0;
+}
+
+SEC("lsm.s/file_open")
+__failure __msg("R1 must be referenced or trusted")
+int BPF_PROG(get_task_exe_file_kfunc_untrusted)
+{
+ struct file *acquired;
+ struct task_struct *parent;
+
+ /* Walking a trusted struct task_struct returned from
+ * bpf_get_current_task_btf() yields an untrusted pointer. */
+ parent = bpf_get_current_task_btf()->parent;
+ /* Can't pass untrusted pointer to bpf_get_task_exe_file(). */
+ acquired = bpf_get_task_exe_file(parent);
+ if (!acquired)
+ return 0;
+ bpf_put_file(acquired);
+
+ return 0;
+}
+
+SEC("lsm.s/file_open")
+__failure __msg("R1 must be referenced or trusted")
+int BPF_PROG(get_mm_exe_file_kfunc_untrusted)
+{
+ struct file *acquired;
+ struct mm_struct *mm;
+
+ /* Walking a struct task_struct obtained from bpf_get_current_task_btf()
+ * yields an untrusted pointer. */
+ mm = bpf_get_current_task_btf()->mm;
+ /* Can't pass untrusted pointer to bpf_get_mm_exe_file(). */
+ acquired = bpf_get_mm_exe_file(mm);
+ if (!acquired)
+ return 0;
+ bpf_put_file(acquired);
+
+ return 0;
+}
+
+SEC("lsm.s/file_open")
+__failure __msg("Unreleased reference")
+int BPF_PROG(get_task_exe_file_kfunc_unreleased)
+{
+ struct file *acquired;
+
+ acquired = bpf_get_task_exe_file(bpf_get_current_task_btf());
+ if (!acquired)
+ return 0;
+ __sink(acquired);
+
+ /* Acquired but never released. */
+ return 0;
+}
+
+SEC("lsm.s/file_open")
+__failure __msg("Unreleased reference")
+int BPF_PROG(get_mm_exe_file_kfunc_unreleased)
+{
+ struct file *acquired;
+ struct mm_struct *mm;
+
+ mm = bpf_task_mm_grab(bpf_get_current_task_btf());
+ if (!mm)
+ return 0;
+
+ acquired = bpf_get_mm_exe_file(mm);
+ if (!acquired) {
+ bpf_mm_drop(mm);
+ return 0;
+ }
+ __sink(acquired);
+ bpf_mm_drop(mm);
+
+ /* Acquired but never released. */
+ return 0;
+}
+
+SEC("lsm/file_open")
+__failure __msg("program must be sleepable to call sleepable kfunc bpf_put_file")
+int BPF_PROG(put_file_kfunc_not_sleepable, struct file *f)
+{
+ struct file *acquired;
+
+ acquired = bpf_get_task_exe_file(bpf_get_current_task_btf());
+ if (!acquired)
+ return 0;
+
+ /* Can't call bpf_put_file() from non-sleepable BPF program. */
+ bpf_put_file(acquired);
+
+ return 0;
+}
+
+SEC("lsm.s/file_open")
+__failure __msg("release kernel function bpf_put_file expects")
+int BPF_PROG(put_file_kfunc_unacquired, struct file *f)
+{
+ /* Can't release an unacquired pointer. */
+ bpf_put_file(f);
+
+ return 0;
+}
+
+SEC("tp_btf/task_newtask")
+__failure __msg("calling kernel function bpf_get_task_exe_file is not allowed")
+int BPF_PROG(get_task_exe_file_kfunc_not_lsm_prog, struct task_struct *task)
+{
+ struct file *acquired;
+
+ /* bpf_get_task_exe_file() can only be called from BPF LSM program. */
+ acquired = bpf_get_task_exe_file(bpf_get_current_task_btf());
+ if (!acquired)
+ return 0;
+ bpf_put_file(acquired);
+
+ return 0;
+}
diff --git a/tools/testing/selftests/bpf/progs/exe_file_kfunc_success.c b/tools/testing/selftests/bpf/progs/exe_file_kfunc_success.c
new file mode 100644
index 000000000000..ae789cb0a9ae
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/exe_file_kfunc_success.c
@@ -0,0 +1,52 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) 2024 Google LLC. */
+
+#include "exe_file_kfunc_common.h"
+
+SEC("lsm.s/file_open")
+int BPF_PROG(get_task_exe_file_and_put_kfunc_from_current)
+{
+ struct file *acquired;
+
+ acquired = bpf_get_task_exe_file(bpf_get_current_task_btf());
+ if (!acquired)
+ return 0;
+ bpf_put_file(acquired);
+
+ return 0;
+}
+
+SEC("lsm.s/task_alloc")
+int BPF_PROG(get_task_exe_file_and_put_kfunc_from_argument,
+ struct task_struct *task)
+{
+ struct file *acquired;
+
+ acquired = bpf_get_task_exe_file(task);
+ if (!acquired)
+ return 0;
+ bpf_put_file(acquired);
+
+ return 0;
+}
+
+SEC("lsm.s/file_open")
+int BPF_PROG(get_mm_exe_file_and_put_kfunc_from_current)
+{
+ struct file *acquired;
+ struct mm_struct *mm;
+
+ mm = bpf_task_mm_grab(bpf_get_current_task_btf());
+ if (!mm)
+ return 0;
+
+ acquired = bpf_get_mm_exe_file(mm);
+ if (!acquired) {
+ bpf_mm_drop(mm);
+ return 0;
+ }
+ bpf_put_file(acquired);
+ bpf_mm_drop(mm);
+
+ return 0;
+}
--
2.44.0.rc0.258.g7320e95886-goog
/M
* [PATCH bpf-next 08/11] bpf: add acquire/release based BPF kfuncs for fs_struct's paths
2024-02-20 9:27 [PATCH bpf-next 00/11] bpf: probe-read bpf_d_path() and add new acquire/release BPF kfuncs Matt Bobrowski
` (6 preceding siblings ...)
2024-02-20 9:28 ` [PATCH bpf-next 07/11] bpf/selftests: add selftests for exe_file acquire/release BPF kfuncs Matt Bobrowski
@ 2024-02-20 9:28 ` Matt Bobrowski
2024-02-20 9:28 ` [PATCH bpf-next 09/11] bpf/selftests: add selftests for root/pwd path based BPF kfuncs Matt Bobrowski
` (2 subsequent siblings)
10 siblings, 0 replies; 16+ messages in thread
From: Matt Bobrowski @ 2024-02-20 9:28 UTC (permalink / raw)
To: bpf; +Cc: ast, andrii, kpsingh, jannh, jolsa, daniel, brauner,
linux-fsdevel
Add the ability to obtain a reference on a common set of paths that
are associated with a task_struct's fs_struct. Both the fs_struct's
root and pwd paths are commonly operated on in BPF LSM programs and at
times handed off to BPF helpers and such. There needs to be a
mechanism that allows BPF LSM programs to obtain a stable handle on
such in-kernel structures.
We provide that mechanism through the introduction of the following
new BPF kfuncs:
struct path *bpf_get_task_fs_root(struct task_struct *task);
struct path *bpf_get_task_fs_pwd(struct task_struct *task);
void bpf_put_path(struct path *path);
Signed-off-by: Matt Bobrowski <mattbobrowski@google.com>
---
kernel/trace/bpf_trace.c | 83 ++++++++++++++++++++++++++++++++++++++++
1 file changed, 83 insertions(+)
diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
index fbb252ad1d40..2bb7766337ca 100644
--- a/kernel/trace/bpf_trace.c
+++ b/kernel/trace/bpf_trace.c
@@ -10,6 +10,7 @@
#include <linux/bpf_perf_event.h>
#include <linux/btf.h>
#include <linux/filter.h>
+#include <linux/fs_struct.h>
#include <linux/uaccess.h>
#include <linux/ctype.h>
#include <linux/kprobes.h>
@@ -1557,6 +1558,83 @@ __bpf_kfunc void bpf_put_file(struct file *f)
fput(f);
}
+/**
+ * bpf_get_task_fs_root - get a reference on the fs_struct's root for the
+ * supplied task_struct
+ * @task: task_struct whose fs_struct's root path to get a reference on
+ *
+ * Get a reference on the root path associated with the supplied *task*. The
+ * referenced path returned from this kfunc must be released using
+ * bpf_put_path().
+ *
+ * Return: A referenced path pointer to the fs_struct's root of the supplied
+ * *task*, or NULL.
+ */
+__bpf_kfunc struct path *bpf_get_task_fs_root(struct task_struct *task)
+{
+ struct path *root;
+ struct fs_struct *fs;
+
+ task_lock(task);
+ fs = task->fs;
+ if (unlikely(!fs)) {
+ task_unlock(task);
+ return NULL;
+ }
+
+ spin_lock(&fs->lock);
+ root = &fs->root;
+ path_get(root);
+ spin_unlock(&fs->lock);
+ task_unlock(task);
+
+ return root;
+}
+
+/**
+ * bpf_get_task_fs_pwd - get a reference on the fs_struct's pwd for the supplied
+ * task_struct
+ * @task: task_struct whose fs_struct's pwd path to get a reference on
+ *
+ * Get a reference on the pwd path associated with the supplied *task*. A
+ * referenced path returned from this kfunc must be released using
+ * bpf_put_path().
+ *
+ * Return: A referenced path pointer to the fs_struct's pwd of the supplied
+ * *task*, or NULL.
+ */
+__bpf_kfunc struct path *bpf_get_task_fs_pwd(struct task_struct *task)
+{
+ struct path *pwd;
+ struct fs_struct *fs;
+
+ task_lock(task);
+ fs = task->fs;
+ if (unlikely(!fs)) {
+ task_unlock(task);
+ return NULL;
+ }
+
+ spin_lock(&fs->lock);
+ pwd = &fs->pwd;
+ path_get(pwd);
+ spin_unlock(&fs->lock);
+ task_unlock(task);
+
+ return pwd;
+}
+
+/**
+ * bpf_put_path - put the reference on the supplied path
+ * @path: path to put a reference on
+ *
+ * Put a reference on the supplied *path*.
+ */
+__bpf_kfunc void bpf_put_path(struct path *path)
+{
+ path_put(path);
+}
+
__bpf_kfunc_end_defs();
BTF_KFUNCS_START(lsm_kfunc_set_ids)
@@ -1568,6 +1646,11 @@ BTF_ID_FLAGS(func, bpf_get_task_exe_file,
BTF_ID_FLAGS(func, bpf_get_mm_exe_file,
KF_ACQUIRE | KF_TRUSTED_ARGS | KF_RET_NULL)
BTF_ID_FLAGS(func, bpf_put_file, KF_RELEASE | KF_SLEEPABLE)
+BTF_ID_FLAGS(func, bpf_get_task_fs_root,
+ KF_ACQUIRE | KF_TRUSTED_ARGS | KF_RET_NULL)
+BTF_ID_FLAGS(func, bpf_get_task_fs_pwd,
+ KF_ACQUIRE | KF_TRUSTED_ARGS | KF_RET_NULL)
+BTF_ID_FLAGS(func, bpf_put_path, KF_RELEASE | KF_SLEEPABLE)
BTF_KFUNCS_END(lsm_kfunc_set_ids)
static int bpf_lsm_kfunc_filter(const struct bpf_prog *prog, u32 kfunc_id)
--
2.44.0.rc0.258.g7320e95886-goog
/M
* [PATCH bpf-next 09/11] bpf/selftests: add selftests for root/pwd path based BPF kfuncs
2024-02-20 9:27 [PATCH bpf-next 00/11] bpf: probe-read bpf_d_path() and add new acquire/release BPF kfuncs Matt Bobrowski
` (7 preceding siblings ...)
2024-02-20 9:28 ` [PATCH bpf-next 08/11] bpf: add acquire/release based BPF kfuncs for fs_struct's paths Matt Bobrowski
@ 2024-02-20 9:28 ` Matt Bobrowski
2024-02-20 9:28 ` [PATCH bpf-next 10/11] bpf: add trusted d_path() based BPF kfunc bpf_path_d_path() Matt Bobrowski
2024-02-20 9:28 ` [PATCH bpf-next 11/11] bpf/selftests: adapt selftests test_d_path for " Matt Bobrowski
10 siblings, 0 replies; 16+ messages in thread
From: Matt Bobrowski @ 2024-02-20 9:28 UTC (permalink / raw)
To: bpf; +Cc: ast, andrii, kpsingh, jannh, jolsa, daniel, brauner,
linux-fsdevel
Add a new path_kfunc test suite that is responsible for verifying the
operability of the newly added root/pwd path based BPF kfuncs. This
test suite covers the following BPF kfuncs:
struct path *bpf_get_task_fs_root(struct task_struct *task);
struct path *bpf_get_task_fs_pwd(struct task_struct *task);
void bpf_put_path(struct path *path);
Signed-off-by: Matt Bobrowski <mattbobrowski@google.com>
---
.../selftests/bpf/prog_tests/path_kfunc.c | 48 ++++++++
.../selftests/bpf/progs/path_kfunc_common.h | 20 +++
.../selftests/bpf/progs/path_kfunc_failure.c | 114 ++++++++++++++++++
.../selftests/bpf/progs/path_kfunc_success.c | 30 +++++
4 files changed, 212 insertions(+)
create mode 100644 tools/testing/selftests/bpf/prog_tests/path_kfunc.c
create mode 100644 tools/testing/selftests/bpf/progs/path_kfunc_common.h
create mode 100644 tools/testing/selftests/bpf/progs/path_kfunc_failure.c
create mode 100644 tools/testing/selftests/bpf/progs/path_kfunc_success.c
diff --git a/tools/testing/selftests/bpf/prog_tests/path_kfunc.c b/tools/testing/selftests/bpf/prog_tests/path_kfunc.c
new file mode 100644
index 000000000000..9a8701a7999c
--- /dev/null
+++ b/tools/testing/selftests/bpf/prog_tests/path_kfunc.c
@@ -0,0 +1,48 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) 2023 Google LLC. */
+
+#define _GNU_SOURCE
+#include <test_progs.h>
+
+#include "path_kfunc_failure.skel.h"
+#include "path_kfunc_success.skel.h"
+
+static void run_test(const char *prog_name)
+{
+ struct bpf_link *link;
+ struct bpf_program *prog;
+ struct path_kfunc_success *skel;
+
+ skel = path_kfunc_success__open_and_load();
+ if (!ASSERT_OK_PTR(skel, "path_kfunc_success__open_and_load"))
+ return;
+
+ link = NULL;
+ prog = bpf_object__find_program_by_name(skel->obj, prog_name);
+ if (!ASSERT_OK_PTR(prog, "bpf_object__find_program_by_name"))
+ goto cleanup;
+
+ link = bpf_program__attach(prog);
+ ASSERT_OK_PTR(link, "bpf_program__attach");
+cleanup:
+ bpf_link__destroy(link);
+ path_kfunc_success__destroy(skel);
+}
+
+static const char * const success_tests[] = {
+ "get_task_fs_root_and_put_from_current",
+ "get_task_fs_pwd_and_put_from_current",
+};
+
+void test_path_kfunc(void)
+{
+ int i = 0;
+
+ for (; i < ARRAY_SIZE(success_tests); i++) {
+ if (!test__start_subtest(success_tests[i]))
+ continue;
+ run_test(success_tests[i]);
+ }
+
+ RUN_TESTS(path_kfunc_failure);
+}
diff --git a/tools/testing/selftests/bpf/progs/path_kfunc_common.h b/tools/testing/selftests/bpf/progs/path_kfunc_common.h
new file mode 100644
index 000000000000..837dc03c136d
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/path_kfunc_common.h
@@ -0,0 +1,20 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) 2023 Google LLC. */
+
+#ifndef _PATH_KFUNC_COMMON_H
+#define _PATH_KFUNC_COMMON_H
+
+#include <vmlinux.h>
+#include <errno.h>
+#include <bpf/bpf_helpers.h>
+#include <bpf/bpf_tracing.h>
+
+#include "bpf_misc.h"
+
+char _license[] SEC("license") = "GPL";
+
+struct path *bpf_get_task_fs_root(struct task_struct *task) __ksym;
+struct path *bpf_get_task_fs_pwd(struct task_struct *task) __ksym;
+void bpf_put_path(struct path *path) __ksym;
+
+#endif /* _PATH_KFUNC_COMMON_H */
diff --git a/tools/testing/selftests/bpf/progs/path_kfunc_failure.c b/tools/testing/selftests/bpf/progs/path_kfunc_failure.c
new file mode 100644
index 000000000000..a28797e245e3
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/path_kfunc_failure.c
@@ -0,0 +1,114 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) 2023 Google LLC. */
+
+#include "path_kfunc_common.h"
+
+SEC("lsm.s/file_open")
+__failure __msg("Possibly NULL pointer passed to trusted arg0")
+int BPF_PROG(get_task_fs_root_kfunc_null)
+{
+ struct path *acquired;
+
+ /* Can't pass a NULL pointer to bpf_get_task_fs_root(). */
+ acquired = bpf_get_task_fs_root(NULL);
+ if (!acquired)
+ return 0;
+ bpf_put_path(acquired);
+
+ return 0;
+}
+
+SEC("lsm.s/file_open")
+__failure __msg("Possibly NULL pointer passed to trusted arg0")
+int BPF_PROG(get_task_fs_pwd_kfunc_null)
+{
+ struct path *acquired;
+
+ /* Can't pass a NULL pointer to bpf_get_task_fs_pwd(). */
+ acquired = bpf_get_task_fs_pwd(NULL);
+ if (!acquired)
+ return 0;
+ bpf_put_path(acquired);
+
+ return 0;
+}
+
+SEC("lsm.s/task_alloc")
+__failure __msg("R1 must be referenced or trusted")
+int BPF_PROG(get_task_fs_root_kfunc_untrusted, struct task_struct *task)
+{
+ struct path *acquired;
+ struct task_struct *parent;
+
+ /* Walking the struct task_struct will yield an untrusted pointer. */
+ parent = task->parent;
+ if (!parent)
+ return 0;
+
+ acquired = bpf_get_task_fs_root(parent);
+ if (!acquired)
+ return 0;
+ bpf_put_path(acquired);
+
+ return 0;
+}
+
+SEC("lsm.s/task_alloc")
+__failure __msg("R1 must be referenced or trusted")
+int BPF_PROG(get_task_fs_pwd_kfunc_untrusted, struct task_struct *task)
+{
+ struct path *acquired;
+ struct task_struct *parent;
+
+ /* Walking the struct task_struct will yield an untrusted pointer. */
+ parent = task->parent;
+ if (!parent)
+ return 0;
+
+ acquired = bpf_get_task_fs_pwd(parent);
+ if (!acquired)
+ return 0;
+ bpf_put_path(acquired);
+
+ return 0;
+}
+
+SEC("lsm.s/file_open")
+__failure __msg("Unreleased reference")
+int BPF_PROG(get_task_fs_root_kfunc_unreleased)
+{
+ struct path *acquired;
+
+ acquired = bpf_get_task_fs_root(bpf_get_current_task_btf());
+ if (!acquired)
+ return 0;
+ __sink(acquired);
+
+ /* Acquired but never released. */
+ return 0;
+}
+
+SEC("lsm.s/file_open")
+__failure __msg("Unreleased reference")
+int BPF_PROG(get_task_fs_pwd_kfunc_unreleased)
+{
+ struct path *acquired;
+
+ acquired = bpf_get_task_fs_pwd(bpf_get_current_task_btf());
+ if (!acquired)
+ return 0;
+ __sink(acquired);
+
+ /* Acquired but never released. */
+ return 0;
+}
+
+SEC("lsm.s/inode_getattr")
+__failure __msg("release kernel function bpf_put_path expects refcounted PTR_TO_BTF_ID")
+int BPF_PROG(put_path_kfunc_unacquired, struct path *path)
+{
+ /* Can't release an unacquired pointer. */
+ bpf_put_path(path);
+
+ return 0;
+}
diff --git a/tools/testing/selftests/bpf/progs/path_kfunc_success.c b/tools/testing/selftests/bpf/progs/path_kfunc_success.c
new file mode 100644
index 000000000000..8fc8e3c51405
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/path_kfunc_success.c
@@ -0,0 +1,30 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) 2023 Google LLC. */
+
+#include "path_kfunc_common.h"
+
+SEC("lsm.s/file_open")
+int BPF_PROG(get_task_fs_root_and_put_from_current)
+{
+ struct path *acquired;
+
+ acquired = bpf_get_task_fs_root(bpf_get_current_task_btf());
+ if (!acquired)
+ return 0;
+ bpf_put_path(acquired);
+
+ return 0;
+}
+
+SEC("lsm.s/file_open")
+int BPF_PROG(get_task_fs_pwd_and_put_from_current)
+{
+ struct path *acquired;
+
+ acquired = bpf_get_task_fs_pwd(bpf_get_current_task_btf());
+ if (!acquired)
+ return 0;
+ bpf_put_path(acquired);
+
+ return 0;
+}
--
2.44.0.rc0.258.g7320e95886-goog
/M
* [PATCH bpf-next 10/11] bpf: add trusted d_path() based BPF kfunc bpf_path_d_path()
2024-02-20 9:27 [PATCH bpf-next 00/11] bpf: probe-read bpf_d_path() and add new acquire/release BPF kfuncs Matt Bobrowski
` (8 preceding siblings ...)
2024-02-20 9:28 ` [PATCH bpf-next 09/11] bpf/selftests: add selftests for root/pwd path based BPF kfuncs Matt Bobrowski
@ 2024-02-20 9:28 ` Matt Bobrowski
2024-02-20 9:28 ` [PATCH bpf-next 11/11] bpf/selftests: adapt selftests test_d_path for " Matt Bobrowski
10 siblings, 0 replies; 16+ messages in thread
From: Matt Bobrowski @ 2024-02-20 9:28 UTC (permalink / raw)
To: bpf; +Cc: ast, andrii, kpsingh, jannh, jolsa, daniel, brauner,
linux-fsdevel
The legacy bpf_d_path() helper didn't operate on trusted pointer
arguments, therefore under certain circumstances was susceptible to
memory corruption issues [0]. This new d_path() based BPF kfunc
bpf_path_d_path() makes use of trusted pointer arguments and therefore
is subject to the BPF verifier constraints associated with
KF_TRUSTED_ARGS semantics. Making use of such trusted pointer
semantics allows d_path() to be called safely from the context of a
BPF program.
For now, we restrict bpf_path_d_path() to BPF LSM program types, but
this may be relaxed in the future.
[0]
https://lore.kernel.org/bpf/CAG48ez0ppjcT=QxU-jtCUfb5xQb3mLr=5FcwddF_VKfEBPs_Dg@mail.gmail.com/
Signed-off-by: Matt Bobrowski <mattbobrowski@google.com>
---
kernel/trace/bpf_trace.c | 31 +++++++++++++++++++++++++++++++
1 file changed, 31 insertions(+)
diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
index 2bb7766337ca..57a7b4aae8d5 100644
--- a/kernel/trace/bpf_trace.c
+++ b/kernel/trace/bpf_trace.c
@@ -1635,6 +1635,36 @@ __bpf_kfunc void bpf_put_path(struct path *path)
path_put(path);
}
+/**
+ * bpf_path_d_path - resolve the pathname for a given path
+ * @path: path to resolve the pathname for
+ * @buf: buffer to return the resolved path value in
+ * @buflen: length of the supplied buffer
+ *
+ * Resolve the pathname for the supplied trusted *path* in *buf*. This kfunc is
+ * the trusted/safer variant of the legacy bpf_d_path() helper.
+ *
+ * Return: A strictly positive integer corresponding to the length of the string
+ * representing the resolved pathname, including the NUL termination
+ * character. On error, a negative integer is returned.
+ */
+__bpf_kfunc int bpf_path_d_path(struct path *path, char *buf, int buflen)
+{
+ int len;
+ char *ret;
+
+ if (buflen <= 0)
+ return -EINVAL;
+
+ ret = d_path(path, buf, buflen);
+ if (IS_ERR(ret))
+ return PTR_ERR(ret);
+
+ len = buf + buflen - ret;
+ memmove(buf, ret, len);
+ return len;
+}
+
__bpf_kfunc_end_defs();
BTF_KFUNCS_START(lsm_kfunc_set_ids)
@@ -1651,6 +1681,7 @@ BTF_ID_FLAGS(func, bpf_get_task_fs_root,
BTF_ID_FLAGS(func, bpf_get_task_fs_pwd,
KF_ACQUIRE | KF_TRUSTED_ARGS | KF_RET_NULL)
BTF_ID_FLAGS(func, bpf_put_path, KF_RELEASE | KF_SLEEPABLE)
+BTF_ID_FLAGS(func, bpf_path_d_path, KF_TRUSTED_ARGS | KF_SLEEPABLE)
BTF_KFUNCS_END(lsm_kfunc_set_ids)
static int bpf_lsm_kfunc_filter(const struct bpf_prog *prog, u32 kfunc_id)
--
2.44.0.rc0.258.g7320e95886-goog
/M
* [PATCH bpf-next 11/11] bpf/selftests: adapt selftests test_d_path for BPF kfunc bpf_path_d_path()
2024-02-20 9:27 [PATCH bpf-next 00/11] bpf: probe-read bpf_d_path() and add new acquire/release BPF kfuncs Matt Bobrowski
` (9 preceding siblings ...)
2024-02-20 9:28 ` [PATCH bpf-next 10/11] bpf: add trusted d_path() based BPF kfunc bpf_path_d_path() Matt Bobrowski
@ 2024-02-20 9:28 ` Matt Bobrowski
10 siblings, 0 replies; 16+ messages in thread
From: Matt Bobrowski @ 2024-02-20 9:28 UTC (permalink / raw)
To: bpf; +Cc: ast, andrii, kpsingh, jannh, jolsa, daniel, brauner,
linux-fsdevel
Adapt the existing test_d_path test suite to cover the operability of
the newly added trusted d_path() based BPF kfunc bpf_path_d_path().
Signed-off-by: Matt Bobrowski <mattbobrowski@google.com>
---
.../testing/selftests/bpf/prog_tests/d_path.c | 106 ++++++++++++++++--
.../selftests/bpf/progs/d_path_common.h | 34 ++++++
.../bpf/progs/d_path_kfunc_failure.c | 66 +++++++++++
.../bpf/progs/d_path_kfunc_success.c | 25 +++++
.../testing/selftests/bpf/progs/test_d_path.c | 20 +---
.../bpf/progs/test_d_path_check_rdonly_mem.c | 6 +-
.../bpf/progs/test_d_path_check_types.c | 6 +-
7 files changed, 222 insertions(+), 41 deletions(-)
create mode 100644 tools/testing/selftests/bpf/progs/d_path_common.h
create mode 100644 tools/testing/selftests/bpf/progs/d_path_kfunc_failure.c
create mode 100644 tools/testing/selftests/bpf/progs/d_path_kfunc_success.c
diff --git a/tools/testing/selftests/bpf/prog_tests/d_path.c b/tools/testing/selftests/bpf/prog_tests/d_path.c
index d77ae1b1e6ba..893324d4d59f 100644
--- a/tools/testing/selftests/bpf/prog_tests/d_path.c
+++ b/tools/testing/selftests/bpf/prog_tests/d_path.c
@@ -11,6 +11,8 @@
#include "test_d_path.skel.h"
#include "test_d_path_check_rdonly_mem.skel.h"
#include "test_d_path_check_types.skel.h"
+#include "d_path_kfunc_failure.skel.h"
+#include "d_path_kfunc_success.skel.h"
/* sys_close_range is not around for long time, so let's
* make sure we can call it on systems with older glibc
@@ -44,7 +46,7 @@ static int set_pathname(int fd, pid_t pid)
return readlink(buf, src.want[src.cnt++].path, MAX_PATH_LEN);
}
-static int trigger_fstat_events(pid_t pid)
+static int trigger_fstat_events(pid_t pid, bool want_error)
{
int sockfd = -1, procfd = -1, devfd = -1, mntnsfd = -1;
int localfd = -1, indicatorfd = -1;
@@ -85,25 +87,25 @@ static int trigger_fstat_events(pid_t pid)
* safely resolve paths that are comprised of dentries that make use of
* dynamic names. We expect to return -EOPNOTSUPP for such paths.
*/
- src.want[src.cnt].err = true;
+ src.want[src.cnt].err = want_error;
src.want[src.cnt].err_code = -EOPNOTSUPP;
ret = set_pathname(pipefd[0], pid);
if (CHECK(ret < 0, "trigger", "set_pathname failed for pipe[0]\n"))
goto out_close;
- src.want[src.cnt].err = true;
+ src.want[src.cnt].err = want_error;
src.want[src.cnt].err_code = -EOPNOTSUPP;
ret = set_pathname(pipefd[1], pid);
if (CHECK(ret < 0, "trigger", "set_pathname failed for pipe[1]\n"))
goto out_close;
- src.want[src.cnt].err = true;
+ src.want[src.cnt].err = want_error;
src.want[src.cnt].err_code = -EOPNOTSUPP;
ret = set_pathname(sockfd, pid);
if (CHECK(ret < 0, "trigger", "set_pathname failed for socket\n"))
goto out_close;
- src.want[src.cnt].err = true;
+ src.want[src.cnt].err = want_error;
src.want[src.cnt].err_code = -EOPNOTSUPP;
ret = set_pathname(mntnsfd, pid);
if (CHECK(ret < 0, "trigger", "set_pathname failed for mntnsfd\n"))
@@ -151,12 +153,19 @@ static int trigger_fstat_events(pid_t pid)
return ret;
}
-static void test_d_path_basic(void)
+static void test_bpf_d_path_basic(void)
{
struct test_d_path__bss *bss;
struct test_d_path *skel;
int err;
+ /*
+ * Carrying global state across test function invocations is super
+ * gross, but it was late and I was tired and I just wanted to get the
+ * darn test working. Zero'ing this out was a simple no brainer.
+ */
+ memset(&src, 0, sizeof(src));
+
skel = test_d_path__open_and_load();
if (CHECK(!skel, "setup", "d_path skeleton failed\n"))
goto cleanup;
@@ -168,7 +177,7 @@ static void test_d_path_basic(void)
bss = skel->bss;
bss->my_pid = getpid();
- err = trigger_fstat_events(bss->my_pid);
+ err = trigger_fstat_events(bss->my_pid, /*want_error=*/true);
if (err < 0)
goto cleanup;
@@ -225,7 +234,7 @@ static void test_d_path_basic(void)
test_d_path__destroy(skel);
}
-static void test_d_path_check_rdonly_mem(void)
+static void test_bpf_d_path_check_rdonly_mem(void)
{
struct test_d_path_check_rdonly_mem *skel;
@@ -235,7 +244,7 @@ static void test_d_path_check_rdonly_mem(void)
test_d_path_check_rdonly_mem__destroy(skel);
}
-static void test_d_path_check_types(void)
+static void test_bpf_d_path_check_types(void)
{
struct test_d_path_check_types *skel;
@@ -245,14 +254,87 @@ static void test_d_path_check_types(void)
test_d_path_check_types__destroy(skel);
}
+static struct bpf_path_d_path_t {
+ const char *prog_name;
+} success_test_cases[] = {
+ {
+ .prog_name = "path_d_path_from_path_argument",
+ },
+};
+
+static void test_bpf_path_d_path(struct bpf_path_d_path_t *t)
+{
+ int i, ret;
+ struct bpf_link *link;
+ struct bpf_program *prog;
+ struct d_path_kfunc_success__bss *bss;
+ struct d_path_kfunc_success *skel;
+
+ /*
+ * Global state is carried across test function invocations, which is
+ * less than ideal. Zero it out here so that each test starts from a
+ * clean slate.
+ */
+ memset(&src, 0, sizeof(src));
+
+ skel = d_path_kfunc_success__open();
+ if (!ASSERT_OK_PTR(skel, "d_path_kfunc_success__open"))
+ return;
+
+ bss = skel->bss;
+ bss->my_pid = getpid();
+
+ ret = d_path_kfunc_success__load(skel);
+ if (CHECK(ret, "setup", "d_path_kfunc_success__load\n"))
+ goto cleanup;
+
+ link = NULL;
+ prog = bpf_object__find_program_by_name(skel->obj, t->prog_name);
+ if (!ASSERT_OK_PTR(prog, "bpf_object__find_program_by_name"))
+ goto cleanup;
+
+ link = bpf_program__attach(prog);
+ if (!ASSERT_OK_PTR(link, "bpf_program__attach"))
+ goto cleanup;
+
+ ret = trigger_fstat_events(bss->my_pid, /*want_error=*/false);
+ if (ret < 0)
+ goto cleanup;
+
+ for (i = 0; i < MAX_FILES; i++) {
+ struct want want = src.want[i];
+ CHECK(strncmp(want.path, bss->paths_stat[i], MAX_PATH_LEN),
+ "check", "failed to get stat path[%d]: %s vs %s\n", i,
+ want.path, bss->paths_stat[i]);
+ CHECK(bss->rets_stat[i] != strlen(bss->paths_stat[i]) + 1,
+ "check",
+ "failed to match stat return [%d]: %d vs %zd [%s]\n",
+ i, bss->rets_stat[i], strlen(bss->paths_stat[i]) + 1,
+ bss->paths_stat[i]);
+ }
+cleanup:
+ bpf_link__destroy(link);
+ d_path_kfunc_success__destroy(skel);
+}
+
void test_d_path(void)
{
+ int i = 0;
+
if (test__start_subtest("basic"))
- test_d_path_basic();
+ test_bpf_d_path_basic();
if (test__start_subtest("check_rdonly_mem"))
- test_d_path_check_rdonly_mem();
+ test_bpf_d_path_check_rdonly_mem();
if (test__start_subtest("check_alloc_mem"))
- test_d_path_check_types();
+ test_bpf_d_path_check_types();
+
+ for (; i < ARRAY_SIZE(success_test_cases); i++) {
+ if (!test__start_subtest(success_test_cases[i].prog_name))
+ continue;
+ test_bpf_path_d_path(&success_test_cases[i]);
+ }
+
+ RUN_TESTS(d_path_kfunc_failure);
}
diff --git a/tools/testing/selftests/bpf/progs/d_path_common.h b/tools/testing/selftests/bpf/progs/d_path_common.h
new file mode 100644
index 000000000000..42d0a28a94ea
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/d_path_common.h
@@ -0,0 +1,34 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) 2024 Google LLC. */
+
+#ifndef _D_PATH_COMMON_H
+#define _D_PATH_COMMON_H
+
+#include <vmlinux.h>
+#include <bpf/bpf_helpers.h>
+#include <bpf/bpf_tracing.h>
+
+#include "bpf_misc.h"
+
+#define MAX_PATH_LEN 128
+#define MAX_FILES 8
+
+int bpf_path_d_path(struct path *path, char *buf, int buflen) __ksym;
+
+pid_t my_pid = 0;
+
+__u32 cnt_stat = 0;
+__u32 cnt_close = 0;
+
+char paths_stat[MAX_FILES][MAX_PATH_LEN] = {};
+char paths_close[MAX_FILES][MAX_PATH_LEN] = {};
+
+int rets_stat[MAX_FILES] = {};
+int rets_close[MAX_FILES] = {};
+
+int called_stat = 0;
+int called_close = 0;
+
+char _license[] SEC("license") = "GPL";
+
+#endif /* _D_PATH_COMMON_H */
diff --git a/tools/testing/selftests/bpf/progs/d_path_kfunc_failure.c b/tools/testing/selftests/bpf/progs/d_path_kfunc_failure.c
new file mode 100644
index 000000000000..9da5f0d395c9
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/d_path_kfunc_failure.c
@@ -0,0 +1,66 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) 2024 Google LLC. */
+
+#include "d_path_common.h"
+
+char buf[MAX_PATH_LEN] = {};
+
+SEC("lsm.s/file_open")
+__failure __msg("Possibly NULL pointer passed to trusted arg0")
+int BPF_PROG(path_d_path_kfunc_null)
+{
+ /* Can't pass NULL value to bpf_path_d_path() kfunc. */
+ bpf_path_d_path(NULL, buf, sizeof(buf));
+ return 0;
+}
+
+SEC("fentry/vfs_open")
+__failure __msg("calling kernel function bpf_path_d_path is not allowed")
+int BPF_PROG(path_d_path_kfunc_non_lsm, struct path *path, struct file *f)
+{
+ /* Calling bpf_path_d_path() kfunc from a non-sleepable and non-LSM
+ * based program isn't permitted.
+ */
+ bpf_path_d_path(path, buf, sizeof(buf));
+ return 0;
+}
+
+SEC("lsm.s/task_alloc")
+__failure __msg("R1 must be referenced or trusted")
+int BPF_PROG(path_d_path_kfunc_untrusted_from_argument, struct task_struct *task)
+{
+ struct path *root;
+
+ /* Walking a trusted argument yields an untrusted pointer. */
+ root = &task->fs->root;
+ bpf_path_d_path(root, buf, sizeof(buf));
+ return 0;
+}
+
+SEC("lsm.s/file_open")
+__failure __msg("R1 must be referenced or trusted")
+int BPF_PROG(path_d_path_kfunc_untrusted_from_current)
+{
+ struct path *pwd;
+ struct task_struct *current;
+
+ current = bpf_get_current_task_btf();
+ /* Walking a trusted pointer returned from bpf_get_current_task_btf()
+ * yields an untrusted pointer. */
+ pwd = &current->fs->pwd;
+ bpf_path_d_path(pwd, buf, sizeof(buf));
+ return 0;
+}
+
+SEC("lsm.s/file_open")
+__failure __msg("R1 must have zero offset when passed to release func or trusted arg to kfunc")
+int BPF_PROG(path_d_path_kfunc_trusted_variable_offset, struct file *file)
+{
+ /* Passing variable offsets from a trusted pointer isn't supported just
+ * yet, despite being perfectly OK i.e. file->f_path. Once the BPF verifier
+ * has been updated to handle this case, this test can be removed. For
+ * now, ensure we reject the BPF program upon load if this is attempted.
+ */
+ bpf_path_d_path(&file->f_path, buf, sizeof(buf));
+ return 0;
+}
diff --git a/tools/testing/selftests/bpf/progs/d_path_kfunc_success.c b/tools/testing/selftests/bpf/progs/d_path_kfunc_success.c
new file mode 100644
index 000000000000..72d1a64618d1
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/d_path_kfunc_success.c
@@ -0,0 +1,25 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) 2024 Google LLC. */
+
+#include "d_path_common.h"
+
+SEC("lsm.s/inode_getattr")
+int BPF_PROG(path_d_path_from_path_argument, struct path *path)
+{
+ u32 cnt = cnt_stat;
+ int ret;
+ pid_t pid;
+
+ pid = bpf_get_current_pid_tgid() >> 32;
+ if (pid != my_pid)
+ return 0;
+
+ if (cnt >= MAX_FILES)
+ return 0;
+
+ ret = bpf_path_d_path(path, paths_stat[cnt], MAX_PATH_LEN);
+ rets_stat[cnt] = ret;
+ cnt_stat++;
+
+ return 0;
+}
diff --git a/tools/testing/selftests/bpf/progs/test_d_path.c b/tools/testing/selftests/bpf/progs/test_d_path.c
index fc2754f166ec..5bdfa4abb5f6 100644
--- a/tools/testing/selftests/bpf/progs/test_d_path.c
+++ b/tools/testing/selftests/bpf/progs/test_d_path.c
@@ -1,22 +1,6 @@
// SPDX-License-Identifier: GPL-2.0
-#include "vmlinux.h"
-#include <bpf/bpf_helpers.h>
-#include <bpf/bpf_tracing.h>
-
-#define MAX_PATH_LEN 128
-#define MAX_FILES 8
-
-pid_t my_pid = 0;
-__u32 cnt_stat = 0;
-__u32 cnt_close = 0;
-char paths_stat[MAX_FILES][MAX_PATH_LEN] = {};
-char paths_close[MAX_FILES][MAX_PATH_LEN] = {};
-int rets_stat[MAX_FILES] = {};
-int rets_close[MAX_FILES] = {};
-
-int called_stat = 0;
-int called_close = 0;
+#include "d_path_common.h"
SEC("fentry/security_inode_getattr")
int BPF_PROG(prog_stat, struct path *path, struct kstat *stat,
@@ -61,5 +45,3 @@ int BPF_PROG(prog_close, struct file *file, void *id)
cnt_close++;
return 0;
}
-
-char _license[] SEC("license") = "GPL";
diff --git a/tools/testing/selftests/bpf/progs/test_d_path_check_rdonly_mem.c b/tools/testing/selftests/bpf/progs/test_d_path_check_rdonly_mem.c
index 27c27cff6a3a..76654dbf637e 100644
--- a/tools/testing/selftests/bpf/progs/test_d_path_check_rdonly_mem.c
+++ b/tools/testing/selftests/bpf/progs/test_d_path_check_rdonly_mem.c
@@ -1,9 +1,7 @@
// SPDX-License-Identifier: GPL-2.0
/* Copyright (c) 2022 Google */
-#include "vmlinux.h"
-#include <bpf/bpf_helpers.h>
-#include <bpf/bpf_tracing.h>
+#include "d_path_common.h"
extern const int bpf_prog_active __ksym;
@@ -24,5 +22,3 @@ int BPF_PROG(d_path_check_rdonly_mem, struct path *path, struct kstat *stat,
}
return 0;
}
-
-char _license[] SEC("license") = "GPL";
diff --git a/tools/testing/selftests/bpf/progs/test_d_path_check_types.c b/tools/testing/selftests/bpf/progs/test_d_path_check_types.c
index 7e02b7361307..c722754aedb0 100644
--- a/tools/testing/selftests/bpf/progs/test_d_path_check_types.c
+++ b/tools/testing/selftests/bpf/progs/test_d_path_check_types.c
@@ -1,8 +1,6 @@
// SPDX-License-Identifier: GPL-2.0
-#include "vmlinux.h"
-#include <bpf/bpf_helpers.h>
-#include <bpf/bpf_tracing.h>
+#include "d_path_common.h"
extern const int bpf_prog_active __ksym;
@@ -28,5 +26,3 @@ int BPF_PROG(d_path_check_rdonly_mem, struct path *path, struct kstat *stat,
}
return 0;
}
-
-char _license[] SEC("license") = "GPL";
--
2.44.0.rc0.258.g7320e95886-goog
/M