From: John Fastabend <john.fastabend@gmail.com>
To: Namhyung Kim <namhyung@kernel.org>,
Alexei Starovoitov <ast@kernel.org>,
Daniel Borkmann <daniel@iogearbox.net>,
Andrii Nakryiko <andrii@kernel.org>
Cc: Martin KaFai Lau <kafai@fb.com>, Song Liu <songliubraving@fb.com>,
Yonghong Song <yhs@fb.com>,
John Fastabend <john.fastabend@gmail.com>,
KP Singh <kpsingh@kernel.org>,
Stanislav Fomichev <sdf@google.com>, Hao Luo <haoluo@google.com>,
Jiri Olsa <jolsa@kernel.org>,
Steven Rostedt <rostedt@goodmis.org>,
Peter Zijlstra <peterz@infradead.org>,
Ingo Molnar <mingo@kernel.org>,
bpf@vger.kernel.org, LKML <linux-kernel@vger.kernel.org>
Subject: RE: [PATCH bpf-next] bpf: Add bpf_read_raw_record() helper
Date: Tue, 23 Aug 2022 22:31:40 -0700
Message-ID: <6305b7bcbd7a3_6d4fc208d9@john.notmuch>
In-Reply-To: <20220823210354.1407473-1-namhyung@kernel.org>
Namhyung Kim wrote:
> The helper is for BPF programs attached to a perf_event in order to read
> event-specific raw data. I followed the convention of the
> bpf_read_branch_records() helper so that it can tell the size of the
> record using the BPF_F_GET_RAW_RECORD_SIZE flag.
>
> The use case is to filter perf event samples based on the HW-provided
> data, which has more detailed information about the sample.
>
> Note that it only reads the first fragment of the raw record. But that
> seems mostly OK since all the existing PMU raw data have only a single
> fragment, and multi-fragment records are only used for BPF output
> attached to sockets. So unless it's used in such an extreme case, it
> should work for most tracing use cases.
>
> Signed-off-by: Namhyung Kim <namhyung@kernel.org>
> ---
Acked-by: John Fastabend <john.fastabend@gmail.com>
> I don't know how to test this, as the raw data is only available on some
> hardware PMUs (e.g. AMD IBS). I tried a tracepoint event but it was
> rejected by the verifier. Actually it needs a bpf_perf_event_data
> context, so that's not an option IIUC.
Not a PMU expert here, and no good ideas on my side either.
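
For what it's worth, a rough sketch of how I'd expect a filtering program
to use the helper from the BPF side (untested; the manual helper
declaration, the 64-byte buffer size and the bit test on the raw bytes are
just assumptions on my part, the real layout is PMU-specific):

#include <linux/bpf.h>
#include <linux/bpf_perf_event.h>
#include <bpf/bpf_helpers.h>

/* Assumes the UAPI additions from this patch; not in bpf_helper_defs.h yet. */
static long (*bpf_read_raw_record)(void *ctx, void *buf, __u32 size,
				   __u64 flags) = (void *)BPF_FUNC_read_raw_record;

char LICENSE[] SEC("license") = "GPL";

SEC("perf_event")
int filter_sample(struct bpf_perf_event_data *ctx)
{
	__u64 raw[8] = {};
	long sz, copied;

	/* Query the raw record size first; buf may be NULL here. */
	sz = bpf_read_raw_record(ctx, NULL, 0, BPF_F_GET_RAW_RECORD_SIZE);
	if (sz <= 0)
		return 0;

	/* Copy up to the first 64 bytes of the raw record. */
	copied = bpf_read_raw_record(ctx, raw, sizeof(raw), 0);
	if (copied <= 0)
		return 0;

	/* Keep the sample only if some (made-up) PMU-specific bit is set. */
	return (raw[0] & 0x1) ? 1 : 0;
}

IIRC returning 0 from a perf_event program drops the sample, so something
like the above would do the filtering described in the commit message.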
...
>
> +BPF_CALL_4(bpf_read_raw_record, struct bpf_perf_event_data_kern *, ctx,
> + void *, buf, u32, size, u64, flags)
> +{
> + struct perf_raw_record *raw = ctx->data->raw;
> + struct perf_raw_frag *frag;
> + u32 to_copy;
> +
> + if (unlikely(flags & ~BPF_F_GET_RAW_RECORD_SIZE))
> + return -EINVAL;
> +
> + if (unlikely(!raw))
> + return -ENOENT;
> +
> + if (flags & BPF_F_GET_RAW_RECORD_SIZE)
> + return raw->size;
> +
> + if (!buf || (size % sizeof(u32) != 0))
> + return -EINVAL;
> +
> + frag = &raw->frag;
> + WARN_ON_ONCE(!perf_raw_frag_last(frag));
> +
> + to_copy = min_t(u32, frag->size, size);
> + memcpy(buf, frag->data, to_copy);
> +
> + return to_copy;
> +}
> +
> +static const struct bpf_func_proto bpf_read_raw_record_proto = {
> + .func = bpf_read_raw_record,
> + .gpl_only = true,
> + .ret_type = RET_INTEGER,
> + .arg1_type = ARG_PTR_TO_CTX,
> + .arg2_type = ARG_PTR_TO_MEM_OR_NULL,
> + .arg3_type = ARG_CONST_SIZE_OR_ZERO,
> + .arg4_type = ARG_ANYTHING,
> +};
Patch LGTM, but curious why allow ARG_PTR_TO_MEM_OR_NULL from the API
side instead of just ARG_PTR_TO_MEM? Maybe just to match the existing
perf_event_read()? I acked it because I think matching the existing
API is likely a good enough reason.
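My guess is the NULL case is there so the size query can be done without a
buffer, the same pattern bpf_read_branch_records() allows, e.g. something
like:

	/* NULL buf + zero size is only legal because of
	 * ARG_PTR_TO_MEM_OR_NULL + ARG_CONST_SIZE_OR_ZERO.
	 */
	sz = bpf_read_raw_record(ctx, NULL, 0, BPF_F_GET_RAW_RECORD_SIZE);

but either way the ack stands.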
> +
> static const struct bpf_func_proto *
> pe_prog_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog)
> {
> @@ -1548,6 +1587,8 @@ pe_prog_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog)
> return &bpf_read_branch_records_proto;
> case BPF_FUNC_get_attach_cookie:
> return &bpf_get_attach_cookie_proto_pe;
> + case BPF_FUNC_read_raw_record:
> + return &bpf_read_raw_record_proto;
> default:
> return bpf_tracing_func_proto(func_id, prog);
> }
> --
> 2.37.2.609.g9ff673ca1a-goog
>