* [GIT PULL 00/27] perf tools: filtering events using eBPF programs
@ 2015-09-06 7:13 Wang Nan
2015-09-06 7:13 ` [PATCH 01/27] perf tools: Don't write to evsel if parser doesn't collect evsel Wang Nan
From: Wang Nan @ 2015-09-06 7:13 UTC (permalink / raw)
To: acme, ast, masami.hiramatsu.pt, namhyung
Cc: a.p.zijlstra, brendan.d.gregg, daniel, dsahern, hekuang, jolsa,
lizefan, paulus, wangnan0, xiakaixu, pi3orama, linux-kernel
(I forgot to CC the mailing list when sending the previous pull request. Sorry for the noise.)
Hi Arnaldo,
I rebased my code on your perf/core branch and am sending a new pull
request. You can find the main changes in the tag message. The biggest
changes are the redesign of the dummy placeholder and the new test case
for the BPF prologue; they are patches 10, 11, 26 and 27. Please have a
look at them.
The following changes since commit 0959e527b1593e662cb99639a587eac39ea1232d:
perf stat: Move sw clock metrics printout to stat-shadow (2015-09-04 20:30:01 -0300)
are available in the git repository at:
git://git.kernel.org/pub/scm/linux/kernel/git/pi3orama/linux tags/perf-ebpf-for-acme
for you to fetch changes up to 3af1c7f44478d1545f202bd0054e767d3f0f8130:
perf test: Test BPF prologue (2015-09-06 06:10:12 +0000)
----------------------------------------------------------------
BPF related bugfixes and improvements
1. Enforce the assumption that the event parser never returns an empty list.
2. Reuse code and API from the kernel's regs_query_register_offset().
3. Redesign the dummy placeholder: reuse the existing dummy event instead
of creating a new one.
4. Introduce new test cases that test LLVM kbuild searching and the BPF prologue.
5. Add a missing '!' in add_perf_probe_events().
6. Use the new API ({convert,apply,cleanup}_perf_probe_events) to probe events
(and fix a bug where non-root users couldn't trace non-BPF events, like cycles).
Signed-off-by: Wang Nan <wangnan0@huawei.com>
----------------------------------------------------------------
He Kuang (2):
perf tools: Add prologue for BPF programs for fetching arguments
perf record: Support custom vmlinux path
Wang Nan (25):
perf tools: Don't write to evsel if parser doesn't collect evsel
perf tools: Make perf depend on libbpf
perf ebpf: Add the libbpf glue
perf tools: Enable passing bpf object file to --event
perf record, bpf: Parse and create probe points for BPF programs
perf bpf: Collect 'struct perf_probe_event' for bpf_program
perf record: Load all eBPF object into kernel
perf tools: Add bpf_fd field to evsel and config it
perf tools: Attach eBPF program to perf event
perf tools: Allow BPF placeholder dummy events to collect --filter
options
perf tools: Sync setting of real bpf events with placeholder
perf record: Add clang options for compiling BPF scripts
perf tools: Compile scriptlets to BPF objects when passing '.c' to
--event
perf test: Enforce LLVM test for BPF test
perf test: Add 'perf test BPF'
bpf tools: Load a program with different instances using preprocessor
perf probe: Reset args and nargs for probe_trace_event when failure
perf tools: Add BPF_PROLOGUE config options for further patches
perf tools: Introduce regs_query_register_offset() for x86
perf tools: Generate prologue for BPF programs
perf tools: Use same BPF program if arguments are identical
perf probe: Init symbol as kprobe
perf tools: Allow BPF program attach to uprobe events
perf test: Enforce LLVM test, add kbuild test
perf test: Test BPF prologue
tools/build/Makefile.feature | 6 +-
tools/lib/bpf/libbpf.c | 143 +++++-
tools/lib/bpf/libbpf.h | 22 +
tools/perf/MANIFEST | 3 +
tools/perf/Makefile.perf | 19 +-
tools/perf/arch/x86/Makefile | 1 +
tools/perf/arch/x86/util/Build | 1 +
tools/perf/arch/x86/util/dwarf-regs.c | 122 +++--
tools/perf/builtin-record.c | 58 ++-
tools/perf/builtin-stat.c | 8 +-
tools/perf/builtin-top.c | 10 +-
tools/perf/builtin-trace.c | 6 +-
tools/perf/config/Makefile | 36 +-
tools/perf/tests/Build | 24 +-
tools/perf/tests/bpf-script-example.c | 48 ++
tools/perf/tests/bpf-script-test-kbuild.c | 21 +
tools/perf/tests/bpf-script-test-prologue.c | 35 ++
tools/perf/tests/bpf.c | 229 +++++++++
tools/perf/tests/builtin-test.c | 12 +
tools/perf/tests/llvm.c | 210 +++++++-
tools/perf/tests/llvm.h | 29 ++
tools/perf/tests/make | 4 +-
tools/perf/tests/tests.h | 3 +
tools/perf/util/Build | 2 +
tools/perf/util/bpf-loader.c | 756 ++++++++++++++++++++++++++++
tools/perf/util/bpf-loader.h | 94 ++++
tools/perf/util/bpf-prologue.c | 443 ++++++++++++++++
tools/perf/util/bpf-prologue.h | 34 ++
tools/perf/util/evlist.c | 115 +++++
tools/perf/util/evlist.h | 1 +
tools/perf/util/evsel.c | 49 ++
tools/perf/util/evsel.h | 24 +
tools/perf/util/include/dwarf-regs.h | 8 +
tools/perf/util/parse-events.c | 82 ++-
tools/perf/util/parse-events.h | 4 +
tools/perf/util/parse-events.l | 6 +
tools/perf/util/parse-events.y | 29 +-
tools/perf/util/probe-event.c | 2 +-
tools/perf/util/probe-finder.c | 4 +
39 files changed, 2618 insertions(+), 85 deletions(-)
create mode 100644 tools/perf/tests/bpf-script-example.c
create mode 100644 tools/perf/tests/bpf-script-test-kbuild.c
create mode 100644 tools/perf/tests/bpf-script-test-prologue.c
create mode 100644 tools/perf/tests/bpf.c
create mode 100644 tools/perf/tests/llvm.h
create mode 100644 tools/perf/util/bpf-loader.c
create mode 100644 tools/perf/util/bpf-loader.h
create mode 100644 tools/perf/util/bpf-prologue.c
create mode 100644 tools/perf/util/bpf-prologue.h
--
2.1.0
* [PATCH 01/27] perf tools: Don't write to evsel if parser doesn't collect evsel
2015-09-06 7:13 [GIT PULL 00/27] perf tools: filtering events using eBPF programs Wang Nan
@ 2015-09-06 7:13 ` Wang Nan
2015-09-23 8:43 ` [tip:perf/core] perf tools: Don' t assume that the parser returns non empty evsel list tip-bot for Wang Nan
2015-09-06 7:13 ` [PATCH 02/27] perf tools: Make perf depend on libbpf Wang Nan
From: Wang Nan @ 2015-09-06 7:13 UTC (permalink / raw)
To: acme, ast, masami.hiramatsu.pt, namhyung
Cc: a.p.zijlstra, brendan.d.gregg, daniel, dsahern, hekuang, jolsa,
lizefan, paulus, wangnan0, xiakaixu, pi3orama, linux-kernel
If parse_events__scanner() collects no entry, perf_evlist__last(evlist)
is invalid.
Although it shouldn't happen at this point, before calling
perf_evlist__last() we should ensure the list is not empty, for safety
reasons.
There are 3 places that need this check:
1. Before setting cmdline_group_boundary;
2. Before __perf_evlist__set_leader();
3. In foreach_evsel_in_last_glob.
Signed-off-by: Wang Nan <wangnan0@huawei.com>
Reviewed-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Alexei Starovoitov <ast@plumgrid.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Zefan Li <lizefan@huawei.com>
Cc: pi3orama@163.com
Link: https://lkml.kernel.org/n/1441176553-116129-1-git-send-email-wangnan0@huawei.com
[wangnan: Enforce the assumption that the parser never collects an empty list]
---
tools/perf/util/parse-events.c | 23 ++++++++++++++++++++---
1 file changed, 20 insertions(+), 3 deletions(-)
diff --git a/tools/perf/util/parse-events.c b/tools/perf/util/parse-events.c
index 3840176..07ce501 100644
--- a/tools/perf/util/parse-events.c
+++ b/tools/perf/util/parse-events.c
@@ -793,6 +793,11 @@ void parse_events__set_leader(char *name, struct list_head *list)
{
struct perf_evsel *leader;
+ if (list_empty(list)) {
+ WARN_ONCE(true, "WARNING: failed to set leader: empty list");
+ return;
+ }
+
__perf_evlist__set_leader(list);
leader = list_entry(list->next, struct perf_evsel, node);
leader->group_name = name ? strdup(name) : NULL;
@@ -1143,10 +1148,16 @@ int parse_events(struct perf_evlist *evlist, const char *str,
int entries = data.idx - evlist->nr_entries;
struct perf_evsel *last;
+ if (list_empty(&data.list)) {
+ WARN_ONCE(true, "WARNING: event parser found nothing");
+ return -1;
+ }
+
+ last = list_entry(data.list.prev, struct perf_evsel, node);
+ last->cmdline_group_boundary = true;
+
perf_evlist__splice_list_tail(evlist, &data.list, entries);
evlist->nr_groups += data.nr_groups;
- last = perf_evlist__last(evlist);
- last->cmdline_group_boundary = true;
return 0;
}
@@ -1252,7 +1263,13 @@ foreach_evsel_in_last_glob(struct perf_evlist *evlist,
struct perf_evsel *last = NULL;
int err;
- if (evlist->nr_entries > 0)
+ /*
+ * Don't return when list_empty, give func a chance to report
+ * error when it found last == NULL.
+ *
+ * So no need to WARN here, let *func do this.
+ */
+ if (!list_empty(&evlist->entries))
last = perf_evlist__last(evlist);
do {
--
2.1.0
* [PATCH 02/27] perf tools: Make perf depend on libbpf
2015-09-06 7:13 [GIT PULL 00/27] perf tools: filtering events using eBPF programs Wang Nan
2015-09-06 7:13 ` [PATCH 01/27] perf tools: Don't write to evsel if parser doesn't collect evsel Wang Nan
@ 2015-09-06 7:13 ` Wang Nan
2015-09-06 7:13 ` [PATCH 03/27] perf ebpf: Add the libbpf glue Wang Nan
From: Wang Nan @ 2015-09-06 7:13 UTC (permalink / raw)
To: acme, ast, masami.hiramatsu.pt, namhyung
Cc: a.p.zijlstra, brendan.d.gregg, daniel, dsahern, hekuang, jolsa,
lizefan, paulus, wangnan0, xiakaixu, pi3orama, linux-kernel
By adding libbpf to perf's Makefile, this patch enables perf to build
libbpf during the build if libelf is found and neither NO_LIBELF nor
NO_LIBBPF is set. The newly introduced code is similar to the libapi and
libtraceevent handling in Makefile.perf.
MANIFEST is also updated for 'make perf-*-src-pkg'.
Append make_no_libbpf to tools/perf/tests/make.
The 'bpf' feature check is appended to the default FEATURE_TESTS and
FEATURE_DISPLAY, so perf will check the API version of bpf in
/path/to/kernel/include/uapi/linux/bpf.h. This should not fail except
when we are trying to port this code to an old kernel.
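For reference, the feature test is just a tiny C program that only
compiles when the exported BPF UAPI is recent enough. A minimal sketch,
assuming the include paths set by FEATURE_CHECK_CFLAGS-bpf below (the
real tools/build/feature test may check more fields):
  #include <linux/bpf.h>

  int main(void)
  {
          union bpf_attr attr;

          /* Fails to compile if the headers predate the eBPF program-load ABI */
          attr.prog_type = BPF_PROG_TYPE_KPROBE;
          attr.insn_cnt  = 0;

          return 0;
  }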
Error messages are also updated to notify users that BPF support in
'perf record' is disabled if libelf is missing or the BPF API check fails.
tools/lib/bpf is added to TAG_FOLDERS to allow us to navigate libbpf
files when working on perf using tools/perf/tags.
Signed-off-by: Wang Nan <wangnan0@huawei.com>
Acked-by: Alexei Starovoitov <ast@plumgrid.com>
Cc: Brendan Gregg <brendan.d.gregg@gmail.com>
Cc: Daniel Borkmann <daniel@iogearbox.net>
Cc: David Ahern <dsahern@gmail.com>
Cc: He Kuang <hekuang@huawei.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Kaixu Xia <xiakaixu@huawei.com>
Cc: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Zefan Li <lizefan@huawei.com>
Cc: pi3orama@163.com
Link: http://lkml.kernel.org/r/1435716878-189507-24-git-send-email-wangnan0@huawei.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
---
tools/build/Makefile.feature | 6 ++++--
tools/perf/MANIFEST | 3 +++
tools/perf/Makefile.perf | 19 +++++++++++++++++--
tools/perf/config/Makefile | 19 ++++++++++++++++++-
tools/perf/tests/make | 4 +++-
5 files changed, 45 insertions(+), 6 deletions(-)
diff --git a/tools/build/Makefile.feature b/tools/build/Makefile.feature
index 2975632..5ec6b37 100644
--- a/tools/build/Makefile.feature
+++ b/tools/build/Makefile.feature
@@ -51,7 +51,8 @@ FEATURE_TESTS ?= \
timerfd \
libdw-dwarf-unwind \
zlib \
- lzma
+ lzma \
+ bpf
FEATURE_DISPLAY ?= \
dwarf \
@@ -67,7 +68,8 @@ FEATURE_DISPLAY ?= \
libunwind \
libdw-dwarf-unwind \
zlib \
- lzma
+ lzma \
+ bpf
# Set FEATURE_CHECK_(C|LD)FLAGS-all for all FEATURE_TESTS features.
# If in the future we need per-feature checks/flags for features not
diff --git a/tools/perf/MANIFEST b/tools/perf/MANIFEST
index 2a958a8..14e8b98 100644
--- a/tools/perf/MANIFEST
+++ b/tools/perf/MANIFEST
@@ -17,6 +17,7 @@ tools/build
tools/arch/x86/include/asm/atomic.h
tools/arch/x86/include/asm/rmwcc.h
tools/lib/traceevent
+tools/lib/bpf
tools/lib/api
tools/lib/bpf
tools/lib/hweight.c
@@ -68,6 +69,8 @@ arch/*/lib/memset*.S
include/linux/poison.h
include/linux/hw_breakpoint.h
include/uapi/linux/perf_event.h
+include/uapi/linux/bpf.h
+include/uapi/linux/bpf_common.h
include/uapi/linux/const.h
include/uapi/linux/swab.h
include/uapi/linux/hw_breakpoint.h
diff --git a/tools/perf/Makefile.perf b/tools/perf/Makefile.perf
index d9863cb..a6a789e 100644
--- a/tools/perf/Makefile.perf
+++ b/tools/perf/Makefile.perf
@@ -145,6 +145,7 @@ AWK = awk
LIB_DIR = $(srctree)/tools/lib/api/
TRACE_EVENT_DIR = $(srctree)/tools/lib/traceevent/
+BPF_DIR = $(srctree)/tools/lib/bpf/
# include config/Makefile by default and rule out
# non-config cases
@@ -180,6 +181,7 @@ strip-libs = $(filter-out -l%,$(1))
ifneq ($(OUTPUT),)
TE_PATH=$(OUTPUT)
+ BPF_PATH=$(OUTPUT)
ifneq ($(subdir),)
LIB_PATH=$(OUTPUT)/../lib/api/
else
@@ -188,6 +190,7 @@ endif
else
TE_PATH=$(TRACE_EVENT_DIR)
LIB_PATH=$(LIB_DIR)
+ BPF_PATH=$(BPF_DIR)
endif
LIBTRACEEVENT = $(TE_PATH)libtraceevent.a
@@ -199,6 +202,8 @@ LIBTRACEEVENT_DYNAMIC_LIST_LDFLAGS = -Xlinker --dynamic-list=$(LIBTRACEEVENT_DYN
LIBAPI = $(LIB_PATH)libapi.a
export LIBAPI
+LIBBPF = $(BPF_PATH)libbpf.a
+
# python extension build directories
PYTHON_EXTBUILD := $(OUTPUT)python_ext_build/
PYTHON_EXTBUILD_LIB := $(PYTHON_EXTBUILD)lib/
@@ -251,6 +256,9 @@ export PERL_PATH
LIB_FILE=$(OUTPUT)libperf.a
PERFLIBS = $(LIB_FILE) $(LIBAPI) $(LIBTRACEEVENT)
+ifndef NO_LIBBPF
+ PERFLIBS += $(LIBBPF)
+endif
# We choose to avoid "if .. else if .. else .. endif endif"
# because maintaining the nesting to match is a pain. If
@@ -420,6 +428,13 @@ $(LIBAPI)-clean:
$(call QUIET_CLEAN, libapi)
$(Q)$(MAKE) -C $(LIB_DIR) O=$(OUTPUT) clean >/dev/null
+$(LIBBPF): FORCE
+ $(Q)$(MAKE) -C $(BPF_DIR) O=$(OUTPUT) $(OUTPUT)libbpf.a
+
+$(LIBBPF)-clean:
+ $(call QUIET_CLEAN, libbpf)
+ $(Q)$(MAKE) -C $(BPF_DIR) O=$(OUTPUT) clean >/dev/null
+
help:
@echo 'Perf make targets:'
@echo ' doc - make *all* documentation (see below)'
@@ -459,7 +474,7 @@ INSTALL_DOC_TARGETS += quick-install-doc quick-install-man quick-install-html
$(DOC_TARGETS):
$(QUIET_SUBDIR0)Documentation $(QUIET_SUBDIR1) $(@:doc=all)
-TAG_FOLDERS= . ../lib/traceevent ../lib/api ../lib/symbol
+TAG_FOLDERS= . ../lib/traceevent ../lib/api ../lib/symbol ../lib/bpf
TAG_FILES= ../../include/uapi/linux/perf_event.h
TAGS:
@@ -567,7 +582,7 @@ config-clean:
$(call QUIET_CLEAN, config)
$(Q)$(MAKE) -C $(srctree)/tools/build/feature/ clean >/dev/null
-clean: $(LIBTRACEEVENT)-clean $(LIBAPI)-clean config-clean
+clean: $(LIBTRACEEVENT)-clean $(LIBAPI)-clean $(LIBBPF)-clean config-clean
$(call QUIET_CLEAN, core-objs) $(RM) $(LIB_FILE) $(OUTPUT)perf-archive $(OUTPUT)perf-with-kcore $(LANG_BINDINGS)
$(Q)find . -name '*.o' -delete -o -name '\.*.cmd' -delete -o -name '\.*.d' -delete
$(Q)$(RM) $(OUTPUT).config-detected
diff --git a/tools/perf/config/Makefile b/tools/perf/config/Makefile
index 827557f..38a4144 100644
--- a/tools/perf/config/Makefile
+++ b/tools/perf/config/Makefile
@@ -106,6 +106,7 @@ ifdef LIBBABELTRACE
FEATURE_CHECK_LDFLAGS-libbabeltrace := $(LIBBABELTRACE_LDFLAGS) -lbabeltrace-ctf
endif
+FEATURE_CHECK_CFLAGS-bpf = -I. -I$(srctree)/tools/include -I$(srctree)/arch/$(ARCH)/include/uapi -I$(srctree)/include/uapi
# include ARCH specific config
-include $(src-perf)/arch/$(ARCH)/Makefile
@@ -233,6 +234,7 @@ ifdef NO_LIBELF
NO_DEMANGLE := 1
NO_LIBUNWIND := 1
NO_LIBDW_DWARF_UNWIND := 1
+ NO_LIBBPF := 1
else
ifeq ($(feature-libelf), 0)
ifeq ($(feature-glibc), 1)
@@ -242,13 +244,14 @@ else
LIBC_SUPPORT := 1
endif
ifeq ($(LIBC_SUPPORT),1)
- msg := $(warning No libelf found, disables 'probe' tool, please install elfutils-libelf-devel/libelf-dev);
+ msg := $(warning No libelf found, disables 'probe' tool and BPF support in 'perf record', please install elfutils-libelf-devel/libelf-dev);
NO_LIBELF := 1
NO_DWARF := 1
NO_DEMANGLE := 1
NO_LIBUNWIND := 1
NO_LIBDW_DWARF_UNWIND := 1
+ NO_LIBBPF := 1
else
ifneq ($(filter s% -static%,$(LDFLAGS),),)
msg := $(error No static glibc found, please install glibc-static);
@@ -305,6 +308,13 @@ ifndef NO_LIBELF
$(call detected,CONFIG_DWARF)
endif # PERF_HAVE_DWARF_REGS
endif # NO_DWARF
+
+ ifndef NO_LIBBPF
+ ifeq ($(feature-bpf), 1)
+ CFLAGS += -DHAVE_LIBBPF_SUPPORT
+ $(call detected,CONFIG_LIBBPF)
+ endif
+ endif # NO_LIBBPF
endif # NO_LIBELF
ifeq ($(ARCH),powerpc)
@@ -320,6 +330,13 @@ ifndef NO_LIBUNWIND
endif
endif
+ifndef NO_LIBBPF
+ ifneq ($(feature-bpf), 1)
+ msg := $(warning BPF API too old. Please install recent kernel headers. BPF support in 'perf record' is disabled.)
+ NO_LIBBPF := 1
+ endif
+endif
+
dwarf-post-unwind := 1
dwarf-post-unwind-text := BUG
diff --git a/tools/perf/tests/make b/tools/perf/tests/make
index ba31c4b..2cbd0c6 100644
--- a/tools/perf/tests/make
+++ b/tools/perf/tests/make
@@ -44,6 +44,7 @@ make_no_libnuma := NO_LIBNUMA=1
make_no_libaudit := NO_LIBAUDIT=1
make_no_libbionic := NO_LIBBIONIC=1
make_no_auxtrace := NO_AUXTRACE=1
+make_no_libbpf := NO_LIBBPF=1
make_tags := tags
make_cscope := cscope
make_help := help
@@ -66,7 +67,7 @@ make_static := LDFLAGS=-static
make_minimal := NO_LIBPERL=1 NO_LIBPYTHON=1 NO_NEWT=1 NO_GTK2=1
make_minimal += NO_DEMANGLE=1 NO_LIBELF=1 NO_LIBUNWIND=1 NO_BACKTRACE=1
make_minimal += NO_LIBNUMA=1 NO_LIBAUDIT=1 NO_LIBBIONIC=1
-make_minimal += NO_LIBDW_DWARF_UNWIND=1 NO_AUXTRACE=1
+make_minimal += NO_LIBDW_DWARF_UNWIND=1 NO_AUXTRACE=1 NO_LIBBPF=1
# $(run) contains all available tests
run := make_pure
@@ -94,6 +95,7 @@ run += make_no_libnuma
run += make_no_libaudit
run += make_no_libbionic
run += make_no_auxtrace
+run += make_no_libbpf
run += make_help
run += make_doc
run += make_perf_o
--
2.1.0
* [PATCH 03/27] perf ebpf: Add the libbpf glue
2015-09-06 7:13 [GIT PULL 00/27] perf tools: filtering events using eBPF programs Wang Nan
2015-09-06 7:13 ` [PATCH 01/27] perf tools: Don't write to evsel if parser doesn't collect evsel Wang Nan
2015-09-06 7:13 ` [PATCH 02/27] perf tools: Make perf depend on libbpf Wang Nan
@ 2015-09-06 7:13 ` Wang Nan
2015-09-06 7:13 ` [PATCH 04/27] perf tools: Enable passing bpf object file to --event Wang Nan
From: Wang Nan @ 2015-09-06 7:13 UTC (permalink / raw)
To: acme, ast, masami.hiramatsu.pt, namhyung
Cc: a.p.zijlstra, brendan.d.gregg, daniel, dsahern, hekuang, jolsa,
lizefan, paulus, wangnan0, xiakaixu, pi3orama, linux-kernel
This patch introduces the 'bpf-loader.[ch]' files, which will be the
interface between perf and libbpf. bpf__prepare_load() resides in
bpf-loader.c. Dummy functions are provided because bpf-loader.c is built
only when CONFIG_LIBBPF is on.
Functions in bpf-loader.c do not report errors explicitly. Instead,
strerror-style error reporting is used.
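The intended calling convention is therefore roughly the following (a
minimal sketch of a hypothetical caller, not code added by this patch;
the real 'perf record' integration comes later in the series):
  char errbuf[BUFSIZ];
  int err;

  err = bpf__prepare_load(filename);
  if (err) {
          /* bpf-loader.c stayed quiet; turn err into a human readable message */
          bpf__strerror_prepare_load(filename, err, errbuf, sizeof(errbuf));
          pr_err("%s\n", errbuf);
          return err;
  }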
Signed-off-by: Wang Nan <wangnan0@huawei.com>
Acked-by: Alexei Starovoitov <ast@plumgrid.com>
Cc: Brendan Gregg <brendan.d.gregg@gmail.com>
Cc: Daniel Borkmann <daniel@iogearbox.net>
Cc: David Ahern <dsahern@gmail.com>
Cc: He Kuang <hekuang@huawei.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Kaixu Xia <xiakaixu@huawei.com>
Cc: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Zefan Li <lizefan@huawei.com>
Cc: pi3orama@163.com
Link: http://lkml.kernel.org/n/1436445342-1402-19-git-send-email-wangnan0@huawei.com
[ split from a larger patch ]
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
---
tools/perf/util/bpf-loader.c | 92 ++++++++++++++++++++++++++++++++++++++++++++
tools/perf/util/bpf-loader.h | 47 ++++++++++++++++++++++
2 files changed, 139 insertions(+)
create mode 100644 tools/perf/util/bpf-loader.c
create mode 100644 tools/perf/util/bpf-loader.h
diff --git a/tools/perf/util/bpf-loader.c b/tools/perf/util/bpf-loader.c
new file mode 100644
index 0000000..88531ea
--- /dev/null
+++ b/tools/perf/util/bpf-loader.c
@@ -0,0 +1,92 @@
+/*
+ * bpf-loader.c
+ *
+ * Copyright (C) 2015 Wang Nan <wangnan0@huawei.com>
+ * Copyright (C) 2015 Huawei Inc.
+ */
+
+#include <bpf/libbpf.h>
+#include "perf.h"
+#include "debug.h"
+#include "bpf-loader.h"
+
+#define DEFINE_PRINT_FN(name, level) \
+static int libbpf_##name(const char *fmt, ...) \
+{ \
+ va_list args; \
+ int ret; \
+ \
+ va_start(args, fmt); \
+ ret = veprintf(level, verbose, pr_fmt(fmt), args);\
+ va_end(args); \
+ return ret; \
+}
+
+DEFINE_PRINT_FN(warning, 0)
+DEFINE_PRINT_FN(info, 0)
+DEFINE_PRINT_FN(debug, 1)
+
+static bool libbpf_initialized;
+
+int bpf__prepare_load(const char *filename)
+{
+ struct bpf_object *obj;
+
+ if (!libbpf_initialized)
+ libbpf_set_print(libbpf_warning,
+ libbpf_info,
+ libbpf_debug);
+
+ obj = bpf_object__open(filename);
+ if (!obj) {
+ pr_debug("bpf: failed to load %s\n", filename);
+ return -EINVAL;
+ }
+
+ /*
+ * Throw object pointer away: it will be retrived using
+ * bpf_objects iterater.
+ */
+
+ return 0;
+}
+
+void bpf__clear(void)
+{
+ struct bpf_object *obj, *tmp;
+
+ bpf_object__for_each_safe(obj, tmp)
+ bpf_object__close(obj);
+}
+
+#define bpf__strerror_head(err, buf, size) \
+ char sbuf[STRERR_BUFSIZE], *emsg;\
+ if (!size)\
+ return 0;\
+ if (err < 0)\
+ err = -err;\
+ emsg = strerror_r(err, sbuf, sizeof(sbuf));\
+ switch (err) {\
+ default:\
+ scnprintf(buf, size, "%s", emsg);\
+ break;
+
+#define bpf__strerror_entry(val, fmt...)\
+ case val: {\
+ scnprintf(buf, size, fmt);\
+ break;\
+ }
+
+#define bpf__strerror_end(buf, size)\
+ }\
+ buf[size - 1] = '\0';
+
+int bpf__strerror_prepare_load(const char *filename, int err,
+ char *buf, size_t size)
+{
+ bpf__strerror_head(err, buf, size);
+ bpf__strerror_entry(EINVAL, "%s: BPF object file '%s' is invalid",
+ emsg, filename)
+ bpf__strerror_end(buf, size);
+ return 0;
+}
diff --git a/tools/perf/util/bpf-loader.h b/tools/perf/util/bpf-loader.h
new file mode 100644
index 0000000..12be630
--- /dev/null
+++ b/tools/perf/util/bpf-loader.h
@@ -0,0 +1,47 @@
+/*
+ * Copyright (C) 2015, Wang Nan <wangnan0@huawei.com>
+ * Copyright (C) 2015, Huawei Inc.
+ */
+#ifndef __BPF_LOADER_H
+#define __BPF_LOADER_H
+
+#include <linux/compiler.h>
+#include <string.h>
+#include "debug.h"
+
+#ifdef HAVE_LIBBPF_SUPPORT
+int bpf__prepare_load(const char *filename);
+int bpf__strerror_prepare_load(const char *filename, int err,
+ char *buf, size_t size);
+
+void bpf__clear(void);
+#else
+static inline int bpf__prepare_load(const char *filename __maybe_unused)
+{
+ pr_debug("ERROR: eBPF object loading is disabled during compiling.\n");
+ return -1;
+}
+
+static inline void bpf__clear(void) { }
+
+static inline int
+__bpf_strerror(char *buf, size_t size)
+{
+ if (!size)
+ return 0;
+ strncpy(buf,
+ "ERROR: eBPF object loading is disabled during compiling.\n",
+ size);
+ buf[size - 1] = '\0';
+ return 0;
+}
+
+static inline int
+bpf__strerror_prepare_load(const char *filename __maybe_unused,
+ int err __maybe_unused,
+ char *buf, size_t size)
+{
+ return __bpf_strerror(buf, size);
+}
+#endif
+#endif
--
2.1.0
* [PATCH 04/27] perf tools: Enable passing bpf object file to --event
2015-09-06 7:13 [GIT PULL 00/27] perf tools: filtering events using eBPF programs Wang Nan
2015-09-06 7:13 ` [PATCH 03/27] perf ebpf: Add the libbpf glue Wang Nan
@ 2015-09-06 7:13 ` Wang Nan
2015-09-06 7:13 ` [PATCH 05/27] perf record, bpf: Parse and create probe points for BPF programs Wang Nan
From: Wang Nan @ 2015-09-06 7:13 UTC (permalink / raw)
To: acme, ast, masami.hiramatsu.pt, namhyung
Cc: a.p.zijlstra, brendan.d.gregg, daniel, dsahern, hekuang, jolsa,
lizefan, paulus, wangnan0, xiakaixu, pi3orama, linux-kernel
By introducing new rules in tools/perf/util/parse-events.[ly], this
patch enables 'perf record --event bpf_file.o' to select events by an
eBPF object file. It calls parse_events_load_bpf() to load that file,
which uses bpf__prepare_load() and finally calls bpf_object__open() for
the object file.
Instead of adding an evsel to the evlist during parsing, events selected
by eBPF object files are appended separately. The reasons are:
1. During parsing, the probing points have not been initialized.
2. Currently we are unable to call add_perf_probe_events() twice,
therefore we have to wait until all such events are collected,
then probe all points in one call.
The real probing and selecting resides in the following patches.
To stay compatible with other events, we insert a dummy event into the
evlist. Its name field is used to carry the path of the object file.
Since bpf__prepare_load() may be called during cmdline parsing, all
builtin commands that may call parse_events_option() should release
BPF resources during cleanup.
Add bpf__clear() to the stat, record, top and trace commands, although
currently we are going to support 'perf record' only.
Committer note:
Testing if the event parsing changes indeed call the BPF loading
routines:
[root@felicio ~]# ls -la foo.o
ls: cannot access foo.o: No such file or directory
[root@felicio ~]# perf record --event foo.o sleep
libbpf: failed to open foo.o: No such file or directory
bpf: failed to load foo.o
invalid or unsupported event: 'foo.o'
Run 'perf list' for a list of valid events
usage: perf record [<options>] [<command>]
or: perf record [<options>] -- <command> [<options>]
-e, --event <event> event selector. use 'perf list' to list available events
[root@felicio ~]#
Yes, it does this time around.
Signed-off-by: Wang Nan <wangnan0@huawei.com>
Acked-by: Alexei Starovoitov <ast@plumgrid.com>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Brendan Gregg <brendan.d.gregg@gmail.com>
Cc: Daniel Borkmann <daniel@iogearbox.net>
Cc: David Ahern <dsahern@gmail.com>
Cc: He Kuang <hekuang@huawei.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Kaixu Xia <xiakaixu@huawei.com>
Cc: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Zefan Li <lizefan@huawei.com>
Cc: pi3orama@163.com
Link: http://lkml.kernel.org/n/1436445342-1402-19-git-send-email-wangnan0@huawei.com
[ The veprintf() and bpf loader parts were split from this one;
Add bpf__clear() into stat, record, top and trace commands.
Add dummy evsel as place holder of BPF objects.
]
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
---
tools/perf/builtin-record.c | 7 ++++--
tools/perf/builtin-stat.c | 8 ++++--
tools/perf/builtin-top.c | 10 +++++---
tools/perf/builtin-trace.c | 6 ++++-
tools/perf/util/Build | 1 +
tools/perf/util/parse-events.c | 55 ++++++++++++++++++++++++++++++++++++++++++
tools/perf/util/parse-events.h | 3 +++
tools/perf/util/parse-events.l | 3 +++
tools/perf/util/parse-events.y | 18 +++++++++++++-
9 files changed, 102 insertions(+), 9 deletions(-)
diff --git a/tools/perf/builtin-record.c b/tools/perf/builtin-record.c
index 142eeb3..f886706 100644
--- a/tools/perf/builtin-record.c
+++ b/tools/perf/builtin-record.c
@@ -31,6 +31,7 @@
#include "util/auxtrace.h"
#include "util/parse-branch-options.h"
#include "util/parse-regs-options.h"
+#include "util/bpf-loader.h"
#include <unistd.h>
#include <sched.h>
@@ -1132,13 +1133,13 @@ int cmd_record(int argc, const char **argv, const char *prefix __maybe_unused)
if (!rec->itr) {
rec->itr = auxtrace_record__init(rec->evlist, &err);
if (err)
- return err;
+ goto out_bpf_clear;
}
err = auxtrace_parse_snapshot_options(rec->itr, &rec->opts,
rec->opts.auxtrace_snapshot_opts);
if (err)
- return err;
+ goto out_bpf_clear;
err = -ENOMEM;
@@ -1201,6 +1202,8 @@ out_symbol_exit:
perf_evlist__delete(rec->evlist);
symbol__exit();
auxtrace_record__free(rec->itr);
+out_bpf_clear:
+ bpf__clear();
return err;
}
diff --git a/tools/perf/builtin-stat.c b/tools/perf/builtin-stat.c
index 77e5781..65506eb 100644
--- a/tools/perf/builtin-stat.c
+++ b/tools/perf/builtin-stat.c
@@ -59,6 +59,7 @@
#include "util/thread.h"
#include "util/thread_map.h"
#include "util/counts.h"
+#include "util/bpf-loader.h"
#include <stdlib.h>
#include <sys/prctl.h>
@@ -1224,7 +1225,8 @@ int cmd_stat(int argc, const char **argv, const char *prefix __maybe_unused)
output = fopen(output_name, mode);
if (!output) {
perror("failed to create output file");
- return -1;
+ status = -1;
+ goto out;
}
clock_gettime(CLOCK_REALTIME, &tm);
fprintf(output, "# started on %s\n", ctime(&tm.tv_sec));
@@ -1233,7 +1235,8 @@ int cmd_stat(int argc, const char **argv, const char *prefix __maybe_unused)
output = fdopen(output_fd, mode);
if (!output) {
perror("Failed opening logfd");
- return -errno;
+ status = -errno;
+ goto out;
}
}
@@ -1366,5 +1369,6 @@ int cmd_stat(int argc, const char **argv, const char *prefix __maybe_unused)
perf_evlist__free_stats(evsel_list);
out:
perf_evlist__delete(evsel_list);
+ bpf__clear();
return status;
}
diff --git a/tools/perf/builtin-top.c b/tools/perf/builtin-top.c
index 8c465c8..384ca2e 100644
--- a/tools/perf/builtin-top.c
+++ b/tools/perf/builtin-top.c
@@ -41,6 +41,7 @@
#include "util/sort.h"
#include "util/intlist.h"
#include "util/parse-branch-options.h"
+#include "util/bpf-loader.h"
#include "arch/common.h"
#include "util/debug.h"
@@ -1270,8 +1271,10 @@ int cmd_top(int argc, const char **argv, const char *prefix __maybe_unused)
symbol_conf.priv_size = sizeof(struct annotation);
symbol_conf.try_vmlinux_path = (symbol_conf.vmlinux_name == NULL);
- if (symbol__init(NULL) < 0)
- return -1;
+ if (symbol__init(NULL) < 0) {
+ status = -1;
+ goto out_bpf_clear;
+ }
sort__setup_elide(stdout);
@@ -1289,6 +1292,7 @@ int cmd_top(int argc, const char **argv, const char *prefix __maybe_unused)
out_delete_evlist:
perf_evlist__delete(top.evlist);
-
+out_bpf_clear:
+ bpf__clear();
return status;
}
diff --git a/tools/perf/builtin-trace.c b/tools/perf/builtin-trace.c
index 2156532..ac96242 100644
--- a/tools/perf/builtin-trace.c
+++ b/tools/perf/builtin-trace.c
@@ -31,6 +31,7 @@
#include "util/intlist.h"
#include "util/thread_map.h"
#include "util/stat.h"
+#include "util/bpf-loader.h"
#include "trace-event.h"
#include "util/parse-events.h"
@@ -3109,6 +3110,7 @@ int cmd_trace(int argc, const char **argv, const char *prefix __maybe_unused)
if (trace.evlist->nr_entries > 0)
evlist__set_evsel_handler(trace.evlist, trace__event_handler);
+ /* trace__record calls cmd_record, which calls bpf__clear() */
if ((argc >= 1) && (strcmp(argv[0], "record") == 0))
return trace__record(&trace, argc-1, &argv[1]);
@@ -3119,7 +3121,8 @@ int cmd_trace(int argc, const char **argv, const char *prefix __maybe_unused)
if (!trace.trace_syscalls && !trace.trace_pgfaults &&
trace.evlist->nr_entries == 0 /* Was --events used? */) {
pr_err("Please specify something to trace.\n");
- return -1;
+ err = -1;
+ goto out;
}
if (output_name != NULL) {
@@ -3178,5 +3181,6 @@ out_close:
if (output_name != NULL)
fclose(trace.output);
out:
+ bpf__clear();
return err;
}
diff --git a/tools/perf/util/Build b/tools/perf/util/Build
index 349bc96..f5e9569 100644
--- a/tools/perf/util/Build
+++ b/tools/perf/util/Build
@@ -85,6 +85,7 @@ libperf-$(CONFIG_AUXTRACE) += intel-bts.o
libperf-y += parse-branch-options.o
libperf-y += parse-regs-options.o
+libperf-$(CONFIG_LIBBPF) += bpf-loader.o
libperf-$(CONFIG_LIBELF) += symbol-elf.o
libperf-$(CONFIG_LIBELF) += probe-file.o
libperf-$(CONFIG_LIBELF) += probe-event.o
diff --git a/tools/perf/util/parse-events.c b/tools/perf/util/parse-events.c
index 07ce501..b560f5f 100644
--- a/tools/perf/util/parse-events.c
+++ b/tools/perf/util/parse-events.c
@@ -19,6 +19,7 @@
#include "thread_map.h"
#include "cpumap.h"
#include "asm/bug.h"
+#include "bpf-loader.h"
#define MAX_NAME_LEN 100
@@ -481,6 +482,60 @@ int parse_events_add_tracepoint(struct list_head *list, int *idx,
return add_tracepoint_event(list, idx, sys, event);
}
+int parse_events_load_bpf(struct parse_events_evlist *data,
+ struct list_head *list,
+ char *bpf_file_name)
+{
+ int err;
+ char errbuf[BUFSIZ];
+ struct perf_evsel *dummy_evsel;
+
+ /*
+ * Currently don't link useful event to list. BPF object files
+ * should be saved to a seprated list and processed together.
+ *
+ * Things could be changed if we solve perf probe reentering
+ * problem. After that probe events file by file is possible.
+ * However, probing cost is still need to be considered.
+ *
+ * We should still link something onto evlist to make it
+ * compatible with other events, or we have to find another
+ * way to collect --filter options and modifiers (currently
+ * modifiers are not allowed lexicically). evsel->tracking
+ * is another thing needs to be considered. Fortunately we have
+ * dummy evsel, which is originally designed for collecting
+ * evsel->tracking.
+ */
+ err = parse_events_add_numeric(data, list, PERF_TYPE_SOFTWARE,
+ PERF_COUNT_SW_DUMMY, NULL);
+ if (err)
+ return err;
+ if (list_empty(list) || !list_is_singular(list)) {
+ data->error->str = strdup(
+ "Internal error: failed to alloc dummy evsel");
+ return -ENOENT;
+ }
+
+ dummy_evsel = list_entry(list->prev, struct perf_evsel, node);
+
+ /* Give it a better name so we can connect this list to the object */
+ zfree(&dummy_evsel->name);
+ dummy_evsel->name = strdup(bpf_file_name);
+
+ err = bpf__prepare_load(bpf_file_name);
+ if (err) {
+ bpf__strerror_prepare_load(bpf_file_name, err,
+ errbuf, sizeof(errbuf));
+ list_del_init(&dummy_evsel->node);
+ perf_evsel__delete(dummy_evsel);
+ data->error->str = strdup(errbuf);
+ data->error->help = strdup("(add -v to see detail)");
+ return err;
+ }
+
+ return 0;
+}
+
static int
parse_breakpoint_type(const char *type, struct perf_event_attr *attr)
{
diff --git a/tools/perf/util/parse-events.h b/tools/perf/util/parse-events.h
index a09b0e2..3652387 100644
--- a/tools/perf/util/parse-events.h
+++ b/tools/perf/util/parse-events.h
@@ -119,6 +119,9 @@ int parse_events__modifier_group(struct list_head *list, char *event_mod);
int parse_events_name(struct list_head *list, char *name);
int parse_events_add_tracepoint(struct list_head *list, int *idx,
char *sys, char *event);
+int parse_events_load_bpf(struct parse_events_evlist *data,
+ struct list_head *list,
+ char *bpf_file_name);
int parse_events_add_numeric(struct parse_events_evlist *data,
struct list_head *list,
u32 type, u64 config,
diff --git a/tools/perf/util/parse-events.l b/tools/perf/util/parse-events.l
index 936d566..22e8f93 100644
--- a/tools/perf/util/parse-events.l
+++ b/tools/perf/util/parse-events.l
@@ -115,6 +115,7 @@ do { \
group [^,{}/]*[{][^}]*[}][^,{}/]*
event_pmu [^,{}/]+[/][^/]*[/][^,{}/]*
event [^,{}/]+
+bpf_object .*\.(o|bpf)
num_dec [0-9]+
num_hex 0x[a-fA-F0-9]+
@@ -159,6 +160,7 @@ modifier_bp [rwx]{1,3}
}
{event_pmu} |
+{bpf_object} |
{event} {
BEGIN(INITIAL);
REWIND(1);
@@ -264,6 +266,7 @@ r{num_raw_hex} { return raw(yyscanner); }
{num_hex} { return value(yyscanner, 16); }
{modifier_event} { return str(yyscanner, PE_MODIFIER_EVENT); }
+{bpf_object} { return str(yyscanner, PE_BPF_OBJECT); }
{name} { return pmu_str_check(yyscanner); }
"/" { BEGIN(config); return '/'; }
- { return '-'; }
diff --git a/tools/perf/util/parse-events.y b/tools/perf/util/parse-events.y
index 591905a..3ee3a32 100644
--- a/tools/perf/util/parse-events.y
+++ b/tools/perf/util/parse-events.y
@@ -42,6 +42,7 @@ static inc_group_count(struct list_head *list,
%token PE_VALUE PE_VALUE_SYM_HW PE_VALUE_SYM_SW PE_RAW PE_TERM
%token PE_EVENT_NAME
%token PE_NAME
+%token PE_BPF_OBJECT
%token PE_MODIFIER_EVENT PE_MODIFIER_BP
%token PE_NAME_CACHE_TYPE PE_NAME_CACHE_OP_RESULT
%token PE_PREFIX_MEM PE_PREFIX_RAW PE_PREFIX_GROUP
@@ -53,6 +54,7 @@ static inc_group_count(struct list_head *list,
%type <num> PE_RAW
%type <num> PE_TERM
%type <str> PE_NAME
+%type <str> PE_BPF_OBJECT
%type <str> PE_NAME_CACHE_TYPE
%type <str> PE_NAME_CACHE_OP_RESULT
%type <str> PE_MODIFIER_EVENT
@@ -69,6 +71,7 @@ static inc_group_count(struct list_head *list,
%type <head> event_legacy_tracepoint
%type <head> event_legacy_numeric
%type <head> event_legacy_raw
+%type <head> event_bpf_file
%type <head> event_def
%type <head> event_mod
%type <head> event_name
@@ -198,7 +201,8 @@ event_def: event_pmu |
event_legacy_mem |
event_legacy_tracepoint sep_dc |
event_legacy_numeric sep_dc |
- event_legacy_raw sep_dc
+ event_legacy_raw sep_dc |
+ event_bpf_file
event_pmu:
PE_NAME '/' event_config '/'
@@ -420,6 +424,18 @@ PE_RAW
$$ = list;
}
+event_bpf_file:
+PE_BPF_OBJECT
+{
+ struct parse_events_evlist *data = _data;
+ struct list_head *list;
+
+ ALLOC_LIST(list);
+ ABORT_ON(parse_events_load_bpf(data, list, $1));
+ $$ = list;
+}
+
+
start_terms: event_config
{
struct parse_events_terms *data = _data;
--
2.1.0
* [PATCH 05/27] perf record, bpf: Parse and create probe points for BPF programs
2015-09-06 7:13 [GIT PULL 00/27] perf tools: filtering events using eBPF programs Wang Nan
2015-09-06 7:13 ` [PATCH 04/27] perf tools: Enable passing bpf object file to --event Wang Nan
@ 2015-09-06 7:13 ` Wang Nan
2015-09-16 21:43 ` Arnaldo Carvalho de Melo
2015-09-06 7:13 ` [PATCH 06/27] perf bpf: Collect 'struct perf_probe_event' for bpf_program Wang Nan
From: Wang Nan @ 2015-09-06 7:13 UTC (permalink / raw)
To: acme, ast, masami.hiramatsu.pt, namhyung
Cc: a.p.zijlstra, brendan.d.gregg, daniel, dsahern, hekuang, jolsa,
lizefan, paulus, wangnan0, xiakaixu, pi3orama, linux-kernel
This patch introduces bpf__{un,}probe() functions to enable callers to
create kprobe points based on section names of BPF programs. It parses
the section names of each eBPF program and creates corresponding 'struct
perf_probe_event' structures. The parse_perf_probe_command() function is
used to do the main parsing work.
The parsing result is stored in an array to satisfy
{convert,apply}_perf_probe_events(), which accept an array of
'struct perf_probe_event' and do all the work in one call.
Define PERF_BPF_PROBE_GROUP as "perf_bpf_probe", which will be used as
the group name of all eBPF probing points.
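For illustration, a scriptlet whose section name would be accepted by
config_bpf_program() could look like the sketch below (compiled with
clang -target bpf; the event name 'myprobe' and the target function
'do_sys_open' are made up for this example):
  #include <linux/version.h>
  #define SEC(name) __attribute__((section(name), used))

  /* Parsed by parse_perf_probe_command(), becomes perf_bpf_probe:myprobe */
  SEC("myprobe=do_sys_open")
  int myprobe(void *ctx)
  {
          return 0;
  }

  char _license[] SEC("license") = "GPL";
  int _version SEC("version") = LINUX_VERSION_CODE;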
probe_conf.max_probes is set to MAX_PROBES to support glob matching.
Before bpf__probe() returns, the data in each 'struct perf_probe_event'
is cleaned up. This will be changed by following patches because they
need the 'struct probe_trace_event' inside them.
Signed-off-by: Wang Nan <wangnan0@huawei.com>
Cc: Alexei Starovoitov <ast@plumgrid.com>
Cc: Brendan Gregg <brendan.d.gregg@gmail.com>
Cc: Daniel Borkmann <daniel@iogearbox.net>
Cc: David Ahern <dsahern@gmail.com>
Cc: He Kuang <hekuang@huawei.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Kaixu Xia <xiakaixu@huawei.com>
Cc: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Zefan Li <lizefan@huawei.com>
Cc: pi3orama@163.com
Link: http://lkml.kernel.org/n/1436445342-1402-21-git-send-email-wangnan0@huawei.com
Link: http://lkml.kernel.org/n/1436445342-1402-23-git-send-email-wangnan0@huawei.com
[Merged by two patches
wangnan: Utilize new perf probe API {convert,apply,cleanup}_perf_probe_events()
]
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
---
tools/perf/builtin-record.c | 19 +++++-
tools/perf/util/bpf-loader.c | 149 +++++++++++++++++++++++++++++++++++++++++++
tools/perf/util/bpf-loader.h | 13 ++++
3 files changed, 180 insertions(+), 1 deletion(-)
diff --git a/tools/perf/builtin-record.c b/tools/perf/builtin-record.c
index f886706..b56109f 100644
--- a/tools/perf/builtin-record.c
+++ b/tools/perf/builtin-record.c
@@ -1141,7 +1141,23 @@ int cmd_record(int argc, const char **argv, const char *prefix __maybe_unused)
if (err)
goto out_bpf_clear;
- err = -ENOMEM;
+ /*
+ * bpf__probe must be called before symbol__init() because we
+ * need init_symbol_maps. If called after symbol__init,
+ * symbol_conf.sort_by_name won't take effect.
+ *
+ * bpf__unprobe() is safe even if bpf__probe() failed, and it
+ * also calls symbol__init. Therefore, goto out_symbol_exit
+ * is safe when probe failed.
+ */
+ err = bpf__probe();
+ if (err) {
+ bpf__strerror_probe(err, errbuf, sizeof(errbuf));
+
+ pr_err("Probing at events in BPF object failed.\n");
+ pr_err("\t%s\n", errbuf);
+ goto out_symbol_exit;
+ }
symbol__init(NULL);
@@ -1202,6 +1218,7 @@ out_symbol_exit:
perf_evlist__delete(rec->evlist);
symbol__exit();
auxtrace_record__free(rec->itr);
+ bpf__unprobe();
out_bpf_clear:
bpf__clear();
return err;
diff --git a/tools/perf/util/bpf-loader.c b/tools/perf/util/bpf-loader.c
index 88531ea..10505cb 100644
--- a/tools/perf/util/bpf-loader.c
+++ b/tools/perf/util/bpf-loader.c
@@ -9,6 +9,8 @@
#include "perf.h"
#include "debug.h"
#include "bpf-loader.h"
+#include "probe-event.h"
+#include "probe-finder.h"
#define DEFINE_PRINT_FN(name, level) \
static int libbpf_##name(const char *fmt, ...) \
@@ -28,6 +30,58 @@ DEFINE_PRINT_FN(debug, 1)
static bool libbpf_initialized;
+static int
+config_bpf_program(struct bpf_program *prog, struct perf_probe_event *pev)
+{
+ const char *config_str;
+ int err;
+
+ config_str = bpf_program__title(prog, false);
+ if (!config_str) {
+ pr_debug("bpf: unable to get title for program\n");
+ return -EINVAL;
+ }
+
+ pr_debug("bpf: config program '%s'\n", config_str);
+ err = parse_perf_probe_command(config_str, pev);
+ if (err < 0) {
+ pr_debug("bpf: '%s' is not a valid config string\n",
+ config_str);
+ /* parse failed, don't need clear pev. */
+ return -EINVAL;
+ }
+
+ if (pev->group && strcmp(pev->group, PERF_BPF_PROBE_GROUP)) {
+ pr_debug("bpf: '%s': group for event is set and not '%s'.\n",
+ config_str, PERF_BPF_PROBE_GROUP);
+ err = -EINVAL;
+ goto errout;
+ } else if (!pev->group)
+ pev->group = strdup(PERF_BPF_PROBE_GROUP);
+
+ if (!pev->group) {
+ pr_debug("bpf: strdup failed\n");
+ err = -ENOMEM;
+ goto errout;
+ }
+
+ if (!pev->event) {
+ pr_debug("bpf: '%s': event name is missing\n",
+ config_str);
+ err = -EINVAL;
+ goto errout;
+ }
+
+ pr_debug("bpf: config '%s' is ok\n", config_str);
+
+ return 0;
+
+errout:
+ if (pev)
+ clear_perf_probe_event(pev);
+ return err;
+}
+
int bpf__prepare_load(const char *filename)
{
struct bpf_object *obj;
@@ -59,6 +113,90 @@ void bpf__clear(void)
bpf_object__close(obj);
}
+static bool is_probed;
+
+int bpf__unprobe(void)
+{
+ struct strfilter *delfilter;
+ int ret;
+
+ if (!is_probed)
+ return 0;
+
+ delfilter = strfilter__new(PERF_BPF_PROBE_GROUP ":*", NULL);
+ if (!delfilter) {
+ pr_debug("Failed to create delfilter when unprobing\n");
+ return -ENOMEM;
+ }
+
+ ret = del_perf_probe_events(delfilter);
+ strfilter__delete(delfilter);
+ if (ret < 0 && is_probed)
+ pr_debug("Error: failed to delete events: %s\n",
+ strerror(-ret));
+ else
+ is_probed = false;
+ return ret < 0 ? ret : 0;
+}
+
+int bpf__probe(void)
+{
+ int err, nr_events = 0;
+ struct bpf_object *obj, *tmp;
+ struct bpf_program *prog;
+ struct perf_probe_event *pevs;
+
+ pevs = calloc(MAX_PROBES, sizeof(pevs[0]));
+ if (!pevs)
+ return -ENOMEM;
+
+ bpf_object__for_each_safe(obj, tmp) {
+ bpf_object__for_each_program(prog, obj) {
+ err = config_bpf_program(prog, &pevs[nr_events++]);
+ if (err < 0)
+ goto out;
+
+ if (nr_events >= MAX_PROBES) {
+ pr_debug("Too many (more than %d) events\n",
+ MAX_PROBES);
+ err = -ERANGE;
+ goto out;
+ };
+ }
+ }
+
+ if (!nr_events) {
+ /*
+ * Don't call following code to prevent perf report failure
+ * init_symbol_maps can fail when perf is started by non-root
+ * user, which prevent non-root user track normal events even
+ * without using BPF, because bpf__probe() is called by
+ * 'perf record' unconditionally.
+ */
+ err = 0;
+ goto out;
+ }
+
+ probe_conf.max_probes = MAX_PROBES;
+ /* Let convert_perf_probe_events generates probe_trace_event (tevs) */
+ err = convert_perf_probe_events(pevs, nr_events);
+ if (err < 0) {
+ pr_debug("bpf_probe: failed to convert perf probe events");
+ goto out;
+ }
+
+ err = apply_perf_probe_events(pevs, nr_events);
+ if (err < 0)
+ pr_debug("bpf probe: failed to probe events\n");
+ else
+ is_probed = true;
+out_cleanup:
+ cleanup_perf_probe_events(pevs, nr_events);
+out:
+ free(pevs);
+ return err < 0 ? err : 0;
+}
+
#define bpf__strerror_head(err, buf, size) \
char sbuf[STRERR_BUFSIZE], *emsg;\
if (!size)\
@@ -90,3 +228,14 @@ int bpf__strerror_prepare_load(const char *filename, int err,
bpf__strerror_end(buf, size);
return 0;
}
+
+int bpf__strerror_probe(int err, char *buf, size_t size)
+{
+ bpf__strerror_head(err, buf, size);
+ bpf__strerror_entry(ERANGE, "Too many (more than %d) events",
+ MAX_PROBES);
+ bpf__strerror_entry(ENOENT, "Selected kprobe point doesn't exist.");
+ bpf__strerror_entry(EEXIST, "Selected kprobe point already exist, try perf probe -d '*'.");
+ bpf__strerror_end(buf, size);
+ return 0;
+}
diff --git a/tools/perf/util/bpf-loader.h b/tools/perf/util/bpf-loader.h
index 12be630..6b09a85 100644
--- a/tools/perf/util/bpf-loader.h
+++ b/tools/perf/util/bpf-loader.h
@@ -9,10 +9,15 @@
#include <string.h>
#include "debug.h"
+#define PERF_BPF_PROBE_GROUP "perf_bpf_probe"
+
#ifdef HAVE_LIBBPF_SUPPORT
int bpf__prepare_load(const char *filename);
int bpf__strerror_prepare_load(const char *filename, int err,
char *buf, size_t size);
+int bpf__probe(void);
+int bpf__unprobe(void);
+int bpf__strerror_probe(int err, char *buf, size_t size);
void bpf__clear(void);
#else
@@ -22,6 +27,8 @@ static inline int bpf__prepare_load(const char *filename __maybe_unused)
return -1;
}
+static inline int bpf__probe(void) { return 0; }
+static inline int bpf__unprobe(void) { return 0; }
static inline void bpf__clear(void) { }
static inline int
@@ -43,5 +50,11 @@ bpf__strerror_prepare_load(const char *filename __maybe_unused,
{
return __bpf_strerror(buf, size);
}
+
+static inline int bpf__strerror_probe(int err __maybe_unused,
+ char *buf, size_t size)
+{
+ return __bpf_strerror(buf, size);
+}
#endif
#endif
--
2.1.0
* [PATCH 06/27] perf bpf: Collect 'struct perf_probe_event' for bpf_program
2015-09-06 7:13 [GIT PULL 00/27] perf tools: filtering events using eBPF programs Wang Nan
2015-09-06 7:13 ` [PATCH 05/27] perf record, bpf: Parse and create probe points for BPF programs Wang Nan
@ 2015-09-06 7:13 ` Wang Nan
2015-09-06 7:13 ` [PATCH 07/27] perf record: Load all eBPF object into kernel Wang Nan
From: Wang Nan @ 2015-09-06 7:13 UTC (permalink / raw)
To: acme, ast, masami.hiramatsu.pt, namhyung
Cc: a.p.zijlstra, brendan.d.gregg, daniel, dsahern, hekuang, jolsa,
lizefan, paulus, wangnan0, xiakaixu, pi3orama, linux-kernel
This patch utilizes bpf_program__set_private() to bind a
perf_probe_event to its bpf program through the private field.
Saving this information lets 'perf record' know which kprobe point a
program should be attached to.
Since the data in 'struct perf_probe_event' is built in two stages,
pev_ready is used to mark whether the information (especially tevs) in
'struct perf_probe_event' is valid or not. It is false at first, and set
to true by sync_bpf_program_pev(), which copies all pointers in the
original pev into a program-specific memory region. sync_bpf_program_pev()
is called after apply_perf_probe_events() to make sure the data is ready.
Remove the code that cleans 'struct perf_probe_event' after bpf__probe():
because pointers in pevs are copied to the programs' private fields,
calling cleanup_perf_probe_events() becomes unsafe.
Signed-off-by: Wang Nan <wangnan0@huawei.com>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Alexei Starovoitov <ast@plumgrid.com>
Cc: Brendan Gregg <brendan.d.gregg@gmail.com>
Cc: Daniel Borkmann <daniel@iogearbox.net>
Cc: David Ahern <dsahern@gmail.com>
Cc: He Kuang <hekuang@huawei.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Kaixu Xia <xiakaixu@huawei.com>
Cc: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Zefan Li <lizefan@huawei.com>
Cc: pi3orama@163.com
Link: http://lkml.kernel.org/n/1436445342-1402-21-git-send-email-wangnan0@huawei.com
[Splitted from a larger patch
wangnan: Utilize new perf probe API {convert,apply,cleanup}_perf_probe_events()
]
---
tools/perf/util/bpf-loader.c | 110 +++++++++++++++++++++++++++++++++++++++++--
1 file changed, 106 insertions(+), 4 deletions(-)
diff --git a/tools/perf/util/bpf-loader.c b/tools/perf/util/bpf-loader.c
index 10505cb..ce95db8 100644
--- a/tools/perf/util/bpf-loader.c
+++ b/tools/perf/util/bpf-loader.c
@@ -30,9 +30,47 @@ DEFINE_PRINT_FN(debug, 1)
static bool libbpf_initialized;
+struct bpf_prog_priv {
+ /*
+ * If pev_ready is false, ppev pointes to a local memory which
+ * is only valid inside bpf__probe().
+ * pev is valid only when pev_ready.
+ */
+ bool pev_ready;
+ union {
+ struct perf_probe_event *ppev;
+ struct perf_probe_event pev;
+ };
+};
+
+static void
+bpf_prog_priv__clear(struct bpf_program *prog __maybe_unused,
+ void *_priv)
+{
+ struct bpf_prog_priv *priv = _priv;
+
+ /* check if pev is initialized */
+ if (priv && priv->pev_ready) {
+ int i;
+
+ /*
+ * Similar code with cleanup_perf_probe_events, but without
+ * exit_symbol_maps().
+ */
+ for (i = 0; i < priv->pev.ntevs; i++)
+ clear_probe_trace_event(&priv->pev.tevs[i]);
+ zfree(&priv->pev.tevs);
+ priv->pev.ntevs = 0;
+
+ clear_perf_probe_event(&priv->pev);
+ }
+ free(priv);
+}
+
static int
config_bpf_program(struct bpf_program *prog, struct perf_probe_event *pev)
{
+ struct bpf_prog_priv *priv = NULL;
const char *config_str;
int err;
@@ -74,14 +112,58 @@ config_bpf_program(struct bpf_program *prog, struct perf_probe_event *pev)
pr_debug("bpf: config '%s' is ok\n", config_str);
+ priv = calloc(1, sizeof(*priv));
+ if (!priv) {
+ pr_debug("bpf: failed to alloc memory\n");
+ err = -ENOMEM;
+ goto errout;
+ }
+
+ /*
+ * At this very early stage, tevs inside pev are not ready.
+ * It becomes usable after add_perf_probe_events() is called.
+ * set pev_ready to false so further access read priv->ppev
+ * only.
+ */
+ priv->pev_ready = false;
+ priv->ppev = pev;
+
+ err = bpf_program__set_private(prog, priv,
+ bpf_prog_priv__clear);
+ if (err) {
+ pr_debug("bpf: set program private failed\n");
+ err = -ENOMEM;
+ goto errout;
+ }
return 0;
errout:
if (pev)
clear_perf_probe_event(pev);
+ if (priv)
+ free(priv);
return err;
}
+static int
+sync_bpf_program_pev(struct bpf_program *prog)
+{
+ int err;
+ struct bpf_prog_priv *priv;
+ struct perf_probe_event *ppev;
+
+ err = bpf_program__get_private(prog, (void **)&priv);
+ if (err || !priv || priv->pev_ready) {
+ pr_debug("Internal error: sync_bpf_program_pev\n");
+ return -EINVAL;
+ }
+
+ ppev = priv->ppev;
+ memcpy(&priv->pev, ppev, sizeof(*ppev));
+ priv->pev_ready = true;
+ return 0;
+}
+
int bpf__prepare_load(const char *filename)
{
struct bpf_object *obj;
@@ -186,15 +268,35 @@ int bpf__probe(void)
}
err = apply_perf_probe_events(pevs, nr_events);
- if (err < 0)
+ if (err < 0) {
pr_debug("bpf probe: failed to probe events\n");
- else
+ goto out_cleanup;
+ } else
is_probed = true;
-out_cleanup:
- cleanup_perf_probe_events(pevs, nr_events);
+
+ /*
+ * After add_perf_probe_events, 'struct perf_probe_event' is ready.
+ * Until now copying program's priv->pev field and freeing
+ * the big array allocated before become safe.
+ */
+ bpf_object__for_each_safe(obj, tmp) {
+ bpf_object__for_each_program(prog, obj) {
+ err = sync_bpf_program_pev(prog);
+ if (err)
+ goto out;
+ }
+ }
out:
+ /*
+ * Don't call cleanup_perf_probe_events() for entries of pevs:
+ * they are used by prog's private field.
+ */
free(pevs);
return err < 0 ? err : 0;
+
+out_cleanup:
+ cleanup_perf_probe_events(pevs, nr_events);
+ goto out;
}
#define bpf__strerror_head(err, buf, size) \
--
2.1.0
* [PATCH 07/27] perf record: Load all eBPF object into kernel
2015-09-06 7:13 [GIT PULL 00/27] perf tools: filtering events using eBPF programs Wang Nan
2015-09-06 7:13 ` [PATCH 06/27] perf bpf: Collect 'struct perf_probe_event' for bpf_program Wang Nan
@ 2015-09-06 7:13 ` Wang Nan
2015-09-06 7:13 ` [PATCH 08/27] perf tools: Add bpf_fd field to evsel and config it Wang Nan
From: Wang Nan @ 2015-09-06 7:13 UTC (permalink / raw)
To: acme, ast, masami.hiramatsu.pt, namhyung
Cc: a.p.zijlstra, brendan.d.gregg, daniel, dsahern, hekuang, jolsa,
lizefan, paulus, wangnan0, xiakaixu, pi3orama, linux-kernel
This patch utilizes bpf_object__load(), provided by libbpf, to load all
objects into the kernel.
Signed-off-by: Wang Nan <wangnan0@huawei.com>
Cc: Alexei Starovoitov <ast@plumgrid.com>
Cc: Brendan Gregg <brendan.d.gregg@gmail.com>
Cc: Daniel Borkmann <daniel@iogearbox.net>
Cc: David Ahern <dsahern@gmail.com>
Cc: He Kuang <hekuang@huawei.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Kaixu Xia <xiakaixu@huawei.com>
Cc: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Zefan Li <lizefan@huawei.com>
Cc: pi3orama@163.com
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Link: http://lkml.kernel.org/n/1436445342-1402-24-git-send-email-wangnan0@huawei.com
---
tools/perf/builtin-record.c | 15 +++++++++++++++
tools/perf/util/bpf-loader.c | 28 ++++++++++++++++++++++++++++
tools/perf/util/bpf-loader.h | 10 ++++++++++
3 files changed, 53 insertions(+)
diff --git a/tools/perf/builtin-record.c b/tools/perf/builtin-record.c
index b56109f..7335ce0 100644
--- a/tools/perf/builtin-record.c
+++ b/tools/perf/builtin-record.c
@@ -1159,6 +1159,21 @@ int cmd_record(int argc, const char **argv, const char *prefix __maybe_unused)
goto out_symbol_exit;
}
+ /*
+ * bpf__probe() also calls symbol__init() if there are probe
+ * events in bpf objects, so calling symbol_exit when failure
+ * is safe. If there is no probe event, bpf__load() always
+ * success.
+ */
+ err = bpf__load();
+ if (err) {
+ pr_err("Loading BPF programs failed:\n");
+
+ bpf__strerror_load(err, errbuf, sizeof(errbuf));
+ pr_err("\t%s\n", errbuf);
+ goto out_symbol_exit;
+ }
+
symbol__init(NULL);
if (symbol_conf.kptr_restrict)
diff --git a/tools/perf/util/bpf-loader.c b/tools/perf/util/bpf-loader.c
index ce95db8..754ed02 100644
--- a/tools/perf/util/bpf-loader.c
+++ b/tools/perf/util/bpf-loader.c
@@ -299,6 +299,25 @@ out_cleanup:
goto out;
}
+int bpf__load(void)
+{
+ struct bpf_object *obj, *tmp;
+ int err = 0;
+
+ bpf_object__for_each_safe(obj, tmp) {
+ err = bpf_object__load(obj);
+ if (err) {
+ pr_debug("bpf: load objects failed\n");
+ goto errout;
+ }
+ }
+ return 0;
+errout:
+ bpf_object__for_each_safe(obj, tmp)
+ bpf_object__unload(obj);
+ return err;
+}
+
#define bpf__strerror_head(err, buf, size) \
char sbuf[STRERR_BUFSIZE], *emsg;\
if (!size)\
@@ -341,3 +360,12 @@ int bpf__strerror_probe(int err, char *buf, size_t size)
bpf__strerror_end(buf, size);
return 0;
}
+
+int bpf__strerror_load(int err, char *buf, size_t size)
+{
+ bpf__strerror_head(err, buf, size);
+ bpf__strerror_entry(EINVAL, "%s: add -v to see detail. Run a CONFIG_BPF_SYSCALL kernel?",
+ emsg)
+ bpf__strerror_end(buf, size);
+ return 0;
+}
diff --git a/tools/perf/util/bpf-loader.h b/tools/perf/util/bpf-loader.h
index 6b09a85..4d7552e 100644
--- a/tools/perf/util/bpf-loader.h
+++ b/tools/perf/util/bpf-loader.h
@@ -19,6 +19,9 @@ int bpf__probe(void);
int bpf__unprobe(void);
int bpf__strerror_probe(int err, char *buf, size_t size);
+int bpf__load(void);
+int bpf__strerror_load(int err, char *buf, size_t size);
+
void bpf__clear(void);
#else
static inline int bpf__prepare_load(const char *filename __maybe_unused)
@@ -29,6 +32,7 @@ static inline int bpf__prepare_load(const char *filename __maybe_unused)
static inline int bpf__probe(void) { return 0; }
static inline int bpf__unprobe(void) { return 0; }
+static inline int bpf__load(void) { return 0; }
static inline void bpf__clear(void) { }
static inline int
@@ -56,5 +60,11 @@ static inline int bpf__strerror_probe(int err __maybe_unused,
{
return __bpf_strerror(buf, size);
}
+
+static inline int bpf__strerror_load(int err __maybe_unused,
+ char *buf, size_t size)
+{
+ return __bpf_strerror(buf, size);
+}
#endif
#endif
--
2.1.0
* [PATCH 08/27] perf tools: Add bpf_fd field to evsel and config it
2015-09-06 7:13 [GIT PULL 00/27] perf tools: filtering events using eBPF programs Wang Nan
2015-09-06 7:13 ` [PATCH 07/27] perf record: Load all eBPF object into kernel Wang Nan
@ 2015-09-06 7:13 ` Wang Nan
2015-09-06 7:13 ` [PATCH 09/27] perf tools: Attach eBPF program to perf event Wang Nan
From: Wang Nan @ 2015-09-06 7:13 UTC (permalink / raw)
To: acme, ast, masami.hiramatsu.pt, namhyung
Cc: a.p.zijlstra, brendan.d.gregg, daniel, dsahern, hekuang, jolsa,
lizefan, paulus, wangnan0, xiakaixu, pi3orama, linux-kernel
This patch adds a bpf_fd field to 'struct perf_evsel' and introduces a
method to configure it. In bpf-loader, a bpf__foreach_tev() function is
added, which calls a callback for each 'struct probe_trace_event' of
each bpf program, together with the program's file descriptor. In
evlist.c, perf_evlist__add_bpf() is introduced to add all bpf events
into the evlist. The event names are taken from the probe_trace_event
structures. 'perf record' calls perf_evlist__add_bpf().
Since bpf-loader.c will not be built if libbpf is turned off, an empty
bpf__foreach_tev() is defined in bpf-loader.h to avoid a compile
error.
The loop iterates over 'struct probe_trace_event' instead of
'struct perf_probe_event' in preparation for further patches, which
will generate multiple instances from one BPF program and install them
onto different 'struct probe_trace_event's.
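A minimal sketch of the intended iteration pattern (only
bpf__foreach_tev() and the callback prototype come from this patch;
count_tev() and the counting logic are illustrative):

    static int count_tev(struct probe_trace_event *tev, int fd, void *arg)
    {
            int *cnt = arg;

            /* each tev is reported together with its program's fd */
            pr_debug("bpf event %s:%s, program fd %d\n",
                     tev->group, tev->event, fd);
            (*cnt)++;
            return 0;   /* returning non-zero stops the iteration */
    }

    ...
            int cnt = 0;

            err = bpf__foreach_tev(count_tev, &cnt);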
Signed-off-by: Wang Nan <wangnan0@huawei.com>
Cc: Alexei Starovoitov <ast@plumgrid.com>
Cc: Brendan Gregg <brendan.d.gregg@gmail.com>
Cc: Daniel Borkmann <daniel@iogearbox.net>
Cc: David Ahern <dsahern@gmail.com>
Cc: He Kuang <hekuang@huawei.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Kaixu Xia <xiakaixu@huawei.com>
Cc: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Zefan Li <lizefan@huawei.com>
Cc: pi3orama@163.com
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Link: http://lkml.kernel.org/n/1436445342-1402-25-git-send-email-wangnan0@huawei.com
---
tools/perf/builtin-record.c | 6 ++++++
tools/perf/util/bpf-loader.c | 41 +++++++++++++++++++++++++++++++++++++++++
tools/perf/util/bpf-loader.h | 13 +++++++++++++
tools/perf/util/evlist.c | 41 +++++++++++++++++++++++++++++++++++++++++
tools/perf/util/evlist.h | 1 +
tools/perf/util/evsel.c | 1 +
tools/perf/util/evsel.h | 1 +
7 files changed, 104 insertions(+)
diff --git a/tools/perf/builtin-record.c b/tools/perf/builtin-record.c
index 7335ce0..f99d580 100644
--- a/tools/perf/builtin-record.c
+++ b/tools/perf/builtin-record.c
@@ -1174,6 +1174,12 @@ int cmd_record(int argc, const char **argv, const char *prefix __maybe_unused)
goto out_symbol_exit;
}
+ err = perf_evlist__add_bpf(rec->evlist);
+ if (err < 0) {
+ pr_err("Failed to add events from BPF object(s)\n");
+ goto out_symbol_exit;
+ }
+
symbol__init(NULL);
if (symbol_conf.kptr_restrict)
diff --git a/tools/perf/util/bpf-loader.c b/tools/perf/util/bpf-loader.c
index 754ed02..2880dbf 100644
--- a/tools/perf/util/bpf-loader.c
+++ b/tools/perf/util/bpf-loader.c
@@ -318,6 +318,47 @@ errout:
return err;
}
+int bpf__foreach_tev(bpf_prog_iter_callback_t func, void *arg)
+{
+ struct bpf_object *obj, *tmp;
+ struct bpf_program *prog;
+ int err;
+
+ bpf_object__for_each_safe(obj, tmp) {
+ bpf_object__for_each_program(prog, obj) {
+ struct probe_trace_event *tev;
+ struct perf_probe_event *pev;
+ struct bpf_prog_priv *priv;
+ int i, fd;
+
+ err = bpf_program__get_private(prog,
+ (void **)&priv);
+ if (err || !priv) {
+ pr_debug("bpf: failed to get private field\n");
+ return -EINVAL;
+ }
+
+ pev = &priv->pev;
+ for (i = 0; i < pev->ntevs; i++) {
+ tev = &pev->tevs[i];
+
+ fd = bpf_program__fd(prog);
+ if (fd < 0) {
+ pr_debug("bpf: failed to get file descriptor\n");
+ return fd;
+ }
+
+ err = (*func)(tev, fd, arg);
+ if (err) {
+ pr_debug("bpf: call back failed, stop iterate\n");
+ return err;
+ }
+ }
+ }
+ }
+ return 0;
+}
+
#define bpf__strerror_head(err, buf, size) \
char sbuf[STRERR_BUFSIZE], *emsg;\
if (!size)\
diff --git a/tools/perf/util/bpf-loader.h b/tools/perf/util/bpf-loader.h
index 4d7552e..34656f8 100644
--- a/tools/perf/util/bpf-loader.h
+++ b/tools/perf/util/bpf-loader.h
@@ -7,10 +7,14 @@
#include <linux/compiler.h>
#include <string.h>
+#include "probe-event.h"
#include "debug.h"
#define PERF_BPF_PROBE_GROUP "perf_bpf_probe"
+typedef int (*bpf_prog_iter_callback_t)(struct probe_trace_event *tev,
+ int fd, void *arg);
+
#ifdef HAVE_LIBBPF_SUPPORT
int bpf__prepare_load(const char *filename);
int bpf__strerror_prepare_load(const char *filename, int err,
@@ -23,6 +27,8 @@ int bpf__load(void);
int bpf__strerror_load(int err, char *buf, size_t size);
void bpf__clear(void);
+
+int bpf__foreach_tev(bpf_prog_iter_callback_t func, void *arg);
#else
static inline int bpf__prepare_load(const char *filename __maybe_unused)
{
@@ -36,6 +42,13 @@ static inline int bpf__load(void) { return 0; }
static inline void bpf__clear(void) { }
static inline int
+bpf__foreach_tev(bpf_prog_iter_callback_t func __maybe_unused,
+ void *arg __maybe_unused)
+{
+ return 0;
+}
+
+static inline int
__bpf_strerror(char *buf, size_t size)
{
if (!size)
diff --git a/tools/perf/util/evlist.c b/tools/perf/util/evlist.c
index d51a520..93db4c1 100644
--- a/tools/perf/util/evlist.c
+++ b/tools/perf/util/evlist.c
@@ -14,6 +14,7 @@
#include "target.h"
#include "evlist.h"
#include "evsel.h"
+#include "bpf-loader.h"
#include "debug.h"
#include <unistd.h>
@@ -196,6 +197,46 @@ error:
return -ENOMEM;
}
+static int add_bpf_event(struct probe_trace_event *tev, int fd,
+ void *arg)
+{
+ struct perf_evlist *evlist = arg;
+ struct perf_evsel *pos;
+ struct list_head list;
+ int err, idx, entries;
+
+ pr_debug("add bpf event %s:%s and attach bpf program %d\n",
+ tev->group, tev->event, fd);
+ INIT_LIST_HEAD(&list);
+ idx = evlist->nr_entries;
+
+ pr_debug("adding %s:%s\n", tev->group, tev->event);
+ err = parse_events_add_tracepoint(&list, &idx, tev->group,
+ tev->event);
+ if (err) {
+ struct perf_evsel *evsel, *tmp;
+
+ pr_err("Failed to add BPF event %s:%s\n",
+ tev->group, tev->event);
+ list_for_each_entry_safe(evsel, tmp, &list, node) {
+ list_del(&evsel->node);
+ perf_evsel__delete(evsel);
+ }
+ return -EINVAL;
+ }
+
+ list_for_each_entry(pos, &list, node)
+ pos->bpf_fd = fd;
+ entries = idx - evlist->nr_entries;
+ perf_evlist__splice_list_tail(evlist, &list, entries);
+ return 0;
+}
+
+int perf_evlist__add_bpf(struct perf_evlist *evlist)
+{
+ return bpf__foreach_tev(add_bpf_event, evlist);
+}
+
static int perf_evlist__add_attrs(struct perf_evlist *evlist,
struct perf_event_attr *attrs, size_t nr_attrs)
{
diff --git a/tools/perf/util/evlist.h b/tools/perf/util/evlist.h
index b39a619..5b88e4e 100644
--- a/tools/perf/util/evlist.h
+++ b/tools/perf/util/evlist.h
@@ -73,6 +73,7 @@ void perf_evlist__delete(struct perf_evlist *evlist);
void perf_evlist__add(struct perf_evlist *evlist, struct perf_evsel *entry);
int perf_evlist__add_default(struct perf_evlist *evlist);
+int perf_evlist__add_bpf(struct perf_evlist *evlist);
int __perf_evlist__add_default_attrs(struct perf_evlist *evlist,
struct perf_event_attr *attrs, size_t nr_attrs);
diff --git a/tools/perf/util/evsel.c b/tools/perf/util/evsel.c
index 771ade4..1cf7c75 100644
--- a/tools/perf/util/evsel.c
+++ b/tools/perf/util/evsel.c
@@ -207,6 +207,7 @@ void perf_evsel__init(struct perf_evsel *evsel,
evsel->unit = "";
evsel->scale = 1.0;
evsel->evlist = NULL;
+ evsel->bpf_fd = -1;
INIT_LIST_HEAD(&evsel->node);
INIT_LIST_HEAD(&evsel->config_terms);
perf_evsel__object.init(evsel);
diff --git a/tools/perf/util/evsel.h b/tools/perf/util/evsel.h
index 298e6bb..fd22f83 100644
--- a/tools/perf/util/evsel.h
+++ b/tools/perf/util/evsel.h
@@ -119,6 +119,7 @@ struct perf_evsel {
char *group_name;
bool cmdline_group_boundary;
struct list_head config_terms;
+ int bpf_fd;
};
union u64_swap {
--
2.1.0
* [PATCH 09/27] perf tools: Attach eBPF program to perf event
2015-09-06 7:13 [GIT PULL 00/27] perf tools: filtering events using eBPF programs Wang Nan
` (7 preceding siblings ...)
2015-09-06 7:13 ` [PATCH 08/27] perf tools: Add bpf_fd field to evsel and config it Wang Nan
@ 2015-09-06 7:13 ` Wang Nan
2015-09-06 7:13 ` [PATCH 10/27] perf tools: Allow BPF placeholder dummy events to collect --filter options Wang Nan
` (17 subsequent siblings)
26 siblings, 0 replies; 35+ messages in thread
From: Wang Nan @ 2015-09-06 7:13 UTC (permalink / raw)
To: acme, ast, masami.hiramatsu.pt, namhyung
Cc: a.p.zijlstra, brendan.d.gregg, daniel, dsahern, hekuang, jolsa,
lizefan, paulus, wangnan0, xiakaixu, pi3orama, linux-kernel
This is the final patch which makes basic BPF filter work. After
applying this patch, users are allowed to use BPF filter like:
# perf record --event ./hello_world.o ls
In this patch the PERF_EVENT_IOC_SET_BPF ioctl is used to attach the
eBPF program to a newly created perf event. The file descriptor of the
eBPF program is passed to perf record by the previous patches and
stored into evsel->bpf_fd.
It is possible that different perf events are created for one kprobe
event on different CPUs. In this case, when trying to call the ioctl,
EEXIST will be returned. This patch doesn't treat it as an error.
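The attach operation boils down to one ioctl on the perf event file
descriptor. A standalone sketch of that step (simplified from the hunk
below; evt_fd and bpf_fd stand for an opened perf event and a loaded
eBPF program):

    #include <errno.h>
    #include <sys/ioctl.h>
    #include <linux/perf_event.h>

    static int attach_bpf(int evt_fd, int bpf_fd)
    {
            /*
             * EEXIST means another perf event created for the same
             * kprobe already attached this program; not an error here.
             */
            if (ioctl(evt_fd, PERF_EVENT_IOC_SET_BPF, bpf_fd) &&
                errno != EEXIST)
                    return -errno;
            return 0;
    }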
Signed-off-by: Wang Nan <wangnan0@huawei.com>
Cc: Alexei Starovoitov <ast@plumgrid.com>
Cc: Brendan Gregg <brendan.d.gregg@gmail.com>
Cc: Daniel Borkmann <daniel@iogearbox.net>
Cc: David Ahern <dsahern@gmail.com>
Cc: He Kuang <hekuang@huawei.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Kaixu Xia <xiakaixu@huawei.com>
Cc: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Zefan Li <lizefan@huawei.com>
Cc: pi3orama@163.com
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Link: http://lkml.kernel.org/n/1436445342-1402-26-git-send-email-wangnan0@huawei.com
---
tools/perf/util/evsel.c | 16 ++++++++++++++++
1 file changed, 16 insertions(+)
diff --git a/tools/perf/util/evsel.c b/tools/perf/util/evsel.c
index 1cf7c75..73cf9fc 100644
--- a/tools/perf/util/evsel.c
+++ b/tools/perf/util/evsel.c
@@ -1333,6 +1333,22 @@ retry_open:
err);
goto try_fallback;
}
+
+ if (evsel->bpf_fd >= 0) {
+ int evt_fd = FD(evsel, cpu, thread);
+ int bpf_fd = evsel->bpf_fd;
+
+ err = ioctl(evt_fd,
+ PERF_EVENT_IOC_SET_BPF,
+ bpf_fd);
+ if (err && errno != EEXIST) {
+ pr_err("failed to attach bpf fd %d: %s\n",
+ bpf_fd, strerror(errno));
+ err = -EINVAL;
+ goto out_close;
+ }
+ }
+
set_rlimit = NO_CHANGE;
/*
--
2.1.0
* [PATCH 10/27] perf tools: Allow BPF placeholder dummy events to collect --filter options
2015-09-06 7:13 [GIT PULL 00/27] perf tools: filtering events using eBPF programs Wang Nan
` (8 preceding siblings ...)
2015-09-06 7:13 ` [PATCH 09/27] perf tools: Attach eBPF program to perf event Wang Nan
@ 2015-09-06 7:13 ` Wang Nan
2015-09-06 7:13 ` [PATCH 11/27] perf tools: Sync setting of real bpf events with placeholder Wang Nan
` (16 subsequent siblings)
26 siblings, 0 replies; 35+ messages in thread
From: Wang Nan @ 2015-09-06 7:13 UTC (permalink / raw)
To: acme, ast, masami.hiramatsu.pt, namhyung
Cc: a.p.zijlstra, brendan.d.gregg, daniel, dsahern, hekuang, jolsa,
lizefan, paulus, wangnan0, xiakaixu, pi3orama, linux-kernel
This patch improves the collection and setting of filters, allowing
--filter to be set on BPF placeholder events (which are software dummy
events). perf_evsel__is_dummy(), perf_evsel__is_bpf_placeholder() and
perf_evsel__can_filter() are introduced for this.
Test result:
# perf record --event dummy --exclude-perf
--exclude-perf option should follow a -e tracepoint option
# perf record --event dummy:u --exclude-perf ls
--exclude-perf option should follow a -e tracepoint option
# perf record --event test.o --exclude-perf ls
Added new event:
perf_bpf_probe:func_vfs_write (on vfs_write)
...
# perf record --event dummy.o --exclude-perf ls
Added new event:
perf_bpf_probe:func_write (on sys_write)
...
Signed-off-by: Wang Nan <wangnan0@huawei.com>
Cc: Alexei Starovoitov <ast@plumgrid.com>
Cc: Brendan Gregg <brendan.d.gregg@gmail.com>
Cc: Daniel Borkmann <daniel@iogearbox.net>
Cc: David Ahern <dsahern@gmail.com>
Cc: He Kuang <hekuang@huawei.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Kaixu Xia <xiakaixu@huawei.com>
Cc: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Zefan Li <lizefan@huawei.com>
Cc: pi3orama@163.com
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Link: http://lkml.kernel.org/n/1441518918-149316-1-git-send-email-wangnan0@huawei.com
---
tools/perf/util/evlist.c | 7 +++++++
tools/perf/util/evsel.c | 32 ++++++++++++++++++++++++++++++++
tools/perf/util/evsel.h | 23 +++++++++++++++++++++++
tools/perf/util/parse-events.c | 4 ++--
4 files changed, 64 insertions(+), 2 deletions(-)
diff --git a/tools/perf/util/evlist.c b/tools/perf/util/evlist.c
index 93db4c1..29212dc 100644
--- a/tools/perf/util/evlist.c
+++ b/tools/perf/util/evlist.c
@@ -1223,6 +1223,13 @@ int perf_evlist__apply_filters(struct perf_evlist *evlist, struct perf_evsel **e
continue;
/*
+ * Filters are allowed to be set to dummy event for BPF object
+ * placeholder. Don't really apply them.
+ */
+ if (perf_evsel__is_dummy(evsel))
+ continue;
+
+ /*
* filters only work for tracepoint event, which doesn't have cpu limit.
* So evlist and evsel should always be same.
*/
diff --git a/tools/perf/util/evsel.c b/tools/perf/util/evsel.c
index 73cf9fc..e307ea2 100644
--- a/tools/perf/util/evsel.c
+++ b/tools/perf/util/evsel.c
@@ -2344,3 +2344,35 @@ int perf_evsel__open_strerror(struct perf_evsel *evsel, struct target *target,
err, strerror_r(err, sbuf, sizeof(sbuf)),
perf_evsel__name(evsel));
}
+
+bool perf_evsel__is_bpf_placeholder(struct perf_evsel *evsel)
+{
+ if (!perf_evsel__is_dummy(evsel))
+ return false;
+ if (!evsel->name)
+ return false;
+ /*
+ * If evsel->name doesn't start with 'dummy', it must be a BPF
+ * placeholder.
+ */
+ if (strncmp(evsel->name, perf_evsel__sw_names[PERF_COUNT_SW_DUMMY],
+ strlen(perf_evsel__sw_names[PERF_COUNT_SW_DUMMY])))
+ return true;
+ /*
+ * Very rare case: evsel->name is 'dummy_crazy.bpf'.
+ *
+ * Let's check the name suffix. A bpf file should end with one of:
+ * '.o', '.c' or '.bpf'.
+ */
+#define SUFFIX_CMP(s)\
+ strcmp(evsel->name + strlen(evsel->name) - (sizeof(s) - 1), s)
+
+ if (SUFFIX_CMP(".o") == 0)
+ return true;
+ if (SUFFIX_CMP(".c") == 0)
+ return true;
+ if (SUFFIX_CMP(".bpf") == 0)
+ return true;
+ return false;
+#undef SUFFIX_CMP
+}
diff --git a/tools/perf/util/evsel.h b/tools/perf/util/evsel.h
index fd22f83..864fd3f 100644
--- a/tools/perf/util/evsel.h
+++ b/tools/perf/util/evsel.h
@@ -372,11 +372,34 @@ bool perf_evsel__fallback(struct perf_evsel *evsel, int err,
int perf_evsel__open_strerror(struct perf_evsel *evsel, struct target *target,
int err, char *msg, size_t size);
+bool perf_evsel__is_bpf_placeholder(struct perf_evsel *evsel);
+
static inline int perf_evsel__group_idx(struct perf_evsel *evsel)
{
return evsel->idx - evsel->leader->idx;
}
+static inline bool perf_evsel__is_dummy(struct perf_evsel *evsel)
+{
+ if (!evsel)
+ return false;
+ if (evsel->attr.type != PERF_TYPE_SOFTWARE)
+ return false;
+ if (evsel->attr.config != PERF_COUNT_SW_DUMMY)
+ return false;
+ return true;
+}
+
+static inline int perf_evsel__can_filter(struct perf_evsel *evsel)
+{
+ if (!evsel)
+ return false;
+ if (evsel->attr.type == PERF_TYPE_TRACEPOINT)
+ return true;
+
+ return perf_evsel__is_bpf_placeholder(evsel);
+}
+
#define for_each_group_member(_evsel, _leader) \
for ((_evsel) = list_entry((_leader)->node.next, struct perf_evsel, node); \
(_evsel) && (_evsel)->leader == (_leader); \
diff --git a/tools/perf/util/parse-events.c b/tools/perf/util/parse-events.c
index b560f5f..d961e90 100644
--- a/tools/perf/util/parse-events.c
+++ b/tools/perf/util/parse-events.c
@@ -1346,7 +1346,7 @@ static int set_filter(struct perf_evsel *evsel, const void *arg)
{
const char *str = arg;
- if (evsel == NULL || evsel->attr.type != PERF_TYPE_TRACEPOINT) {
+ if (!perf_evsel__can_filter(evsel)) {
fprintf(stderr,
"--filter option should follow a -e tracepoint option\n");
return -1;
@@ -1375,7 +1375,7 @@ static int add_exclude_perf_filter(struct perf_evsel *evsel,
{
char new_filter[64];
- if (evsel == NULL || evsel->attr.type != PERF_TYPE_TRACEPOINT) {
+ if (!perf_evsel__can_filter(evsel)) {
fprintf(stderr,
"--exclude-perf option should follow a -e tracepoint option\n");
return -1;
--
2.1.0
* [PATCH 11/27] perf tools: Sync setting of real bpf events with placeholder
2015-09-06 7:13 [GIT PULL 00/27] perf tools: filtering events using eBPF programs Wang Nan
` (9 preceding siblings ...)
2015-09-06 7:13 ` [PATCH 10/27] perf tools: Allow BPF placeholder dummy events to collect --filter options Wang Nan
@ 2015-09-06 7:13 ` Wang Nan
2015-09-06 7:13 ` [PATCH 12/27] perf record: Add clang options for compiling BPF scripts Wang Nan
` (15 subsequent siblings)
26 siblings, 0 replies; 35+ messages in thread
From: Wang Nan @ 2015-09-06 7:13 UTC (permalink / raw)
To: acme, ast, masami.hiramatsu.pt, namhyung
Cc: a.p.zijlstra, brendan.d.gregg, daniel, dsahern, hekuang, jolsa,
lizefan, paulus, wangnan0, xiakaixu, pi3orama, linux-kernel
In this patch, when adding real events described in BPF objects, the
filter and tracking settings are synced from the previous dummy
placeholder. After this patch, commands like:
# perf record --event test-filter.o --exclude-perf ls
work as expected.
After all settings are synced, those placeholders are removed from the
evlist so they won't appear in the final perf.data.
Signed-off-by: Wang Nan <wangnan0@huawei.com>
Cc: Alexei Starovoitov <ast@plumgrid.com>
Cc: Brendan Gregg <brendan.d.gregg@gmail.com>
Cc: Daniel Borkmann <daniel@iogearbox.net>
Cc: David Ahern <dsahern@gmail.com>
Cc: He Kuang <hekuang@huawei.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Kaixu Xia <xiakaixu@huawei.com>
Cc: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Zefan Li <lizefan@huawei.com>
Cc: pi3orama@163.com
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Link: http://lkml.kernel.org/n/1441518995-149427-1-git-send-email-wangnan0@huawei.com
---
tools/perf/util/bpf-loader.c | 8 ++++-
tools/perf/util/bpf-loader.h | 1 +
tools/perf/util/evlist.c | 75 +++++++++++++++++++++++++++++++++++++++++---
3 files changed, 79 insertions(+), 5 deletions(-)
diff --git a/tools/perf/util/bpf-loader.c b/tools/perf/util/bpf-loader.c
index 2880dbf..3400538 100644
--- a/tools/perf/util/bpf-loader.c
+++ b/tools/perf/util/bpf-loader.c
@@ -325,6 +325,12 @@ int bpf__foreach_tev(bpf_prog_iter_callback_t func, void *arg)
int err;
bpf_object__for_each_safe(obj, tmp) {
+ const char *obj_name;
+
+ obj_name = bpf_object__get_name(obj);
+ if (!obj_name)
+ obj_name = "[unknown].o";
+
bpf_object__for_each_program(prog, obj) {
struct probe_trace_event *tev;
struct perf_probe_event *pev;
@@ -348,7 +354,7 @@ int bpf__foreach_tev(bpf_prog_iter_callback_t func, void *arg)
return fd;
}
- err = (*func)(tev, fd, arg);
+ err = (*func)(tev, obj_name, fd, arg);
if (err) {
pr_debug("bpf: call back failed, stop iterate\n");
return err;
diff --git a/tools/perf/util/bpf-loader.h b/tools/perf/util/bpf-loader.h
index 34656f8..5bac423 100644
--- a/tools/perf/util/bpf-loader.h
+++ b/tools/perf/util/bpf-loader.h
@@ -13,6 +13,7 @@
#define PERF_BPF_PROBE_GROUP "perf_bpf_probe"
typedef int (*bpf_prog_iter_callback_t)(struct probe_trace_event *tev,
+ const char *obj_name,
int fd, void *arg);
#ifdef HAVE_LIBBPF_SUPPORT
diff --git a/tools/perf/util/evlist.c b/tools/perf/util/evlist.c
index 29212dc..7e36563 100644
--- a/tools/perf/util/evlist.c
+++ b/tools/perf/util/evlist.c
@@ -197,7 +197,54 @@ error:
return -ENOMEM;
}
-static int add_bpf_event(struct probe_trace_event *tev, int fd,
+static void sync_with_bpf_placeholder(struct perf_evlist *evlist,
+ const char *obj_name,
+ struct list_head *list)
+{
+ struct perf_evsel *dummy_evsel, *pos;
+
+ const char *filter;
+ bool tracking_set = false;
+ bool found = false;
+
+ evlist__for_each(evlist, dummy_evsel) {
+ if (!perf_evsel__is_bpf_placeholder(dummy_evsel))
+ continue;
+
+ if (strcmp(dummy_evsel->name, obj_name) == 0) {
+ found = true;
+ break;
+ }
+ }
+
+ if (!found) {
+ pr_debug("Failed to find dummy event of '%s'\n",
+ obj_name);
+ return;
+ }
+
+ filter = dummy_evsel->filter;
+
+ list_for_each_entry(pos, list, node) {
+ if (filter && perf_evsel__set_filter(pos, filter)) {
+ pr_debug("Failed to set filter '%s' to evsel %s\n",
+ filter, pos->name);
+ }
+
+ /* Sync tracking */
+ if (dummy_evsel->tracking && !tracking_set)
+ pos->tracking = tracking_set = true;
+
+ /*
+ * If someday we allow to add config terms or modifiers
+ * to placeholder, we should sync them with real events
+ * here. Currently only tracking needs to be considered.
+ */
+ }
+}
+
+static int add_bpf_event(struct probe_trace_event *tev,
+ const char *obj_name, int fd,
void *arg)
{
struct perf_evlist *evlist = arg;
@@ -205,8 +252,8 @@ static int add_bpf_event(struct probe_trace_event *tev, int fd,
struct list_head list;
int err, idx, entries;
- pr_debug("add bpf event %s:%s and attach bpf program %d\n",
- tev->group, tev->event, fd);
+ pr_debug("add bpf event %s:%s and attach bpf program %d (from %s)\n",
+ tev->group, tev->event, fd, obj_name);
INIT_LIST_HEAD(&list);
idx = evlist->nr_entries;
@@ -228,13 +275,33 @@ static int add_bpf_event(struct probe_trace_event *tev, int fd,
list_for_each_entry(pos, &list, node)
pos->bpf_fd = fd;
entries = idx - evlist->nr_entries;
+
+ sync_with_bpf_placeholder(evlist, obj_name, &list);
+ /*
+ * Currently we don't need to link those new events at the
+ * same place where the dummy node resides because the order of
+ * events in the cmdline won't be used after
+ * 'perf_evlist__add_bpf'.
+ */
perf_evlist__splice_list_tail(evlist, &list, entries);
return 0;
}
int perf_evlist__add_bpf(struct perf_evlist *evlist)
{
- return bpf__foreach_tev(add_bpf_event, evlist);
+ struct perf_evsel *pos, *n;
+ int err;
+
+ err = bpf__foreach_tev(add_bpf_event, evlist);
+
+ evlist__for_each_safe(evlist, n, pos) {
+ if (perf_evsel__is_bpf_placeholder(pos)) {
+ list_del_init(&pos->node);
+ perf_evsel__delete(pos);
+ evlist->nr_entries--;
+ }
+ }
+ return err;
}
static int perf_evlist__add_attrs(struct perf_evlist *evlist,
--
2.1.0
* [PATCH 12/27] perf record: Add clang options for compiling BPF scripts
2015-09-06 7:13 [GIT PULL 00/27] perf tools: filtering events using eBPF programs Wang Nan
` (10 preceding siblings ...)
2015-09-06 7:13 ` [PATCH 11/27] perf tools: Sync setting of real bpf events with placeholder Wang Nan
@ 2015-09-06 7:13 ` Wang Nan
2015-09-06 7:13 ` [PATCH 13/27] perf tools: Compile scriptlets to BPF objects when passing '.c' to --event Wang Nan
` (14 subsequent siblings)
26 siblings, 0 replies; 35+ messages in thread
From: Wang Nan @ 2015-09-06 7:13 UTC (permalink / raw)
To: acme, ast, masami.hiramatsu.pt, namhyung
Cc: a.p.zijlstra, brendan.d.gregg, daniel, dsahern, hekuang, jolsa,
lizefan, paulus, wangnan0, xiakaixu, pi3orama, linux-kernel
Although a previous patch allows setting BPF compiler related options in
perfconfig, in some ad-hoc situations it is still necessary to pass
options through the cmdline. This patch introduces two options to 'perf
record' for this purpose: --clang-path and --clang-opt.
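For example (the clang path and the macro value are only illustrative,
and passing a '.c' scriptlet to --event is enabled by the next patch):

    # perf record --clang-path=/usr/local/bin/clang \
                  --clang-opt="-DLINUX_VERSION_CODE=0x40200" \
                  --event ./bpf-script-example.c ls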
Signed-off-by: Wang Nan <wangnan0@huawei.com>
Cc: Alexei Starovoitov <ast@plumgrid.com>
Cc: Brendan Gregg <brendan.d.gregg@gmail.com>
Cc: Daniel Borkmann <daniel@iogearbox.net>
Cc: David Ahern <dsahern@gmail.com>
Cc: He Kuang <hekuang@huawei.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Kaixu Xia <xiakaixu@huawei.com>
Cc: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Zefan Li <lizefan@huawei.com>
Cc: pi3orama@163.com
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Link: http://lkml.kernel.org/n/1436445342-1402-28-git-send-email-wangnan0@huawei.com
---
tools/perf/builtin-record.c | 7 +++++++
1 file changed, 7 insertions(+)
diff --git a/tools/perf/builtin-record.c b/tools/perf/builtin-record.c
index f99d580..11e7d63 100644
--- a/tools/perf/builtin-record.c
+++ b/tools/perf/builtin-record.c
@@ -32,6 +32,7 @@
#include "util/parse-branch-options.h"
#include "util/parse-regs-options.h"
#include "util/bpf-loader.h"
+#include "util/llvm-utils.h"
#include <unistd.h>
#include <sched.h>
@@ -1097,6 +1098,12 @@ struct option __record_options[] = {
"per thread proc mmap processing timeout in ms"),
OPT_BOOLEAN(0, "switch-events", &record.opts.record_switch_events,
"Record context switch events"),
+#ifdef HAVE_LIBBPF_SUPPORT
+ OPT_STRING(0, "clang-path", &llvm_param.clang_path, "clang path",
+ "clang binary to use for compiling BPF scriptlets"),
+ OPT_STRING(0, "clang-opt", &llvm_param.clang_opt, "clang options",
+ "options passed to clang when compiling BPF scriptlets"),
+#endif
OPT_END()
};
--
2.1.0
* [PATCH 13/27] perf tools: Compile scriptlets to BPF objects when passing '.c' to --event
2015-09-06 7:13 [GIT PULL 00/27] perf tools: filtering events using eBPF programs Wang Nan
` (11 preceding siblings ...)
2015-09-06 7:13 ` [PATCH 12/27] perf record: Add clang options for compiling BPF scripts Wang Nan
@ 2015-09-06 7:13 ` Wang Nan
2015-09-06 7:13 ` [PATCH 14/27] perf test: Enforce LLVM test for BPF test Wang Nan
` (13 subsequent siblings)
26 siblings, 0 replies; 35+ messages in thread
From: Wang Nan @ 2015-09-06 7:13 UTC (permalink / raw)
To: acme, ast, masami.hiramatsu.pt, namhyung
Cc: a.p.zijlstra, brendan.d.gregg, daniel, dsahern, hekuang, jolsa,
lizefan, paulus, wangnan0, xiakaixu, pi3orama, linux-kernel
This patch provides infrastructure for passing source files to --event
directly using:
# perf record --event bpf-file.c command
This patch does the following:
1) Allow passing a '.c' file to '--event'. parse_events_load_bpf() is
expanded to allow the caller to tell it whether the passed file is a
source file or an object.
2) llvm__compile_bpf() is called to compile the '.c' file and the
result is saved into memory. bpf_object__open_buffer() is then used to
load the in-memory object (see the sketch after this list).
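A trimmed sketch of the flow in 2), mirroring the bpf-loader.c hunk
below (error handling omitted; the buffer is freed right after the
object is opened, as the hunk does):

    void *obj_buf;
    size_t obj_buf_sz;
    struct bpf_object *obj;

    err = llvm__compile_bpf(filename, &obj_buf, &obj_buf_sz);
    if (err)
            return err;
    obj = bpf_object__open_buffer(obj_buf, obj_buf_sz, filename);
    free(obj_buf);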
Introduces a bpf-script-example.c so we can manually test it:
# perf record --clang-opt "-DLINUX_VERSION_CODE=0x40200" --event ./bpf-script-example.c sleep 1
Note that '--clang-opt' must be put before '--event'.
Further patches will merge it into a testcase so it can be tested automatically.
Signed-off-by: Wang Nan <wangnan0@huawei.com>
Signed-off-by: He Kuang <hekuang@huawei.com>
Acked-by: Alexei Starovoitov <ast@plumgrid.com>
Cc: Brendan Gregg <brendan.d.gregg@gmail.com>
Cc: Daniel Borkmann <daniel@iogearbox.net>
Cc: David Ahern <dsahern@gmail.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Kaixu Xia <xiakaixu@huawei.com>
Cc: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Zefan Li <lizefan@huawei.com>
Cc: pi3orama@163.com
Link: http://lkml.kernel.org/n/1436445342-1402-20-git-send-email-wangnan0@huawei.com
[ wangnan: Pass name of source file to bpf_object__open_buffer(). ]
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
---
tools/perf/tests/bpf-script-example.c | 44 +++++++++++++++++++++++++++++++++++
tools/perf/util/bpf-loader.c | 25 +++++++++++++++-----
tools/perf/util/bpf-loader.h | 10 ++++----
tools/perf/util/parse-events.c | 6 ++---
tools/perf/util/parse-events.h | 3 ++-
tools/perf/util/parse-events.l | 3 +++
tools/perf/util/parse-events.y | 15 ++++++++++--
7 files changed, 90 insertions(+), 16 deletions(-)
create mode 100644 tools/perf/tests/bpf-script-example.c
diff --git a/tools/perf/tests/bpf-script-example.c b/tools/perf/tests/bpf-script-example.c
new file mode 100644
index 0000000..410a70b
--- /dev/null
+++ b/tools/perf/tests/bpf-script-example.c
@@ -0,0 +1,44 @@
+#ifndef LINUX_VERSION_CODE
+# error Need LINUX_VERSION_CODE
+# error Example: for 4.2 kernel, put 'clang-opt="-DLINUX_VERSION_CODE=0x40200" into llvm section of ~/.perfconfig'
+#endif
+#define BPF_ANY 0
+#define BPF_MAP_TYPE_ARRAY 2
+#define BPF_FUNC_map_lookup_elem 1
+#define BPF_FUNC_map_update_elem 2
+
+static void *(*bpf_map_lookup_elem)(void *map, void *key) =
+ (void *) BPF_FUNC_map_lookup_elem;
+static void *(*bpf_map_update_elem)(void *map, void *key, void *value, int flags) =
+ (void *) BPF_FUNC_map_update_elem;
+
+struct bpf_map_def {
+ unsigned int type;
+ unsigned int key_size;
+ unsigned int value_size;
+ unsigned int max_entries;
+};
+
+#define SEC(NAME) __attribute__((section(NAME), used))
+struct bpf_map_def SEC("maps") flip_table = {
+ .type = BPF_MAP_TYPE_ARRAY,
+ .key_size = sizeof(int),
+ .value_size = sizeof(int),
+ .max_entries = 1,
+};
+
+SEC("func=sys_epoll_pwait")
+int bpf_func__sys_epoll_pwait(void *ctx)
+{
+ int ind =0;
+ int *flag = bpf_map_lookup_elem(&flip_table, &ind);
+ int new_flag;
+ if (!flag)
+ return 0;
+ /* flip flag and store back */
+ new_flag = !*flag;
+ bpf_map_update_elem(&flip_table, &ind, &new_flag, BPF_ANY);
+ return new_flag;
+}
+char _license[] SEC("license") = "GPL";
+int _version SEC("version") = LINUX_VERSION_CODE;
diff --git a/tools/perf/util/bpf-loader.c b/tools/perf/util/bpf-loader.c
index 3400538..7229f8e 100644
--- a/tools/perf/util/bpf-loader.c
+++ b/tools/perf/util/bpf-loader.c
@@ -11,6 +11,7 @@
#include "bpf-loader.h"
#include "probe-event.h"
#include "probe-finder.h"
+#include "llvm-utils.h"
#define DEFINE_PRINT_FN(name, level) \
static int libbpf_##name(const char *fmt, ...) \
@@ -164,16 +165,28 @@ sync_bpf_program_pev(struct bpf_program *prog)
return 0;
}
-int bpf__prepare_load(const char *filename)
+int bpf__prepare_load(const char *filename, bool source)
{
struct bpf_object *obj;
+ int err;
if (!libbpf_initialized)
libbpf_set_print(libbpf_warning,
libbpf_info,
libbpf_debug);
- obj = bpf_object__open(filename);
+ if (source) {
+ void *obj_buf;
+ size_t obj_buf_sz;
+
+ err = llvm__compile_bpf(filename, &obj_buf, &obj_buf_sz);
+ if (err)
+ return err;
+ obj = bpf_object__open_buffer(obj_buf, obj_buf_sz, filename);
+ free(obj_buf);
+ } else
+ obj = bpf_object__open(filename);
+
if (!obj) {
pr_debug("bpf: failed to load %s\n", filename);
return -EINVAL;
@@ -387,12 +400,12 @@ int bpf__foreach_tev(bpf_prog_iter_callback_t func, void *arg)
}\
buf[size - 1] = '\0';
-int bpf__strerror_prepare_load(const char *filename, int err,
- char *buf, size_t size)
+int bpf__strerror_prepare_load(const char *filename, bool source,
+ int err, char *buf, size_t size)
{
bpf__strerror_head(err, buf, size);
- bpf__strerror_entry(EINVAL, "%s: BPF object file '%s' is invalid",
- emsg, filename)
+ bpf__strerror_entry(EINVAL, "%s: BPF %s file '%s' is invalid",
+ emsg, source ? "source" : "object", filename);
bpf__strerror_end(buf, size);
return 0;
}
diff --git a/tools/perf/util/bpf-loader.h b/tools/perf/util/bpf-loader.h
index 5bac423..9a2358c 100644
--- a/tools/perf/util/bpf-loader.h
+++ b/tools/perf/util/bpf-loader.h
@@ -17,9 +17,9 @@ typedef int (*bpf_prog_iter_callback_t)(struct probe_trace_event *tev,
int fd, void *arg);
#ifdef HAVE_LIBBPF_SUPPORT
-int bpf__prepare_load(const char *filename);
-int bpf__strerror_prepare_load(const char *filename, int err,
- char *buf, size_t size);
+int bpf__prepare_load(const char *filename, bool source);
+int bpf__strerror_prepare_load(const char *filename, bool source,
+ int err, char *buf, size_t size);
int bpf__probe(void);
int bpf__unprobe(void);
int bpf__strerror_probe(int err, char *buf, size_t size);
@@ -31,7 +31,8 @@ void bpf__clear(void);
int bpf__foreach_tev(bpf_prog_iter_callback_t func, void *arg);
#else
-static inline int bpf__prepare_load(const char *filename __maybe_unused)
+static inline int bpf__prepare_load(const char *filename __maybe_unused,
+ bool source __maybe_unused)
{
pr_debug("ERROR: eBPF object loading is disabled during compiling.\n");
return -1;
@@ -63,6 +64,7 @@ __bpf_strerror(char *buf, size_t size)
static inline int
bpf__strerror_prepare_load(const char *filename __maybe_unused,
+ bool source __maybe_unused,
int err __maybe_unused,
char *buf, size_t size)
{
diff --git a/tools/perf/util/parse-events.c b/tools/perf/util/parse-events.c
index d961e90..4976c71 100644
--- a/tools/perf/util/parse-events.c
+++ b/tools/perf/util/parse-events.c
@@ -484,7 +484,7 @@ int parse_events_add_tracepoint(struct list_head *list, int *idx,
int parse_events_load_bpf(struct parse_events_evlist *data,
struct list_head *list,
- char *bpf_file_name)
+ char *bpf_file_name, bool source)
{
int err;
char errbuf[BUFSIZ];
@@ -522,9 +522,9 @@ int parse_events_load_bpf(struct parse_events_evlist *data,
zfree(&dummy_evsel->name);
dummy_evsel->name = strdup(bpf_file_name);
- err = bpf__prepare_load(bpf_file_name);
+ err = bpf__prepare_load(bpf_file_name, source);
if (err) {
- bpf__strerror_prepare_load(bpf_file_name, err,
+ bpf__strerror_prepare_load(bpf_file_name, source, err,
errbuf, sizeof(errbuf));
list_del_init(&dummy_evsel->node);
perf_evsel__delete(dummy_evsel);
diff --git a/tools/perf/util/parse-events.h b/tools/perf/util/parse-events.h
index 3652387..728a424 100644
--- a/tools/perf/util/parse-events.h
+++ b/tools/perf/util/parse-events.h
@@ -121,7 +121,8 @@ int parse_events_add_tracepoint(struct list_head *list, int *idx,
char *sys, char *event);
int parse_events_load_bpf(struct parse_events_evlist *data,
struct list_head *list,
- char *bpf_file_name);
+ char *bpf_file_name,
+ bool source);
int parse_events_add_numeric(struct parse_events_evlist *data,
struct list_head *list,
u32 type, u64 config,
diff --git a/tools/perf/util/parse-events.l b/tools/perf/util/parse-events.l
index 22e8f93..8033890 100644
--- a/tools/perf/util/parse-events.l
+++ b/tools/perf/util/parse-events.l
@@ -116,6 +116,7 @@ group [^,{}/]*[{][^}]*[}][^,{}/]*
event_pmu [^,{}/]+[/][^/]*[/][^,{}/]*
event [^,{}/]+
bpf_object .*\.(o|bpf)
+bpf_source .*\.c
num_dec [0-9]+
num_hex 0x[a-fA-F0-9]+
@@ -161,6 +162,7 @@ modifier_bp [rwx]{1,3}
{event_pmu} |
{bpf_object} |
+{bpf_source} |
{event} {
BEGIN(INITIAL);
REWIND(1);
@@ -267,6 +269,7 @@ r{num_raw_hex} { return raw(yyscanner); }
{modifier_event} { return str(yyscanner, PE_MODIFIER_EVENT); }
{bpf_object} { return str(yyscanner, PE_BPF_OBJECT); }
+{bpf_source} { return str(yyscanner, PE_BPF_SOURCE); }
{name} { return pmu_str_check(yyscanner); }
"/" { BEGIN(config); return '/'; }
- { return '-'; }
diff --git a/tools/perf/util/parse-events.y b/tools/perf/util/parse-events.y
index 3ee3a32..90d2458 100644
--- a/tools/perf/util/parse-events.y
+++ b/tools/perf/util/parse-events.y
@@ -42,7 +42,7 @@ static inc_group_count(struct list_head *list,
%token PE_VALUE PE_VALUE_SYM_HW PE_VALUE_SYM_SW PE_RAW PE_TERM
%token PE_EVENT_NAME
%token PE_NAME
-%token PE_BPF_OBJECT
+%token PE_BPF_OBJECT PE_BPF_SOURCE
%token PE_MODIFIER_EVENT PE_MODIFIER_BP
%token PE_NAME_CACHE_TYPE PE_NAME_CACHE_OP_RESULT
%token PE_PREFIX_MEM PE_PREFIX_RAW PE_PREFIX_GROUP
@@ -55,6 +55,7 @@ static inc_group_count(struct list_head *list,
%type <num> PE_TERM
%type <str> PE_NAME
%type <str> PE_BPF_OBJECT
+%type <str> PE_BPF_SOURCE
%type <str> PE_NAME_CACHE_TYPE
%type <str> PE_NAME_CACHE_OP_RESULT
%type <str> PE_MODIFIER_EVENT
@@ -431,7 +432,17 @@ PE_BPF_OBJECT
struct list_head *list;
ALLOC_LIST(list);
- ABORT_ON(parse_events_load_bpf(data, list, $1));
+ ABORT_ON(parse_events_load_bpf(data, list, $1, false));
+ $$ = list;
+}
+|
+PE_BPF_SOURCE
+{
+ struct parse_events_evlist *data = _data;
+ struct list_head *list;
+
+ ALLOC_LIST(list);
+ ABORT_ON(parse_events_load_bpf(data, list, $1, true));
$$ = list;
}
--
2.1.0
* [PATCH 14/27] perf test: Enforce LLVM test for BPF test
2015-09-06 7:13 [GIT PULL 00/27] perf tools: filtering events using eBPF programs Wang Nan
` (12 preceding siblings ...)
2015-09-06 7:13 ` [PATCH 13/27] perf tools: Compile scriptlets to BPF objects when passing '.c' to --event Wang Nan
@ 2015-09-06 7:13 ` Wang Nan
2015-09-06 7:13 ` [PATCH 15/27] perf test: Add 'perf test BPF' Wang Nan
` (12 subsequent siblings)
26 siblings, 0 replies; 35+ messages in thread
From: Wang Nan @ 2015-09-06 7:13 UTC (permalink / raw)
To: acme, ast, masami.hiramatsu.pt, namhyung
Cc: a.p.zijlstra, brendan.d.gregg, daniel, dsahern, hekuang, jolsa,
lizefan, paulus, wangnan0, xiakaixu, pi3orama, linux-kernel
This patch replaces the original toy BPF program with the previously
introduced bpf-script-example.c, which is dynamically embedded into
'llvm-src.c'.
The newly introduced BPF program attaches to 'sys_epoll_pwait()' and
lets only half of its executions generate samples. perf itself never
uses that syscall, so further tests can verify their results against it.
Since BPF programs require the LINUX_VERSION_CODE of the running kernel,
this patch computes that code from uname.
Since the resulting BPF object is useful for further testcases, this
patch introduces 'prepare' and 'cleanup' methods to tests, and makes
test__llvm() create a MAP_SHARED memory array to hold the resulting
object.
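The version code is assembled the same way as the kernel's
KERNEL_VERSION() macro; a trimmed sketch of what compose_source() in the
hunk below does (error handling omitted):

    struct utsname utsname;
    int version, patchlevel, sublevel;
    unsigned long version_code;

    uname(&utsname);
    sscanf(utsname.release, "%d.%d.%d", &version, &patchlevel, &sublevel);
    /* e.g. a 4.2.0 kernel yields 0x40200 */
    version_code = (version << 16) + (patchlevel << 8) + sublevel;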
Signed-off-by: He Kuang <hekuang@huawei.com>
Signed-off-by: Wang Nan <wangnan0@huawei.com>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Alexei Starovoitov <ast@plumgrid.com>
Cc: Brendan Gregg <brendan.d.gregg@gmail.com>
Cc: Daniel Borkmann <daniel@iogearbox.net>
Cc: David Ahern <dsahern@gmail.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Kaixu Xia <xiakaixu@huawei.com>
Cc: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Zefan Li <lizefan@huawei.com>
Cc: pi3orama@163.com
Link: http://lkml.kernel.org/n/1440151770-129878-15-git-send-email-wangnan0@huawei.com
[wangnan:
- Fix random building error by adding $(call rule_mkdir) for tests/llvm-src.c.
]
---
tools/perf/tests/Build | 9 +++-
tools/perf/tests/builtin-test.c | 8 ++++
tools/perf/tests/llvm.c | 104 +++++++++++++++++++++++++++++++++++-----
tools/perf/tests/llvm.h | 14 ++++++
tools/perf/tests/tests.h | 2 +
5 files changed, 123 insertions(+), 14 deletions(-)
create mode 100644 tools/perf/tests/llvm.h
diff --git a/tools/perf/tests/Build b/tools/perf/tests/Build
index c6f198ae..e80f787 100644
--- a/tools/perf/tests/Build
+++ b/tools/perf/tests/Build
@@ -32,9 +32,16 @@ perf-y += sample-parsing.o
perf-y += parse-no-sample-id-all.o
perf-y += kmod-path.o
perf-y += thread-map.o
-perf-y += llvm.o
+perf-y += llvm.o llvm-src.o
perf-y += topology.o
+$(OUTPUT)tests/llvm-src.c: tests/bpf-script-example.c
+ $(call rule_mkdir)
+ $(Q)echo '#include <tests/llvm.h>' > $@
+ $(Q)echo 'const char test_llvm__bpf_prog[] =' >> $@
+ $(Q)sed -e 's/"/\\"/g' -e 's/\(.*\)/"\1\\n"/g' $< >> $@
+ $(Q)echo ';' >> $@
+
perf-$(CONFIG_X86) += perf-time-to-tsc.o
ifdef CONFIG_AUXTRACE
perf-$(CONFIG_X86) += insn-x86.o
diff --git a/tools/perf/tests/builtin-test.c b/tools/perf/tests/builtin-test.c
index 98b0b24..ba3cf10 100644
--- a/tools/perf/tests/builtin-test.c
+++ b/tools/perf/tests/builtin-test.c
@@ -17,6 +17,8 @@
static struct test {
const char *desc;
int (*func)(void);
+ void (*prepare)(void);
+ void (*cleanup)(void);
} tests[] = {
{
.desc = "vmlinux symtab matches kallsyms",
@@ -177,6 +179,8 @@ static struct test {
{
.desc = "Test LLVM searching and compiling",
.func = test__llvm,
+ .prepare = test__llvm_prepare,
+ .cleanup = test__llvm_cleanup,
},
#ifdef HAVE_AUXTRACE_SUPPORT
#if defined(__x86_64__) || defined(__i386__)
@@ -277,7 +281,11 @@ static int __cmd_test(int argc, const char *argv[], struct intlist *skiplist)
}
pr_debug("\n--- start ---\n");
+ if (tests[curr].prepare)
+ tests[curr].prepare();
err = run_test(&tests[curr]);
+ if (tests[curr].cleanup)
+ tests[curr].cleanup();
pr_debug("---- end ----\n%s:", tests[curr].desc);
switch (err) {
diff --git a/tools/perf/tests/llvm.c b/tools/perf/tests/llvm.c
index 52d5597..236bf39 100644
--- a/tools/perf/tests/llvm.c
+++ b/tools/perf/tests/llvm.c
@@ -1,9 +1,13 @@
#include <stdio.h>
+#include <sys/utsname.h>
#include <bpf/libbpf.h>
#include <util/llvm-utils.h>
#include <util/cache.h>
+#include <util/util.h>
+#include <sys/mman.h>
#include "tests.h"
#include "debug.h"
+#include "llvm.h"
static int perf_config_cb(const char *var, const char *val,
void *arg __maybe_unused)
@@ -11,16 +15,6 @@ static int perf_config_cb(const char *var, const char *val,
return perf_default_config(var, val, arg);
}
-/*
- * Randomly give it a "version" section since we don't really load it
- * into kernel
- */
-static const char test_bpf_prog[] =
- "__attribute__((section(\"do_fork\"), used)) "
- "int fork(void *ctx) {return 0;} "
- "char _license[] __attribute__((section(\"license\"), used)) = \"GPL\";"
- "int _version __attribute__((section(\"version\"), used)) = 0x40100;";
-
#ifdef HAVE_LIBBPF_SUPPORT
static int test__bpf_parsing(void *obj_buf, size_t obj_buf_sz)
{
@@ -41,12 +35,44 @@ static int test__bpf_parsing(void *obj_buf __maybe_unused,
}
#endif
+static char *
+compose_source(void)
+{
+ struct utsname utsname;
+ int version, patchlevel, sublevel, err;
+ unsigned long version_code;
+ char *code;
+
+ if (uname(&utsname))
+ return NULL;
+
+ err = sscanf(utsname.release, "%d.%d.%d",
+ &version, &patchlevel, &sublevel);
+ if (err != 3) {
+ fprintf(stderr, " (Can't get kernel version from uname '%s')",
+ utsname.release);
+ return NULL;
+ }
+
+ version_code = (version << 16) + (patchlevel << 8) + sublevel;
+ err = asprintf(&code, "#define LINUX_VERSION_CODE 0x%08lx;\n%s",
+ version_code, test_llvm__bpf_prog);
+ if (err < 0)
+ return NULL;
+
+ return code;
+}
+
+#define SHARED_BUF_INIT_SIZE (1 << 20)
+struct test_llvm__bpf_result *p_test_llvm__bpf_result;
+
int test__llvm(void)
{
char *tmpl_new, *clang_opt_new;
void *obj_buf;
size_t obj_buf_sz;
int err, old_verbose;
+ char *source;
perf_config(perf_config_cb, NULL);
@@ -73,10 +99,22 @@ int test__llvm(void)
if (!llvm_param.clang_opt)
llvm_param.clang_opt = strdup("");
- err = asprintf(&tmpl_new, "echo '%s' | %s", test_bpf_prog,
- llvm_param.clang_bpf_cmd_template);
- if (err < 0)
+ source = compose_source();
+ if (!source) {
+ pr_err("Failed to compose source code\n");
+ return -1;
+ }
+
+ /* Quote __EOF__ so strings in source won't be expanded by shell */
+ err = asprintf(&tmpl_new, "cat << '__EOF__' | %s\n%s\n__EOF__\n",
+ llvm_param.clang_bpf_cmd_template, source);
+ free(source);
+ source = NULL;
+ if (err < 0) {
+ pr_err("Failed to alloc new template\n");
return -1;
+ }
+
err = asprintf(&clang_opt_new, "-xc %s", llvm_param.clang_opt);
if (err < 0)
return -1;
@@ -93,6 +131,46 @@ int test__llvm(void)
}
err = test__bpf_parsing(obj_buf, obj_buf_sz);
+ if (!err && p_test_llvm__bpf_result) {
+ if (obj_buf_sz > SHARED_BUF_INIT_SIZE) {
+ pr_err("Resulting object too large\n");
+ } else {
+ p_test_llvm__bpf_result->size = obj_buf_sz;
+ memcpy(p_test_llvm__bpf_result->object,
+ obj_buf, obj_buf_sz);
+ }
+ }
free(obj_buf);
return err;
}
+
+void test__llvm_prepare(void)
+{
+ p_test_llvm__bpf_result = mmap(NULL, SHARED_BUF_INIT_SIZE,
+ PROT_READ | PROT_WRITE,
+ MAP_SHARED | MAP_ANONYMOUS, -1, 0);
+ if (!p_test_llvm__bpf_result)
+ return;
+ memset((void *)p_test_llvm__bpf_result, '\0', SHARED_BUF_INIT_SIZE);
+}
+
+void test__llvm_cleanup(void)
+{
+ unsigned long boundary, buf_end;
+
+ if (!p_test_llvm__bpf_result)
+ return;
+ if (p_test_llvm__bpf_result->size == 0) {
+ munmap((void *)p_test_llvm__bpf_result, SHARED_BUF_INIT_SIZE);
+ p_test_llvm__bpf_result = NULL;
+ return;
+ }
+
+ buf_end = (unsigned long)p_test_llvm__bpf_result + SHARED_BUF_INIT_SIZE;
+
+ boundary = (unsigned long)(p_test_llvm__bpf_result);
+ boundary += p_test_llvm__bpf_result->size;
+ boundary = (boundary + (page_size - 1)) &
+ (~((unsigned long)page_size - 1));
+ munmap((void *)boundary, buf_end - boundary);
+}
diff --git a/tools/perf/tests/llvm.h b/tools/perf/tests/llvm.h
new file mode 100644
index 0000000..1e89e46
--- /dev/null
+++ b/tools/perf/tests/llvm.h
@@ -0,0 +1,14 @@
+#ifndef PERF_TEST_LLVM_H
+#define PERF_TEST_LLVM_H
+
+#include <stddef.h> /* for size_t */
+
+struct test_llvm__bpf_result {
+ size_t size;
+ char object[];
+};
+
+extern struct test_llvm__bpf_result *p_test_llvm__bpf_result;
+extern const char test_llvm__bpf_prog[];
+
+#endif
diff --git a/tools/perf/tests/tests.h b/tools/perf/tests/tests.h
index 0b35496..a5e6f641 100644
--- a/tools/perf/tests/tests.h
+++ b/tools/perf/tests/tests.h
@@ -63,6 +63,8 @@ int test__fdarray__add(void);
int test__kmod_path__parse(void);
int test__thread_map(void);
int test__llvm(void);
+void test__llvm_prepare(void);
+void test__llvm_cleanup(void);
int test__insn_x86(void);
int test_session_topology(void);
--
2.1.0
* [PATCH 15/27] perf test: Add 'perf test BPF'
2015-09-06 7:13 [GIT PULL 00/27] perf tools: filtering events using eBPF programs Wang Nan
` (13 preceding siblings ...)
2015-09-06 7:13 ` [PATCH 14/27] perf test: Enforce LLVM test for BPF test Wang Nan
@ 2015-09-06 7:13 ` Wang Nan
2015-09-06 7:13 ` [PATCH 16/27] bpf tools: Load a program with different instances using preprocessor Wang Nan
` (11 subsequent siblings)
26 siblings, 0 replies; 35+ messages in thread
From: Wang Nan @ 2015-09-06 7:13 UTC (permalink / raw)
To: acme, ast, masami.hiramatsu.pt, namhyung
Cc: a.p.zijlstra, brendan.d.gregg, daniel, dsahern, hekuang, jolsa,
lizefan, paulus, wangnan0, xiakaixu, pi3orama, linux-kernel
This patch adds a BPF testcase for testing BPF event filtering.
By utilizing the result of 'perf test LLVM', this patch compiles the
eBPF sample program then tests its ability. The BPF script in 'perf test
LLVM' collects samples from half of the executions of epoll_pwait().
This patch calls epoll_pwait() 111 times, so the result should contain
(111 + 1) / 2 = 56 samples.
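Once applied, the new testcase can be selected by substring from the
test descriptions, for example:

    # perf test BPF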
Signed-off-by: Wang Nan <wangnan0@huawei.com>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Alexei Starovoitov <ast@plumgrid.com>
Cc: Brendan Gregg <brendan.d.gregg@gmail.com>
Cc: Daniel Borkmann <daniel@iogearbox.net>
Cc: David Ahern <dsahern@gmail.com>
Cc: He Kuang <hekuang@huawei.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Kaixu Xia <xiakaixu@huawei.com>
Cc: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Zefan Li <lizefan@huawei.com>
Cc: pi3orama@163.com
Link: http://lkml.kernel.org/n/1440151770-129878-16-git-send-email-wangnan0@huawei.com
[wangnan: skip BPF test if not run by root]
---
tools/perf/tests/Build | 1 +
tools/perf/tests/bpf.c | 173 ++++++++++++++++++++++++++++++++++++++++
tools/perf/tests/builtin-test.c | 4 +
tools/perf/tests/llvm.c | 19 +++++
tools/perf/tests/llvm.h | 1 +
tools/perf/tests/tests.h | 1 +
tools/perf/util/bpf-loader.c | 14 ++++
tools/perf/util/bpf-loader.h | 8 ++
8 files changed, 221 insertions(+)
create mode 100644 tools/perf/tests/bpf.c
diff --git a/tools/perf/tests/Build b/tools/perf/tests/Build
index e80f787..5cfb420 100644
--- a/tools/perf/tests/Build
+++ b/tools/perf/tests/Build
@@ -33,6 +33,7 @@ perf-y += parse-no-sample-id-all.o
perf-y += kmod-path.o
perf-y += thread-map.o
perf-y += llvm.o llvm-src.o
+perf-y += bpf.o
perf-y += topology.o
$(OUTPUT)tests/llvm-src.c: tests/bpf-script-example.c
diff --git a/tools/perf/tests/bpf.c b/tools/perf/tests/bpf.c
new file mode 100644
index 0000000..e256c12
--- /dev/null
+++ b/tools/perf/tests/bpf.c
@@ -0,0 +1,173 @@
+#include <stdio.h>
+#include <sys/epoll.h>
+#include <util/bpf-loader.h>
+#include <util/evlist.h>
+#include "tests.h"
+#include "llvm.h"
+#include "debug.h"
+#define NR_ITERS 111
+
+#ifdef HAVE_LIBBPF_SUPPORT
+
+static int epoll_pwait_loop(void)
+{
+ int i;
+
+ /* Should fail NR_ITERS times */
+ for (i = 0; i < NR_ITERS; i++)
+ epoll_pwait(-(i + 1), NULL, 0, 0, NULL);
+ return 0;
+}
+
+static int prepare_bpf(void *obj_buf, size_t obj_buf_sz)
+{
+ int err;
+ char errbuf[BUFSIZ];
+
+ err = bpf__prepare_load_buffer(obj_buf, obj_buf_sz, NULL);
+ if (err) {
+ bpf__strerror_prepare_load("[buffer]", false, err, errbuf,
+ sizeof(errbuf));
+ fprintf(stderr, " (%s)", errbuf);
+ return TEST_FAIL;
+ }
+
+ err = bpf__probe();
+ if (err) {
+ bpf__strerror_load(err, errbuf, sizeof(errbuf));
+ fprintf(stderr, " (%s)", errbuf);
+ return TEST_FAIL;
+ }
+
+ err = bpf__load();
+ if (err) {
+ bpf__strerror_load(err, errbuf, sizeof(errbuf));
+ fprintf(stderr, " (%s)", errbuf);
+ return TEST_FAIL;
+ }
+
+ return 0;
+}
+
+static int do_test(void)
+{
+ struct record_opts opts = {
+ .target = {
+ .uid = UINT_MAX,
+ .uses_mmap = true,
+ },
+ .freq = 0,
+ .mmap_pages = 256,
+ .default_interval = 1,
+ };
+
+ int err, i, count = 0;
+ char pid[16];
+ char sbuf[STRERR_BUFSIZE];
+ struct perf_evlist *evlist;
+
+ snprintf(pid, sizeof(pid), "%d", getpid());
+ pid[sizeof(pid) - 1] = '\0';
+ opts.target.tid = opts.target.pid = pid;
+
+ /* Instead of perf_evlist__new_default, don't add default events */
+ evlist = perf_evlist__new();
+ if (!evlist) {
+ pr_debug("No ehough memory to create evlist\n");
+ return -ENOMEM;
+ }
+
+ err = perf_evlist__create_maps(evlist, &opts.target);
+ if (err < 0) {
+ pr_debug("Not enough memory to create thread/cpu maps\n");
+ goto out_delete_evlist;
+ }
+
+ err = perf_evlist__add_bpf(evlist);
+ if (err) {
+ fprintf(stderr, " (Failed to add events selected by BPF)");
+ goto out_delete_evlist;
+ }
+
+ perf_evlist__config(evlist, &opts);
+
+ err = perf_evlist__open(evlist);
+ if (err < 0) {
+ pr_debug("perf_evlist__open: %s\n",
+ strerror_r(errno, sbuf, sizeof(sbuf)));
+ goto out_delete_evlist;
+ }
+
+ err = perf_evlist__mmap(evlist, opts.mmap_pages, false);
+ if (err < 0) {
+ pr_debug("perf_evlist__mmap: %s\n",
+ strerror_r(errno, sbuf, sizeof(sbuf)));
+ goto out_delete_evlist;
+ }
+
+ perf_evlist__enable(evlist);
+ epoll_pwait_loop();
+ perf_evlist__disable(evlist);
+
+ for (i = 0; i < evlist->nr_mmaps; i++) {
+ union perf_event *event;
+
+ while ((event = perf_evlist__mmap_read(evlist, i)) != NULL) {
+ const u32 type = event->header.type;
+
+ if (type == PERF_RECORD_SAMPLE)
+ count ++;
+ }
+ }
+
+ if (count != (NR_ITERS + 1) / 2) {
+ fprintf(stderr, " (filter result incorrect)");
+ err = -EBADF;
+ }
+
+out_delete_evlist:
+ perf_evlist__delete(evlist);
+ if (err)
+ return TEST_FAIL;
+ return 0;
+}
+
+int test__bpf(void)
+{
+ int err;
+ void *obj_buf;
+ size_t obj_buf_sz;
+
+ if (geteuid() != 0) {
+ fprintf(stderr, " (try run as root)");
+ return TEST_SKIP;
+ }
+
+ test_llvm__fetch_bpf_obj(&obj_buf, &obj_buf_sz);
+ if (!obj_buf || !obj_buf_sz) {
+ if (verbose == 0)
+ fprintf(stderr, " (fix 'perf test LLVM' first)");
+ return TEST_SKIP;
+ }
+
+ err = prepare_bpf(obj_buf, obj_buf_sz);
+ if (err)
+ goto out;
+
+ err = do_test();
+ if (err)
+ goto out;
+out:
+ bpf__unprobe();
+ bpf__clear();
+ if (err)
+ return TEST_FAIL;
+ return 0;
+}
+
+#else
+int test__bpf(void)
+{
+ return TEST_SKIP;
+}
+#endif
diff --git a/tools/perf/tests/builtin-test.c b/tools/perf/tests/builtin-test.c
index ba3cf10..80cee5c 100644
--- a/tools/perf/tests/builtin-test.c
+++ b/tools/perf/tests/builtin-test.c
@@ -195,6 +195,10 @@ static struct test {
.func = test_session_topology,
},
{
+ .desc = "Test BPF filter",
+ .func = test__bpf,
+ },
+ {
.func = NULL,
},
};
diff --git a/tools/perf/tests/llvm.c b/tools/perf/tests/llvm.c
index 236bf39..fd5fdb0 100644
--- a/tools/perf/tests/llvm.c
+++ b/tools/perf/tests/llvm.c
@@ -174,3 +174,22 @@ void test__llvm_cleanup(void)
(~((unsigned long)page_size - 1));
munmap((void *)boundary, buf_end - boundary);
}
+
+void
+test_llvm__fetch_bpf_obj(void **p_obj_buf, size_t *p_obj_buf_sz)
+{
+ *p_obj_buf = NULL;
+ *p_obj_buf_sz = 0;
+
+ if (!p_test_llvm__bpf_result) {
+ test__llvm_prepare();
+ test__llvm();
+ test__llvm_cleanup();
+ }
+
+ if (!p_test_llvm__bpf_result)
+ return;
+
+ *p_obj_buf = p_test_llvm__bpf_result->object;
+ *p_obj_buf_sz = p_test_llvm__bpf_result->size;
+}
diff --git a/tools/perf/tests/llvm.h b/tools/perf/tests/llvm.h
index 1e89e46..2fd7ed6 100644
--- a/tools/perf/tests/llvm.h
+++ b/tools/perf/tests/llvm.h
@@ -10,5 +10,6 @@ struct test_llvm__bpf_result {
extern struct test_llvm__bpf_result *p_test_llvm__bpf_result;
extern const char test_llvm__bpf_prog[];
+void test_llvm__fetch_bpf_obj(void **p_obj_buf, size_t *p_obj_buf_sz);
#endif
diff --git a/tools/perf/tests/tests.h b/tools/perf/tests/tests.h
index a5e6f641..b413f08 100644
--- a/tools/perf/tests/tests.h
+++ b/tools/perf/tests/tests.h
@@ -65,6 +65,7 @@ int test__thread_map(void);
int test__llvm(void);
void test__llvm_prepare(void);
void test__llvm_cleanup(void);
+int test__bpf(void);
int test__insn_x86(void);
int test_session_topology(void);
diff --git a/tools/perf/util/bpf-loader.c b/tools/perf/util/bpf-loader.c
index 7229f8e..1994552 100644
--- a/tools/perf/util/bpf-loader.c
+++ b/tools/perf/util/bpf-loader.c
@@ -165,6 +165,20 @@ sync_bpf_program_pev(struct bpf_program *prog)
return 0;
}
+int bpf__prepare_load_buffer(void *obj_buf, size_t obj_buf_sz,
+ const char *name)
+{
+ struct bpf_object *obj;
+
+ obj = bpf_object__open_buffer(obj_buf, obj_buf_sz, name);
+ if (!obj) {
+ pr_debug("bpf: failed to load buffer\n");
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
int bpf__prepare_load(const char *filename, bool source)
{
struct bpf_object *obj;
diff --git a/tools/perf/util/bpf-loader.h b/tools/perf/util/bpf-loader.h
index 9a2358c..d91d015 100644
--- a/tools/perf/util/bpf-loader.h
+++ b/tools/perf/util/bpf-loader.h
@@ -18,6 +18,8 @@ typedef int (*bpf_prog_iter_callback_t)(struct probe_trace_event *tev,
#ifdef HAVE_LIBBPF_SUPPORT
int bpf__prepare_load(const char *filename, bool source);
+int bpf__prepare_load_buffer(void *obj_buf, size_t obj_buf_sz,
+ const char *name);
int bpf__strerror_prepare_load(const char *filename, bool source,
int err, char *buf, size_t size);
int bpf__probe(void);
@@ -38,6 +40,12 @@ static inline int bpf__prepare_load(const char *filename __maybe_unused,
return -1;
}
+static inline int bpf__prepare_load_buffer(void *obj_buf __maybe_unused,
+ size_t obj_buf_sz __maybe_unused)
+{
+ return bpf__prepare_load(NULL, false);
+}
+
static inline int bpf__probe(void) { return 0; }
static inline int bpf__unprobe(void) { return 0; }
static inline int bpf__load(void) { return 0; }
--
2.1.0
* [PATCH 16/27] bpf tools: Load a program with different instances using preprocessor
2015-09-06 7:13 [GIT PULL 00/27] perf tools: filtering events using eBPF programs Wang Nan
` (14 preceding siblings ...)
2015-09-06 7:13 ` [PATCH 15/27] perf test: Add 'perf test BPF' Wang Nan
@ 2015-09-06 7:13 ` Wang Nan
2015-09-06 7:13 ` [PATCH 17/27] perf probe: Reset args and nargs for probe_trace_event when failure Wang Nan
` (10 subsequent siblings)
26 siblings, 0 replies; 35+ messages in thread
From: Wang Nan @ 2015-09-06 7:13 UTC (permalink / raw)
To: acme, ast, masami.hiramatsu.pt, namhyung
Cc: a.p.zijlstra, brendan.d.gregg, daniel, dsahern, hekuang, jolsa,
lizefan, paulus, wangnan0, xiakaixu, pi3orama, linux-kernel
In this patch, a caller of libbpf is able to control the loaded programs
by installing a preprocessor callback for a BPF program. With a
preprocessor, different instances can be created from one BPF program.
This will be used by perf to generate a different prologue for each
'struct probe_trace_event' instance matched by one
'struct perf_probe_event'.
bpf_program__set_prep() is added to support this feature. The caller
should pass libbpf the number of instances to be created and a
preprocessor function which will be called when doing the real loading.
The callback should return an instruction array for each instance.
The fd field in bpf_program is replaced by 'instance', which has an nr
field and an fds array. bpf_program__nth_fd() is introduced to read the
fd of a given instance. The old interface bpf_program__fd() is
reimplemented to return the first fd.
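A hedged sketch of the intended usage (dummy_prep() and nr_instances
are illustrative; bpf_program__set_prep(), bpf_program__nth_fd() and
struct bpf_prog_prep_result come from this patch):

    /* trivial preprocessor: every instance reuses the original insns */
    static int dummy_prep(struct bpf_program *prog, int n,
                          struct bpf_insn *insns, int insns_cnt,
                          struct bpf_prog_prep_result *res)
    {
            (void)prog; (void)n; /* a real one would vary insns by n */
            res->new_insn_ptr = insns;
            res->new_insn_cnt = insns_cnt;
            res->pfd = NULL;     /* no need to get the fd reported back */
            return 0;
    }

    ...
            err = bpf_program__set_prep(prog, nr_instances, dummy_prep);
            /* ... after bpf_object__load(), fetch the fd of instance n */
            fd = bpf_program__nth_fd(prog, n);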
Signed-off-by: Wang Nan <wangnan0@huawei.com>
Signed-off-by: He Kuang <hekuang@huawei.com>
Cc: Alexei Starovoitov <ast@plumgrid.com>
Cc: Brendan Gregg <brendan.d.gregg@gmail.com>
Cc: Daniel Borkmann <daniel@iogearbox.net>
Cc: David Ahern <dsahern@gmail.com>
Cc: He Kuang <hekuang@huawei.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Kaixu Xia <xiakaixu@huawei.com>
Cc: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Zefan Li <lizefan@huawei.com>
Cc: pi3orama@163.com
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Link: http://lkml.kernel.org/n/1436445342-1402-29-git-send-email-wangnan0@huawei.com
[wangnan: Add missing '!',
allows bpf_program__unload() when prog->instance.nr == -1
]
---
tools/lib/bpf/libbpf.c | 143 +++++++++++++++++++++++++++++++++++++++++++++----
tools/lib/bpf/libbpf.h | 22 ++++++++
2 files changed, 156 insertions(+), 9 deletions(-)
diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
index 4252fc2..6a07b26 100644
--- a/tools/lib/bpf/libbpf.c
+++ b/tools/lib/bpf/libbpf.c
@@ -98,7 +98,11 @@ struct bpf_program {
} *reloc_desc;
int nr_reloc;
- int fd;
+ struct {
+ int nr;
+ int *fds;
+ } instance;
+ bpf_program_prep_t preprocessor;
struct bpf_object *obj;
void *priv;
@@ -152,10 +156,24 @@ struct bpf_object {
static void bpf_program__unload(struct bpf_program *prog)
{
+ int i;
+
if (!prog)
return;
- zclose(prog->fd);
+ /*
+ * If the object is opened but the program is never loaded,
+ * it is possible that prog->instance.nr == -1.
+ */
+ if (prog->instance.nr > 0) {
+ for (i = 0; i < prog->instance.nr; i++)
+ zclose(prog->instance.fds[i]);
+ } else if (prog->instance.nr != -1)
+ pr_warning("Internal error: instance.nr is %d\n",
+ prog->instance.nr);
+
+ prog->instance.nr = -1;
+ zfree(&prog->instance.fds);
}
static void bpf_program__exit(struct bpf_program *prog)
@@ -206,7 +224,8 @@ bpf_program__init(void *data, size_t size, char *name, int idx,
memcpy(prog->insns, data,
prog->insns_cnt * sizeof(struct bpf_insn));
prog->idx = idx;
- prog->fd = -1;
+ prog->instance.fds = NULL;
+ prog->instance.nr = -1;
return 0;
errout:
@@ -795,13 +814,71 @@ static int
bpf_program__load(struct bpf_program *prog,
char *license, u32 kern_version)
{
- int err, fd;
+ int err = 0, fd, i;
+
+ if (prog->instance.nr < 0 || !prog->instance.fds) {
+ if (prog->preprocessor) {
+ pr_warning("Internal error: can't load program '%s'\n",
+ prog->section_name);
+ return -EINVAL;
+ }
+
+ prog->instance.fds = malloc(sizeof(int));
+ if (!prog->instance.fds) {
+ pr_warning("No enough memory for fds\n");
+ return -ENOMEM;
+ }
+ prog->instance.nr = 1;
+ prog->instance.fds[0] = -1;
+ }
+
+ if (!prog->preprocessor) {
+ if (prog->instance.nr != 1)
+ pr_warning("Program '%s' inconsistent: nr(%d) not 1\n",
+ prog->section_name, prog->instance.nr);
- err = load_program(prog->insns, prog->insns_cnt,
- license, kern_version, &fd);
- if (!err)
- prog->fd = fd;
+ err = load_program(prog->insns, prog->insns_cnt,
+ license, kern_version, &fd);
+ if (!err)
+ prog->instance.fds[0] = fd;
+ goto out;
+ }
+
+ for (i = 0; i < prog->instance.nr; i++) {
+ struct bpf_prog_prep_result result;
+ bpf_program_prep_t preprocessor = prog->preprocessor;
+
+ bzero(&result, sizeof(result));
+ err = preprocessor(prog, i, prog->insns,
+ prog->insns_cnt, &result);
+ if (err) {
+ pr_warning("Preprocessing %dth instance of program '%s' failed\n",
+ i, prog->section_name);
+ goto out;
+ }
+
+ if (!result.new_insn_ptr || !result.new_insn_cnt) {
+ pr_debug("Skip loading %dth instance of program '%s'\n",
+ i, prog->section_name);
+ prog->instance.fds[i] = -1;
+ continue;
+ }
+
+ err = load_program(result.new_insn_ptr,
+ result.new_insn_cnt,
+ license, kern_version, &fd);
+
+ if (err) {
+ pr_warning("Loading %dth instance of program '%s' failed\n",
+ i, prog->section_name);
+ goto out;
+ }
+ if (result.pfd)
+ *result.pfd = fd;
+ prog->instance.fds[i] = fd;
+ }
+out:
if (err)
pr_warning("failed to load program '%s'\n",
prog->section_name);
@@ -1052,5 +1129,53 @@ const char *bpf_program__title(struct bpf_program *prog, bool dup)
int bpf_program__fd(struct bpf_program *prog)
{
- return prog->fd;
+ return bpf_program__nth_fd(prog, 0);
+}
+
+int bpf_program__set_prep(struct bpf_program *prog, int nr_instance,
+ bpf_program_prep_t prep)
+{
+ int *instance_fds;
+
+ if (nr_instance <= 0 || !prep)
+ return -EINVAL;
+
+ if (prog->instance.nr > 0 || prog->instance.fds) {
+ pr_warning("Can't set pre-processor after loading\n");
+ return -EINVAL;
+ }
+
+ instance_fds = malloc(sizeof(int) * nr_instance);
+ if (!instance_fds) {
+ pr_warning("alloc memory failed for instance of fds\n");
+ return -ENOMEM;
+ }
+
+ /* fill all fd with -1 */
+ memset(instance_fds, 0xff, sizeof(int) * nr_instance);
+
+ prog->instance.nr = nr_instance;
+ prog->instance.fds = instance_fds;
+ prog->preprocessor = prep;
+ return 0;
+}
+
+int bpf_program__nth_fd(struct bpf_program *prog, int n)
+{
+ int fd;
+
+ if (n >= prog->instance.nr || n < 0) {
+ pr_warning("Can't get the %dth fd from program %s: only %d instances\n",
+ n, prog->section_name, prog->instance.nr);
+ return -EINVAL;
+ }
+
+ fd = prog->instance.fds[n];
+ if (fd < 0) {
+ pr_warning("%dth instance of program '%s' is invalid\n",
+ n, prog->section_name);
+ return -ENOENT;
+ }
+
+ return fd;
}
diff --git a/tools/lib/bpf/libbpf.h b/tools/lib/bpf/libbpf.h
index f16170c..d82b89e 100644
--- a/tools/lib/bpf/libbpf.h
+++ b/tools/lib/bpf/libbpf.h
@@ -67,6 +67,28 @@ const char *bpf_program__title(struct bpf_program *prog, bool dup);
int bpf_program__fd(struct bpf_program *prog);
+struct bpf_insn;
+struct bpf_prog_prep_result {
+ /*
+ * If not NULL, load new instruction array.
+ * If set to NULL, don't load this instance.
+ */
+ struct bpf_insn *new_insn_ptr;
+ int new_insn_cnt;
+
+ /* If not NULL, result fd is set to it */
+ int *pfd;
+};
+
+typedef int (*bpf_program_prep_t)(struct bpf_program *, int n,
+ struct bpf_insn *, int insn_cnt,
+ struct bpf_prog_prep_result *res);
+
+int bpf_program__set_prep(struct bpf_program *prog, int nr_instance,
+ bpf_program_prep_t prep);
+
+int bpf_program__nth_fd(struct bpf_program *prog, int n);
+
/*
* We don't need __attribute__((packed)) now since it is
* unnecessary for 'bpf_map_def' because they are all aligned.
--
2.1.0
^ permalink raw reply related [flat|nested] 35+ messages in thread
* [PATCH 17/27] perf probe: Reset args and nargs for probe_trace_event when failure
2015-09-06 7:13 [GIT PULL 00/27] perf tools: filtering events using eBPF programs Wang Nan
` (15 preceding siblings ...)
2015-09-06 7:13 ` [PATCH 16/27] bpf tools: Load a program with different instances using preprocessor Wang Nan
@ 2015-09-06 7:13 ` Wang Nan
2015-09-06 7:13 ` [PATCH 18/27] perf tools: Add BPF_PROLOGUE config options for further patches Wang Nan
` (9 subsequent siblings)
26 siblings, 0 replies; 35+ messages in thread
From: Wang Nan @ 2015-09-06 7:13 UTC (permalink / raw)
To: acme, ast, masami.hiramatsu.pt, namhyung
Cc: a.p.zijlstra, brendan.d.gregg, daniel, dsahern, hekuang, jolsa,
lizefan, paulus, wangnan0, xiakaixu, pi3orama, linux-kernel
When a failure occurs in add_probe_trace_event(), the args array in
probe_trace_event is left incomplete. Since the information in it may be
used later, this patch frees the allocated memory and sets the pointer
to NULL to avoid a dangling pointer.
Signed-off-by: Wang Nan <wangnan0@huawei.com>
Cc: Alexei Starovoitov <ast@plumgrid.com>
Cc: Brendan Gregg <brendan.d.gregg@gmail.com>
Cc: Daniel Borkmann <daniel@iogearbox.net>
Cc: David Ahern <dsahern@gmail.com>
Cc: He Kuang <hekuang@huawei.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Kaixu Xia <xiakaixu@huawei.com>
Cc: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Zefan Li <lizefan@huawei.com>
Cc: pi3orama@163.com
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Link: http://lkml.kernel.org/n/1436445342-1402-31-git-send-email-wangnan0@huawei.com
---
tools/perf/util/probe-finder.c | 4 ++++
1 file changed, 4 insertions(+)
diff --git a/tools/perf/util/probe-finder.c b/tools/perf/util/probe-finder.c
index 29c43c068..5ab9cd6 100644
--- a/tools/perf/util/probe-finder.c
+++ b/tools/perf/util/probe-finder.c
@@ -1228,6 +1228,10 @@ static int add_probe_trace_event(Dwarf_Die *sc_die, struct probe_finder *pf)
end:
free(args);
+ if (ret) {
+ tev->nargs = 0;
+ zfree(&tev->args);
+ }
return ret;
}
--
2.1.0
^ permalink raw reply related [flat|nested] 35+ messages in thread
* [PATCH 18/27] perf tools: Add BPF_PROLOGUE config options for further patches
2015-09-06 7:13 [GIT PULL 00/27] perf tools: filtering events using eBPF programs Wang Nan
` (16 preceding siblings ...)
2015-09-06 7:13 ` [PATCH 17/27] perf probe: Reset args and nargs for probe_trace_event when failure Wang Nan
@ 2015-09-06 7:13 ` Wang Nan
2015-09-16 7:30 ` [tip:perf/core] perf tools: regs_query_register_offset() infrastructure tip-bot for Wang Nan
2015-09-06 7:13 ` [PATCH 19/27] perf tools: Introduce regs_query_register_offset() for x86 Wang Nan
` (8 subsequent siblings)
26 siblings, 1 reply; 35+ messages in thread
From: Wang Nan @ 2015-09-06 7:13 UTC (permalink / raw)
To: acme, ast, masami.hiramatsu.pt, namhyung
Cc: a.p.zijlstra, brendan.d.gregg, daniel, dsahern, hekuang, jolsa,
lizefan, paulus, wangnan0, xiakaixu, pi3orama, linux-kernel
If both LIBBPF and DWARF are detected, it is possible to create
prologues for eBPF programs to help them access kernel data.
HAVE_BPF_PROLOGUE and CONFIG_BPF_PROLOGUE are added as flags for this
feature.
PERF_HAVE_ARCH_REGS_QUERY_REGISTER_OFFSET indicates that an architecture
supports converting the name of a register to its offset in
'struct pt_regs'. Without this support, BPF_PROLOGUE is turned off.
HAVE_ARCH_REGS_QUERY_REGISTER_OFFSET is introduced as the CFLAGS define
corresponding to PERF_HAVE_ARCH_REGS_QUERY_REGISTER_OFFSET.
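As an illustration (hypothetical architecture 'foo', assuming it already
implements regs_query_register_offset() in its dwarf-regs.c), the opt-in
in tools/perf/arch/foo/Makefile would look like:
ifndef NO_DWARF
PERF_HAVE_DWARF_REGS := 1
endif
PERF_HAVE_ARCH_REGS_QUERY_REGISTER_OFFSET := 1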
Signed-off-by: Wang Nan <wangnan0@huawei.com>
Cc: Alexei Starovoitov <ast@plumgrid.com>
Cc: Brendan Gregg <brendan.d.gregg@gmail.com>
Cc: Daniel Borkmann <daniel@iogearbox.net>
Cc: David Ahern <dsahern@gmail.com>
Cc: He Kuang <hekuang@huawei.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Kaixu Xia <xiakaixu@huawei.com>
Cc: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Zefan Li <lizefan@huawei.com>
Cc: pi3orama@163.com
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Link: http://lkml.kernel.org/n/1441090789-108382-2-git-send-email-wangnan0@huawei.com
[wangnan:
- Simplify Makefile based on Namhyung Kim's suggestion.
]
---
tools/perf/config/Makefile | 17 +++++++++++++++++
tools/perf/util/include/dwarf-regs.h | 8 ++++++++
2 files changed, 25 insertions(+)
diff --git a/tools/perf/config/Makefile b/tools/perf/config/Makefile
index 38a4144..25c9ffc 100644
--- a/tools/perf/config/Makefile
+++ b/tools/perf/config/Makefile
@@ -110,6 +110,11 @@ FEATURE_CHECK_CFLAGS-bpf = -I. -I$(srctree)/tools/include -I$(srctree)/arch/$(AR
# include ARCH specific config
-include $(src-perf)/arch/$(ARCH)/Makefile
+ifdef PERF_HAVE_ARCH_REGS_QUERY_REGISTER_OFFSET
+ CFLAGS += -DHAVE_ARCH_REGS_QUERY_REGISTER_OFFSET
+endif
+
+
include $(src-perf)/config/utilities.mak
ifeq ($(call get-executable,$(FLEX)),)
@@ -314,6 +319,18 @@ ifndef NO_LIBELF
CFLAGS += -DHAVE_LIBBPF_SUPPORT
$(call detected,CONFIG_LIBBPF)
endif
+
+ ifndef NO_DWARF
+ ifdef PERF_HAVE_ARCH_REGS_QUERY_REGISTER_OFFSET
+ CFLAGS += -DHAVE_BPF_PROLOGUE
+ $(call detected,CONFIG_BPF_PROLOGUE)
+ else
+ msg := $(warning BPF prologue is not supported by architecture $(ARCH), missing regs_query_register_offset());
+ endif
+ else
+ msg := $(warning DWARF support is off, BPF prologue is disabled);
+ endif
+
endif # NO_LIBBPF
endif # NO_LIBELF
diff --git a/tools/perf/util/include/dwarf-regs.h b/tools/perf/util/include/dwarf-regs.h
index 8f14965..07c644e 100644
--- a/tools/perf/util/include/dwarf-regs.h
+++ b/tools/perf/util/include/dwarf-regs.h
@@ -5,4 +5,12 @@
const char *get_arch_regstr(unsigned int n);
#endif
+#ifdef HAVE_ARCH_REGS_QUERY_REGISTER_OFFSET
+/*
+ * Arch should support fetching the offset of a register in pt_regs
+ * by its name. See kernel's regs_query_register_offset in
+ * arch/xxx/kernel/ptrace.c.
+ */
+int regs_query_register_offset(const char *name);
+#endif
#endif
--
2.1.0
^ permalink raw reply related [flat|nested] 35+ messages in thread
* [PATCH 19/27] perf tools: Introduce regs_query_register_offset() for x86
2015-09-06 7:13 [GIT PULL 00/27] perf tools: filtering events using eBPF programs Wang Nan
` (17 preceding siblings ...)
2015-09-06 7:13 ` [PATCH 18/27] perf tools: Add BPF_PROLOGUE config options for further patches Wang Nan
@ 2015-09-06 7:13 ` Wang Nan
2015-09-14 21:37 ` Arnaldo Carvalho de Melo
2015-09-16 7:30 ` [tip:perf/core] perf tools: Introduce regs_query_register_offset( ) " tip-bot for Wang Nan
2015-09-06 7:13 ` [PATCH 20/27] perf tools: Add prologue for BPF programs for fetching arguments Wang Nan
` (7 subsequent siblings)
26 siblings, 2 replies; 35+ messages in thread
From: Wang Nan @ 2015-09-06 7:13 UTC (permalink / raw)
To: acme, ast, masami.hiramatsu.pt, namhyung
Cc: a.p.zijlstra, brendan.d.gregg, daniel, dsahern, hekuang, jolsa,
lizefan, paulus, wangnan0, xiakaixu, pi3orama, linux-kernel
regs_query_register_offset() is a helper function which converts a
register name like "%rax" to its offset in 'struct pt_regs', which is
required by the BPF prologue generator. Since the function is
essentially identical to the kernel's, reuse the code from
arch/x86/kernel/ptrace.c.
A comment inside dwarf-regs.c lists the differences between this
implementation and the kernel code.
get_arch_regstr() switches to regoffset_table and the old string table
is dropped.
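A hedged usage sketch (x86_64 build; "%di" is one of the names in the
table below, and unknown names return -EINVAL):
	int off = regs_query_register_offset("%di");	/* needs <dwarf-regs.h> */
	/* off == offsetof(struct pt_regs, rdi) here */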
Signed-off-by: Wang Nan <wangnan0@huawei.com>
Signed-off-by: He Kuang <hekuang@huawei.com>
Acked-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
Cc: Alexei Starovoitov <ast@plumgrid.com>
Cc: Brendan Gregg <brendan.d.gregg@gmail.com>
Cc: Daniel Borkmann <daniel@iogearbox.net>
Cc: David Ahern <dsahern@gmail.com>
Cc: He Kuang <hekuang@huawei.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Kaixu Xia <xiakaixu@huawei.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Zefan Li <lizefan@huawei.com>
Cc: pi3orama@163.com
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Link: http://lkml.kernel.org/n/1441090789-108382-2-git-send-email-wangnan0@huawei.com
---
tools/perf/arch/x86/Makefile | 1 +
tools/perf/arch/x86/util/Build | 1 +
tools/perf/arch/x86/util/dwarf-regs.c | 122 ++++++++++++++++++++++++----------
3 files changed, 90 insertions(+), 34 deletions(-)
diff --git a/tools/perf/arch/x86/Makefile b/tools/perf/arch/x86/Makefile
index 21322e0..09ba923 100644
--- a/tools/perf/arch/x86/Makefile
+++ b/tools/perf/arch/x86/Makefile
@@ -2,3 +2,4 @@ ifndef NO_DWARF
PERF_HAVE_DWARF_REGS := 1
endif
HAVE_KVM_STAT_SUPPORT := 1
+PERF_HAVE_ARCH_REGS_QUERY_REGISTER_OFFSET := 1
diff --git a/tools/perf/arch/x86/util/Build b/tools/perf/arch/x86/util/Build
index ff63649..4659703 100644
--- a/tools/perf/arch/x86/util/Build
+++ b/tools/perf/arch/x86/util/Build
@@ -5,6 +5,7 @@ libperf-y += kvm-stat.o
libperf-y += perf_regs.o
libperf-$(CONFIG_DWARF) += dwarf-regs.o
+libperf-$(CONFIG_BPF_PROLOGUE) += dwarf-regs.o
libperf-$(CONFIG_LIBUNWIND) += unwind-libunwind.o
libperf-$(CONFIG_LIBDW_DWARF_UNWIND) += unwind-libdw.o
diff --git a/tools/perf/arch/x86/util/dwarf-regs.c b/tools/perf/arch/x86/util/dwarf-regs.c
index a08de0a..9223c16 100644
--- a/tools/perf/arch/x86/util/dwarf-regs.c
+++ b/tools/perf/arch/x86/util/dwarf-regs.c
@@ -21,55 +21,109 @@
*/
#include <stddef.h>
+#include <errno.h> /* for EINVAL */
+#include <string.h> /* for strcmp */
+#include <linux/ptrace.h> /* for struct pt_regs */
+#include <linux/kernel.h> /* for offsetof */
#include <dwarf-regs.h>
/*
- * Generic dwarf analysis helpers
+ * See arch/x86/kernel/ptrace.c.
+ * Different from it:
+ *
+ * - Since struct pt_regs is defined differently for user and kernel,
+ * but we want to use 'ax, bx' instead of 'rax, rbx' (which is struct
+ * field name of user's pt_regs), we make REG_OFFSET_NAME to accept
+ * both string name and reg field name.
+ *
+ * - Since accessing x86_32's pt_regs from x86_64 building is difficult
+ * and vise versa, we simply fill offset with -1, so
+ * get_arch_regstr() still works but regs_query_register_offset()
+ * returns error.
+ * The only inconvenience caused by it now is that we are not allowed
+ * to generate BPF prologue for a x86_64 kernel if perf is built for
+ * x86_32. This is really a rare usecase.
+ *
+ * - Order is different from kernel's ptrace.c for get_arch_regstr(). Use
+ * the order defined by dwarf.
*/
-#define X86_32_MAX_REGS 8
-const char *x86_32_regs_table[X86_32_MAX_REGS] = {
- "%ax",
- "%cx",
- "%dx",
- "%bx",
- "$stack", /* Stack address instead of %sp */
- "%bp",
- "%si",
- "%di",
+struct pt_regs_offset {
+ const char *name;
+ int offset;
+};
+
+#define REG_OFFSET_END {.name = NULL, .offset = 0}
+
+#ifdef __x86_64__
+# define REG_OFFSET_NAME_64(n, r) {.name = n, .offset = offsetof(struct pt_regs, r)}
+# define REG_OFFSET_NAME_32(n, r) {.name = n, .offset = -1}
+#else
+# define REG_OFFSET_NAME_64(n, r) {.name = n, .offset = -1}
+# define REG_OFFSET_NAME_32(n, r) {.name = n, .offset = offsetof(struct pt_regs, r)}
+#endif
+
+static const struct pt_regs_offset x86_32_regoffset_table[] = {
+ REG_OFFSET_NAME_32("%ax", eax),
+ REG_OFFSET_NAME_32("%cx", ecx),
+ REG_OFFSET_NAME_32("%dx", edx),
+ REG_OFFSET_NAME_32("%bx", ebx),
+ REG_OFFSET_NAME_32("$stack", esp), /* Stack address instead of %sp */
+ REG_OFFSET_NAME_32("%bp", ebp),
+ REG_OFFSET_NAME_32("%si", esi),
+ REG_OFFSET_NAME_32("%di", edi),
+ REG_OFFSET_END,
};
-#define X86_64_MAX_REGS 16
-const char *x86_64_regs_table[X86_64_MAX_REGS] = {
- "%ax",
- "%dx",
- "%cx",
- "%bx",
- "%si",
- "%di",
- "%bp",
- "%sp",
- "%r8",
- "%r9",
- "%r10",
- "%r11",
- "%r12",
- "%r13",
- "%r14",
- "%r15",
+static const struct pt_regs_offset x86_64_regoffset_table[] = {
+ REG_OFFSET_NAME_64("%ax", rax),
+ REG_OFFSET_NAME_64("%dx", rdx),
+ REG_OFFSET_NAME_64("%cx", rcx),
+ REG_OFFSET_NAME_64("%bx", rbx),
+ REG_OFFSET_NAME_64("%si", rsi),
+ REG_OFFSET_NAME_64("%di", rdi),
+ REG_OFFSET_NAME_64("%bp", rbp),
+ REG_OFFSET_NAME_64("%sp", rsp),
+ REG_OFFSET_NAME_64("%r8", r8),
+ REG_OFFSET_NAME_64("%r9", r9),
+ REG_OFFSET_NAME_64("%r10", r10),
+ REG_OFFSET_NAME_64("%r11", r11),
+ REG_OFFSET_NAME_64("%r12", r12),
+ REG_OFFSET_NAME_64("%r13", r13),
+ REG_OFFSET_NAME_64("%r14", r14),
+ REG_OFFSET_NAME_64("%r15", r15),
+ REG_OFFSET_END,
};
/* TODO: switching by dwarf address size */
#ifdef __x86_64__
-#define ARCH_MAX_REGS X86_64_MAX_REGS
-#define arch_regs_table x86_64_regs_table
+#define regoffset_table x86_64_regoffset_table
#else
-#define ARCH_MAX_REGS X86_32_MAX_REGS
-#define arch_regs_table x86_32_regs_table
+#define regoffset_table x86_32_regoffset_table
#endif
+/* Minus 1 for the ending REG_OFFSET_END */
+#define ARCH_MAX_REGS ((sizeof(regoffset_table) / sizeof(regoffset_table[0])) - 1)
+
/* Return architecture dependent register string (for kprobe-tracer) */
const char *get_arch_regstr(unsigned int n)
{
- return (n < ARCH_MAX_REGS) ? arch_regs_table[n] : NULL;
+ return (n < ARCH_MAX_REGS) ? regoffset_table[n].name : NULL;
+}
+
+/* Reuse code from arch/x86/kernel/ptrace.c */
+/**
+ * regs_query_register_offset() - query register offset from its name
+ * @name: the name of a register
+ *
+ * regs_query_register_offset() returns the offset of a register in struct
+ * pt_regs from its name. If the name is invalid, this returns -EINVAL;
+ */
+int regs_query_register_offset(const char *name)
+{
+ const struct pt_regs_offset *roff;
+ for (roff = regoffset_table; roff->name != NULL; roff++)
+ if (!strcmp(roff->name, name))
+ return roff->offset;
+ return -EINVAL;
}
--
2.1.0
^ permalink raw reply related [flat|nested] 35+ messages in thread
* [PATCH 20/27] perf tools: Add prologue for BPF programs for fetching arguments
2015-09-06 7:13 [GIT PULL 00/27] perf tools: filtering events using eBPF programs Wang Nan
` (18 preceding siblings ...)
2015-09-06 7:13 ` [PATCH 19/27] perf tools: Introduce regs_query_register_offset() for x86 Wang Nan
@ 2015-09-06 7:13 ` Wang Nan
2015-09-06 7:13 ` [PATCH 21/27] perf tools: Generate prologue for BPF programs Wang Nan
` (6 subsequent siblings)
26 siblings, 0 replies; 35+ messages in thread
From: Wang Nan @ 2015-09-06 7:13 UTC (permalink / raw)
To: acme, ast, masami.hiramatsu.pt, namhyung
Cc: a.p.zijlstra, brendan.d.gregg, daniel, dsahern, hekuang, jolsa,
lizefan, paulus, wangnan0, xiakaixu, pi3orama, linux-kernel
From: He Kuang <hekuang@huawei.com>
This patch generates a prologue for a BPF program which fetches
arguments for it. With this patch, the program can have arguments as follows:
SEC("lock_page=__lock_page page->flags")
int lock_page(struct pt_regs *ctx, int err, unsigned long flags)
{
return 1;
}
This patch passes at most 3 arguments in r3, r4 and r5. r1 is still
the ctx pointer. r2 is used to indicate whether the dereferencing
succeeded.
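A hedged usage sketch (hypothetical program body, not part of this
patch): assuming, as elsewhere in this series, that a zero return value
filters the sample out, a program can use the second argument to drop
samples whose arguments could not be fetched:
SEC("lock_page=__lock_page page->flags")
int lock_page(struct pt_regs *ctx, int err, unsigned long flags)
{
	if (err)		/* fetching page->flags failed */
		return 0;	/* filter this sample out */
	return flags != 0;	/* keep it only if some flag bit is set */
}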
This patch uses r6 to hold ctx (struct pt_regs) and r7 to hold a stack
pointer for results. The result of each argument is first stored on the stack:
low address
BPF_REG_FP - 24 ARG3
BPF_REG_FP - 16 ARG2
BPF_REG_FP - 8 ARG1
BPF_REG_FP
high address
Then loaded into r3, r4 and r5.
The output prologue for offn(...off2(off1(reg))) should be:
r6 <- r1 // save ctx into a callee saved register
r7 <- fp
r7 <- r7 - stack_offset // pointer to result slot
/* load r3 with the offset in pt_regs of 'reg' */
(r7) <- r3 // make slot valid
r3 <- r3 + off1 // prepare to read unsafe pointer
r2 <- 8
r1 <- r7 // result put onto stack
call probe_read // read unsafe pointer
jnei r0, 0, err // error checking
r3 <- (r7) // read result
r3 <- r3 + off2 // prepare to read unsafe pointer
r2 <- 8
r1 <- r7
call probe_read
jnei r0, 0, err
...
/* load r3, r4, r5 from stack */
goto success
err:
r2 <- 1
/* load r3, r4, r5 with 0 */
goto usercode
success:
r2 <- 0
usercode:
r1 <- r6 // restore ctx
// original user code
If all arguments reside in registers (dereferencing is not
required), gen_prologue_fastpath() will be used to create a
fast prologue:
r3 <- (r1 + offset of reg1)
r4 <- (r1 + offset of reg2)
r5 <- (r1 + offset of reg3)
r2 <- 0
P.S.
eBPF calling convention is defined as:
* r0 - return value from in-kernel function, and exit value
for eBPF program
* r1 - r5 - arguments from eBPF program to in-kernel function
* r6 - r9 - callee saved registers that in-kernel function will
preserve
* r10 - read-only frame pointer to access stack
Signed-off-by: He Kuang <hekuang@huawei.com>
Signed-off-by: Wang Nan <wangnan0@huawei.com>
Cc: Alexei Starovoitov <ast@plumgrid.com>
Cc: Brendan Gregg <brendan.d.gregg@gmail.com>
Cc: Daniel Borkmann <daniel@iogearbox.net>
Cc: David Ahern <dsahern@gmail.com>
Cc: He Kuang <hekuang@huawei.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Kaixu Xia <xiakaixu@huawei.com>
Cc: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Zefan Li <lizefan@huawei.com>
Cc: pi3orama@163.com
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Link: http://lkml.kernel.org/n/1436445342-1402-35-git-send-email-wangnan0@huawei.com
---
tools/perf/util/Build | 1 +
tools/perf/util/bpf-prologue.c | 443 +++++++++++++++++++++++++++++++++++++++++
tools/perf/util/bpf-prologue.h | 34 ++++
3 files changed, 478 insertions(+)
create mode 100644 tools/perf/util/bpf-prologue.c
create mode 100644 tools/perf/util/bpf-prologue.h
diff --git a/tools/perf/util/Build b/tools/perf/util/Build
index f5e9569..edc16e4 100644
--- a/tools/perf/util/Build
+++ b/tools/perf/util/Build
@@ -86,6 +86,7 @@ libperf-y += parse-branch-options.o
libperf-y += parse-regs-options.o
libperf-$(CONFIG_LIBBPF) += bpf-loader.o
+libperf-$(CONFIG_BPF_PROLOGUE) += bpf-prologue.o
libperf-$(CONFIG_LIBELF) += symbol-elf.o
libperf-$(CONFIG_LIBELF) += probe-file.o
libperf-$(CONFIG_LIBELF) += probe-event.o
diff --git a/tools/perf/util/bpf-prologue.c b/tools/perf/util/bpf-prologue.c
new file mode 100644
index 0000000..e4adb18
--- /dev/null
+++ b/tools/perf/util/bpf-prologue.c
@@ -0,0 +1,443 @@
+/*
+ * bpf-prologue.c
+ *
+ * Copyright (C) 2015 He Kuang <hekuang@huawei.com>
+ * Copyright (C) 2015 Wang Nan <wangnan0@huawei.com>
+ * Copyright (C) 2015 Huawei Inc.
+ */
+
+#include <bpf/libbpf.h>
+#include "perf.h"
+#include "debug.h"
+#include "bpf-prologue.h"
+#include "probe-finder.h"
+#include <dwarf-regs.h>
+#include <linux/filter.h>
+
+#define BPF_REG_SIZE 8
+
+#define JMP_TO_ERROR_CODE -1
+#define JMP_TO_SUCCESS_CODE -2
+#define JMP_TO_USER_CODE -3
+
+struct bpf_insn_pos {
+ struct bpf_insn *begin;
+ struct bpf_insn *end;
+ struct bpf_insn *pos;
+};
+
+static inline int
+pos_get_cnt(struct bpf_insn_pos *pos)
+{
+ return pos->pos - pos->begin;
+}
+
+static int
+append_insn(struct bpf_insn new_insn, struct bpf_insn_pos *pos)
+{
+ if (!pos->pos)
+ return -ERANGE;
+
+ if (pos->pos + 1 >= pos->end) {
+ pr_err("bpf prologue: prologue too long\n");
+ pos->pos = NULL;
+ return -ERANGE;
+ }
+
+ *(pos->pos)++ = new_insn;
+ return 0;
+}
+
+static int
+check_pos(struct bpf_insn_pos *pos)
+{
+ if (!pos->pos || pos->pos >= pos->end)
+ return -ERANGE;
+ return 0;
+}
+
+/* Give it a shorter name */
+#define ins(i, p) append_insn((i), (p))
+
+/*
+ * Give a register name (in 'reg'), generate instruction to
+ * load register into an eBPF register rd:
+ * 'ldd target_reg, offset(ctx_reg)', where:
+ * ctx_reg is pre initialized to pointer of 'struct pt_regs'.
+ */
+static int
+gen_ldx_reg_from_ctx(struct bpf_insn_pos *pos, int ctx_reg,
+ const char *reg, int target_reg)
+{
+ int offset = regs_query_register_offset(reg);
+
+ if (offset < 0) {
+ pr_err("bpf: prologue: failed to get register %s\n",
+ reg);
+ return -1;
+ }
+ ins(BPF_LDX_MEM(BPF_DW, target_reg, ctx_reg, offset), pos);
+
+ if (check_pos(pos))
+ return -ERANGE;
+ return 0;
+}
+
+/*
+ * Generate a BPF_FUNC_probe_read function call.
+ *
+ * src_base_addr_reg is a register holding base address,
+ * dst_addr_reg is a register holding dest address (on stack),
+ * result is:
+ *
+ * *[dst_addr_reg] = *([src_base_addr_reg] + offset)
+ *
+ * Arguments of BPF_FUNC_probe_read:
+ * ARG1: ptr to stack (dest)
+ * ARG2: size (8)
+ * ARG3: unsafe ptr (src)
+ */
+static int
+gen_read_mem(struct bpf_insn_pos *pos,
+ int src_base_addr_reg,
+ int dst_addr_reg,
+ long offset)
+{
+ /* mov arg3, src_base_addr_reg */
+ if (src_base_addr_reg != BPF_REG_ARG3)
+ ins(BPF_MOV64_REG(BPF_REG_ARG3, src_base_addr_reg), pos);
+ /* add arg3, #offset */
+ if (offset)
+ ins(BPF_ALU64_IMM(BPF_ADD, BPF_REG_ARG3, offset), pos);
+
+ /* mov arg2, #reg_size */
+ ins(BPF_ALU64_IMM(BPF_MOV, BPF_REG_ARG2, BPF_REG_SIZE), pos);
+
+ /* mov arg1, dst_addr_reg */
+ if (dst_addr_reg != BPF_REG_ARG1)
+ ins(BPF_MOV64_REG(BPF_REG_ARG1, dst_addr_reg), pos);
+
+ /* Call probe_read */
+ ins(BPF_EMIT_CALL(BPF_FUNC_probe_read), pos);
+ /*
+ * Error processing: if read fail, goto error code,
+ * will be relocated. Target should be the start of
+ * error processing code.
+ */
+ ins(BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, JMP_TO_ERROR_CODE),
+ pos);
+
+ if (check_pos(pos))
+ return -ERANGE;
+ return 0;
+}
+
+/*
+ * Each arg should be bare register. Fetch and save them into argument
+ * registers (r3 - r5).
+ *
+ * BPF_REG_1 should have been initialized with pointer to
+ * 'struct pt_regs'.
+ */
+static int
+gen_prologue_fastpath(struct bpf_insn_pos *pos,
+ struct probe_trace_arg *args, int nargs)
+{
+ int i;
+
+ for (i = 0; i < nargs; i++)
+ if (gen_ldx_reg_from_ctx(pos, BPF_REG_1, args[i].value,
+ BPF_PROLOGUE_START_ARG_REG + i))
+ goto errout;
+
+ if (check_pos(pos))
+ goto errout;
+ return 0;
+errout:
+ return -1;
+}
+
+/*
+ * Slow path:
+ * At least one argument has the form of 'offset($rx)'.
+ *
+ * Following code first stores them into stack, then loads all of then
+ * to r2 - r5.
+ * Before final loading, the final result should be:
+ *
+ * low address
+ * BPF_REG_FP - 24 ARG3
+ * BPF_REG_FP - 16 ARG2
+ * BPF_REG_FP - 8 ARG1
+ * BPF_REG_FP
+ * high address
+ *
+ * For each argument (described as: offn(...off2(off1(reg)))),
+ * generates following code:
+ *
+ * r7 <- fp
+ * r7 <- r7 - stack_offset // Ideal code should initialize r7 using
+ * // fp before generating args. However,
+ * // eBPF won't regard r7 as stack pointer
+ * // if it is generated by minus 8 from
+ * // another stack pointer except fp.
+ * // This is why we have to set r7
+ * // to fp for each variable.
+ * r3 <- value of 'reg'-> generated using gen_ldx_reg_from_ctx()
+ * (r7) <- r3 // skip following instructions for bare reg
+ * r3 <- r3 + off1 . // skip if off1 == 0
+ * r2 <- 8 \
+ * r1 <- r7 |-> generated by gen_read_mem()
+ * call probe_read /
+ * jnei r0, 0, err ./
+ * r3 <- (r7)
+ * r3 <- r3 + off2 . // skip if off2 == 0
+ * r2 <- 8 \ // r2 may be broken by probe_read, so set again
+ * r1 <- r7 |-> generated by gen_read_mem()
+ * call probe_read /
+ * jnei r0, 0, err ./
+ * ...
+ */
+static int
+gen_prologue_slowpath(struct bpf_insn_pos *pos,
+ struct probe_trace_arg *args, int nargs)
+{
+ int i;
+
+ for (i = 0; i < nargs; i++) {
+ struct probe_trace_arg *arg = &args[i];
+ const char *reg = arg->value;
+ struct probe_trace_arg_ref *ref = NULL;
+ int stack_offset = (i + 1) * -8;
+
+ pr_debug("prologue: fetch arg %d, base reg is %s\n",
+ i, reg);
+
+ /* value of base register is stored into ARG3 */
+ if (gen_ldx_reg_from_ctx(pos, BPF_REG_CTX, reg,
+ BPF_REG_ARG3)) {
+ pr_err("prologue: failed to get offset of register %s\n",
+ reg);
+ goto errout;
+ }
+
+ /* Make r7 the stack pointer. */
+ ins(BPF_MOV64_REG(BPF_REG_7, BPF_REG_FP), pos);
+ /* r7 += -8 */
+ ins(BPF_ALU64_IMM(BPF_ADD, BPF_REG_7, stack_offset), pos);
+ /*
+ * Store r3 (base register) onto stack
+ * Ensure fp[offset] is set.
+ * fp is the only valid base register when storing
+ * into stack. We are not allowed to use r7 as base
+ * register here.
+ */
+ ins(BPF_STX_MEM(BPF_DW, BPF_REG_FP, BPF_REG_ARG3,
+ stack_offset), pos);
+
+ ref = arg->ref;
+ while (ref) {
+ pr_debug("prologue: arg %d: offset %ld\n",
+ i, ref->offset);
+ if (gen_read_mem(pos, BPF_REG_3, BPF_REG_7,
+ ref->offset)) {
+ pr_err("prologue: failed to generate probe_read function call\n");
+ goto errout;
+ }
+
+ ref = ref->next;
+ /*
+ * Load previous result into ARG3. Use
+ * BPF_REG_FP instead of r7 because verifier
+ * allows FP based addressing only.
+ */
+ if (ref)
+ ins(BPF_LDX_MEM(BPF_DW, BPF_REG_ARG3,
+ BPF_REG_FP, stack_offset), pos);
+ }
+ }
+
+ /* Final pass: read to registers */
+ for (i = 0; i < nargs; i++)
+ ins(BPF_LDX_MEM(BPF_DW, BPF_PROLOGUE_START_ARG_REG + i,
+ BPF_REG_FP, -BPF_REG_SIZE * (i + 1)), pos);
+
+ ins(BPF_JMP_IMM(BPF_JA, BPF_REG_0, 0, JMP_TO_SUCCESS_CODE), pos);
+
+ if (check_pos(pos))
+ goto errout;
+ return 0;
+errout:
+ return -1;
+}
+
+static int
+prologue_relocate(struct bpf_insn_pos *pos, struct bpf_insn *error_code,
+ struct bpf_insn *success_code, struct bpf_insn *user_code)
+{
+ struct bpf_insn *insn;
+
+ if (check_pos(pos))
+ return -ERANGE;
+
+ for (insn = pos->begin; insn < pos->pos; insn++) {
+ u8 class = BPF_CLASS(insn->code);
+ u8 opcode;
+
+ if (class != BPF_JMP)
+ continue;
+ opcode = BPF_OP(insn->code);
+ if (opcode == BPF_CALL)
+ continue;
+
+ switch (insn->off) {
+ case JMP_TO_ERROR_CODE:
+ insn->off = error_code - (insn + 1);
+ break;
+ case JMP_TO_SUCCESS_CODE:
+ insn->off = success_code - (insn + 1);
+ break;
+ case JMP_TO_USER_CODE:
+ insn->off = user_code - (insn + 1);
+ break;
+ default:
+ pr_err("bpf prologue: internal error: relocation failed\n");
+ return -1;
+ }
+ }
+ return 0;
+}
+
+int bpf__gen_prologue(struct probe_trace_arg *args, int nargs,
+ struct bpf_insn *new_prog, size_t *new_cnt,
+ size_t cnt_space)
+{
+ struct bpf_insn *success_code = NULL;
+ struct bpf_insn *error_code = NULL;
+ struct bpf_insn *user_code = NULL;
+ struct bpf_insn_pos pos;
+ bool fastpath = true;
+ int i;
+
+ if (!new_prog || !new_cnt)
+ return -EINVAL;
+
+ pos.begin = new_prog;
+ pos.end = new_prog + cnt_space;
+ pos.pos = new_prog;
+
+ if (!nargs) {
+ ins(BPF_ALU64_IMM(BPF_MOV, BPF_PROLOGUE_FETCH_RESULT_REG, 0),
+ &pos);
+
+ if (check_pos(&pos))
+ goto errout;
+
+ *new_cnt = pos_get_cnt(&pos);
+ return 0;
+ }
+
+ if (nargs > BPF_PROLOGUE_MAX_ARGS)
+ nargs = BPF_PROLOGUE_MAX_ARGS;
+ if (cnt_space > BPF_MAXINSNS)
+ cnt_space = BPF_MAXINSNS;
+
+ /* First pass: validation */
+ for (i = 0; i < nargs; i++) {
+ struct probe_trace_arg_ref *ref = args[i].ref;
+
+ if (args[i].value[0] == '@') {
+ /* TODO: fetch global variable */
+ pr_err("bpf: prologue: global %s%+ld not support\n",
+ args[i].value, ref ? ref->offset : 0);
+ return -ENOTSUP;
+ }
+
+ while (ref) {
+ /* fastpath is true if all args has ref == NULL */
+ fastpath = false;
+
+ /*
+ * Instruction encodes immediate value using
+ * s32, ref->offset is long. On systems which
+ * can't fill long in s32, refuse to process if
+ * ref->offset too large (or small).
+ */
+#ifdef __LP64__
+#define OFFSET_MAX ((1LL << 31) - 1)
+#define OFFSET_MIN ((1LL << 31) * -1)
+ if (ref->offset > OFFSET_MAX ||
+ ref->offset < OFFSET_MIN) {
+ pr_err("bpf: prologue: offset out of bound: %ld\n",
+ ref->offset);
+ return -E2BIG;
+ }
+#endif
+ ref = ref->next;
+ }
+ }
+ pr_debug("prologue: pass validation\n");
+
+ if (fastpath) {
+ /* If all variables are registers... */
+ pr_debug("prologue: fast path\n");
+ if (gen_prologue_fastpath(&pos, args, nargs))
+ goto errout;
+ } else {
+ pr_debug("prologue: slow path\n");
+
+ /* Initialization: move ctx to a callee saved register. */
+ ins(BPF_MOV64_REG(BPF_REG_CTX, BPF_REG_ARG1), &pos);
+
+ if (gen_prologue_slowpath(&pos, args, nargs))
+ goto errout;
+ /*
+ * start of ERROR_CODE (only slow pass needs error code)
+ * mov r2 <- 1
+ * goto usercode
+ */
+ error_code = pos.pos;
+ ins(BPF_ALU64_IMM(BPF_MOV, BPF_PROLOGUE_FETCH_RESULT_REG, 1),
+ &pos);
+
+ for (i = 0; i < nargs; i++)
+ ins(BPF_ALU64_IMM(BPF_MOV,
+ BPF_PROLOGUE_START_ARG_REG + i,
+ 0),
+ &pos);
+ ins(BPF_JMP_IMM(BPF_JA, BPF_REG_0, 0, JMP_TO_USER_CODE),
+ &pos);
+ }
+
+ /*
+ * start of SUCCESS_CODE:
+ * mov r2 <- 0
+ * goto usercode // skip
+ */
+ success_code = pos.pos;
+ ins(BPF_ALU64_IMM(BPF_MOV, BPF_PROLOGUE_FETCH_RESULT_REG, 0), &pos);
+
+ /*
+ * start of USER_CODE:
+ * Restore ctx to r1
+ */
+ user_code = pos.pos;
+ if (!fastpath) {
+ /*
+ * Only slow path needs restoring of ctx. In fast path,
+ * register are loaded directly from r1.
+ */
+ ins(BPF_MOV64_REG(BPF_REG_ARG1, BPF_REG_CTX), &pos);
+ if (prologue_relocate(&pos, error_code, success_code,
+ user_code))
+ goto errout;
+ }
+
+ if (check_pos(&pos))
+ goto errout;
+
+ *new_cnt = pos_get_cnt(&pos);
+ return 0;
+errout:
+ return -ERANGE;
+}
diff --git a/tools/perf/util/bpf-prologue.h b/tools/perf/util/bpf-prologue.h
new file mode 100644
index 0000000..f1e4c5d
--- /dev/null
+++ b/tools/perf/util/bpf-prologue.h
@@ -0,0 +1,34 @@
+/*
+ * Copyright (C) 2015, He Kuang <hekuang@huawei.com>
+ * Copyright (C) 2015, Huawei Inc.
+ */
+#ifndef __BPF_PROLOGUE_H
+#define __BPF_PROLOGUE_H
+
+#include <linux/compiler.h>
+#include <linux/filter.h>
+#include "probe-event.h"
+
+#define BPF_PROLOGUE_MAX_ARGS 3
+#define BPF_PROLOGUE_START_ARG_REG BPF_REG_3
+#define BPF_PROLOGUE_FETCH_RESULT_REG BPF_REG_2
+
+#ifdef HAVE_BPF_PROLOGUE
+int bpf__gen_prologue(struct probe_trace_arg *args, int nargs,
+ struct bpf_insn *new_prog, size_t *new_cnt,
+ size_t cnt_space);
+#else
+static inline int
+bpf__gen_prologue(struct probe_trace_arg *args __maybe_unused,
+ int nargs __maybe_unused,
+ struct bpf_insn *new_prog __maybe_unused,
+ size_t *new_cnt,
+ size_t cnt_space __maybe_unused)
+{
+ if (!new_cnt)
+ return -EINVAL;
+ *new_cnt = 0;
+ return 0;
+}
+#endif
+#endif /* __BPF_PROLOGUE_H */
--
2.1.0
^ permalink raw reply related [flat|nested] 35+ messages in thread
* [PATCH 21/27] perf tools: Generate prologue for BPF programs
2015-09-06 7:13 [GIT PULL 00/27] perf tools: filtering events using eBPF programs Wang Nan
` (19 preceding siblings ...)
2015-09-06 7:13 ` [PATCH 20/27] perf tools: Add prologue for BPF programs for fetching arguments Wang Nan
@ 2015-09-06 7:13 ` Wang Nan
2015-09-06 7:13 ` [PATCH 22/27] perf tools: Use same BPF program if arguments are identical Wang Nan
` (5 subsequent siblings)
26 siblings, 0 replies; 35+ messages in thread
From: Wang Nan @ 2015-09-06 7:13 UTC (permalink / raw)
To: acme, ast, masami.hiramatsu.pt, namhyung
Cc: a.p.zijlstra, brendan.d.gregg, daniel, dsahern, hekuang, jolsa,
lizefan, paulus, wangnan0, xiakaixu, pi3orama, linux-kernel
This patch generates a prologue for each 'struct probe_trace_event' to
fetch arguments for BPF programs.
After bpf__probe(), iterate over each program to check whether a
prologue is required. If none of the 'struct perf_probe_event' instances
a program will attach to has at least one argument, simply skip the
preprocessor hooking. For those where a prologue is required, call
bpf__gen_prologue() and paste the original instructions after the prologue.
Signed-off-by: Wang Nan <wangnan0@huawei.com>
Cc: Alexei Starovoitov <ast@plumgrid.com>
Cc: Brendan Gregg <brendan.d.gregg@gmail.com>
Cc: Daniel Borkmann <daniel@iogearbox.net>
Cc: David Ahern <dsahern@gmail.com>
Cc: He Kuang <hekuang@huawei.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Kaixu Xia <xiakaixu@huawei.com>
Cc: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Zefan Li <lizefan@huawei.com>
Cc: pi3orama@163.com
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Link: http://lkml.kernel.org/n/1436445342-1402-36-git-send-email-wangnan0@huawei.com
---
tools/perf/util/bpf-loader.c | 120 ++++++++++++++++++++++++++++++++++++++++++-
1 file changed, 119 insertions(+), 1 deletion(-)
diff --git a/tools/perf/util/bpf-loader.c b/tools/perf/util/bpf-loader.c
index 1994552..d3d40f3 100644
--- a/tools/perf/util/bpf-loader.c
+++ b/tools/perf/util/bpf-loader.c
@@ -5,10 +5,13 @@
* Copyright (C) 2015 Huawei Inc.
*/
+#include <linux/bpf.h>
#include <bpf/libbpf.h>
#include "perf.h"
#include "debug.h"
#include "bpf-loader.h"
+#include "bpf-prologue.h"
+#include "llvm-utils.h"
#include "probe-event.h"
#include "probe-finder.h"
#include "llvm-utils.h"
@@ -42,6 +45,8 @@ struct bpf_prog_priv {
struct perf_probe_event *ppev;
struct perf_probe_event pev;
};
+ bool need_prologue;
+ struct bpf_insn *insns_buf;
};
static void
@@ -65,6 +70,7 @@ bpf_prog_priv__clear(struct bpf_program *prog __maybe_unused,
clear_perf_probe_event(&priv->pev);
}
+ zfree(&priv->insns_buf);
free(priv);
}
@@ -248,6 +254,103 @@ int bpf__unprobe(void)
return ret < 0 ? ret : 0;
}
+static int
+preproc_gen_prologue(struct bpf_program *prog, int n,
+ struct bpf_insn *orig_insns, int orig_insns_cnt,
+ struct bpf_prog_prep_result *res)
+{
+ struct probe_trace_event *tev;
+ struct perf_probe_event *pev;
+ struct bpf_prog_priv *priv;
+ struct bpf_insn *buf;
+ size_t prologue_cnt = 0;
+ int err;
+
+ err = bpf_program__get_private(prog, (void **)&priv);
+ if (err || !priv || !priv->pev_ready)
+ goto errout;
+
+ pev = &priv->pev;
+
+ if (n < 0 || n >= pev->ntevs)
+ goto errout;
+
+ tev = &pev->tevs[n];
+
+ buf = priv->insns_buf;
+ err = bpf__gen_prologue(tev->args, tev->nargs,
+ buf, &prologue_cnt,
+ BPF_MAXINSNS - orig_insns_cnt);
+ if (err) {
+ const char *title;
+
+ title = bpf_program__title(prog, false);
+ if (!title)
+ title = "??";
+
+ pr_debug("Failed to generate prologue for program %s\n",
+ title);
+ return err;
+ }
+
+ memcpy(&buf[prologue_cnt], orig_insns,
+ sizeof(struct bpf_insn) * orig_insns_cnt);
+
+ res->new_insn_ptr = buf;
+ res->new_insn_cnt = prologue_cnt + orig_insns_cnt;
+ res->pfd = NULL;
+ return 0;
+
+errout:
+ pr_debug("Internal error in preproc_gen_prologue\n");
+ return -EINVAL;
+}
+
+static int hook_load_preprocessor(struct bpf_program *prog)
+{
+ struct perf_probe_event *pev;
+ struct bpf_prog_priv *priv;
+ bool need_prologue = false;
+ int err, i;
+
+ err = bpf_program__get_private(prog, (void **)&priv);
+ if (err || !priv) {
+ pr_debug("Internal error when hook preprocessor\n");
+ return -EINVAL;
+ }
+
+ pev = &priv->pev;
+ for (i = 0; i < pev->ntevs; i++) {
+ struct probe_trace_event *tev = &pev->tevs[i];
+
+ if (tev->nargs > 0) {
+ need_prologue = true;
+ break;
+ }
+ }
+
+ /*
+ * Since all tev doesn't have argument, we don't need generate
+ * prologue.
+ */
+ if (!need_prologue) {
+ priv->need_prologue = false;
+ return 0;
+ }
+
+ priv->need_prologue = true;
+ priv->insns_buf = malloc(sizeof(struct bpf_insn) *
+ BPF_MAXINSNS);
+ if (!priv->insns_buf) {
+ pr_debug("No enough memory: alloc insns_buf failed\n");
+ return -ENOMEM;
+ }
+
+ err = bpf_program__set_prep(prog, pev->ntevs,
+ preproc_gen_prologue);
+ return err;
+}
+
int bpf__probe(void)
{
int err, nr_events = 0;
@@ -311,6 +414,17 @@ int bpf__probe(void)
err = sync_bpf_program_pev(prog);
if (err)
goto out;
+ /*
+ * After probing, let's consider prologue, which
+ * adds program fetcher to BPF programs.
+ *
+ * hook_load_preprocessorr() hooks pre-processor
+ * to bpf_program, let it generate prologue
+ * dynamically during loading.
+ */
+ err = hook_load_preprocessor(prog);
+ if (err)
+ goto out;
}
}
out:
@@ -375,7 +489,11 @@ int bpf__foreach_tev(bpf_prog_iter_callback_t func, void *arg)
for (i = 0; i < pev->ntevs; i++) {
tev = &pev->tevs[i];
- fd = bpf_program__fd(prog);
+ if (priv->need_prologue)
+ fd = bpf_program__nth_fd(prog, i);
+ else
+ fd = bpf_program__fd(prog);
+
if (fd < 0) {
pr_debug("bpf: failed to get file descriptor\n");
return fd;
--
2.1.0
^ permalink raw reply related [flat|nested] 35+ messages in thread
* [PATCH 22/27] perf tools: Use same BPF program if arguments are identical
2015-09-06 7:13 [GIT PULL 00/27] perf tools: filtering events using eBPF programs Wang Nan
` (20 preceding siblings ...)
2015-09-06 7:13 ` [PATCH 21/27] perf tools: Generate prologue for BPF programs Wang Nan
@ 2015-09-06 7:13 ` Wang Nan
2015-09-06 7:13 ` [PATCH 23/27] perf record: Support custom vmlinux path Wang Nan
` (4 subsequent siblings)
26 siblings, 0 replies; 35+ messages in thread
From: Wang Nan @ 2015-09-06 7:13 UTC (permalink / raw)
To: acme, ast, masami.hiramatsu.pt, namhyung
Cc: a.p.zijlstra, brendan.d.gregg, daniel, dsahern, hekuang, jolsa,
lizefan, paulus, wangnan0, xiakaixu, pi3orama, linux-kernel
This patch allows creating only one BPF program instance for different
'probe_trace_event' (tev) entries generated by one 'perf_probe_event'
(pev), if their prologues are identical.
This is done by comparing the argument lists of different tevs and
mapping tevs to prologue types using a mapping array. This patch uses
qsort() to sort the tevs; after sorting, tevs with identical argument
lists are grouped together.
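As a worked illustration (hypothetical tevs): if a pev expands to four
tevs whose argument lists are A, B, A, A, sorting groups the three A
entries together, so map_prologue() fills type_mapping with something
like {0, 1, 0, 0} (the concrete numbers depend on the sort order) and
sets nr_types = 2; only two program instances are then loaded instead
of four.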
Signed-off-by: Wang Nan <wangnan0@huawei.com>
Cc: Alexei Starovoitov <ast@plumgrid.com>
Cc: Brendan Gregg <brendan.d.gregg@gmail.com>
Cc: Daniel Borkmann <daniel@iogearbox.net>
Cc: David Ahern <dsahern@gmail.com>
Cc: He Kuang <hekuang@huawei.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Kaixu Xia <xiakaixu@huawei.com>
Cc: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Zefan Li <lizefan@huawei.com>
Cc: pi3orama@163.com
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Link: http://lkml.kernel.org/n/1436445342-1402-37-git-send-email-wangnan0@huawei.com
---
tools/perf/util/bpf-loader.c | 133 ++++++++++++++++++++++++++++++++++++++++---
1 file changed, 126 insertions(+), 7 deletions(-)
diff --git a/tools/perf/util/bpf-loader.c b/tools/perf/util/bpf-loader.c
index d3d40f3..67193d2 100644
--- a/tools/perf/util/bpf-loader.c
+++ b/tools/perf/util/bpf-loader.c
@@ -47,6 +47,8 @@ struct bpf_prog_priv {
};
bool need_prologue;
struct bpf_insn *insns_buf;
+ int nr_types;
+ int *type_mapping;
};
static void
@@ -71,6 +73,7 @@ bpf_prog_priv__clear(struct bpf_program *prog __maybe_unused,
clear_perf_probe_event(&priv->pev);
}
zfree(&priv->insns_buf);
+ zfree(&priv->type_mapping);
free(priv);
}
@@ -264,7 +267,7 @@ preproc_gen_prologue(struct bpf_program *prog, int n,
struct bpf_prog_priv *priv;
struct bpf_insn *buf;
size_t prologue_cnt = 0;
- int err;
+ int i, err;
err = bpf_program__get_private(prog, (void **)&priv);
if (err || !priv || !priv->pev_ready)
@@ -272,10 +275,20 @@ preproc_gen_prologue(struct bpf_program *prog, int n,
pev = &priv->pev;
- if (n < 0 || n >= pev->ntevs)
+ if (n < 0 || n >= priv->nr_types)
goto errout;
- tev = &pev->tevs[n];
+ /* Find a tev belongs to that type */
+ for (i = 0; i < pev->ntevs; i++)
+ if (priv->type_mapping[i] == n)
+ break;
+
+ if (i >= pev->ntevs) {
+ pr_debug("Internal error: prologue type %d not found\n", n);
+ return -ENOENT;
+ }
+
+ tev = &pev->tevs[i];
buf = priv->insns_buf;
err = bpf__gen_prologue(tev->args, tev->nargs,
@@ -306,6 +319,98 @@ errout:
return -EINVAL;
}
+/*
+ * compare_tev_args is reflexive, transitive and antisymmetric.
+ * I can show that but this margin is too narrow to contain.
+ */
+static int compare_tev_args(const void *ptev1, const void *ptev2)
+{
+ int i, ret;
+ const struct probe_trace_event *tev1 =
+ *(const struct probe_trace_event **)ptev1;
+ const struct probe_trace_event *tev2 =
+ *(const struct probe_trace_event **)ptev2;
+
+ ret = tev2->nargs - tev1->nargs;
+ if (ret)
+ return ret;
+
+ for (i = 0; i < tev1->nargs; i++) {
+ struct probe_trace_arg *arg1, *arg2;
+ struct probe_trace_arg_ref *ref1, *ref2;
+
+ arg1 = &tev1->args[i];
+ arg2 = &tev2->args[i];
+
+ ret = strcmp(arg1->value, arg2->value);
+ if (ret)
+ return ret;
+
+ ref1 = arg1->ref;
+ ref2 = arg2->ref;
+
+ while (ref1 && ref2) {
+ ret = ref2->offset - ref1->offset;
+ if (ret)
+ return ret;
+
+ ref1 = ref1->next;
+ ref2 = ref2->next;
+ }
+
+ if (ref1 || ref2)
+ return ref2 ? 1 : -1;
+ }
+
+ return 0;
+}
+
+static int map_prologue(struct perf_probe_event *pev, int *mapping,
+ int *nr_types)
+{
+ int i, type = 0;
+ struct {
+ struct probe_trace_event *tev;
+ int idx;
+ } *stevs;
+ size_t array_sz = sizeof(*stevs) * pev->ntevs;
+
+ stevs = malloc(array_sz);
+ if (!stevs) {
+ pr_debug("No ehough memory: alloc stevs failed\n");
+ return -ENOMEM;
+ }
+
+ pr_debug("In map_prologue, ntevs=%d\n", pev->ntevs);
+ for (i = 0; i < pev->ntevs; i++) {
+ stevs[i].tev = &pev->tevs[i];
+ stevs[i].idx = i;
+ }
+ qsort(stevs, pev->ntevs, sizeof(*stevs),
+ compare_tev_args);
+
+ for (i = 0; i < pev->ntevs; i++) {
+ if (i == 0) {
+ mapping[stevs[i].idx] = type;
+ pr_debug("mapping[%d]=%d\n", stevs[i].idx,
+ type);
+ continue;
+ }
+
+ if (compare_tev_args(stevs + i, stevs + i - 1) == 0)
+ mapping[stevs[i].idx] = type;
+ else
+ mapping[stevs[i].idx] = ++type;
+
+ pr_debug("mapping[%d]=%d\n", stevs[i].idx,
+ mapping[stevs[i].idx]);
+ }
+ free(stevs);
+ *nr_types = type + 1;
+
+ return 0;
+}
+
static int hook_load_preprocessor(struct bpf_program *prog)
{
struct perf_probe_event *pev;
@@ -346,7 +451,19 @@ static int hook_load_preprocessor(struct bpf_program *prog)
return -ENOMEM;
}
- err = bpf_program__set_prep(prog, pev->ntevs,
+ priv->type_mapping = malloc(sizeof(int) * pev->ntevs);
+ if (!priv->type_mapping) {
+ pr_debug("No enough memory: alloc type_mapping failed\n");
+ return -ENOMEM;
+ }
+ memset(priv->type_mapping, 0xff,
+ sizeof(int) * pev->ntevs);
+
+ err = map_prologue(pev, priv->type_mapping, &priv->nr_types);
+ if (err)
+ return err;
+
+ err = bpf_program__set_prep(prog, priv->nr_types,
preproc_gen_prologue);
return err;
}
@@ -489,9 +606,11 @@ int bpf__foreach_tev(bpf_prog_iter_callback_t func, void *arg)
for (i = 0; i < pev->ntevs; i++) {
tev = &pev->tevs[i];
- if (priv->need_prologue)
- fd = bpf_program__nth_fd(prog, i);
- else
+ if (priv->need_prologue) {
+ int type = priv->type_mapping[i];
+
+ fd = bpf_program__nth_fd(prog, type);
+ } else
fd = bpf_program__fd(prog);
if (fd < 0) {
--
2.1.0
^ permalink raw reply related [flat|nested] 35+ messages in thread
* [PATCH 23/27] perf record: Support custom vmlinux path
2015-09-06 7:13 [GIT PULL 00/27] perf tools: filtering events using eBPF programs Wang Nan
` (21 preceding siblings ...)
2015-09-06 7:13 ` [PATCH 22/27] perf tools: Use same BPF program if arguments are identical Wang Nan
@ 2015-09-06 7:13 ` Wang Nan
2015-09-06 7:13 ` [PATCH 24/27] perf probe: Init symbol as kprobe Wang Nan
` (3 subsequent siblings)
26 siblings, 0 replies; 35+ messages in thread
From: Wang Nan @ 2015-09-06 7:13 UTC (permalink / raw)
To: acme, ast, masami.hiramatsu.pt, namhyung
Cc: a.p.zijlstra, brendan.d.gregg, daniel, dsahern, hekuang, jolsa,
lizefan, paulus, wangnan0, xiakaixu, pi3orama, linux-kernel
From: He Kuang <hekuang@huawei.com>
Make the 'perf record' command support the --vmlinux option if
BPF_PROLOGUE is on.
'perf record' needs vmlinux as the source of DWARF info to generate
prologues for BPF programs, so the vmlinux path should be specifiable.
The short option name 'k' has already been taken by 'clockid', so this
patch skips the short name and uses '--vmlinux' for the vmlinux path.
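A usage sketch (hypothetical scriptlet name and vmlinux location):
# perf record -e ./test_prologue.c --vmlinux /path/to/vmlinux -a sleep 1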
Signed-off-by: He Kuang <hekuang@huawei.com>
Signed-off-by: Wang Nan <wangnan0@huawei.com>
Cc: Alexei Starovoitov <ast@plumgrid.com>
Cc: Brendan Gregg <brendan.d.gregg@gmail.com>
Cc: Daniel Borkmann <daniel@iogearbox.net>
Cc: David Ahern <dsahern@gmail.com>
Cc: He Kuang <hekuang@huawei.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Kaixu Xia <xiakaixu@huawei.com>
Cc: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Zefan Li <lizefan@huawei.com>
Cc: pi3orama@163.com
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Link: http://lkml.kernel.org/n/1436445342-1402-38-git-send-email-wangnan0@huawei.com
---
tools/perf/builtin-record.c | 4 ++++
1 file changed, 4 insertions(+)
diff --git a/tools/perf/builtin-record.c b/tools/perf/builtin-record.c
index 11e7d63..43ff231 100644
--- a/tools/perf/builtin-record.c
+++ b/tools/perf/builtin-record.c
@@ -1103,6 +1103,10 @@ struct option __record_options[] = {
"clang binary to use for compiling BPF scriptlets"),
OPT_STRING(0, "clang-opt", &llvm_param.clang_opt, "clang options",
"options passed to clang when compiling BPF scriptlets"),
+#ifdef HAVE_BPF_PROLOGUE
+ OPT_STRING(0, "vmlinux", &symbol_conf.vmlinux_name,
+ "file", "vmlinux pathname"),
+#endif
#endif
OPT_END()
};
--
2.1.0
^ permalink raw reply related [flat|nested] 35+ messages in thread
* [PATCH 24/27] perf probe: Init symbol as kprobe
2015-09-06 7:13 [GIT PULL 00/27] perf tools: filtering events using eBPF programs Wang Nan
` (22 preceding siblings ...)
2015-09-06 7:13 ` [PATCH 23/27] perf record: Support custom vmlinux path Wang Nan
@ 2015-09-06 7:13 ` Wang Nan
2015-09-06 7:13 ` [PATCH 25/27] perf tools: Allow BPF program attach to uprobe events Wang Nan
` (2 subsequent siblings)
26 siblings, 0 replies; 35+ messages in thread
From: Wang Nan @ 2015-09-06 7:13 UTC (permalink / raw)
To: acme, ast, masami.hiramatsu.pt, namhyung
Cc: a.p.zijlstra, brendan.d.gregg, daniel, dsahern, hekuang, jolsa,
lizefan, paulus, wangnan0, xiakaixu, pi3orama, linux-kernel
Before this patch, add_perf_probe_events() initializes symbol maps only
for uprobes if the first 'struct perf_probe_event' passed to it is a
uprobe event. This is a trick that relies on 'perf probe''s command line
syntax, which constrains the first elements of the probe_event array to
be kprobes if there is at least one kprobe.
However, with the incoming BPF uprobe support, that constraint no longer
holds, since 'perf record' will also probe on k/u probes through a BPF
object, and it is possible to pass an array that contains a kprobe while
the first element is a uprobe.
This patch initializes symbol maps for kprobes even if all of the events
are uprobes, because the extra cost should be small enough.
Signed-off-by: Wang Nan <wangnan0@huawei.com>
Acked-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
Cc: Alexei Starovoitov <ast@plumgrid.com>
Cc: Brendan Gregg <brendan.d.gregg@gmail.com>
Cc: Daniel Borkmann <daniel@iogearbox.net>
Cc: David Ahern <dsahern@gmail.com>
Cc: He Kuang <hekuang@huawei.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Kaixu Xia <xiakaixu@huawei.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Zefan Li <lizefan@huawei.com>
Cc: pi3orama@163.com
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Link: http://lkml.kernel.org/n/1436445342-1402-39-git-send-email-wangnan0@huawei.com
---
tools/perf/util/probe-event.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/tools/perf/util/probe-event.c b/tools/perf/util/probe-event.c
index 5964ecc..7f65197 100644
--- a/tools/perf/util/probe-event.c
+++ b/tools/perf/util/probe-event.c
@@ -2746,7 +2746,7 @@ int convert_perf_probe_events(struct perf_probe_event *pevs, int npevs)
{
int i, ret;
- ret = init_symbol_maps(pevs->uprobes);
+ ret = init_symbol_maps(false);
if (ret < 0)
return ret;
--
2.1.0
^ permalink raw reply related [flat|nested] 35+ messages in thread
* [PATCH 25/27] perf tools: Allow BPF program attach to uprobe events
2015-09-06 7:13 [GIT PULL 00/27] perf tools: filtering events using eBPF programs Wang Nan
` (23 preceding siblings ...)
2015-09-06 7:13 ` [PATCH 24/27] perf probe: Init symbol as kprobe Wang Nan
@ 2015-09-06 7:13 ` Wang Nan
2015-09-06 7:13 ` [PATCH 26/27] perf test: Enforce LLVM test, add kbuild test Wang Nan
2015-09-06 7:13 ` [PATCH 27/27] perf test: Test BPF prologue Wang Nan
26 siblings, 0 replies; 35+ messages in thread
From: Wang Nan @ 2015-09-06 7:13 UTC (permalink / raw)
To: acme, ast, masami.hiramatsu.pt, namhyung
Cc: a.p.zijlstra, brendan.d.gregg, daniel, dsahern, hekuang, jolsa,
lizefan, paulus, wangnan0, xiakaixu, pi3orama, linux-kernel
This patch appends new syntax to the BPF object section name to support
probing at uprobe events. Now we can use a BPF program like this:
SEC(
"target=/lib64/libc.so.6\n"
"libcwrite=__write"
)
int libcwrite(void *ctx)
{
return 1;
}
In the section name of a program, before the main config string, we can
use 'key=value' style options. Currently the only valid option key is
"target", which selects the file for uprobe probing.
Signed-off-by: Wang Nan <wangnan0@huawei.com>
Cc: Alexei Starovoitov <ast@plumgrid.com>
Cc: Brendan Gregg <brendan.d.gregg@gmail.com>
Cc: Daniel Borkmann <daniel@iogearbox.net>
Cc: David Ahern <dsahern@gmail.com>
Cc: He Kuang <hekuang@huawei.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Kaixu Xia <xiakaixu@huawei.com>
Cc: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Zefan Li <lizefan@huawei.com>
Cc: pi3orama@163.com
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Link: http://lkml.kernel.org/n/1436445342-1402-40-git-send-email-wangnan0@huawei.com
---
tools/perf/util/bpf-loader.c | 88 ++++++++++++++++++++++++++++++++++++++++----
1 file changed, 81 insertions(+), 7 deletions(-)
diff --git a/tools/perf/util/bpf-loader.c b/tools/perf/util/bpf-loader.c
index 67193d2..1b7742b 100644
--- a/tools/perf/util/bpf-loader.c
+++ b/tools/perf/util/bpf-loader.c
@@ -78,6 +78,84 @@ bpf_prog_priv__clear(struct bpf_program *prog __maybe_unused,
}
static int
+do_config(const char *key, const char *value,
+ struct perf_probe_event *pev)
+{
+ pr_debug("config bpf program: %s=%s\n", key, value);
+ if (strcmp(key, "target") == 0) {
+ pev->uprobes = true;
+ pev->target = strdup(value);
+ return 0;
+ }
+
+ pr_warning("BPF: WARNING: invalid config option in object: %s=%s\n",
+ key, value);
+ pr_warning("\tHint: Currently only valid option is 'target=<file>'\n");
+ return 0;
+}
+
+static const char *
+parse_config_kvpair(const char *config_str, struct perf_probe_event *pev)
+{
+ char *text = strdup(config_str);
+ char *sep, *line;
+ const char *main_str = NULL;
+ int err = 0;
+
+ if (!text) {
+ pr_debug("No enough memory: dup config_str failed\n");
+ return NULL;
+ }
+
+ line = text;
+ while ((sep = strchr(line, '\n'))) {
+ char *equ;
+
+ *sep = '\0';
+ equ = strchr(line, '=');
+ if (!equ) {
+ pr_warning("WARNING: invalid config in BPF object: %s\n",
+ line);
+ pr_warning("\tShould be 'key=value'.\n");
+ goto nextline;
+ }
+ *equ = '\0';
+
+ err = do_config(line, equ + 1, pev);
+ if (err)
+ break;
+nextline:
+ line = sep + 1;
+ }
+
+ if (!err)
+ main_str = config_str + (line - text);
+ free(text);
+
+ return main_str;
+}
+
+static int
+parse_config(const char *config_str, struct perf_probe_event *pev)
+{
+ const char *main_str;
+ int err;
+
+ main_str = parse_config_kvpair(config_str, pev);
+ if (!main_str)
+ return -EINVAL;
+
+ err = parse_perf_probe_command(main_str, pev);
+ if (err < 0) {
+ pr_debug("bpf: '%s' is not a valid config string\n",
+ config_str);
+ /* parse failed, don't need clear pev. */
+ return -EINVAL;
+ }
+ return 0;
+}
+
+static int
config_bpf_program(struct bpf_program *prog, struct perf_probe_event *pev)
{
struct bpf_prog_priv *priv = NULL;
@@ -91,13 +169,9 @@ config_bpf_program(struct bpf_program *prog, struct perf_probe_event *pev)
}
pr_debug("bpf: config program '%s'\n", config_str);
- err = parse_perf_probe_command(config_str, pev);
- if (err < 0) {
- pr_debug("bpf: '%s' is not a valid config string\n",
- config_str);
- /* parse failed, don't need clear pev. */
- return -EINVAL;
- }
+ err = parse_config(config_str, pev);
+ if (err)
+ return err;
if (pev->group && strcmp(pev->group, PERF_BPF_PROBE_GROUP)) {
pr_debug("bpf: '%s': group for event is set and not '%s'.\n",
--
2.1.0
^ permalink raw reply related [flat|nested] 35+ messages in thread
* [PATCH 26/27] perf test: Enforce LLVM test, add kbuild test
2015-09-06 7:13 [GIT PULL 00/27] perf tools: filtering events using eBPF programs Wang Nan
` (24 preceding siblings ...)
2015-09-06 7:13 ` [PATCH 25/27] perf tools: Allow BPF program attach to uprobe events Wang Nan
@ 2015-09-06 7:13 ` Wang Nan
2015-09-06 7:13 ` [PATCH 27/27] perf test: Test BPF prologue Wang Nan
26 siblings, 0 replies; 35+ messages in thread
From: Wang Nan @ 2015-09-06 7:13 UTC (permalink / raw)
To: acme, ast, masami.hiramatsu.pt, namhyung
Cc: a.p.zijlstra, brendan.d.gregg, daniel, dsahern, hekuang, jolsa,
lizefan, paulus, wangnan0, xiakaixu, pi3orama, linux-kernel
This patch extends the existing LLVM test so that it compiles more than
one BPF source file. The compiled results are stored and can be reused
by other testcases. Except for the first testcase (named
LLVM_TESTCASE_BASE), a failure in any other testcase is not considered a
failure of the whole test.
It also adds a kbuild testcase to check whether kernel headers can be
correctly found.
For example:
# perf test LLVM
38: Test LLVM searching and compiling : (llvm.kbuild-dir can be fixed) Ok
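The '(llvm.kbuild-dir can be fixed)' hint points at the [llvm] section of
~/.perfconfig. A minimal sketch of that section (the paths and values here
are only illustrative, not taken from this series):
	[llvm]
		clang-path = "/usr/bin/clang"
		kbuild-dir = "/lib/modules/4.2.0/build"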
Signed-off-by: Wang Nan <wangnan0@huawei.com>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Alexei Starovoitov <ast@plumgrid.com>
Cc: Brendan Gregg <brendan.d.gregg@gmail.com>
Cc: Daniel Borkmann <daniel@iogearbox.net>
Cc: David Ahern <dsahern@gmail.com>
Cc: He Kuang <hekuang@huawei.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Kaixu Xia <xiakaixu@huawei.com>
Cc: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Zefan Li <lizefan@huawei.com>
Cc: pi3orama@163.com
Link: http://lkml.kernel.org/n/1441519463-149781-1-git-send-email-wangnan0@huawei.com
---
tools/perf/tests/Build | 11 ++-
tools/perf/tests/bpf-script-example.c | 4 +
tools/perf/tests/bpf-script-test-kbuild.c | 21 ++++
tools/perf/tests/bpf.c | 3 +-
tools/perf/tests/llvm.c | 154 ++++++++++++++++++++++--------
tools/perf/tests/llvm.h | 10 +-
6 files changed, 156 insertions(+), 47 deletions(-)
create mode 100644 tools/perf/tests/bpf-script-test-kbuild.c
diff --git a/tools/perf/tests/Build b/tools/perf/tests/Build
index 5cfb420..2bd5f37 100644
--- a/tools/perf/tests/Build
+++ b/tools/perf/tests/Build
@@ -32,17 +32,24 @@ perf-y += sample-parsing.o
perf-y += parse-no-sample-id-all.o
perf-y += kmod-path.o
perf-y += thread-map.o
-perf-y += llvm.o llvm-src.o
+perf-y += llvm.o llvm-src-base.o llvm-src-kbuild.o
perf-y += bpf.o
perf-y += topology.o
-$(OUTPUT)tests/llvm-src.c: tests/bpf-script-example.c
+$(OUTPUT)tests/llvm-src-base.c: tests/bpf-script-example.c
$(call rule_mkdir)
$(Q)echo '#include <tests/llvm.h>' > $@
$(Q)echo 'const char test_llvm__bpf_prog[] =' >> $@
$(Q)sed -e 's/"/\\"/g' -e 's/\(.*\)/"\1\\n"/g' $< >> $@
$(Q)echo ';' >> $@
+$(OUTPUT)tests/llvm-src-kbuild.c: tests/bpf-script-test-kbuild.c
+ $(call rule_mkdir)
+ $(Q)echo '#include <tests/llvm.h>' > $@
+ $(Q)echo 'const char test_llvm__bpf_test_kbuild_prog[] =' >> $@
+ $(Q)sed -e 's/"/\\"/g' -e 's/\(.*\)/"\1\\n"/g' $< >> $@
+ $(Q)echo ';' >> $@
+
perf-$(CONFIG_X86) += perf-time-to-tsc.o
ifdef CONFIG_AUXTRACE
perf-$(CONFIG_X86) += insn-x86.o
diff --git a/tools/perf/tests/bpf-script-example.c b/tools/perf/tests/bpf-script-example.c
index 410a70b..0ec9c2c 100644
--- a/tools/perf/tests/bpf-script-example.c
+++ b/tools/perf/tests/bpf-script-example.c
@@ -1,3 +1,7 @@
+/*
+ * bpf-script-example.c
+ * Test basic LLVM building
+ */
#ifndef LINUX_VERSION_CODE
# error Need LINUX_VERSION_CODE
# error Example: for 4.2 kernel, put 'clang-opt="-DLINUX_VERSION_CODE=0x40200" into llvm section of ~/.perfconfig'
diff --git a/tools/perf/tests/bpf-script-test-kbuild.c b/tools/perf/tests/bpf-script-test-kbuild.c
new file mode 100644
index 0000000..a11f589
--- /dev/null
+++ b/tools/perf/tests/bpf-script-test-kbuild.c
@@ -0,0 +1,21 @@
+/*
+ * bpf-script-test-kbuild.c
+ * Test include from kernel header
+ */
+#ifndef LINUX_VERSION_CODE
+# error Need LINUX_VERSION_CODE
+# error Example: for 4.2 kernel, put 'clang-opt="-DLINUX_VERSION_CODE=0x40200" into llvm section of ~/.perfconfig'
+#endif
+#define SEC(NAME) __attribute__((section(NAME), used))
+
+#include <uapi/linux/fs.h>
+#include <uapi/asm/ptrace.h>
+
+SEC("func=vfs_llseek")
+int bpf_func__vfs_llseek(struct pt_regs *ctx)
+{
+ return 0;
+}
+
+char _license[] SEC("license") = "GPL";
+int _version SEC("version") = LINUX_VERSION_CODE;
diff --git a/tools/perf/tests/bpf.c b/tools/perf/tests/bpf.c
index e256c12..64aaab68 100644
--- a/tools/perf/tests/bpf.c
+++ b/tools/perf/tests/bpf.c
@@ -143,7 +143,8 @@ int test__bpf(void)
return TEST_SKIP;
}
- test_llvm__fetch_bpf_obj(&obj_buf, &obj_buf_sz);
+ test_llvm__fetch_bpf_obj(&obj_buf, &obj_buf_sz, LLVM_TESTCASE_BASE);
+
if (!obj_buf || !obj_buf_sz) {
if (verbose == 0)
fprintf(stderr, " (fix 'perf test LLVM' first)");
diff --git a/tools/perf/tests/llvm.c b/tools/perf/tests/llvm.c
index fd5fdb0..75cd99f 100644
--- a/tools/perf/tests/llvm.c
+++ b/tools/perf/tests/llvm.c
@@ -9,6 +9,22 @@
#include "debug.h"
#include "llvm.h"
+#define SHARED_BUF_INIT_SIZE (1 << 20)
+struct llvm_testcase {
+ const char *source;
+ const char *errmsg;
+ struct test_llvm__bpf_result *result;
+ bool tried;
+} llvm_testcases[NR_LLVM_TESTCASES + 1] = {
+ [LLVM_TESTCASE_BASE] = {.source = test_llvm__bpf_prog,
+ .errmsg = "Basic LLVM compiling failed",
+ .tried = false},
+ [LLVM_TESTCASE_KBUILD] = {.source = test_llvm__bpf_test_kbuild_prog,
+ .errmsg = "llvm.kbuild-dir can be fixed",
+ .tried = false},
+ {.source = NULL}
+};
+
static int perf_config_cb(const char *var, const char *val,
void *arg __maybe_unused)
{
@@ -36,7 +52,7 @@ static int test__bpf_parsing(void *obj_buf __maybe_unused,
#endif
static char *
-compose_source(void)
+compose_source(const char *raw_source)
{
struct utsname utsname;
int version, patchlevel, sublevel, err;
@@ -56,25 +72,27 @@ compose_source(void)
version_code = (version << 16) + (patchlevel << 8) + sublevel;
err = asprintf(&code, "#define LINUX_VERSION_CODE 0x%08lx;\n%s",
- version_code, test_llvm__bpf_prog);
+ version_code, raw_source);
if (err < 0)
return NULL;
return code;
}
-#define SHARED_BUF_INIT_SIZE (1 << 20)
-struct test_llvm__bpf_result *p_test_llvm__bpf_result;
-int test__llvm(void)
+static int __test__llvm(int i)
{
- char *tmpl_new, *clang_opt_new;
void *obj_buf;
size_t obj_buf_sz;
int err, old_verbose;
- char *source;
+ const char *tmpl_old, *clang_opt_old;
+ char *tmpl_new, *clang_opt_new, *source;
+ const char *raw_source = llvm_testcases[i].source;
+ struct test_llvm__bpf_result *result = llvm_testcases[i].result;
perf_config(perf_config_cb, NULL);
+ clang_opt_old = llvm_param.clang_opt;
+ tmpl_old = llvm_param.clang_bpf_cmd_template;
/*
* Skip this test if user's .perfconfig doesn't set [llvm] section
@@ -99,15 +117,17 @@ int test__llvm(void)
if (!llvm_param.clang_opt)
llvm_param.clang_opt = strdup("");
- source = compose_source();
+ source = compose_source(raw_source);
if (!source) {
pr_err("Failed to compose source code\n");
return -1;
}
/* Quote __EOF__ so strings in source won't be expanded by shell */
- err = asprintf(&tmpl_new, "cat << '__EOF__' | %s\n%s\n__EOF__\n",
- llvm_param.clang_bpf_cmd_template, source);
+ err = asprintf(&tmpl_new, "cat << '__EOF__' | %s %s \n%s\n__EOF__\n",
+ llvm_param.clang_bpf_cmd_template,
+ !old_verbose ? "2>/dev/null" : "",
+ source);
free(source);
source = NULL;
if (err < 0) {
@@ -123,73 +143,123 @@ int test__llvm(void)
llvm_param.clang_opt = clang_opt_new;
err = llvm__compile_bpf("-", &obj_buf, &obj_buf_sz);
+ free((void *)llvm_param.clang_bpf_cmd_template);
+ free((void *)llvm_param.clang_opt);
+ llvm_param.clang_bpf_cmd_template = tmpl_old;
+ llvm_param.clang_opt = clang_opt_old;
+
verbose = old_verbose;
- if (err) {
- if (!verbose)
- fprintf(stderr, " (use -v to see error message)");
+ if (err)
return -1;
- }
err = test__bpf_parsing(obj_buf, obj_buf_sz);
- if (!err && p_test_llvm__bpf_result) {
+ if (!err && result) {
if (obj_buf_sz > SHARED_BUF_INIT_SIZE) {
pr_err("Resulting object too large\n");
} else {
- p_test_llvm__bpf_result->size = obj_buf_sz;
- memcpy(p_test_llvm__bpf_result->object,
- obj_buf, obj_buf_sz);
+ result->size = obj_buf_sz;
+ memcpy(result->object, obj_buf, obj_buf_sz);
}
}
free(obj_buf);
return err;
}
+int test__llvm(void)
+{
+ int i, ret;
+
+ for (i = 0; llvm_testcases[i].source; i++) {
+ ret = __test__llvm(i);
+ if (i == 0 && ret) {
+ /*
+ * First testcase tests basic LLVM compiling. If it
+ * fails, no need to check others.
+ */
+ if (!verbose)
+ fprintf(stderr, " (use -v to see error message)");
+ return ret;
+ } else if (ret) {
+ if (!verbose && llvm_testcases[i].errmsg)
+ fprintf(stderr, " (%s)", llvm_testcases[i].errmsg);
+ return 0;
+ }
+ }
+ return 0;
+}
+
void test__llvm_prepare(void)
{
- p_test_llvm__bpf_result = mmap(NULL, SHARED_BUF_INIT_SIZE,
- PROT_READ | PROT_WRITE,
- MAP_SHARED | MAP_ANONYMOUS, -1, 0);
- if (!p_test_llvm__bpf_result)
- return;
- memset((void *)p_test_llvm__bpf_result, '\0', SHARED_BUF_INIT_SIZE);
+ int i;
+
+ for (i = 0; llvm_testcases[i].source; i++) {
+ struct test_llvm__bpf_result *result;
+
+ result = mmap(NULL, SHARED_BUF_INIT_SIZE,
+ PROT_READ | PROT_WRITE,
+ MAP_SHARED | MAP_ANONYMOUS, -1, 0);
+ if (!result)
+ return;
+ memset((void *)result, '\0', SHARED_BUF_INIT_SIZE);
+
+ llvm_testcases[i].result = result;
+ }
}
void test__llvm_cleanup(void)
{
- unsigned long boundary, buf_end;
+ int i;
- if (!p_test_llvm__bpf_result)
- return;
- if (p_test_llvm__bpf_result->size == 0) {
- munmap((void *)p_test_llvm__bpf_result, SHARED_BUF_INIT_SIZE);
- p_test_llvm__bpf_result = NULL;
- return;
- }
+ for (i = 0; llvm_testcases[i].source; i++) {
+ struct test_llvm__bpf_result *result;
+ unsigned long boundary, buf_end;
- buf_end = (unsigned long)p_test_llvm__bpf_result + SHARED_BUF_INIT_SIZE;
+ result = llvm_testcases[i].result;
+ llvm_testcases[i].tried = true;
- boundary = (unsigned long)(p_test_llvm__bpf_result);
- boundary += p_test_llvm__bpf_result->size;
- boundary = (boundary + (page_size - 1)) &
+ if (!result)
+ continue;
+
+ if (result->size == 0) {
+ munmap((void *)result, SHARED_BUF_INIT_SIZE);
+ result = NULL;
+ llvm_testcases[i].result = NULL;
+ continue;
+ }
+
+ buf_end = (unsigned long)result + SHARED_BUF_INIT_SIZE;
+
+ boundary = (unsigned long)(result);
+ boundary += result->size;
+ boundary = (boundary + (page_size - 1)) &
(~((unsigned long)page_size - 1));
- munmap((void *)boundary, buf_end - boundary);
+ munmap((void *)boundary, buf_end - boundary);
+ }
}
void
-test_llvm__fetch_bpf_obj(void **p_obj_buf, size_t *p_obj_buf_sz)
+test_llvm__fetch_bpf_obj(void **p_obj_buf, size_t *p_obj_buf_sz, int index)
{
+ struct test_llvm__bpf_result *result;
+
*p_obj_buf = NULL;
*p_obj_buf_sz = 0;
- if (!p_test_llvm__bpf_result) {
+ if (index > NR_LLVM_TESTCASES)
+ return;
+
+ result = llvm_testcases[index].result;
+
+ if (!result && !llvm_testcases[index].tried) {
test__llvm_prepare();
test__llvm();
test__llvm_cleanup();
}
- if (!p_test_llvm__bpf_result)
+ result = llvm_testcases[index].result;
+ if (!result)
return;
- *p_obj_buf = p_test_llvm__bpf_result->object;
- *p_obj_buf_sz = p_test_llvm__bpf_result->size;
+ *p_obj_buf = result->object;
+ *p_obj_buf_sz = result->size;
}
diff --git a/tools/perf/tests/llvm.h b/tools/perf/tests/llvm.h
index 2fd7ed6..78ec01d 100644
--- a/tools/perf/tests/llvm.h
+++ b/tools/perf/tests/llvm.h
@@ -8,8 +8,14 @@ struct test_llvm__bpf_result {
char object[];
};
-extern struct test_llvm__bpf_result *p_test_llvm__bpf_result;
extern const char test_llvm__bpf_prog[];
-void test_llvm__fetch_bpf_obj(void **p_obj_buf, size_t *p_obj_buf_sz);
+extern const char test_llvm__bpf_test_kbuild_prog[];
+
+enum test_llvm__testcase {
+ LLVM_TESTCASE_BASE,
+ LLVM_TESTCASE_KBUILD,
+ NR_LLVM_TESTCASES,
+};
+void test_llvm__fetch_bpf_obj(void **p_obj_buf, size_t *p_obj_buf_sz, int index);
#endif
--
2.1.0
^ permalink raw reply related [flat|nested] 35+ messages in thread
* [PATCH 27/27] perf test: Test BPF prologue
2015-09-06 7:13 [GIT PULL 00/27] perf tools: filtering events using eBPF programs Wang Nan
` (25 preceding siblings ...)
2015-09-06 7:13 ` [PATCH 26/27] perf test: Enforce LLVM test, add kbuild test Wang Nan
@ 2015-09-06 7:13 ` Wang Nan
26 siblings, 0 replies; 35+ messages in thread
From: Wang Nan @ 2015-09-06 7:13 UTC (permalink / raw)
To: acme, ast, masami.hiramatsu.pt, namhyung
Cc: a.p.zijlstra, brendan.d.gregg, daniel, dsahern, hekuang, jolsa,
lizefan, paulus, wangnan0, xiakaixu, pi3orama, linux-kernel
This patch introduces a new BPF script to test BPF prologue. The new
script probes at null_lseek, which is the function reached through the
llseek file-operation pointer when we lseek on '/dev/null'.
null_lseek is chosen because it is called through a function pointer, so
we don't need to consider inlining and LTO.
By extracting file->f_mode, bpf-script-test-prologue.c can tell whether
the file is writable or read-only. According to llseek_loop() and
bpf-script-test-prologue.c, samples for about one fourth of the
iterations should be collected (see the sketch below).
This patch also improves test__bpf so it can run multiple BPF programs
on different test functions.
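A rough user-space mirror of llseek_loop() and of the filter conditions in
bpf-script-test-prologue.c, showing where the expected sample count comes
from; NR_ITERS here is only an assumed stand-in for the constant used by
'perf test BPF':
	#include <stdio.h>

	#define NR_ITERS 111	/* assumption, see tools/perf/tests/bpf.c */

	int main(void)
	{
		int i, samples = 0;

		for (i = 0; i < NR_ITERS; i++) {
			int seek_cur = (i / 2) % 2;	/* whence == SEEK_CUR? */

			/*
			 * Each iteration lseeks both a read-only and a
			 * read-write fd on /dev/null.  The BPF program only
			 * keeps samples where the file is not writable, the
			 * offset (i) is even and whence is not SEEK_CUR, so
			 * at most one of the two calls per iteration counts.
			 */
			if (!(i & 1) && !seek_cur)
				samples++;
		}
		/* With NR_ITERS == 111 this prints 28, i.e. (NR_ITERS + 1) / 4 */
		printf("expected samples: %d\n", samples);
		return 0;
	}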
Signed-off-by: Wang Nan <wangnan0@huawei.com>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Alexei Starovoitov <ast@plumgrid.com>
Cc: Brendan Gregg <brendan.d.gregg@gmail.com>
Cc: Daniel Borkmann <daniel@iogearbox.net>
Cc: David Ahern <dsahern@gmail.com>
Cc: He Kuang <hekuang@huawei.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Kaixu Xia <xiakaixu@huawei.com>
Cc: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Zefan Li <lizefan@huawei.com>
Cc: pi3orama@163.com
Link: http://lkml.kernel.org/n/1441519491-149819-1-git-send-email-wangnan0@huawei.com
---
tools/perf/tests/Build | 9 ++-
tools/perf/tests/bpf-script-test-prologue.c | 35 +++++++++++
tools/perf/tests/bpf.c | 93 +++++++++++++++++++++++------
tools/perf/tests/llvm.c | 5 ++
tools/perf/tests/llvm.h | 8 +++
5 files changed, 130 insertions(+), 20 deletions(-)
create mode 100644 tools/perf/tests/bpf-script-test-prologue.c
diff --git a/tools/perf/tests/Build b/tools/perf/tests/Build
index 2bd5f37..3e98a97 100644
--- a/tools/perf/tests/Build
+++ b/tools/perf/tests/Build
@@ -32,7 +32,7 @@ perf-y += sample-parsing.o
perf-y += parse-no-sample-id-all.o
perf-y += kmod-path.o
perf-y += thread-map.o
-perf-y += llvm.o llvm-src-base.o llvm-src-kbuild.o
+perf-y += llvm.o llvm-src-base.o llvm-src-kbuild.o llvm-src-prologue.o
perf-y += bpf.o
perf-y += topology.o
@@ -50,6 +50,13 @@ $(OUTPUT)tests/llvm-src-kbuild.c: tests/bpf-script-test-kbuild.c
$(Q)sed -e 's/"/\\"/g' -e 's/\(.*\)/"\1\\n"/g' $< >> $@
$(Q)echo ';' >> $@
+$(OUTPUT)tests/llvm-src-prologue.c: tests/bpf-script-test-prologue.c
+ $(call rule_mkdir)
+ $(Q)echo '#include <tests/llvm.h>' > $@
+ $(Q)echo 'const char test_llvm__bpf_test_prologue_prog[] =' >> $@
+ $(Q)sed -e 's/"/\\"/g' -e 's/\(.*\)/"\1\\n"/g' $< >> $@
+ $(Q)echo ';' >> $@
+
perf-$(CONFIG_X86) += perf-time-to-tsc.o
ifdef CONFIG_AUXTRACE
perf-$(CONFIG_X86) += insn-x86.o
diff --git a/tools/perf/tests/bpf-script-test-prologue.c b/tools/perf/tests/bpf-script-test-prologue.c
new file mode 100644
index 0000000..7230e62
--- /dev/null
+++ b/tools/perf/tests/bpf-script-test-prologue.c
@@ -0,0 +1,35 @@
+/*
+ * bpf-script-test-prologue.c
+ * Test BPF prologue
+ */
+#ifndef LINUX_VERSION_CODE
+# error Need LINUX_VERSION_CODE
+# error Example: for 4.2 kernel, put 'clang-opt="-DLINUX_VERSION_CODE=0x40200" into llvm section of ~/.perfconfig'
+#endif
+#define SEC(NAME) __attribute__((section(NAME), used))
+
+#include <uapi/linux/fs.h>
+
+#define FMODE_READ 0x1
+#define FMODE_WRITE 0x2
+
+static void (*bpf_trace_printk)(const char *fmt, int fmt_size, ...) =
+ (void *) 6;
+
+SEC("func=null_lseek file->f_mode offset orig")
+int bpf_func__null_lseek(void *ctx, int err, unsigned long f_mode,
+ unsigned long offset, unsigned long orig)
+{
+ if (err)
+ return 0;
+ if (f_mode & FMODE_WRITE)
+ return 0;
+ if (offset & 1)
+ return 0;
+ if (orig == SEEK_CUR)
+ return 0;
+ return 1;
+}
+
+char _license[] SEC("license") = "GPL";
+int _version SEC("version") = LINUX_VERSION_CODE;
diff --git a/tools/perf/tests/bpf.c b/tools/perf/tests/bpf.c
index 64aaab68..6305b3d 100644
--- a/tools/perf/tests/bpf.c
+++ b/tools/perf/tests/bpf.c
@@ -19,14 +19,37 @@ static int epoll_pwait_loop(void)
return 0;
}
-static int prepare_bpf(void *obj_buf, size_t obj_buf_sz)
+#ifdef HAVE_BPF_PROLOGUE
+
+static int llseek_loop(void)
+{
+ int fds[2], i;
+
+ fds[0] = open("/dev/null", O_RDONLY);
+ fds[1] = open("/dev/null", O_RDWR);
+
+ if (fds[0] < 0 || fds[1] < 0)
+ return -1;
+
+ for (i = 0; i < NR_ITERS; i++) {
+ lseek(fds[i % 2], i, (i / 2) % 2 ? SEEK_CUR : SEEK_SET);
+ lseek(fds[(i + 1) % 2], i, (i / 2) % 2 ? SEEK_CUR : SEEK_SET);
+ }
+ close(fds[0]);
+ close(fds[1]);
+ return 0;
+}
+
+#endif
+
+static int prepare_bpf(const char *name, void *obj_buf, size_t obj_buf_sz)
{
int err;
char errbuf[BUFSIZ];
- err = bpf__prepare_load_buffer(obj_buf, obj_buf_sz, NULL);
+ err = bpf__prepare_load_buffer(obj_buf, obj_buf_sz, name);
if (err) {
- bpf__strerror_prepare_load("[buffer]", false, err, errbuf,
+ bpf__strerror_prepare_load(name, false, err, errbuf,
sizeof(errbuf));
fprintf(stderr, " (%s)", errbuf);
return TEST_FAIL;
@@ -49,7 +72,7 @@ static int prepare_bpf(void *obj_buf, size_t obj_buf_sz)
return 0;
}
-static int do_test(void)
+static int do_test(int (*func)(void), int expect)
{
struct record_opts opts = {
.target = {
@@ -106,7 +129,7 @@ static int do_test(void)
}
perf_evlist__enable(evlist);
- epoll_pwait_loop();
+ (*func)();
perf_evlist__disable(evlist);
for (i = 0; i < evlist->nr_mmaps; i++) {
@@ -120,8 +143,8 @@ static int do_test(void)
}
}
- if (count != (NR_ITERS + 1) / 2) {
- fprintf(stderr, " (filter result incorrect)");
+ if (count != expect) {
+ fprintf(stderr, " (filter result incorrect: %d != %d)", count, expect);
err = -EBADF;
}
@@ -132,30 +155,30 @@ out_delete_evlist:
return 0;
}
-int test__bpf(void)
+static int __test__bpf(int index, const char *name,
+ const char *message_compile,
+ const char *message_load,
+ int (*func)(void), int expect)
{
int err;
void *obj_buf;
size_t obj_buf_sz;
- if (geteuid() != 0) {
- fprintf(stderr, " (try run as root)");
- return TEST_SKIP;
- }
-
- test_llvm__fetch_bpf_obj(&obj_buf, &obj_buf_sz, LLVM_TESTCASE_BASE);
-
+ test_llvm__fetch_bpf_obj(&obj_buf, &obj_buf_sz, index);
if (!obj_buf || !obj_buf_sz) {
if (verbose == 0)
- fprintf(stderr, " (fix 'perf test LLVM' first)");
+ fprintf(stderr, " (%s)", message_compile);
return TEST_SKIP;
}
- err = prepare_bpf(obj_buf, obj_buf_sz);
- if (err)
+ err = prepare_bpf(name, obj_buf, obj_buf_sz);
+ if (err) {
+ if ((verbose == 0) && (message_load[0] != '\0'))
+ fprintf(stderr, " (%s)", message_load);
goto out;
+ }
- err = do_test();
+ err = do_test(func, expect);
if (err)
goto out;
out:
@@ -166,6 +189,38 @@ out:
return 0;
}
+int test__bpf(void)
+{
+ int err;
+
+ if (geteuid() != 0) {
+ fprintf(stderr, " (try run as root)");
+ return TEST_SKIP;
+ }
+
+ err = __test__bpf(LLVM_TESTCASE_BASE,
+ "[basic_bpf_test]",
+ "fix 'perf test LLVM' first",
+ "load bpf object failed",
+ &epoll_pwait_loop,
+ (NR_ITERS + 1) / 2);
+ if (err)
+ return err;
+
+#ifdef HAVE_BPF_PROLOGUE
+ err = __test__bpf(LLVM_TESTCASE_BPF_PROLOGUE,
+ "[bpf_prologue_test]",
+ "fix kbuild first",
+ "check your vmlinux setting?",
+ &llseek_loop,
+ (NR_ITERS + 1) / 4);
+ return err;
+#else
+ fprintf(stderr, " (skip BPF prologue test)");
+ return TEST_OK;
+#endif
+}
+
#else
int test__bpf(void)
{
diff --git a/tools/perf/tests/llvm.c b/tools/perf/tests/llvm.c
index 75cd99f..e722e8a 100644
--- a/tools/perf/tests/llvm.c
+++ b/tools/perf/tests/llvm.c
@@ -22,6 +22,11 @@ struct llvm_testcase {
[LLVM_TESTCASE_KBUILD] = {.source = test_llvm__bpf_test_kbuild_prog,
.errmsg = "llvm.kbuild-dir can be fixed",
.tried = false},
+ /* Don't output if this one fail. */
+ [LLVM_TESTCASE_BPF_PROLOGUE] = {
+ .source = test_llvm__bpf_test_prologue_prog,
+ .errmsg = "failed for unknown reason",
+ .tried = false},
{.source = NULL}
};
diff --git a/tools/perf/tests/llvm.h b/tools/perf/tests/llvm.h
index 78ec01d..c00c1be 100644
--- a/tools/perf/tests/llvm.h
+++ b/tools/perf/tests/llvm.h
@@ -10,10 +10,18 @@ struct test_llvm__bpf_result {
extern const char test_llvm__bpf_prog[];
extern const char test_llvm__bpf_test_kbuild_prog[];
+extern const char test_llvm__bpf_test_prologue_prog[];
enum test_llvm__testcase {
LLVM_TESTCASE_BASE,
LLVM_TESTCASE_KBUILD,
+ /*
+ * We must put LLVM_TESTCASE_BPF_PROLOGUE after
+ * LLVM_TESTCASE_KBUILD, so if kbuild test failed,
+ * don't need to try this one, because it depend on
+ * kernel header.
+ */
+ LLVM_TESTCASE_BPF_PROLOGUE,
NR_LLVM_TESTCASES,
};
void test_llvm__fetch_bpf_obj(void **p_obj_buf, size_t *p_obj_buf_sz, int index);
--
2.1.0
^ permalink raw reply related [flat|nested] 35+ messages in thread
* Re: [PATCH 19/27] perf tools: Introduce regs_query_register_offset() for x86
2015-09-06 7:13 ` [PATCH 19/27] perf tools: Introduce regs_query_register_offset() for x86 Wang Nan
@ 2015-09-14 21:37 ` Arnaldo Carvalho de Melo
2015-09-15 1:36 ` Wangnan (F)
2015-09-16 7:30 ` [tip:perf/core] perf tools: Introduce regs_query_register_offset() " tip-bot for Wang Nan
1 sibling, 1 reply; 35+ messages in thread
From: Arnaldo Carvalho de Melo @ 2015-09-14 21:37 UTC (permalink / raw)
To: Wang Nan
Cc: ast, masami.hiramatsu.pt, namhyung, a.p.zijlstra, brendan.d.gregg,
acme, daniel, dsahern, hekuang, jolsa, lizefan, paulus, xiakaixu,
pi3orama, linux-kernel
On Sun, Sep 06, 2015 at 07:13:35AM +0000, Wang Nan wrote:
> regs_query_register_offset() is a helper function which converts
> register name like "%rax" to offset of a register in 'struct pt_regs',
> which is required by BPF prologue generator. Since the function is
> identical, try to reuse the code in arch/x86/kernel/ptrace.c.
>
> Comment inside dwarf-regs.c list the differences between this
> implementation and kernel code.
>
> get_arch_regstr() switches to regoffset_table and the old string table
> is dropped.
Trying to cherry pick this one, but found this problem, trying to fix by
adding the prototype somewhere...
[acme@zoo linux]$ m
make: Entering directory '/home/git/linux/tools/perf'
BUILD: Doing 'make -j4' parallel build
CC /tmp/build/perf/arch/x86/util/dwarf-regs.o
CC /tmp/build/perf/arch/x86/util/intel-pt.o
arch/x86/util/dwarf-regs.c:122:5: error: no previous prototype for
‘regs_query_register_offset’ [-Werror=missing-prototypes]
int regs_query_register_offset(const char *name)
^
cc1: all warnings being treated as errors
/home/git/linux/tools/build/Makefile.build:70: recipe for target
'/tmp/build/perf/arch/x86/util/dwarf-regs.o' failed
make[5]: *** [/tmp/build/perf/arch/x86/util/dwarf-regs.o] Error 1
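The fix is essentially a missing declaration; the tip-bot commit for the
infrastructure patch, further down in this thread, adds it to
tools/perf/util/include/dwarf-regs.h:
	#ifdef HAVE_ARCH_REGS_QUERY_REGISTER_OFFSET
	int regs_query_register_offset(const char *name);
	#endif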
^ permalink raw reply [flat|nested] 35+ messages in thread
* Re: [PATCH 19/27] perf tools: Introduce regs_query_register_offset() for x86
2015-09-14 21:37 ` Arnaldo Carvalho de Melo
@ 2015-09-15 1:36 ` Wangnan (F)
0 siblings, 0 replies; 35+ messages in thread
From: Wangnan (F) @ 2015-09-15 1:36 UTC (permalink / raw)
To: Arnaldo Carvalho de Melo
Cc: ast, masami.hiramatsu.pt, namhyung, a.p.zijlstra, brendan.d.gregg,
acme, daniel, dsahern, hekuang, jolsa, lizefan, paulus, xiakaixu,
pi3orama, linux-kernel
On 2015/9/15 5:37, Arnaldo Carvalho de Melo wrote:
> On Sun, Sep 06, 2015 at 07:13:35AM +0000, Wang Nan wrote:
>> regs_query_register_offset() is a helper function which converts
>> register name like "%rax" to offset of a register in 'struct pt_regs',
>> which is required by BPF prologue generator. Since the function is
>> identical, try to reuse the code in arch/x86/kernel/ptrace.c.
>>
>> Comment inside dwarf-regs.c list the differences between this
>> implementation and kernel code.
>>
>> get_arch_regstr() switches to regoffset_table and the old string table
>> is dropped.
> Trying to cherry pick this one, but found this problem, trying to fix by
> adding the prototype somewhere...
>
>
> [acme@zoo linux]$ m
> make: Entering directory '/home/git/linux/tools/perf'
> BUILD: Doing 'make -j4' parallel build
> CC /tmp/build/perf/arch/x86/util/dwarf-regs.o
> CC /tmp/build/perf/arch/x86/util/intel-pt.o
> arch/x86/util/dwarf-regs.c:122:5: error: no previous prototype for
> ‘regs_query_register_offset’ [-Werror=missing-prototypes]
> int regs_query_register_offset(const char *name)
> ^
> cc1: all warnings being treated as errors
> /home/git/linux/tools/build/Makefile.build:70: recipe for target
> '/tmp/build/perf/arch/x86/util/dwarf-regs.o' failed
> make[5]: *** [/tmp/build/perf/arch/x86/util/dwarf-regs.o] Error 1
This patch depends on patch 18/27: "perf tools: Add BPF_PROLOGUE
config options for further patches", because I wanted to first create
the infrastructure and then add regs_query_register_offset() arch by arch.
I think splitting 18/27 into two patches would be better: the first one
adds HAVE_ARCH_REGS_QUERY_REGISTER_OFFSET, and the second one handles the
BPF prologue.
Thanks.
^ permalink raw reply [flat|nested] 35+ messages in thread
* [tip:perf/core] perf tools: regs_query_register_offset() infrastructure
2015-09-06 7:13 ` [PATCH 18/27] perf tools: Add BPF_PROLOGUE config options for further patches Wang Nan
@ 2015-09-16 7:30 ` tip-bot for Wang Nan
0 siblings, 0 replies; 35+ messages in thread
From: tip-bot for Wang Nan @ 2015-09-16 7:30 UTC (permalink / raw)
To: linux-tip-commits
Cc: xiakaixu, masami.hiramatsu.pt, tglx, brendan.d.gregg, mingo,
dsahern, linux-kernel, jolsa, hpa, hekuang, acme, namhyung, ast,
lizefan, daniel, wangnan0, paulus, a.p.zijlstra
Commit-ID: 63ab024a5b6f295ca17a293ad81b7c728f49a89a
Gitweb: http://git.kernel.org/tip/63ab024a5b6f295ca17a293ad81b7c728f49a89a
Author: Wang Nan <wangnan0@huawei.com>
AuthorDate: Mon, 14 Sep 2015 23:02:49 -0300
Committer: Arnaldo Carvalho de Melo <acme@redhat.com>
CommitDate: Tue, 15 Sep 2015 09:48:33 -0300
perf tools: regs_query_register_offset() infrastructure
regs_query_register_offset() is a helper function which converts a
register name like "%rax" to the offset of that register in 'struct
pt_regs', which is required by the BPF prologue generator.
PERF_HAVE_ARCH_REGS_QUERY_REGISTER_OFFSET indicates that an architecture
supports converting the name of a register to its offset in 'struct
pt_regs'.
HAVE_ARCH_REGS_QUERY_REGISTER_OFFSET is introduced as the corresponding
CFLAGS definition for PERF_HAVE_ARCH_REGS_QUERY_REGISTER_OFFSET.
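For reference, an architecture opts in from its Makefile, as the x86 patch
further down in this thread does:
	# tools/perf/arch/x86/Makefile
	PERF_HAVE_ARCH_REGS_QUERY_REGISTER_OFFSET := 1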
Signed-off-by: Wang Nan <wangnan0@huawei.com>
Acked-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
Cc: Alexei Starovoitov <ast@plumgrid.com>
Cc: Brendan Gregg <brendan.d.gregg@gmail.com>
Cc: Daniel Borkmann <daniel@iogearbox.net>
Cc: David Ahern <dsahern@gmail.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Kaixu Xia <xiakaixu@huawei.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Zefan Li <lizefan@huawei.com>
Cc: pi3orama@163.com
Link: http://lkml.kernel.org/r/1441523623-152703-19-git-send-email-wangnan0@huawei.com
Signed-off-by: He Kuang <hekuang@huawei.com>
[ Extracted from eBPF patches ]
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
---
tools/perf/config/Makefile | 4 ++++
tools/perf/util/include/dwarf-regs.h | 8 ++++++++
2 files changed, 12 insertions(+)
diff --git a/tools/perf/config/Makefile b/tools/perf/config/Makefile
index 827557f..0435ac4 100644
--- a/tools/perf/config/Makefile
+++ b/tools/perf/config/Makefile
@@ -109,6 +109,10 @@ endif
# include ARCH specific config
-include $(src-perf)/arch/$(ARCH)/Makefile
+ifdef PERF_HAVE_ARCH_REGS_QUERY_REGISTER_OFFSET
+ CFLAGS += -DHAVE_ARCH_REGS_QUERY_REGISTER_OFFSET
+endif
+
include $(src-perf)/config/utilities.mak
ifeq ($(call get-executable,$(FLEX)),)
diff --git a/tools/perf/util/include/dwarf-regs.h b/tools/perf/util/include/dwarf-regs.h
index 8f14965..07c644e 100644
--- a/tools/perf/util/include/dwarf-regs.h
+++ b/tools/perf/util/include/dwarf-regs.h
@@ -5,4 +5,12 @@
const char *get_arch_regstr(unsigned int n);
#endif
+#ifdef HAVE_ARCH_REGS_QUERY_REGISTER_OFFSET
+/*
+ * Arch should support fetching the offset of a register in pt_regs
+ * by its name. See kernel's regs_query_register_offset in
+ * arch/xxx/kernel/ptrace.c.
+ */
+int regs_query_register_offset(const char *name);
+#endif
#endif
^ permalink raw reply related [flat|nested] 35+ messages in thread
* [tip:perf/core] perf tools: Introduce regs_query_register_offset() for x86
2015-09-06 7:13 ` [PATCH 19/27] perf tools: Introduce regs_query_register_offset() for x86 Wang Nan
2015-09-14 21:37 ` Arnaldo Carvalho de Melo
@ 2015-09-16 7:30 ` tip-bot for Wang Nan
1 sibling, 0 replies; 35+ messages in thread
From: tip-bot for Wang Nan @ 2015-09-16 7:30 UTC (permalink / raw)
To: linux-tip-commits
Cc: wangnan0, ast, hpa, tglx, acme, jolsa, mingo, dsahern, namhyung,
masami.hiramatsu.pt, lizefan, daniel, a.p.zijlstra, hekuang,
xiakaixu, paulus, linux-kernel, brendan.d.gregg
Commit-ID: bbbe6bf6037d77816c4a19aaf35f4cecf662b49a
Gitweb: http://git.kernel.org/tip/bbbe6bf6037d77816c4a19aaf35f4cecf662b49a
Author: Wang Nan <wangnan0@huawei.com>
AuthorDate: Sun, 6 Sep 2015 07:13:35 +0000
Committer: Arnaldo Carvalho de Melo <acme@redhat.com>
CommitDate: Tue, 15 Sep 2015 09:48:33 -0300
perf tools: Introduce regs_query_register_offset() for x86
regs_query_register_offset() is a helper function which converts a
register name like "%rax" to the offset of that register in 'struct
pt_regs', which is required by the BPF prologue generator. Since the
function is identical, reuse the code from arch/x86/kernel/ptrace.c.
The comment inside dwarf-regs.c lists the differences between this
implementation and the kernel code.
get_arch_regstr() switches to regoffset_table and the old string table
is dropped.
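A short usage sketch, assuming an x86_64 build of perf (illustrative only,
not part of the patch):
	#include <dwarf-regs.h>

	/*
	 * With the table below, "%ax" resolves to
	 * offsetof(struct pt_regs, rax) on x86_64, while an unknown
	 * register name returns -EINVAL.
	 */
	static int ax_offset(void)
	{
		return regs_query_register_offset("%ax");
	}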
Signed-off-by: He Kuang <hekuang@huawei.com>
Acked-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
Cc: Alexei Starovoitov <ast@plumgrid.com>
Cc: Brendan Gregg <brendan.d.gregg@gmail.com>
Cc: Daniel Borkmann <daniel@iogearbox.net>
Cc: David Ahern <dsahern@gmail.com>
Cc: He Kuang <hekuang@huawei.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Kaixu Xia <xiakaixu@huawei.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Zefan Li <lizefan@huawei.com>
Cc: pi3orama@163.com
Link: http://lkml.kernel.org/r/1441523623-152703-20-git-send-email-wangnan0@huawei.com
Signed-off-by: Wang Nan <wangnan0@huawei.com>
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
---
tools/perf/arch/x86/Makefile | 1 +
tools/perf/arch/x86/util/dwarf-regs.c | 122 ++++++++++++++++++++++++----------
2 files changed, 89 insertions(+), 34 deletions(-)
diff --git a/tools/perf/arch/x86/Makefile b/tools/perf/arch/x86/Makefile
index 21322e0..09ba923 100644
--- a/tools/perf/arch/x86/Makefile
+++ b/tools/perf/arch/x86/Makefile
@@ -2,3 +2,4 @@ ifndef NO_DWARF
PERF_HAVE_DWARF_REGS := 1
endif
HAVE_KVM_STAT_SUPPORT := 1
+PERF_HAVE_ARCH_REGS_QUERY_REGISTER_OFFSET := 1
diff --git a/tools/perf/arch/x86/util/dwarf-regs.c b/tools/perf/arch/x86/util/dwarf-regs.c
index a08de0a..9223c16 100644
--- a/tools/perf/arch/x86/util/dwarf-regs.c
+++ b/tools/perf/arch/x86/util/dwarf-regs.c
@@ -21,55 +21,109 @@
*/
#include <stddef.h>
+#include <errno.h> /* for EINVAL */
+#include <string.h> /* for strcmp */
+#include <linux/ptrace.h> /* for struct pt_regs */
+#include <linux/kernel.h> /* for offsetof */
#include <dwarf-regs.h>
/*
- * Generic dwarf analysis helpers
+ * See arch/x86/kernel/ptrace.c.
+ * Different from it:
+ *
+ * - Since struct pt_regs is defined differently for user and kernel,
+ * but we want to use 'ax, bx' instead of 'rax, rbx' (which is struct
+ * field name of user's pt_regs), we make REG_OFFSET_NAME to accept
+ * both string name and reg field name.
+ *
+ * - Since accessing x86_32's pt_regs from x86_64 building is difficult
+ * and vise versa, we simply fill offset with -1, so
+ * get_arch_regstr() still works but regs_query_register_offset()
+ * returns error.
+ * The only inconvenience caused by it now is that we are not allowed
+ * to generate BPF prologue for a x86_64 kernel if perf is built for
+ * x86_32. This is really a rare usecase.
+ *
+ * - Order is different from kernel's ptrace.c for get_arch_regstr(). Use
+ * the order defined by dwarf.
*/
-#define X86_32_MAX_REGS 8
-const char *x86_32_regs_table[X86_32_MAX_REGS] = {
- "%ax",
- "%cx",
- "%dx",
- "%bx",
- "$stack", /* Stack address instead of %sp */
- "%bp",
- "%si",
- "%di",
+struct pt_regs_offset {
+ const char *name;
+ int offset;
+};
+
+#define REG_OFFSET_END {.name = NULL, .offset = 0}
+
+#ifdef __x86_64__
+# define REG_OFFSET_NAME_64(n, r) {.name = n, .offset = offsetof(struct pt_regs, r)}
+# define REG_OFFSET_NAME_32(n, r) {.name = n, .offset = -1}
+#else
+# define REG_OFFSET_NAME_64(n, r) {.name = n, .offset = -1}
+# define REG_OFFSET_NAME_32(n, r) {.name = n, .offset = offsetof(struct pt_regs, r)}
+#endif
+
+static const struct pt_regs_offset x86_32_regoffset_table[] = {
+ REG_OFFSET_NAME_32("%ax", eax),
+ REG_OFFSET_NAME_32("%cx", ecx),
+ REG_OFFSET_NAME_32("%dx", edx),
+ REG_OFFSET_NAME_32("%bx", ebx),
+ REG_OFFSET_NAME_32("$stack", esp), /* Stack address instead of %sp */
+ REG_OFFSET_NAME_32("%bp", ebp),
+ REG_OFFSET_NAME_32("%si", esi),
+ REG_OFFSET_NAME_32("%di", edi),
+ REG_OFFSET_END,
};
-#define X86_64_MAX_REGS 16
-const char *x86_64_regs_table[X86_64_MAX_REGS] = {
- "%ax",
- "%dx",
- "%cx",
- "%bx",
- "%si",
- "%di",
- "%bp",
- "%sp",
- "%r8",
- "%r9",
- "%r10",
- "%r11",
- "%r12",
- "%r13",
- "%r14",
- "%r15",
+static const struct pt_regs_offset x86_64_regoffset_table[] = {
+ REG_OFFSET_NAME_64("%ax", rax),
+ REG_OFFSET_NAME_64("%dx", rdx),
+ REG_OFFSET_NAME_64("%cx", rcx),
+ REG_OFFSET_NAME_64("%bx", rbx),
+ REG_OFFSET_NAME_64("%si", rsi),
+ REG_OFFSET_NAME_64("%di", rdi),
+ REG_OFFSET_NAME_64("%bp", rbp),
+ REG_OFFSET_NAME_64("%sp", rsp),
+ REG_OFFSET_NAME_64("%r8", r8),
+ REG_OFFSET_NAME_64("%r9", r9),
+ REG_OFFSET_NAME_64("%r10", r10),
+ REG_OFFSET_NAME_64("%r11", r11),
+ REG_OFFSET_NAME_64("%r12", r12),
+ REG_OFFSET_NAME_64("%r13", r13),
+ REG_OFFSET_NAME_64("%r14", r14),
+ REG_OFFSET_NAME_64("%r15", r15),
+ REG_OFFSET_END,
};
/* TODO: switching by dwarf address size */
#ifdef __x86_64__
-#define ARCH_MAX_REGS X86_64_MAX_REGS
-#define arch_regs_table x86_64_regs_table
+#define regoffset_table x86_64_regoffset_table
#else
-#define ARCH_MAX_REGS X86_32_MAX_REGS
-#define arch_regs_table x86_32_regs_table
+#define regoffset_table x86_32_regoffset_table
#endif
+/* Minus 1 for the ending REG_OFFSET_END */
+#define ARCH_MAX_REGS ((sizeof(regoffset_table) / sizeof(regoffset_table[0])) - 1)
+
/* Return architecture dependent register string (for kprobe-tracer) */
const char *get_arch_regstr(unsigned int n)
{
- return (n < ARCH_MAX_REGS) ? arch_regs_table[n] : NULL;
+ return (n < ARCH_MAX_REGS) ? regoffset_table[n].name : NULL;
+}
+
+/* Reuse code from arch/x86/kernel/ptrace.c */
+/**
+ * regs_query_register_offset() - query register offset from its name
+ * @name: the name of a register
+ *
+ * regs_query_register_offset() returns the offset of a register in struct
+ * pt_regs from its name. If the name is invalid, this returns -EINVAL;
+ */
+int regs_query_register_offset(const char *name)
+{
+ const struct pt_regs_offset *roff;
+ for (roff = regoffset_table; roff->name != NULL; roff++)
+ if (!strcmp(roff->name, name))
+ return roff->offset;
+ return -EINVAL;
}
^ permalink raw reply related [flat|nested] 35+ messages in thread
* Re: [PATCH 05/27] perf record, bpf: Parse and create probe points for BPF programs
2015-09-06 7:13 ` [PATCH 05/27] perf record, bpf: Parse and create probe points for BPF programs Wang Nan
@ 2015-09-16 21:43 ` Arnaldo Carvalho de Melo
2015-09-17 2:04 ` Wangnan (F)
0 siblings, 1 reply; 35+ messages in thread
From: Arnaldo Carvalho de Melo @ 2015-09-16 21:43 UTC (permalink / raw)
To: Wang Nan
Cc: ast, masami.hiramatsu.pt, namhyung, a.p.zijlstra, brendan.d.gregg,
daniel, dsahern, hekuang, jolsa, lizefan, paulus, xiakaixu,
pi3orama, linux-kernel, acme
On Sun, Sep 06, 2015 at 07:13:21AM +0000, Wang Nan wrote:
> This patch introduces bpf__{un,}probe() functions to enable callers to
> create kprobe points based on section names of BPF programs. It parses
Ok, so now I see that when we do:
perf record --event bpf_prog.o usleep 1
We are potentially inserting multiple events, one for each eBPF section
found in bpf_prog.o, right?
I.e. multiple evsels, so, when parsing, we should create the kprobes and
from there then create evsels and insert in the normal list that would
eventually get spliced to the evlist being processed, I think.
So, and reading that comment on 4/27, we need to allow that to happen at
parse_events() time, i.e. avoid adding a dummy evsel that would then,
later, be "expanded" into potentially multiple non-dummy evsels.
I put the first 3 patches, with some adjustments, in my perf/ebpf
branch, will try to do the above, i.e. do away with that dummy, allow
parsing the bpf_prog.o sections at parse_events() time.
- Arnaldo
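A rough sketch of the flow described above, at parse_events() time;
add_bpf_event() is a hypothetical helper, and only
bpf_object__for_each_program() and bpf_program__title() come from the
libbpf patches in this series:
	static int add_bpf_events(struct bpf_object *obj, struct list_head *list)
	{
		struct bpf_program *prog;

		bpf_object__for_each_program(prog, obj) {
			const char *title = bpf_program__title(prog, false);

			/*
			 * Parse 'title' as a probe definition, create the
			 * kprobe, then create one evsel for it and add it to
			 * 'list', which later gets spliced into the evlist
			 * being processed.
			 */
			if (add_bpf_event(list, title))	/* hypothetical */
				return -EINVAL;
		}
		return 0;
	}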
> the section names of each eBPF program and creates corresponding 'struct
> perf_probe_event' structures. The parse_perf_probe_command() function is
> used to do the main parsing work.
>
> Parsing result is stored into an array to satisify
> {convert,apply}_perf_probe_events(). It accepts an array of
> 'struct perf_probe_event' and do all the work in one call.
>
> Define PERF_BPF_PROBE_GROUP as "perf_bpf_probe", which will be used as
> the group name of all eBPF probing points.
>
> probe_conf.max_probes is set to MAX_PROBES to support glob matching.
>
> Before ending of bpf__probe(), data in each 'struct perf_probe_event' is
> cleaned. Things will be changed by following patches because they need
> 'struct probe_trace_event' in them,
>
> Signed-off-by: Wang Nan <wangnan0@huawei.com>
> Cc: Alexei Starovoitov <ast@plumgrid.com>
> Cc: Brendan Gregg <brendan.d.gregg@gmail.com>
> Cc: Daniel Borkmann <daniel@iogearbox.net>
> Cc: David Ahern <dsahern@gmail.com>
> Cc: He Kuang <hekuang@huawei.com>
> Cc: Jiri Olsa <jolsa@kernel.org>
> Cc: Kaixu Xia <xiakaixu@huawei.com>
> Cc: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
> Cc: Namhyung Kim <namhyung@kernel.org>
> Cc: Paul Mackerras <paulus@samba.org>
> Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
> Cc: Zefan Li <lizefan@huawei.com>
> Cc: pi3orama@163.com
> Link: http://lkml.kernel.org/n/1436445342-1402-21-git-send-email-wangnan0@huawei.com
> Link: http://lkml.kernel.org/n/1436445342-1402-23-git-send-email-wangnan0@huawei.com
> [Merged by two patches
> wangnan: Utilize new perf probe API {convert,apply,cleanup}_perf_probe_events()
> ]
> Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
> ---
> tools/perf/builtin-record.c | 19 +++++-
> tools/perf/util/bpf-loader.c | 149 +++++++++++++++++++++++++++++++++++++++++++
> tools/perf/util/bpf-loader.h | 13 ++++
> 3 files changed, 180 insertions(+), 1 deletion(-)
>
> diff --git a/tools/perf/builtin-record.c b/tools/perf/builtin-record.c
> index f886706..b56109f 100644
> --- a/tools/perf/builtin-record.c
> +++ b/tools/perf/builtin-record.c
> @@ -1141,7 +1141,23 @@ int cmd_record(int argc, const char **argv, const char *prefix __maybe_unused)
> if (err)
> goto out_bpf_clear;
>
> - err = -ENOMEM;
> + /*
> + * bpf__probe must be called before symbol__init() because we
> + * need init_symbol_maps. If called after symbol__init,
> + * symbol_conf.sort_by_name won't take effect.
> + *
> + * bpf__unprobe() is safe even if bpf__probe() failed, and it
> + * also calls symbol__init. Therefore, goto out_symbol_exit
> + * is safe when probe failed.
> + */
> + err = bpf__probe();
> + if (err) {
> + bpf__strerror_probe(err, errbuf, sizeof(errbuf));
> +
> + pr_err("Probing at events in BPF object failed.\n");
> + pr_err("\t%s\n", errbuf);
> + goto out_symbol_exit;
> + }
>
> symbol__init(NULL);
>
> @@ -1202,6 +1218,7 @@ out_symbol_exit:
> perf_evlist__delete(rec->evlist);
> symbol__exit();
> auxtrace_record__free(rec->itr);
> + bpf__unprobe();
> out_bpf_clear:
> bpf__clear();
> return err;
> diff --git a/tools/perf/util/bpf-loader.c b/tools/perf/util/bpf-loader.c
> index 88531ea..10505cb 100644
> --- a/tools/perf/util/bpf-loader.c
> +++ b/tools/perf/util/bpf-loader.c
> @@ -9,6 +9,8 @@
> #include "perf.h"
> #include "debug.h"
> #include "bpf-loader.h"
> +#include "probe-event.h"
> +#include "probe-finder.h"
>
> #define DEFINE_PRINT_FN(name, level) \
> static int libbpf_##name(const char *fmt, ...) \
> @@ -28,6 +30,58 @@ DEFINE_PRINT_FN(debug, 1)
>
> static bool libbpf_initialized;
>
> +static int
> +config_bpf_program(struct bpf_program *prog, struct perf_probe_event *pev)
> +{
> + const char *config_str;
> + int err;
> +
> + config_str = bpf_program__title(prog, false);
> + if (!config_str) {
> + pr_debug("bpf: unable to get title for program\n");
> + return -EINVAL;
> + }
> +
> + pr_debug("bpf: config program '%s'\n", config_str);
> + err = parse_perf_probe_command(config_str, pev);
> + if (err < 0) {
> + pr_debug("bpf: '%s' is not a valid config string\n",
> + config_str);
> + /* parse failed, don't need clear pev. */
> + return -EINVAL;
> + }
> +
> + if (pev->group && strcmp(pev->group, PERF_BPF_PROBE_GROUP)) {
> + pr_debug("bpf: '%s': group for event is set and not '%s'.\n",
> + config_str, PERF_BPF_PROBE_GROUP);
> + err = -EINVAL;
> + goto errout;
> + } else if (!pev->group)
> + pev->group = strdup(PERF_BPF_PROBE_GROUP);
> +
> + if (!pev->group) {
> + pr_debug("bpf: strdup failed\n");
> + err = -ENOMEM;
> + goto errout;
> + }
> +
> + if (!pev->event) {
> + pr_debug("bpf: '%s': event name is missing\n",
> + config_str);
> + err = -EINVAL;
> + goto errout;
> + }
> +
> + pr_debug("bpf: config '%s' is ok\n", config_str);
> +
> + return 0;
> +
> +errout:
> + if (pev)
> + clear_perf_probe_event(pev);
> + return err;
> +}
> +
> int bpf__prepare_load(const char *filename)
> {
> struct bpf_object *obj;
> @@ -59,6 +113,90 @@ void bpf__clear(void)
> bpf_object__close(obj);
> }
>
> +static bool is_probed;
> +
> +int bpf__unprobe(void)
> +{
> + struct strfilter *delfilter;
> + int ret;
> +
> + if (!is_probed)
> + return 0;
> +
> + delfilter = strfilter__new(PERF_BPF_PROBE_GROUP ":*", NULL);
> + if (!delfilter) {
> + pr_debug("Failed to create delfilter when unprobing\n");
> + return -ENOMEM;
> + }
> +
> + ret = del_perf_probe_events(delfilter);
> + strfilter__delete(delfilter);
> + if (ret < 0 && is_probed)
> + pr_debug("Error: failed to delete events: %s\n",
> + strerror(-ret));
> + else
> + is_probed = false;
> + return ret < 0 ? ret : 0;
> +}
> +
> +int bpf__probe(void)
> +{
> + int err, nr_events = 0;
> + struct bpf_object *obj, *tmp;
> + struct bpf_program *prog;
> + struct perf_probe_event *pevs;
> +
> + pevs = calloc(MAX_PROBES, sizeof(pevs[0]));
> + if (!pevs)
> + return -ENOMEM;
> +
> + bpf_object__for_each_safe(obj, tmp) {
> + bpf_object__for_each_program(prog, obj) {
> + err = config_bpf_program(prog, &pevs[nr_events++]);
> + if (err < 0)
> + goto out;
> +
> + if (nr_events >= MAX_PROBES) {
> + pr_debug("Too many (more than %d) events\n",
> + MAX_PROBES);
> + err = -ERANGE;
> + goto out;
> + };
> + }
> + }
> +
> + if (!nr_events) {
> + /*
> + * Don't call following code to prevent perf report failure
> + * init_symbol_maps can fail when perf is started by non-root
> + * user, which prevent non-root user track normal events even
> + * without using BPF, because bpf__probe() is called by
> + * 'perf record' unconditionally.
> + */
> + err = 0;
> + goto out;
> + }
> +
> + probe_conf.max_probes = MAX_PROBES;
> + /* Let convert_perf_probe_events generates probe_trace_event (tevs) */
> + err = convert_perf_probe_events(pevs, nr_events);
> + if (err < 0) {
> + pr_debug("bpf_probe: failed to convert perf probe events");
> + goto out;
> + }
> +
> + err = apply_perf_probe_events(pevs, nr_events);
> + if (err < 0)
> + pr_debug("bpf probe: failed to probe events\n");
> + else
> + is_probed = true;
> +out_cleanup:
> + cleanup_perf_probe_events(pevs, nr_events);
> +out:
> + free(pevs);
> + return err < 0 ? err : 0;
> +}
> +
> #define bpf__strerror_head(err, buf, size) \
> char sbuf[STRERR_BUFSIZE], *emsg;\
> if (!size)\
> @@ -90,3 +228,14 @@ int bpf__strerror_prepare_load(const char *filename, int err,
> bpf__strerror_end(buf, size);
> return 0;
> }
> +
> +int bpf__strerror_probe(int err, char *buf, size_t size)
> +{
> + bpf__strerror_head(err, buf, size);
> + bpf__strerror_entry(ERANGE, "Too many (more than %d) events",
> + MAX_PROBES);
> + bpf__strerror_entry(ENOENT, "Selected kprobe point doesn't exist.");
> + bpf__strerror_entry(EEXIST, "Selected kprobe point already exist, try perf probe -d '*'.");
> + bpf__strerror_end(buf, size);
> + return 0;
> +}
> diff --git a/tools/perf/util/bpf-loader.h b/tools/perf/util/bpf-loader.h
> index 12be630..6b09a85 100644
> --- a/tools/perf/util/bpf-loader.h
> +++ b/tools/perf/util/bpf-loader.h
> @@ -9,10 +9,15 @@
> #include <string.h>
> #include "debug.h"
>
> +#define PERF_BPF_PROBE_GROUP "perf_bpf_probe"
> +
> #ifdef HAVE_LIBBPF_SUPPORT
> int bpf__prepare_load(const char *filename);
> int bpf__strerror_prepare_load(const char *filename, int err,
> char *buf, size_t size);
> +int bpf__probe(void);
> +int bpf__unprobe(void);
> +int bpf__strerror_probe(int err, char *buf, size_t size);
>
> void bpf__clear(void);
> #else
> @@ -22,6 +27,8 @@ static inline int bpf__prepare_load(const char *filename __maybe_unused)
> return -1;
> }
>
> +static inline int bpf__probe(void) { return 0; }
> +static inline int bpf__unprobe(void) { return 0; }
> static inline void bpf__clear(void) { }
>
> static inline int
> @@ -43,5 +50,11 @@ bpf__strerror_prepare_load(const char *filename __maybe_unused,
> {
> return __bpf_strerror(buf, size);
> }
> +
> +static inline int bpf__strerror_probe(int err __maybe_unused,
> + char *buf, size_t size)
> +{
> + return __bpf_strerror(buf, size);
> +}
> #endif
> #endif
> --
> 2.1.0
>
> --
> To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at http://vger.kernel.org/majordomo-info.html
> Please read the FAQ at http://www.tux.org/lkml/
^ permalink raw reply [flat|nested] 35+ messages in thread
* Re: [PATCH 05/27] perf record, bpf: Parse and create probe points for BPF programs
2015-09-16 21:43 ` Arnaldo Carvalho de Melo
@ 2015-09-17 2:04 ` Wangnan (F)
0 siblings, 0 replies; 35+ messages in thread
From: Wangnan (F) @ 2015-09-17 2:04 UTC (permalink / raw)
To: Arnaldo Carvalho de Melo
Cc: ast, masami.hiramatsu.pt, namhyung, a.p.zijlstra, brendan.d.gregg,
daniel, dsahern, hekuang, jolsa, lizefan, paulus, xiakaixu,
pi3orama, linux-kernel, acme
On 2015/9/17 5:43, Arnaldo Carvalho de Melo wrote:
> On Sun, Sep 06, 2015 at 07:13:21AM +0000, Wang Nan wrote:
>> This patch introduces bpf__{un,}probe() functions to enable callers to
>> create kprobe points based on section names of BPF programs. It parses
> Ok, so now I see that when we do:
>
> perf record --event bpf_prog.o usleep 1
>
> We are potentially inserting multiple events, one for each eBPF section
> found in bpf_prog.o, right?
Yes.
> I.e. multiple evsels, so, when parsing, we should create the kprobes and
> from there then create evsels and insert in the normal list that would
> eventually get spliced to the evlist being processed, I think.
>
> So, and reading that comment on 4/27, we need to allow that to happen at
> parse_events() time, i.e. avoid adding a dummy evsel that would then,
> later, be "expanded" into potentially multiple non-dummy evsels.
>
> I put the first 3 patches, with some adjustments, in my perf/ebpf
> branch, will try to do the above, i.e. do away with that dummy, allow
> parsing the bpf_prog.o sections at parse_events() time.
Then I suggest a small interface modification in libbpf so that it
returns a 'struct bpf_object *'; after loading, we can then iterate over
each program in that object, create kprobe points and then make evsels.
We need to call convert_perf_probe_events() multiple times, once for
each object. Since Namhyung updated the probing interface, I think this
becomes possible.
I'm glad to see you will be working on this problem. If you have other
urgent business to do, I can also start this work from Sept. 21 and will
have 3 days before my vacation.
Thank you.
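One possible reading of the suggested interface change, sketched as a
prototype only (hypothetical, not what the series implements):
	/* have the load step hand the object back so callers can walk its programs */
	struct bpf_object *bpf__prepare_load(const char *filename);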
> - Arnaldo
>
>> the section names of each eBPF program and creates corresponding 'struct
>> perf_probe_event' structures. The parse_perf_probe_command() function is
>> used to do the main parsing work.
>>
>> Parsing result is stored into an array to satisify
>> {convert,apply}_perf_probe_events(). It accepts an array of
>> 'struct perf_probe_event' and do all the work in one call.
>>
>> Define PERF_BPF_PROBE_GROUP as "perf_bpf_probe", which will be used as
>> the group name of all eBPF probing points.
>>
>> probe_conf.max_probes is set to MAX_PROBES to support glob matching.
>>
>> Before ending of bpf__probe(), data in each 'struct perf_probe_event' is
>> cleaned. Things will be changed by following patches because they need
>> 'struct probe_trace_event' in them,
>>
>> Signed-off-by: Wang Nan <wangnan0@huawei.com>
>> Cc: Alexei Starovoitov <ast@plumgrid.com>
>> Cc: Brendan Gregg <brendan.d.gregg@gmail.com>
>> Cc: Daniel Borkmann <daniel@iogearbox.net>
>> Cc: David Ahern <dsahern@gmail.com>
>> Cc: He Kuang <hekuang@huawei.com>
>> Cc: Jiri Olsa <jolsa@kernel.org>
>> Cc: Kaixu Xia <xiakaixu@huawei.com>
>> Cc: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
>> Cc: Namhyung Kim <namhyung@kernel.org>
>> Cc: Paul Mackerras <paulus@samba.org>
>> Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
>> Cc: Zefan Li <lizefan@huawei.com>
>> Cc: pi3orama@163.com
>> Link: http://lkml.kernel.org/n/1436445342-1402-21-git-send-email-wangnan0@huawei.com
>> Link: http://lkml.kernel.org/n/1436445342-1402-23-git-send-email-wangnan0@huawei.com
>> [Merged by two patches
>> wangnan: Utilize new perf probe API {convert,apply,cleanup}_perf_probe_events()
>> ]
>> Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
>> ---
>> tools/perf/builtin-record.c | 19 +++++-
>> tools/perf/util/bpf-loader.c | 149 +++++++++++++++++++++++++++++++++++++++++++
>> tools/perf/util/bpf-loader.h | 13 ++++
>> 3 files changed, 180 insertions(+), 1 deletion(-)
>>
>> diff --git a/tools/perf/builtin-record.c b/tools/perf/builtin-record.c
>> index f886706..b56109f 100644
>> --- a/tools/perf/builtin-record.c
>> +++ b/tools/perf/builtin-record.c
>> @@ -1141,7 +1141,23 @@ int cmd_record(int argc, const char **argv, const char *prefix __maybe_unused)
>> if (err)
>> goto out_bpf_clear;
>>
>> - err = -ENOMEM;
>> + /*
>> + * bpf__probe must be called before symbol__init() because we
>> + * need init_symbol_maps. If called after symbol__init,
>> + * symbol_conf.sort_by_name won't take effect.
>> + *
>> + * bpf__unprobe() is safe even if bpf__probe() failed, and it
>> + * also calls symbol__init. Therefore, goto out_symbol_exit
>> + * is safe when probe failed.
>> + */
>> + err = bpf__probe();
>> + if (err) {
>> + bpf__strerror_probe(err, errbuf, sizeof(errbuf));
>> +
>> + pr_err("Probing at events in BPF object failed.\n");
>> + pr_err("\t%s\n", errbuf);
>> + goto out_symbol_exit;
>> + }
>>
>> symbol__init(NULL);
>>
>> @@ -1202,6 +1218,7 @@ out_symbol_exit:
>> perf_evlist__delete(rec->evlist);
>> symbol__exit();
>> auxtrace_record__free(rec->itr);
>> + bpf__unprobe();
>> out_bpf_clear:
>> bpf__clear();
>> return err;
>> diff --git a/tools/perf/util/bpf-loader.c b/tools/perf/util/bpf-loader.c
>> index 88531ea..10505cb 100644
>> --- a/tools/perf/util/bpf-loader.c
>> +++ b/tools/perf/util/bpf-loader.c
>> @@ -9,6 +9,8 @@
>> #include "perf.h"
>> #include "debug.h"
>> #include "bpf-loader.h"
>> +#include "probe-event.h"
>> +#include "probe-finder.h"
>>
>> #define DEFINE_PRINT_FN(name, level) \
>> static int libbpf_##name(const char *fmt, ...) \
>> @@ -28,6 +30,58 @@ DEFINE_PRINT_FN(debug, 1)
>>
>> static bool libbpf_initialized;
>>
>> +static int
>> +config_bpf_program(struct bpf_program *prog, struct perf_probe_event *pev)
>> +{
>> + const char *config_str;
>> + int err;
>> +
>> + config_str = bpf_program__title(prog, false);
>> + if (!config_str) {
>> + pr_debug("bpf: unable to get title for program\n");
>> + return -EINVAL;
>> + }
>> +
>> + pr_debug("bpf: config program '%s'\n", config_str);
>> + err = parse_perf_probe_command(config_str, pev);
>> + if (err < 0) {
>> + pr_debug("bpf: '%s' is not a valid config string\n",
>> + config_str);
>> + /* parse failed, don't need clear pev. */
>> + return -EINVAL;
>> + }
>> +
>> + if (pev->group && strcmp(pev->group, PERF_BPF_PROBE_GROUP)) {
>> + pr_debug("bpf: '%s': group for event is set and not '%s'.\n",
>> + config_str, PERF_BPF_PROBE_GROUP);
>> + err = -EINVAL;
>> + goto errout;
>> + } else if (!pev->group)
>> + pev->group = strdup(PERF_BPF_PROBE_GROUP);
>> +
>> + if (!pev->group) {
>> + pr_debug("bpf: strdup failed\n");
>> + err = -ENOMEM;
>> + goto errout;
>> + }
>> +
>> + if (!pev->event) {
>> + pr_debug("bpf: '%s': event name is missing\n",
>> + config_str);
>> + err = -EINVAL;
>> + goto errout;
>> + }
>> +
>> + pr_debug("bpf: config '%s' is ok\n", config_str);
>> +
>> + return 0;
>> +
>> +errout:
>> + if (pev)
>> + clear_perf_probe_event(pev);
>> + return err;
>> +}
>> +
>> int bpf__prepare_load(const char *filename)
>> {
>> struct bpf_object *obj;
>> @@ -59,6 +113,90 @@ void bpf__clear(void)
>> bpf_object__close(obj);
>> }
>>
>> +static bool is_probed;
>> +
>> +int bpf__unprobe(void)
>> +{
>> + struct strfilter *delfilter;
>> + int ret;
>> +
>> + if (!is_probed)
>> + return 0;
>> +
>> + delfilter = strfilter__new(PERF_BPF_PROBE_GROUP ":*", NULL);
>> + if (!delfilter) {
>> + pr_debug("Failed to create delfilter when unprobing\n");
>> + return -ENOMEM;
>> + }
>> +
>> + ret = del_perf_probe_events(delfilter);
>> + strfilter__delete(delfilter);
>> + if (ret < 0 && is_probed)
>> + pr_debug("Error: failed to delete events: %s\n",
>> + strerror(-ret));
>> + else
>> + is_probed = false;
>> + return ret < 0 ? ret : 0;
>> +}
>> +
>> +int bpf__probe(void)
>> +{
>> + int err, nr_events = 0;
>> + struct bpf_object *obj, *tmp;
>> + struct bpf_program *prog;
>> + struct perf_probe_event *pevs;
>> +
>> + pevs = calloc(MAX_PROBES, sizeof(pevs[0]));
>> + if (!pevs)
>> + return -ENOMEM;
>> +
>> + bpf_object__for_each_safe(obj, tmp) {
>> + bpf_object__for_each_program(prog, obj) {
>> + err = config_bpf_program(prog, &pevs[nr_events++]);
>> + if (err < 0)
>> + goto out;
>> +
>> + if (nr_events >= MAX_PROBES) {
>> + pr_debug("Too many (more than %d) events\n",
>> + MAX_PROBES);
>> + err = -ERANGE;
>> + goto out;
>> + };
>> + }
>> + }
>> +
>> + if (!nr_events) {
>> + /*
>> + * Don't call following code to prevent perf report failure
>> + * init_symbol_maps can fail when perf is started by non-root
>> + * user, which prevent non-root user track normal events even
>> + * without using BPF, because bpf__probe() is called by
>> + * 'perf record' unconditionally.
>> + */
>> + err = 0;
>> + goto out;
>> + }
>> +
>> + probe_conf.max_probes = MAX_PROBES;
>> + /* Let convert_perf_probe_events generates probe_trace_event (tevs) */
>> + err = convert_perf_probe_events(pevs, nr_events);
>> + if (err < 0) {
>> + pr_debug("bpf_probe: failed to convert perf probe events");
>> + goto out;
>> + }
>> +
>> + err = apply_perf_probe_events(pevs, nr_events);
>> + if (err < 0)
>> + pr_debug("bpf probe: failed to probe events\n");
>> + else
>> + is_probed = true;
>> +out_cleanup:
>> + cleanup_perf_probe_events(pevs, nr_events);
>> +out:
>> + free(pevs);
>> + return err < 0 ? err : 0;
>> +}
>> +
>> #define bpf__strerror_head(err, buf, size) \
>> char sbuf[STRERR_BUFSIZE], *emsg;\
>> if (!size)\
>> @@ -90,3 +228,14 @@ int bpf__strerror_prepare_load(const char *filename, int err,
>> bpf__strerror_end(buf, size);
>> return 0;
>> }
>> +
>> +int bpf__strerror_probe(int err, char *buf, size_t size)
>> +{
>> + bpf__strerror_head(err, buf, size);
>> + bpf__strerror_entry(ERANGE, "Too many (more than %d) events",
>> + MAX_PROBES);
>> + bpf__strerror_entry(ENOENT, "Selected kprobe point doesn't exist.");
>> + bpf__strerror_entry(EEXIST, "Selected kprobe point already exist, try perf probe -d '*'.");
>> + bpf__strerror_end(buf, size);
>> + return 0;
>> +}
>> diff --git a/tools/perf/util/bpf-loader.h b/tools/perf/util/bpf-loader.h
>> index 12be630..6b09a85 100644
>> --- a/tools/perf/util/bpf-loader.h
>> +++ b/tools/perf/util/bpf-loader.h
>> @@ -9,10 +9,15 @@
>> #include <string.h>
>> #include "debug.h"
>>
>> +#define PERF_BPF_PROBE_GROUP "perf_bpf_probe"
>> +
>> #ifdef HAVE_LIBBPF_SUPPORT
>> int bpf__prepare_load(const char *filename);
>> int bpf__strerror_prepare_load(const char *filename, int err,
>> char *buf, size_t size);
>> +int bpf__probe(void);
>> +int bpf__unprobe(void);
>> +int bpf__strerror_probe(int err, char *buf, size_t size);
>>
>> void bpf__clear(void);
>> #else
>> @@ -22,6 +27,8 @@ static inline int bpf__prepare_load(const char *filename __maybe_unused)
>> return -1;
>> }
>>
>> +static inline int bpf__probe(void) { return 0; }
>> +static inline int bpf__unprobe(void) { return 0; }
>> static inline void bpf__clear(void) { }
>>
>> static inline int
>> @@ -43,5 +50,11 @@ bpf__strerror_prepare_load(const char *filename __maybe_unused,
>> {
>> return __bpf_strerror(buf, size);
>> }
>> +
>> +static inline int bpf__strerror_probe(int err __maybe_unused,
>> + char *buf, size_t size)
>> +{
>> + return __bpf_strerror(buf, size);
>> +}
>> #endif
>> #endif
>> --
>> 2.1.0
>>
* [tip:perf/core] perf tools: Don't assume that the parser returns non empty evsel list
2015-09-06 7:13 ` [PATCH 01/27] perf tools: Don't write to evsel if parser doesn't collect evsel Wang Nan
@ 2015-09-23 8:43 ` tip-bot for Wang Nan
0 siblings, 0 replies; 35+ messages in thread
From: tip-bot for Wang Nan @ 2015-09-23 8:43 UTC (permalink / raw)
To: linux-tip-commits
Cc: dsahern, brendan.d.gregg, xiakaixu, ast, daniel, paulus, mingo,
wangnan0, lizefan, linux-kernel, a.p.zijlstra, hekuang, namhyung,
masami.hiramatsu.pt, hpa, jolsa, tglx, acme
Commit-ID: 854f736364c659046f066a98fed2fdb10a39577a
Gitweb: http://git.kernel.org/tip/854f736364c659046f066a98fed2fdb10a39577a
Author: Wang Nan <wangnan0@huawei.com>
AuthorDate: Sun, 6 Sep 2015 07:13:17 +0000
Committer: Arnaldo Carvalho de Melo <acme@redhat.com>
CommitDate: Mon, 21 Sep 2015 18:01:17 -0300
perf tools: Don't assume that the parser returns non empty evsel list
Don't blindly retrieve and use the last element of the lists returned by
parse_events__scanner(), as it may have collected no entries, i.e. it may
return an empty list.
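To make the failure mode concrete, here is a minimal sketch. The helper name is invented and the includes assume the perf tree layout; it is not code from this patch. On an empty list, list_entry() would hand back the list head itself reinterpreted as a perf_evsel, so the emptiness check must come before any dereference.

#include <linux/list.h>
#include "util/evsel.h"

/* Invented helper for illustration only: return the last perf_evsel
 * on the list, or NULL if the parser collected nothing. */
static struct perf_evsel *last_evsel_or_null(struct list_head *list)
{
	if (list_empty(list))
		return NULL;

	return list_entry(list->prev, struct perf_evsel, node);
}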
Signed-off-by: Wang Nan <wangnan0@huawei.com>
Cc: Alexei Starovoitov <ast@plumgrid.com>
Cc: Brendan Gregg <brendan.d.gregg@gmail.com>
Cc: Daniel Borkmann <daniel@iogearbox.net>
Cc: David Ahern <dsahern@gmail.com>
Cc: He Kuang <hekuang@huawei.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Kaixu Xia <xiakaixu@huawei.com>
Cc: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Zefan Li <lizefan@huawei.com>
Cc: pi3orama@163.com
Link: http://lkml.kernel.org/r/1441523623-152703-2-git-send-email-wangnan0@huawei.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
---
tools/perf/util/parse-events.c | 16 ++++++++++++++++
1 file changed, 16 insertions(+)
diff --git a/tools/perf/util/parse-events.c b/tools/perf/util/parse-events.c
index 0fde529..61c2bc2 100644
--- a/tools/perf/util/parse-events.c
+++ b/tools/perf/util/parse-events.c
@@ -827,6 +827,11 @@ void parse_events__set_leader(char *name, struct list_head *list)
{
struct perf_evsel *leader;
+ if (list_empty(list)) {
+ WARN_ONCE(true, "WARNING: failed to set leader: empty list");
+ return;
+ }
+
__perf_evlist__set_leader(list);
leader = list_entry(list->next, struct perf_evsel, node);
leader->group_name = name ? strdup(name) : NULL;
@@ -1176,6 +1181,11 @@ int parse_events(struct perf_evlist *evlist, const char *str,
if (!ret) {
struct perf_evsel *last;
+ if (list_empty(&data.list)) {
+ WARN_ONCE(true, "WARNING: event parser found nothing");
+ return -1;
+ }
+
perf_evlist__splice_list_tail(evlist, &data.list);
evlist->nr_groups += data.nr_groups;
last = perf_evlist__last(evlist);
@@ -1285,6 +1295,12 @@ foreach_evsel_in_last_glob(struct perf_evlist *evlist,
struct perf_evsel *last = NULL;
int err;
+ /*
+ * Don't return when the list is empty; give func a chance to
+ * report an error when it finds last == NULL.
+ *
+ * So there is no need to WARN here, let *func do it.
+ */
if (evlist->nr_entries > 0)
last = perf_evlist__last(evlist);
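As a consequence, every callback handed to foreach_evsel_in_last_glob() has to cope with last == NULL itself. A sketch of what such a callback might look like follows; the function name, includes and messages are illustrative, not taken from this patch.

#include <stdio.h>
#include "util/debug.h"		/* pr_debug() */
#include "util/evsel.h"

/* Illustrative callback (names and wording invented): callbacks now see
 * evsel == NULL for an empty evlist and must report that case themselves. */
static int apply_filter(struct perf_evsel *evsel, const void *arg)
{
	const char *filter = arg;

	if (evsel == NULL) {
		fprintf(stderr, "--filter option should follow a -e option\n");
		return -1;
	}

	/* ... attach 'filter' to evsel here ... */
	pr_debug("filter '%s' for event '%s'\n", filter, evsel->name);
	return 0;
}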
Thread overview: 35+ messages
2015-09-06 7:13 [GIT PULL 00/27] perf tools: filtering events using eBPF programs Wang Nan
2015-09-06 7:13 ` [PATCH 01/27] perf tools: Don't write to evsel if parser doesn't collect evsel Wang Nan
2015-09-23 8:43 ` [tip:perf/core] perf tools: Don't assume that the parser returns non empty evsel list tip-bot for Wang Nan
2015-09-06 7:13 ` [PATCH 02/27] perf tools: Make perf depend on libbpf Wang Nan
2015-09-06 7:13 ` [PATCH 03/27] perf ebpf: Add the libbpf glue Wang Nan
2015-09-06 7:13 ` [PATCH 04/27] perf tools: Enable passing bpf object file to --event Wang Nan
2015-09-06 7:13 ` [PATCH 05/27] perf record, bpf: Parse and create probe points for BPF programs Wang Nan
2015-09-16 21:43 ` Arnaldo Carvalho de Melo
2015-09-17 2:04 ` Wangnan (F)
2015-09-06 7:13 ` [PATCH 06/27] perf bpf: Collect 'struct perf_probe_event' for bpf_program Wang Nan
2015-09-06 7:13 ` [PATCH 07/27] perf record: Load all eBPF object into kernel Wang Nan
2015-09-06 7:13 ` [PATCH 08/27] perf tools: Add bpf_fd field to evsel and config it Wang Nan
2015-09-06 7:13 ` [PATCH 09/27] perf tools: Attach eBPF program to perf event Wang Nan
2015-09-06 7:13 ` [PATCH 10/27] perf tools: Allow BPF placeholder dummy events to collect --filter options Wang Nan
2015-09-06 7:13 ` [PATCH 11/27] perf tools: Sync setting of real bpf events with placeholder Wang Nan
2015-09-06 7:13 ` [PATCH 12/27] perf record: Add clang options for compiling BPF scripts Wang Nan
2015-09-06 7:13 ` [PATCH 13/27] perf tools: Compile scriptlets to BPF objects when passing '.c' to --event Wang Nan
2015-09-06 7:13 ` [PATCH 14/27] perf test: Enforce LLVM test for BPF test Wang Nan
2015-09-06 7:13 ` [PATCH 15/27] perf test: Add 'perf test BPF' Wang Nan
2015-09-06 7:13 ` [PATCH 16/27] bpf tools: Load a program with different instances using preprocessor Wang Nan
2015-09-06 7:13 ` [PATCH 17/27] perf probe: Reset args and nargs for probe_trace_event when failure Wang Nan
2015-09-06 7:13 ` [PATCH 18/27] perf tools: Add BPF_PROLOGUE config options for further patches Wang Nan
2015-09-16 7:30 ` [tip:perf/core] perf tools: regs_query_register_offset() infrastructure tip-bot for Wang Nan
2015-09-06 7:13 ` [PATCH 19/27] perf tools: Introduce regs_query_register_offset() for x86 Wang Nan
2015-09-14 21:37 ` Arnaldo Carvalho de Melo
2015-09-15 1:36 ` Wangnan (F)
2015-09-16 7:30 ` [tip:perf/core] perf tools: Introduce regs_query_register_offset() " tip-bot for Wang Nan
2015-09-06 7:13 ` [PATCH 20/27] perf tools: Add prologue for BPF programs for fetching arguments Wang Nan
2015-09-06 7:13 ` [PATCH 21/27] perf tools: Generate prologue for BPF programs Wang Nan
2015-09-06 7:13 ` [PATCH 22/27] perf tools: Use same BPF program if arguments are identical Wang Nan
2015-09-06 7:13 ` [PATCH 23/27] perf record: Support custom vmlinux path Wang Nan
2015-09-06 7:13 ` [PATCH 24/27] perf probe: Init symbol as kprobe Wang Nan
2015-09-06 7:13 ` [PATCH 25/27] perf tools: Allow BPF program attach to uprobe events Wang Nan
2015-09-06 7:13 ` [PATCH 26/27] perf test: Enforce LLVM test, add kbuild test Wang Nan
2015-09-06 7:13 ` [PATCH 27/27] perf test: Test BPF prologue Wang Nan