From: Masami Hiramatsu (Google) <mhiramat@kernel.org>
To: Steven Rostedt <rostedt@goodmis.org>
Cc: linux-kernel@vger.kernel.org, linux-trace-kernel@vger.kernel.org,
Masami Hiramatsu <mhiramat@kernel.org>,
Mark Rutland <mark.rutland@arm.com>,
Mathieu Desnoyers <mathieu.desnoyers@efficios.com>,
Andrew Morton <akpm@linux-foundation.org>,
Alexei Starovoitov <alexei.starovoitov@gmail.com>,
Florent Revest <revest@chromium.org>,
Martin KaFai Lau <martin.lau@linux.dev>,
bpf <bpf@vger.kernel.org>, Sven Schnelle <svens@linux.ibm.com>,
Alexei Starovoitov <ast@kernel.org>, Jiri Olsa <jolsa@kernel.org>,
Arnaldo Carvalho de Melo <acme@kernel.org>,
Daniel Borkmann <daniel@iogearbox.net>,
Alan Maguire <alan.maguire@oracle.com>,
Peter Zijlstra <peterz@infradead.org>,
Thomas Gleixner <tglx@linutronix.de>, Guo Ren <guoren@kernel.org>
Subject: Re: [PATCH v2 10/27] ftrace: Add subops logic to allow one ops to manage many
Date: Mon, 3 Jun 2024 10:33:16 +0900
Message-ID: <20240603103316.3af9dea3214a5d2bde721cd8@kernel.org>
In-Reply-To: <20240602033832.709653366@goodmis.org>
Hi Steve,
On Sat, 01 Jun 2024 23:37:54 -0400
Steven Rostedt <rostedt@goodmis.org> wrote:
> From: "Steven Rostedt (VMware)" <rostedt@goodmis.org>
I think this is a new patch, correct? I'm a bit confused.
And I have some comments below:
[..]
> @@ -3164,6 +3166,392 @@ int ftrace_shutdown(struct ftrace_ops *ops, int command)
> return 0;
> }
>
> +/* Simply make a copy of @src and return it */
> +static struct ftrace_hash *copy_hash(struct ftrace_hash *src)
> +{
> + if (!src || src == EMPTY_HASH)
> + return EMPTY_HASH;
> +
> + return alloc_and_copy_ftrace_hash(src->size_bits, src);
> +}
> +
> +/*
> + * Append @new_hash entries to @hash:
> + *
> + * If @hash is the EMPTY_HASH then it traces all functions and nothing
> + * needs to be done.
> + *
> + * If @new_hash is the EMPTY_HASH, then make *hash the EMPTY_HASH so
> + * that it traces everything.
This lacks the most important note: this function is only for the
filter_hash, not for the notrace_hash. :)
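Maybe add an explicit note in the comment, something like (just a
suggestion):

	/*
	 * NOTE: This is only for the filter_hash, for which the
	 * EMPTY_HASH means "trace all functions". Do not use this for
	 * the notrace_hash, for which the EMPTY_HASH means the
	 * opposite ("nothing is excluded").
	 */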
> + *
> + * Otherwise, go through all of @new_hash and add anything that @hash
> + * doesn't already have, to @hash.
> + */
> +static int append_hash(struct ftrace_hash **hash, struct ftrace_hash *new_hash)
> +{
> + struct ftrace_func_entry *entry;
> + int size;
> + int i;
> +
> + /* An empty hash does everything */
> + if (!*hash || *hash == EMPTY_HASH)
> + return 0;
> +
> + /* If new_hash has everything make hash have everything */
> + if (!new_hash || new_hash == EMPTY_HASH) {
> + free_ftrace_hash(*hash);
> + *hash = EMPTY_HASH;
> + return 0;
> + }
> +
> + size = 1 << new_hash->size_bits;
> + for (i = 0; i < size; i++) {
> + hlist_for_each_entry(entry, &new_hash->buckets[i], hlist) {
> + /* Only add if not already in hash */
> + if (!__ftrace_lookup_ip(*hash, entry->ip) &&
> + add_hash_entry(*hash, entry->ip) == NULL)
> + return -ENOMEM;
> + }
> + }
> + return 0;
> +}
> +
> +/* Add to @hash only those that are in both @new_hash1 and @new_hash2 */
Ditto, this is only for the notrace_hash.
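Likewise, an explicit note would help here, e.g.:

	/*
	 * NOTE: This is only for the notrace_hash, for which the
	 * EMPTY_HASH means "no functions are excluded from tracing".
	 */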
> +static int intersect_hash(struct ftrace_hash **hash, struct ftrace_hash *new_hash1,
> + struct ftrace_hash *new_hash2)
> +{
> + struct ftrace_func_entry *entry;
> + int size;
> + int i;
> +
> + /*
> + * If new_hash1 or new_hash2 is the EMPTY_HASH then make the hash
> + * empty as well as empty for notrace means none are notraced.
> + */
> + if (!new_hash1 || new_hash1 == EMPTY_HASH ||
> + !new_hash2 || new_hash2 == EMPTY_HASH) {
> + free_ftrace_hash(*hash);
> + *hash = EMPTY_HASH;
> + return 0;
> + }
> +
> + size = 1 << new_hash1->size_bits;
> + for (i = 0; i < size; i++) {
> + hlist_for_each_entry(entry, &new_hash1->buckets[i], hlist) {
> + /* Only add if in both @new_hash1 and @new_hash2 */
> + if (__ftrace_lookup_ip(new_hash2, entry->ip) &&
> + add_hash_entry(*hash, entry->ip) == NULL)
> + return -ENOMEM;
> + }
> + }
> + return 0;
> +}
> +
> +/* Return a new hash that has a union of all @ops->filter_hash entries */
> +static struct ftrace_hash *append_hashes(struct ftrace_ops *ops)
> +{
> + struct ftrace_hash *new_hash;
> + struct ftrace_ops *subops;
> + int ret;
> +
> + new_hash = alloc_ftrace_hash(ops->func_hash->filter_hash->size_bits);
> + if (!new_hash)
> + return NULL;
> +
> + list_for_each_entry(subops, &ops->subop_list, list) {
> + ret = append_hash(&new_hash, subops->func_hash->filter_hash);
> + if (ret < 0) {
> + free_ftrace_hash(new_hash);
> + return NULL;
> + }
> + /* Nothing more to do if new_hash is empty */
> + if (new_hash == EMPTY_HASH)
> + break;
> + }
> + return new_hash;
> +}
> +
> +/* Make @ops trace everything except what all its subops do not trace */
> +static struct ftrace_hash *intersect_hashes(struct ftrace_ops *ops)
> +{
> + struct ftrace_hash *new_hash = NULL;
> + struct ftrace_ops *subops;
> + int size_bits;
> + int ret;
> +
> + list_for_each_entry(subops, &ops->subop_list, list) {
> + struct ftrace_hash *next_hash;
> +
> + if (!new_hash) {
> + size_bits = subops->func_hash->notrace_hash->size_bits;
> + new_hash = alloc_and_copy_ftrace_hash(size_bits, ops->func_hash->notrace_hash);
> + if (!new_hash)
> + return NULL;
If the first subops' notrace_hash is the EMPTY_HASH, this allocates a
small empty hash (!= EMPTY_HASH) for `new_hash`.
> + continue;
> + }
> + size_bits = new_hash->size_bits;
> + next_hash = new_hash;
And it is assigned to `next_hash`.
> + new_hash = alloc_ftrace_hash(size_bits);
> + ret = intersect_hash(&new_hash, next_hash, subops->func_hash->notrace_hash);
Since `next_hash` != EMPTY_HASH but contains no entries, this leaves
`new_hash` empty but still allocated.
> + free_ftrace_hash(next_hash);
> + if (ret < 0) {
> + free_ftrace_hash(new_hash);
> + return NULL;
> + }
> + /* Nothing more to do if new_hash is empty */
> + if (new_hash == EMPTY_HASH)
Since `new_hash` is empty but != EMPTY_HASH, this check does not match
and the loop continues.
> + break;
> + }
> + return new_hash;
And this will return an empty hash which is not the EMPTY_HASH.
So we need:

	#define FTRACE_EMPTY_HASH_OR_NULL(hash) (!(hash) || (hash) == EMPTY_HASH)

	if (FTRACE_EMPTY_HASH_OR_NULL(subops->func_hash->notrace_hash)) {
		free_ftrace_hash(new_hash);
		new_hash = EMPTY_HASH;
		break;
	}

at the beginning of the loop. Also, at the end of the loop:

	if (ftrace_hash_empty(new_hash)) {
		free_ftrace_hash(new_hash);
		new_hash = EMPTY_HASH;
		break;
	}
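In other words, the loop would look something like below (an untested
sketch, just folding the above two changes into the quoted code; it
uses only the existing helpers and the macro suggested above):

	list_for_each_entry(subops, &ops->subop_list, list) {
		struct ftrace_hash *next_hash;

		/* An empty notrace_hash makes the whole intersection empty */
		if (FTRACE_EMPTY_HASH_OR_NULL(subops->func_hash->notrace_hash)) {
			free_ftrace_hash(new_hash);
			new_hash = EMPTY_HASH;
			break;
		}

		if (!new_hash) {
			size_bits = subops->func_hash->notrace_hash->size_bits;
			new_hash = alloc_and_copy_ftrace_hash(size_bits,
					ops->func_hash->notrace_hash);
			if (!new_hash)
				return NULL;
			continue;
		}

		size_bits = new_hash->size_bits;
		next_hash = new_hash;
		new_hash = alloc_ftrace_hash(size_bits);
		ret = intersect_hash(&new_hash, next_hash,
				     subops->func_hash->notrace_hash);
		free_ftrace_hash(next_hash);
		if (ret < 0) {
			free_ftrace_hash(new_hash);
			return NULL;
		}

		/* Normalize an allocated-but-empty result to EMPTY_HASH */
		if (ftrace_hash_empty(new_hash)) {
			free_ftrace_hash(new_hash);
			new_hash = EMPTY_HASH;
			break;
		}
	}
	return new_hash;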
> +}
> +
> +/* Returns 0 on equal or non-zero on non-equal */
> +static int compare_ops(struct ftrace_hash *A, struct ftrace_hash *B)
nit: Isn't it better to make this `bool hash_equal()` and return true
if A equals B?
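Something like this (untested, a direct inversion of the quoted code):

	static bool hash_equal(struct ftrace_hash *A, struct ftrace_hash *B)
	{
		struct ftrace_func_entry *entry;
		int size;
		int i;

		if (!A || A == EMPTY_HASH)
			return !B || B == EMPTY_HASH;

		if (!B || B == EMPTY_HASH)
			return false;

		if (A->count != B->count)
			return false;

		/* Equal counts: A being a subset of B implies equality */
		size = 1 << A->size_bits;
		for (i = 0; i < size; i++) {
			hlist_for_each_entry(entry, &A->buckets[i], hlist) {
				if (!__ftrace_lookup_ip(B, entry->ip))
					return false;
			}
		}

		return true;
	}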
Thank you,
> +{
> + struct ftrace_func_entry *entry;
> + int size;
> + int i;
> +
> + if (!A || A == EMPTY_HASH)
> + return !(!B || B == EMPTY_HASH);
> +
> + if (!B || B == EMPTY_HASH)
> + return !(!A || A == EMPTY_HASH);
> +
> + if (A->count != B->count)
> + return 1;
> +
> + size = 1 << A->size_bits;
> + for (i = 0; i < size; i++) {
> + hlist_for_each_entry(entry, &A->buckets[i], hlist) {
> + if (!__ftrace_lookup_ip(B, entry->ip))
> + return 1;
> + }
> + }
> +
> + return 0;
> +}
> +
--
Masami Hiramatsu (Google) <mhiramat@kernel.org>