From: Masami Hiramatsu (Google) <mhiramat@kernel.org>
To: Steven Rostedt <rostedt@goodmis.org>
Cc: linux-kernel@vger.kernel.org, linux-trace-kernel@vger.kernel.org,
	Mark Rutland <mark.rutland@arm.com>,
	Mathieu Desnoyers <mathieu.desnoyers@efficios.com>,
	Andrew Morton <akpm@linux-foundation.org>,
	Alexei Starovoitov <alexei.starovoitov@gmail.com>,
	Florent Revest <revest@chromium.org>,
	Martin KaFai Lau <martin.lau@linux.dev>,
	bpf <bpf@vger.kernel.org>, Sven Schnelle <svens@linux.ibm.com>,
	Alexei Starovoitov <ast@kernel.org>, Jiri Olsa <jolsa@kernel.org>,
	Arnaldo Carvalho de Melo <acme@kernel.org>,
	Daniel Borkmann <daniel@iogearbox.net>,
	Alan Maguire <alan.maguire@oracle.com>,
	Peter Zijlstra <peterz@infradead.org>,
	Thomas Gleixner <tglx@linutronix.de>, Guo Ren <guoren@kernel.org>
Subject: Re: [PATCH v2 10/27] ftrace: Add subops logic to allow one ops to manage many
Date: Mon, 3 Jun 2024 11:46:36 +0900
Message-ID: <20240603114636.63b5abe2189cb732bec2474c@kernel.org>
In-Reply-To: <20240602220613.3f9eac04@gandalf.local.home>

On Sun, 2 Jun 2024 22:06:13 -0400
Steven Rostedt <rostedt@goodmis.org> wrote:

> > > +/* Make @ops trace everything except what all its subops do not trace */
> > > +static struct ftrace_hash *intersect_hashes(struct ftrace_ops *ops)
> > > +{
> > > +	struct ftrace_hash *new_hash = NULL;
> > > +	struct ftrace_ops *subops;
> > > +	int size_bits;
> > > +	int ret;
> > > +
> > > +	list_for_each_entry(subops, &ops->subop_list, list) {
> > > +		struct ftrace_hash *next_hash;
> > > +
> > > +		if (!new_hash) {
> > > +			size_bits = subops->func_hash->notrace_hash->size_bits;
> > > +			new_hash = alloc_and_copy_ftrace_hash(size_bits, ops->func_hash->notrace_hash);
> > > +			if (!new_hash)
> > > +				return NULL;  
> > 
> > If the first subops has EMPTY_HASH, this allocates a small empty hash
> > (!= EMPTY_HASH) for `new_hash`.
> 
> Could we just change the above to this?
> 
> 			new_hash = ftrace_hash_empty(ops->func_hash->notrace_hash) ? EMPTY_HASH :
> 				alloc_and_copy_ftrace_hash(size_bits, ops->func_hash->notrace_hash);
> 			if (!new_hash)
> 				return NULL;  

Yeah, and if new_hash is EMPTY_HASH, we don't need to loop over the rest of
the hashes, right?
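
Just to spell out what I mean, something like this in the first-iteration
branch (an untested sketch that only pastes your snippet back into the
quoted loop and adds the early break, nothing else changed):

		if (!new_hash) {
			size_bits = subops->func_hash->notrace_hash->size_bits;
			new_hash = ftrace_hash_empty(ops->func_hash->notrace_hash) ? EMPTY_HASH :
				alloc_and_copy_ftrace_hash(size_bits, ops->func_hash->notrace_hash);
			if (!new_hash)
				return NULL;
			/* An empty intersection can only stay empty, so stop here */
			if (new_hash == EMPTY_HASH)
				break;
			continue;
		}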

> 
> 
> > 
> > > +			continue;
> > > +		}
> > > +		size_bits = new_hash->size_bits;
> > > +		next_hash = new_hash;  
> > 
> > And it is assigned to `next_hash`.
> > 
> > > +		new_hash = alloc_ftrace_hash(size_bits);
> > > +		ret = intersect_hash(&new_hash, next_hash, subops->func_hash->notrace_hash);  
> > 
> > Since `next_hash` != EMPTY_HASH but is empty, this keeps `new_hash`
> > empty but allocated.
> > 
> > > +		free_ftrace_hash(next_hash);
> > > +		if (ret < 0) {
> > > +			free_ftrace_hash(new_hash);
> > > +			return NULL;
> > > +		}
> > > +		/* Nothing more to do if new_hash is empty */
> > > +		if (new_hash == EMPTY_HASH)  
> > 
> > Since `new_hash` is empty but != EMPTY_HASH, this check does not trigger,
> > and the loop keeps going.
> > 
> > > +			break;
> > > +	}
> > > +	return new_hash;  
> > 
> > And this will return an empty hash that is not EMPTY_HASH.
> > 
> > 
> > So, we need;
> > 
> > #define FTRACE_EMPTY_HASH_OR_NULL(hash)	(!(hash) || (hash) == EMPTY_HASH)
> > 
> > if (FTRACE_EMPTY_HASH_OR_NULL(subops->func_hash->notrace_hash)) {
> > 	free_ftrace_hash(new_hash);
> > 	new_hash = EMPTY_HASH;
> > 	break;
> > }
> > 
> > at the beginning of the loop.
> > Also, at the end of the loop,
> > 
> > if (ftrace_hash_empty(new_hash)) {
> > 	free_ftrace_hash(new_hash);
> > 	new_hash = EMPTY_HASH;
> > 	break;
> > }

And we still need this. (I think this should be done in intersect_hash();
it just needs to count the entries and hand back EMPTY_HASH when the result
is empty.)
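
Something like this at the tail of intersect_hash() is what I had in mind
(an untested sketch; I am assuming the destination hash is passed as
struct ftrace_hash **hash, as in the call quoted above):

	/*
	 * If nothing made it into the intersection, hand back EMPTY_HASH
	 * instead of an allocated-but-empty hash, so that callers can
	 * compare the result against EMPTY_HASH directly.
	 */
	if (ftrace_hash_empty(*hash)) {
		free_ftrace_hash(*hash);
		*hash = EMPTY_HASH;
	}
	return 0;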

> > 
> > > +}
> > > +
> > > +/* Returns 0 on equal or non-zero on non-equal */
> > > +static int compare_ops(struct ftrace_hash *A, struct ftrace_hash *B)  
> > 
> > nit: Wouldn't it be better as `bool hash_equal()`, returning true if A == B?
> 
> Sure. I guess I was thinking too much of strcmp() logic :-p

Yeah, it's the curse of the C programmer :( (even though it is good for sorting.)
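
For the record, the rename I had in mind is just this (untested, the same
body as the quoted compare_ops() below, only with the return values flipped
so that true means "equal"):

static bool hash_equal(struct ftrace_hash *A, struct ftrace_hash *B)
{
	struct ftrace_func_entry *entry;
	int size;
	int i;

	/* Treat NULL and EMPTY_HASH the same way compare_ops() does */
	if (!A || A == EMPTY_HASH)
		return !B || B == EMPTY_HASH;

	if (!B || B == EMPTY_HASH)
		return false;

	if (A->count != B->count)
		return false;

	size = 1 << A->size_bits;
	for (i = 0; i < size; i++) {
		hlist_for_each_entry(entry, &A->buckets[i], hlist) {
			if (!__ftrace_lookup_ip(B, entry->ip))
				return false;
		}
	}

	return true;
}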

Thank you,

> 
> > 
> > Thank you,
> 
> Thanks for the review.
> 
> -- Steve
> 
> > 
> > > +{
> > > +	struct ftrace_func_entry *entry;
> > > +	int size;
> > > +	int i;
> > > +
> > > +	if (!A || A == EMPTY_HASH)
> > > +		return !(!B || B == EMPTY_HASH);
> > > +
> > > +	if (!B || B == EMPTY_HASH)
> > > +		return !(!A || A == EMPTY_HASH);
> > > +
> > > +	if (A->count != B->count)
> > > +		return 1;
> > > +
> > > +	size = 1 << A->size_bits;
> > > +	for (i = 0; i < size; i++) {
> > > +		hlist_for_each_entry(entry, &A->buckets[i], hlist) {
> > > +			if (!__ftrace_lookup_ip(B, entry->ip))
> > > +				return 1;
> > > +		}
> > > +	}
> > > +
> > > +	return 0;
> > > +}
> > > +  
> > 
> > 
> 


-- 
Masami Hiramatsu (Google) <mhiramat@kernel.org>
