From: Steven Rostedt <rostedt@goodmis.org>
To: "Tzvetomir Stoyanov (VMware)" <tz.stoyanov@gmail.com>
Cc: linux-trace-devel@vger.kernel.org
Subject: Re: [PATCH 2/2] trace-cmd: Reset CPU mask to its default value with "trace-cmd reset".
Date: Mon, 30 Sep 2019 14:44:44 -0400 [thread overview]
Message-ID: <20190930144444.4956dff8@gandalf.local.home> (raw)
In-Reply-To: <20190925110823.1242-3-tz.stoyanov@gmail.com>
On Wed, 25 Sep 2019 14:08:23 +0300
"Tzvetomir Stoyanov (VMware)" <tz.stoyanov@gmail.com> wrote:
> "trace-cmd reset" command should put all ftrace config to its default
> state, but trace cpumask was not reseted. The patch sets cpumask to
> its default value - all CPUs are enabled for tracing.
>
> Signed-off-by: Tzvetomir Stoyanov (VMware) <tz.stoyanov@gmail.com>
> ---
> tracecmd/trace-record.c | 19 +++++++++++++++++++
> 1 file changed, 19 insertions(+)
>
> diff --git a/tracecmd/trace-record.c b/tracecmd/trace-record.c
> index 69de82a..c41f55f 100644
> --- a/tracecmd/trace-record.c
> +++ b/tracecmd/trace-record.c
> @@ -4096,6 +4096,24 @@ static void reset_clock(void)
> write_instance_file(instance, "trace_clock", "local", "clock");
> }
>
> +static void reset_cpu_mask(void)
> +{
> + char str[24];
> + int cpumask = 0;
> + int cpus = count_cpus();
> + struct buffer_instance *instance;
> +
> + while (cpus--) {
> + cpumask <<= 1;
> + cpumask |= 1;
> + }
First, you can accomplish the same with:
(1 << cpus) - 1;
But that only works if there are fewer than 32 CPUs.
What we would want is:
	int fullwords;
	char *buf;
	int bits;
	int cpus;
	int len;

	fullwords = cpus / 32;
	bits = cpus % 32;
	len = (fullwords + 1) * 8;

	buf = malloc(len + 1);
	buf[0] = '\0';

	if (bits)
		sprintf(buf, "%x", (1 << bits) - 1);

	while (fullwords-- > 0)
		strcat(buf, "ffffffff");
Because we may run this on machines with 1000s of CPUs!
(BTW, I tested the above under valgrind with CPU counts of 1, 31, 32, 33,
126, 127, 128, and 129: no memory leaks, and the results looked good.)
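
Putting it together, reset_cpu_mask() would then look something like the
sketch below (just an illustration of the idea, reusing count_cpus(),
for_all_instances() and write_instance_file() as they appear in the patch;
not the final code):

	static void reset_cpu_mask(void)
	{
		struct buffer_instance *instance;
		int cpus = count_cpus();
		int fullwords = cpus / 32;
		int bits = cpus % 32;
		int len = (fullwords + 1) * 8;
		char *buf;

		buf = malloc(len + 1);
		if (!buf)
			return;
		buf[0] = '\0';

		/* Partial leading word, when cpus is not a multiple of 32 */
		if (bits)
			sprintf(buf, "%x", (1U << bits) - 1);

		/* One fully set 32-bit word per remaining group of 32 CPUs */
		while (fullwords-- > 0)
			strcat(buf, "ffffffff");

		for_all_instances(instance)
			write_instance_file(instance, "tracing_cpumask", buf, "cpumask");

		free(buf);
	}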
-- Steve
> + if (snprintf(str, 24, "%x", cpumask) <= 0)
> + return;
> +
> + for_all_instances(instance)
> + write_instance_file(instance, "tracing_cpumask", str, "cpumask");
> +}
> +
> static void reset_event_pid(void)
> {
> add_event_pid("");
> @@ -4808,6 +4826,7 @@ void trace_reset(int argc, char **argv)
> reset_clock();
> reset_event_pid();
> reset_max_latency_instance();
> + reset_cpu_mask();
> tracecmd_remove_instances();
> clear_func_filters();
> /* restore tracing_on to 1 */
Thread overview: 5+ messages
2019-09-25 11:08 [PATCH 0/2] Reset CPU mask Tzvetomir Stoyanov (VMware)
2019-09-25 11:08 ` [PATCH 1/2] trace-cmd: Reset CPU mask after setting it in trace-cmd record with option -M Tzvetomir Stoyanov (VMware)
2019-09-25 11:08 ` [PATCH 2/2] trace-cmd: Reset CPU mask to its default value with "trace-cmd reset" Tzvetomir Stoyanov (VMware)
2019-09-30 18:44 ` Steven Rostedt [this message]
[not found] ` <CACGkJdseMqPBcM8YOUoZvssT68BFrsmriis4DaMK74cUZVpKvg@mail.gmail.com>
2019-10-01 13:52 ` Steven Rostedt