public inbox for linux-kernel@vger.kernel.org
From: Ingo Molnar <mingo@elte.hu>
To: Xiao Guangrong <xiaoguangrong@cn.fujitsu.com>
Cc: Xiao Guangrong <ericxiao.gr@gmail.com>,
	Peter Zijlstra <peterz@infradead.org>,
	Frederic Weisbecker <fweisbec@gmail.com>,
	Paul Mackerras <paulus@samba.org>,
	Török Edwin <edwintorok@gmail.com>,
	LKML <linux-kernel@vger.kernel.org>
Subject: Re: [PATCH v3] perf/sched: fix for getting task's execution time
Date: Wed, 9 Dec 2009 10:57:41 +0100	[thread overview]
Message-ID: <20091209095741.GA15499@elte.hu> (raw)
In-Reply-To: <4B1F7322.80103@cn.fujitsu.com>


* Xiao Guangrong <xiaoguangrong@cn.fujitsu.com> wrote:

> Currently, the task's execution time is obtained by reading the
> '/proc/<pid>/sched' file. This is wrong if the task was created
> by pthread_create(), because all threads of a process share the
> same pid.
> 
> This approach also has two drawbacks:
> 
> 1: 'perf sched replay' can't work if the kernel was not compiled
>     with the 'CONFIG_SCHED_DEBUG' option
> 2: the perf tool should not depend on the proc file system
> 
> So, this patch uses PERF_COUNT_SW_TASK_CLOCK to get the task's
> execution time instead of reading the /proc file.
> 
> Changelog v2 -> v3:
> use PERF_COUNT_SW_TASK_CLOCK instead of rusage(), per Ingo's suggestion
> 
> Signed-off-by: Xiao Guangrong <xiaoguangrong@cn.fujitsu.com>
> ---
>  tools/perf/builtin-sched.c |   55 +++++++++++++++++++++----------------------
>  1 files changed, 27 insertions(+), 28 deletions(-)
> 
> diff --git a/tools/perf/builtin-sched.c b/tools/perf/builtin-sched.c
> index 19f43fa..b12b23a 100644
> --- a/tools/perf/builtin-sched.c
> +++ b/tools/perf/builtin-sched.c
> @@ -13,7 +13,6 @@
>  #include "util/debug.h"
>  #include "util/data_map.h"
>  
> -#include <sys/types.h>
>  #include <sys/prctl.h>
>  
>  #include <semaphore.h>
> @@ -414,34 +413,33 @@ static u64 get_cpu_usage_nsec_parent(void)
>  	return sum;
>  }
>  
> -static u64 get_cpu_usage_nsec_self(void)
> +static int self_open_counters(void)
>  {
> -	char filename [] = "/proc/1234567890/sched";
> -	unsigned long msecs, nsecs;
> -	char *line = NULL;
> -	u64 total = 0;
> -	size_t len = 0;
> -	ssize_t chars;
> -	FILE *file;
> -	int ret;
> +	struct perf_event_attr attr;
> +	int fd;
>  
> -	sprintf(filename, "/proc/%d/sched", getpid());
> -	file = fopen(filename, "r");
> -	BUG_ON(!file);
> +	memset(&attr, 0, sizeof(attr));
>  
> -	while ((chars = getline(&line, &len, file)) != -1) {
> -		ret = sscanf(line, "se.sum_exec_runtime : %ld.%06ld\n",
> -			&msecs, &nsecs);
> -		if (ret == 2) {
> -			total = msecs*1e6 + nsecs;
> -			break;
> -		}
> -	}
> -	if (line)
> -		free(line);
> -	fclose(file);
> +	attr.type = PERF_TYPE_SOFTWARE;
> +	attr.config = PERF_COUNT_SW_TASK_CLOCK;
>  
> -	return total;
> +	fd = sys_perf_event_open(&attr, 0, -1, -1, 0);
> +
> +	if (fd < 0)
> +		die("Error: sys_perf_event_open() syscall returned "
> +		    "with %d (%s)\n", fd, strerror(errno));
> +	return fd;
> +}
> +
> +static u64 get_cpu_usage_nsec_self(int fd)
> +{
> +	u64 runtime;
> +	int ret;
> +
> +	ret = read(fd, &runtime, sizeof(runtime));
> +	BUG_ON(ret != sizeof(runtime));
> +
> +	return runtime;
>  }
>  
>  static void *thread_func(void *ctx)
> @@ -450,9 +448,11 @@ static void *thread_func(void *ctx)
>  	u64 cpu_usage_0, cpu_usage_1;
>  	unsigned long i, ret;
>  	char comm2[22];
> +	int fd;
>  
>  	sprintf(comm2, ":%s", this_task->comm);
>  	prctl(PR_SET_NAME, comm2);
> +	fd = self_open_counters();
>  
>  again:
>  	ret = sem_post(&this_task->ready_for_work);
> @@ -462,16 +462,15 @@ again:
>  	ret = pthread_mutex_unlock(&start_work_mutex);
>  	BUG_ON(ret);
>  
> -	cpu_usage_0 = get_cpu_usage_nsec_self();
> +	cpu_usage_0 = get_cpu_usage_nsec_self(fd);
>  
>  	for (i = 0; i < this_task->nr_events; i++) {
>  		this_task->curr_event = i;
>  		process_sched_event(this_task, this_task->atoms[i]);
>  	}
>  
> -	cpu_usage_1 = get_cpu_usage_nsec_self();
> +	cpu_usage_1 = get_cpu_usage_nsec_self(fd);
>  	this_task->cpu_usage = cpu_usage_1 - cpu_usage_0;
> -
>  	ret = sem_post(&this_task->work_done_sem);
>  	BUG_ON(ret);

Very nice - and the code even got a tiny bit shorter.

Applied, thanks!

	Ingo


Thread overview: 13+ messages
2009-12-06 10:57 [PATCH] perf/sched: fix for getting task's execute time Xiao Guangrong
2009-12-06 11:05 ` Peter Zijlstra
2009-12-06 11:06   ` Peter Zijlstra
2009-12-06 11:50     ` Ingo Molnar
2009-12-06 17:10     ` Xiao Guangrong
2009-12-07  7:20       ` [PATCH v2] perf/sched: fix for getting task's execution time Xiao Guangrong
2009-12-07  7:30         ` Ingo Molnar
2009-12-09  9:51           ` [PATCH v3] " Xiao Guangrong
2009-12-09  9:54             ` Xiao Guangrong
2009-12-09  9:59               ` Ingo Molnar
2009-12-09  9:57             ` Xiao Guangrong
2009-12-09  9:57             ` Ingo Molnar [this message]
2009-12-09 10:03             ` [tip:perf/urgent] perf sched: Fix " tip-bot for Xiao Guangrong
