public inbox for linux-kernel@vger.kernel.org
From: Yunlong Song <yunlong.song@huawei.com>
To: <a.p.zijlstra@chello.nl>, <paulus@samba.org>, <mingo@redhat.com>,
	"Arnaldo Carvalho de Melo" <acme@kernel.org>
Cc: <linux-kernel@vger.kernel.org>, <wangnan0@huawei.com>
Subject: [Question] How does perf still record the stack of a specified pid even when that process is interrupted and CPU is scheduled to other process
Date: Fri, 24 Apr 2015 21:31:54 +0800	[thread overview]
Message-ID: <553A45CA.8020808@huawei.com> (raw)

[Profiling Background]
We are profiling the performance of ext4 and f2fs on an eMMC card with iozone, and we have
found a case where ext4 beats f2fs on random writes under the test
"iozone -s 262144 -r 64 -i 0 -i 2". We want to analyze the I/O delay of the two
file systems. The test issues 262144/64 = 4096 sys_write calls, and we have concluded that
1% of those calls account for 60% of the total sys_write time. We want to find the call
stacks during that specific 1% of sys_write calls. Our idea is to sample the stack at a
fixed interval: since the 1% case takes up 60% of the time, its stacks should also make up
about 60% of the samples, so we can recognize those stacks and figure out what the f2fs
I/O path is doing in the 1% case.
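The sample-share reasoning above can be checked with a little shell arithmetic (the 1% and
60% figures come from our measurements; the write count follows from the iozone options):

```shell
# iozone -s 262144 -r 64: a 262144 KB file written in 64 KB records
writes=$((262144 / 64))   # number of sys_write calls issued
slow=$((writes / 100))    # the slow 1% of those calls
echo "total writes: $writes, slow calls: $slow"
# With interval-based stack sampling, these ~40 slow calls should
# contribute about 60 out of every 100 collected stack samples.
```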

[Profiling Problem]

Perf can record the events (with call stacks) of a specified pid, e.g. with
"perf record -g iozone -s 262144 -r 64 -i 0 -i 2". However, we find that iozone gets
interrupted and the CPU is scheduled to another process. As a result, perf records no
events for iozone until iozone's context is restored and the CPU resumes processing its
sys_write. This defeats our initial idea as described in [Profiling Background], because
the sampling gaps mean we can no longer recognize the call stacks of the specific 1% case
from the ratio of sample counts.
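One way to make those gaps visible is to bucket per-sample timestamps from "perf script"
into fixed time windows and look for empty windows while iozone is switched out. The
timestamp lines below are hypothetical stand-ins for one column of real perf script
output, just to show the bucketing:

```shell
# Bucket sample timestamps (seconds) into 100 ms windows and count
# samples per window; empty windows indicate periods with no samples.
# A real run would pipe "perf script" output into this awk instead
# of the hypothetical here-doc.
awk '{ bucket = int($1 * 10)                 # 100 ms windows
       count[bucket]++ }
     END { for (b in count)
               printf "window %d: %d samples\n", b, count[b] }' <<'EOF'
1.01
1.02
1.05
1.31
1.32
EOF
```

Here the windows between 1.1 s and 1.3 s would show up as missing entirely, which is the
signature of the preemption gap described above.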

[Alternative Solution without Perf]
We can read /proc/<pid>/stack at a fixed interval (e.g. every 1 ms) to sample iozone's
kernel stack, regardless of whether iozone is currently running. However, we have not
looked into this deeply yet, since we want to do this kind of thing with perf.
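A minimal sketch of that alternative (assuming /proc/<pid>/stack is available, which
requires CONFIG_STACKTRACE and normally root to read; the 1 ms interval is approximated
with sleep, whose granularity is not guaranteed):

```shell
#!/bin/sh
# Sample a process's kernel stack at a fixed interval, whether or
# not the process is currently on a CPU.
pid=$1
n=${2:-100}                       # number of samples to take
i=0
while [ "$i" -lt "$n" ]; do
    echo "=== sample $i ==="
    cat "/proc/$pid/stack" 2>/dev/null || echo "(stack unreadable)"
    sleep 0.001                   # ~1 ms between samples
    i=$((i + 1))
done
```

The collected samples could then be deduplicated and counted (e.g. with sort | uniq -c)
to find the stacks that dominate the 60% of time.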

[Question about Perf]
So our question is: how can perf still record the stack of a specified pid even
when that process is interrupted and the CPU is scheduled to another process?
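For context on what we are hoping for: one direction we can imagine (an assumption on our
part, not something we have verified solves the problem) is to also record scheduler
tracepoints with call graphs, so that the moments where the pid is switched out produce
events too. The pid below is a placeholder, and the command is echoed rather than run so
the sketch works without perf installed; a real run would drop the echo:

```shell
pid=${1:-12345}   # placeholder pid of the already-running iozone
# -e sched:sched_switch : one event per context switch
# -g                    : capture the call chain at each event
# -p                    : attach to the existing pid
echo perf record -e sched:sched_switch -g -p "$pid" -- sleep 10
```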

-- 
Thanks,
Yunlong Song



Thread overview: 12+ messages
2015-04-24 13:31 Yunlong Song [this message]
2015-04-24 13:49 ` [Question] How does perf still record the stack of a specified pid even when that process is interrupted and CPU is scheduled to other process Yunlong Song
2015-04-25 14:03   ` Yunlong Song
2015-04-24 13:49 ` David Ahern
2015-04-24 13:56   ` Yunlong Song
2015-04-24 13:58 ` David Ahern
2015-04-25 14:05   ` Yunlong Song
2015-04-25 15:53     ` David Ahern
2015-05-05 21:53       ` Rabin Vincent
2015-05-05 22:24         ` David Ahern
2015-05-06  4:13         ` Yunlong Song
2015-05-06  4:10       ` Yunlong Song
