public inbox for linux-kernel@vger.kernel.org
From: David Ahern <dsahern@gmail.com>
To: "Liang, Kan" <kan.liang@intel.com>, Andi Kleen <andi@firstfloor.org>
Cc: Arnaldo Carvalho de Melo <acme@kernel.org>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	"Huang, Ying" <ying.huang@intel.com>
Subject: Re: [PATCH 1/1] perf,tools: add time out to force stop endless mmap processing
Date: Fri, 12 Jun 2015 11:28:36 -0600	[thread overview]
Message-ID: <557B16C4.7000000@gmail.com> (raw)
In-Reply-To: <37D7C6CF3E00A74B8858931C1DB2F07701876834@SHSMSX103.ccr.corp.intel.com>

On 6/12/15 11:05 AM, Liang, Kan wrote:
>>
>> On 6/12/15 8:42 AM, Liang, Kan wrote:
>>>
>>>>
>>>> On 6/11/15 12:47 PM, Andi Kleen wrote:
>>>>>> Can you elaborate on an example? I don't see how this can happen
>>>>>> reading a maps file. And it does not read maps for all threads, only
>>>>>> thread group leaders.
>>>>>
>>>>> This is with a stress test case that generates lots of small
>>>>> mappings at very high speed and frees them again. So the maps file
>>>>> keeps changing faster than the proc reader can keep up with it, and
>>>>> it can end up with a livelock.
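For illustration, a stress workload along the lines Andi describes might look like the sketch below: create and free many small anonymous mappings as fast as possible so that /proc/<pid>/maps keeps changing underneath the reader. The mapping count and size are assumptions for illustration, not Andi's actual test case.

    #define _GNU_SOURCE
    #include <stddef.h>
    #include <sys/mman.h>

    int main(void)
    {
            void *p[256];
            int i;

            /* Endlessly churn small anonymous mappings. */
            for (;;) {
                    for (i = 0; i < 256; i++)
                            p[i] = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                                        MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
                    for (i = 0; i < 256; i++)
                            if (p[i] != MAP_FAILED)
                                    munmap(p[i], 4096);
            }
            return 0;
    }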
>>>>
>>>> Can you pass it along? I'd like to see how the task_diag proposal
>>>> handles it.
>>>>
>>>> https://github.com/dsahern/linux/commits/task_diag-wip
>>>
>>> Hi David,
>>>
>>> I tried task_diag on my platform, but it shows an error message when
>>> I run perf top: "Message handling failed: rc -1, errno 25".
>>> And it looks like perf top failed to get the maps information.
>>
>> Not surprising; it's only half-baked. Can you try perf-record? So far that is
>> the only one I have tested.
>>
>
> Perf record cannot reproduce the infinite loop which was found in perf top,
> but we can observe that synthesizing threads takes a very long time in perf record.
>
> According to the test results below, current perf costs 13s to read the maps,
> while task_diag costs 14s to synthesize the threads.
> (Note: the time increases the longer the test runs.)
>
> So it looks like task_diag doesn't help with this issue.
>
> [perf]$ sudo ./perf record -e instructions:pp --pid 14560
> Reading /proc/14560/maps cost 13.12690599 s
> ^C[ perf record: Woken up 1 times to write data ]
> [ perf record: Captured and wrote 0.108 MB perf.data (2783 samples) ]

so perf was able to read the proc file?

>
> [perf]$ sudo ./perf_task_diag record -e instructions:pp --pid 14560
> synthesized threads took 14.435450 sec
> ^C[ perf record: Woken up 1 times to write data ]
> [ perf record: Captured and wrote 0.035 MB perf.data (885 samples) ]
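For context, the time-out named in the patch subject can be sketched roughly as below: read /proc/<pid>/maps line by line, but give up once a wall-clock budget is spent, so a maps file that never stops changing cannot stall the tool forever. The 500 ms budget and the helper names are illustrative assumptions, not the actual patch.

    #include <stdio.h>
    #include <stdbool.h>
    #include <time.h>
    #include <sys/types.h>

    /* Assumed budget for reading one task's maps; the real patch may differ. */
    #define MAPS_READ_TIMEOUT_NS (500ULL * 1000 * 1000)

    static unsigned long long now_ns(void)
    {
            struct timespec ts;

            clock_gettime(CLOCK_MONOTONIC, &ts);
            return (unsigned long long)ts.tv_sec * 1000000000ULL + ts.tv_nsec;
    }

    /* Returns 0 on a full read, 1 if the budget ran out, -1 on open error. */
    static int read_maps_bounded(pid_t pid)
    {
            char path[64], line[4096];
            unsigned long long start = now_ns();
            bool timed_out = false;
            FILE *fp;

            snprintf(path, sizeof(path), "/proc/%d/maps", (int)pid);
            fp = fopen(path, "r");
            if (!fp)
                    return -1;

            while (fgets(line, sizeof(line), fp)) {
                    /* ... synthesize an MMAP event from 'line' here ... */
                    if (now_ns() - start > MAPS_READ_TIMEOUT_NS) {
                            timed_out = true;
                            break;
                    }
            }
            fclose(fp);
            return timed_out ? 1 : 0;
    }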
>
>
>> Also, while running that kernel you can build the test programs under
>> tools/testing/selftests/task_diag/ and try task_diag_all. I am away from my
>> dev box at the moment. As I recall, you will want to try 'task_diag_all o $pid'
>> or 'task_diag_all a'
>>
> Neither option works on my platform.
>
> [task_diag]$ sudo ./task_diag_all a
> Unable to receive message: Operation not supported
> [task_diag]$ sudo ./task_diag_all o 14751
> Unable to receive message: Operation not supported

Are you sure task_diag is enabled? There is an option under General, I 
believe:
config TASK_DIAG
         bool "Export task/process properties through netlink"
         depends on NET && TASKSTATS
         default n
         help
           Export selected properties for tasks/processes through the
           generic netlink interface. Unlike the proc file system, task_diag
           returns information in a binary format and allows you to specify
           which information is required.

           Say N if unsure.
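
A quick way to confirm whether the option made it into the running kernel is to grep the kernel config, assuming it is exposed in the usual places (the second form needs CONFIG_IKCONFIG_PROC):

  $ grep CONFIG_TASK_DIAG /boot/config-$(uname -r)
  $ zcat /proc/config.gz | grep TASK_DIAG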

David


Thread overview: 26+ messages
2015-06-10  7:46 [PATCH 1/1] perf,tools: add time out to force stop endless mmap processing kan.liang
2015-06-11 14:06 ` Arnaldo Carvalho de Melo
2015-06-11 14:27   ` Liang, Kan
2015-06-11 15:37     ` Arnaldo Carvalho de Melo
2015-06-11 15:21   ` David Ahern
2015-06-11 18:47     ` Andi Kleen
2015-06-12  0:33       ` David Ahern
2015-06-12 14:42         ` Liang, Kan
2015-06-12 15:41           ` David Ahern
2015-06-12 17:05             ` Liang, Kan
2015-06-12 17:28               ` David Ahern [this message]
2015-06-12 18:19                 ` Liang, Kan
2015-06-12 19:29                   ` David Ahern
2015-06-12 19:45                     ` Andi Kleen
2015-06-12 20:39                     ` Liang, Kan
2015-06-12 20:52                       ` David Ahern
2015-06-12 22:41                         ` Liang, Kan
2015-06-13  4:07                           ` David Ahern
2015-06-13 14:59                             ` Liang, Kan
2015-06-13  4:24                       ` David Ahern
2015-06-13 15:06                         ` Liang, Kan
2015-06-16 15:11                         ` Arnaldo Carvalho de Melo
2015-06-16 15:44                           ` Andi Kleen
2015-06-16 15:57                           ` David Ahern
2015-06-16 16:42                           ` Liang, Kan
2015-06-16 18:08                             ` Arnaldo Carvalho de Melo
