linux-perf-users.vger.kernel.org archive mirror
From: David Ahern <dsahern@gmail.com>
To: Rick Jones <rick.jones2@hpe.com>,
	M Kelly <mark.kelly@lexisnexis.com>,
	linux-perf-users@vger.kernel.org
Subject: Re: spin_lock cause ?
Date: Wed, 1 Jun 2016 15:10:27 -0600	[thread overview]
Message-ID: <bfebe136-5f0c-2fb7-4892-954e7f271e2c@gmail.com> (raw)
In-Reply-To: <574F447F.1000503@hpe.com>

On 6/1/16 2:24 PM, Rick Jones wrote:
> On 06/01/2016 12:25 PM, M Kelly wrote:
>> I think I recall you from hp, years ago when I was there :-)
>> Thanks for the info.  I suspect we are too heavy on
>> pthread_mutex_lock()/unlock()
>
> HP-UX has some ways/tools to track mutex contention.  Some extensions to
> give names to mutexes and then get stats on how often one went to grab
> them and it was held etc etc.  Been ages since I used them - I'd hope
> there was something similar for Linux but I've not played with threads
> in a very long time.

If the kernel is new enough (3.8 or 3.10; I forget when userspace return 
probes were added), you can combine probes on the pthread mutex 
functions with the futex system call tracepoints to analyze lock 
contention.

From an old script for x86:

PL=/lib64/libpthread.so.0
perf probe -x ${PL} -a 'mutex_lock=pthread_mutex_lock addr=%di'
perf probe -x ${PL} -a 'mutex_trylock=pthread_mutex_trylock addr=%di'
perf probe -x ${PL} -a 'mutex_timedlock=pthread_mutex_timedlock addr=%di'
perf probe -x ${PL} -a 'mutex_unlock=pthread_mutex_unlock addr=%di'

perf probe -x ${PL} -a 'mutex_lock_ret=pthread_mutex_lock%return ret=%ax'
perf probe -x ${PL} -a 'mutex_trylock_ret=pthread_mutex_trylock%return ret=%ax'
perf probe -x ${PL} -a 'mutex_timedlock_ret=pthread_mutex_timedlock%return ret=%ax'


perf record -a -e 'probe_libpthread:*' -e syscalls:sys_enter_futex,syscalls:sys_exit_futex -- <workload spec>
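Worth noting why the futex tracepoints are the contention signal: an 
uncontended glibc mutex is taken entirely in user space, so a thread only 
enters the futex(2) syscall when it has to block. Pairing each thread's 
sys_enter_futex with the matching sys_exit_futex therefore measures time 
spent blocked. A minimal sketch of that pairing in plain Python (the 
event tuples here are an illustrative format, not perf's actual 
tracepoint payload):

```python
# Pair sys_enter_futex / sys_exit_futex events per thread to measure
# how long each thread spent blocked in the kernel on a futex.
# Events are (tid, timestamp_ns, kind) tuples -- an illustrative
# shape, not the real perf tracepoint record.

def futex_wait_times(events):
    pending = {}   # tid -> timestamp of the unmatched enter event
    waits = {}     # tid -> total nanoseconds spent blocked
    for tid, ts, kind in events:
        if kind == "enter":
            pending[tid] = ts
        elif kind == "exit" and tid in pending:
            waits[tid] = waits.get(tid, 0) + ts - pending.pop(tid)
    return waits

# Example: thread 101 blocks twice, thread 102 once.
events = [
    (101, 1000, "enter"), (101, 5000, "exit"),
    (102, 2000, "enter"), (102, 2500, "exit"),
    (101, 6000, "enter"), (101, 9000, "exit"),
]
waits = futex_wait_times(events)
print(waits)   # -> {101: 7000, 102: 500}
```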

perf script -g python

--> edit perf-script.py for analysis of interest.

perf script -s perf-script.py

e.g., you can track which thread id holds a lock and for how long, how 
many contended acquisitions there were, which thread got the lock when 
it was released, etc.
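The bookkeeping behind that kind of analysis is small: a map from lock 
address to (owner tid, acquire timestamp), updated from the entry, 
return, and unlock probes. A sketch of that state machine under assumed 
event shapes (the real perf-script.py handlers have generated names and 
signatures; this only shows the logic you would put inside them):

```python
# Minimal lock-contention state machine of the kind a perf-script.py
# might maintain.  Method names and arguments are illustrative, not
# the handler signatures perf generates.

class LockTracker:
    def __init__(self):
        self.owner = {}      # lock addr -> (tid, acquire timestamp)
        self.hold_ns = {}    # lock addr -> cumulative time held
        self.contended = {}  # lock addr -> contended lock attempts

    def attempt(self, tid, addr, ts):
        # mutex_lock probe fired: contended if another thread holds it
        if addr in self.owner and self.owner[addr][0] != tid:
            self.contended[addr] = self.contended.get(addr, 0) + 1

    def acquired(self, tid, addr, ts):
        # mutex_lock_ret probe fired with ret == 0
        self.owner[addr] = (tid, ts)

    def released(self, tid, addr, ts):
        # mutex_unlock probe fired: credit the hold time to this lock
        holder = self.owner.pop(addr, None)
        if holder is not None:
            self.hold_ns[addr] = self.hold_ns.get(addr, 0) + ts - holder[1]

t = LockTracker()
t.attempt(1, 0xdead, 100); t.acquired(1, 0xdead, 110)
t.attempt(2, 0xdead, 150)   # thread 1 still holds it: contended
t.released(1, 0xdead, 200)  # thread 1 held it for 90 units
t.acquired(2, 0xdead, 205)
t.released(2, 0xdead, 300)  # thread 2 held it for 95 units
print(t.hold_ns[0xdead], t.contended[0xdead])   # -> 185 1
```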

Many years ago I used the above to understand a nasty apparent deadlock 
in a process.


Thread overview: 10+ messages
2016-06-01 17:05 spin_lock cause ? Kelly, Mark (RIS-BCT)
2016-06-01 17:37 ` Milian Wolff
2016-06-01 17:43   ` M Kelly
2016-06-01 18:11     ` Arnaldo Carvalho de Melo
2016-06-01 18:49       ` M Kelly
2016-06-01 18:53         ` Rick Jones
2016-06-01 19:25           ` M Kelly
2016-06-01 20:24             ` Rick Jones
2016-06-01 21:10               ` David Ahern [this message]
2016-06-02  9:18         ` Milian Wolff
