From: Jiri Olsa <jolsa@redhat.com>
To: 叶澄锋 <dg573847474@gmail.com>
Cc: peterz@infradead.org, mingo@redhat.com, acme@kernel.org,
mark.rutland@arm.com, alexander.shishkin@linux.intel.com,
namhyung@kernel.org, linux-perf-users@vger.kernel.org,
linux-kernel@vger.kernel.org, CAI Yuandao <ycaibb@cse.ust.hk>
Subject: Re: Possible deadlock errors in tools/perf/builtin-sched.c
Date: Tue, 31 Aug 2021 20:46:36 +0200
Message-ID: <YS5uAwp8dGn4CK1V@krava>
In-Reply-To: <CAAo+4rXNLdgvAiT2-B8cWtLNPnWoGo9RWMW=8SPchzRgxJ4BhA@mail.gmail.com>

On Sat, Aug 28, 2021 at 03:57:17PM +0800, 叶澄锋 wrote:
> Dear developers:
>
> Thank you for your checking.
>
> It seems there are two possible deadlocks involving the locks
> *sched->work_done_wait_mutex* and *sched->start_work_mutex*.
>
> They can be triggered because one thread (A) calls *run_one_test* in a
> loop and does not release the two locks taken in the *wait_for_tasks*
> function, while another thread (B) acquires the same two locks in
> *thread_func*.
>
> Because thread A does not properly release the two locks, a deadlock
> can occur when thread B tries to acquire them.

hi,
do you have a way to reproduce this?
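
fwiw, the replay path can be exercised with something like
'perf sched record -- <some workload>' followed by 'perf sched replay'
(iirc the -r/--repeat option drives the sched->replay_repeat loop below).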
thanks,
jirka
>
> The related codes are below:
>
> Thread A:
>
> static void create_tasks(struct perf_sched *sched)
> {
>         ...;
>         err = pthread_mutex_lock(&sched->start_work_mutex);
>         ...;
>         err = pthread_mutex_lock(&sched->work_done_wait_mutex);
>         ...;
> }
>
> static int perf_sched__replay(struct perf_sched *sched)
> {
>         ...;
>
>         create_tasks(sched);
>         printf("------------------------------------------------------------\n");
>         for (i = 0; i < sched->replay_repeat; i++)
>                 run_one_test(sched); // reacquires sched->work_done_wait_mutex and
>                                      // sched->start_work_mutex on every iteration
>
>         return 0;
> }
>
> static void run_one_test(struct perf_sched *sched)
> {
>         ...;
>         wait_for_tasks(sched);
>         ...;
> }
>
> static void wait_for_tasks(struct perf_sched *sched)
> {
>         ...;
>         pthread_mutex_unlock(&sched->work_done_wait_mutex);
>
>         ...;
>         ret = pthread_mutex_lock(&sched->work_done_wait_mutex);
>         ...;
>         pthread_mutex_unlock(&sched->start_work_mutex);
>
>         ...;
>
>         ret = pthread_mutex_lock(&sched->start_work_mutex);
>         ...;
> }
>
> Thread B:
>
> static void *thread_func(void *ctx)
> {
>         ...;
>         ret = pthread_mutex_lock(&sched->start_work_mutex);
>         ...;
>         ret = pthread_mutex_unlock(&sched->start_work_mutex);
>
>         ...;
>
>         ret = pthread_mutex_lock(&sched->work_done_wait_mutex);
>         ...;
>         ret = pthread_mutex_unlock(&sched->work_done_wait_mutex);
>         ...;
> }
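>
> To make the hand-off between the two threads easier to follow, here is a
> small self-contained sketch of the same two-mutex "start/done gate"
> pattern. It is only an illustration: the worker/coordinator names, the
> single pair of semaphores and the fixed number of rounds are made up for
> this sketch and are not the actual builtin-sched.c code.
>
> #include <pthread.h>
> #include <semaphore.h>
> #include <stdio.h>
>
> #define NR_WORKERS 4
> #define NR_ROUNDS  3
>
> static pthread_mutex_t start_work     = PTHREAD_MUTEX_INITIALIZER;
> static pthread_mutex_t work_done_wait = PTHREAD_MUTEX_INITIALIZER;
> static sem_t ready_sem;  /* worker -> coordinator: "parked at the start gate" */
> static sem_t done_sem;   /* worker -> coordinator: "work for this round done" */
>
> static void *worker(void *arg)
> {
>         long id = (long)arg;
>
>         for (int round = 0; round < NR_ROUNDS; round++) {
>                 sem_post(&ready_sem);
>                 /* start gate: blocks while the coordinator holds start_work */
>                 pthread_mutex_lock(&start_work);
>                 pthread_mutex_unlock(&start_work);
>
>                 printf("worker %ld: round %d\n", id, round);
>                 sem_post(&done_sem);
>                 /* done gate: blocks while the coordinator holds work_done_wait */
>                 pthread_mutex_lock(&work_done_wait);
>                 pthread_mutex_unlock(&work_done_wait);
>         }
>         return NULL;
> }
>
> int main(void)
> {
>         pthread_t tid[NR_WORKERS];
>
>         sem_init(&ready_sem, 0, 0);
>         sem_init(&done_sem, 0, 0);
>
>         /* the coordinator holds both gates before any worker starts */
>         pthread_mutex_lock(&start_work);
>         pthread_mutex_lock(&work_done_wait);
>
>         for (long i = 0; i < NR_WORKERS; i++)
>                 pthread_create(&tid[i], NULL, worker, (void *)i);
>
>         for (int round = 0; round < NR_ROUNDS; round++) {
>                 /* let the workers out of the previous done gate */
>                 pthread_mutex_unlock(&work_done_wait);
>                 /* once every worker is parked at the start gate ... */
>                 for (int i = 0; i < NR_WORKERS; i++)
>                         sem_wait(&ready_sem);
>                 /* ... close the done gate again */
>                 pthread_mutex_lock(&work_done_wait);
>
>                 /* open the start gate and wait for this round's work */
>                 pthread_mutex_unlock(&start_work);
>                 for (int i = 0; i < NR_WORKERS; i++)
>                         sem_wait(&done_sem);
>                 /* every worker has passed the start gate, close it again */
>                 pthread_mutex_lock(&start_work);
>         }
>
>         /* release both gates so the workers can leave their last round */
>         pthread_mutex_unlock(&work_done_wait);
>         pthread_mutex_unlock(&start_work);
>
>         for (int i = 0; i < NR_WORKERS; i++)
>                 pthread_join(tid[i], NULL);
>         return 0;
> }
>
> The point of the sketch is that the coordinator releases and then re-takes
> each gate on every round, which is the "reacquisition" mentioned above.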
>
>
> Thanks,