From: Greg KH <gregkh@linuxfoundation.org>
To: "李棒(伯兮)" <libang.li@antgroup.com>
Cc: peterz@infradead.org, mingo@redhat.com, acme@kernel.org,
mark.rutland@arm.com, alexander.shishkin@linux.intel.com,
jolsa@kernel.org, namhyung@kernel.org,
linux-perf-users@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH 6.1.y] perf/core: Fix possible deadlock in sys_perf_event_open()
Date: Wed, 6 Sep 2023 20:46:02 +0100
Message-ID: <2023090652-obsessed-scrutiny-d388@gregkh>
In-Reply-To: <20230906163821.85031-1-libang.li@antgroup.com>
On Thu, Sep 07, 2023 at 12:38:21AM +0800, 李棒(伯兮) wrote:
> In certain scenarios, gctx and ctx may be equal in the
> __perf_event_ctx_lock_double() function, resulting in a deadlock.
>
> Threads 1, 2 and 3 belong to the same process, whose PID is assumed
> to be M. The deadlock scenario is as follows:
>
> 1) Thread 1 creates a pure software group via the sys_perf_event_open()
> system call, which returns an fd; assume its value is N.
>
> For example:
> perf_event_attr.type = PERF_TYPE_SOFTWARE;
> pid = M;
> cpu = 0;
> group_fd = -1;
> flags = 0;
> N = sys_perf_event_open(&perf_event_attr, pid, cpu, group_fd, flags);
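>
> A minimal userspace sketch of this step (glibc provides no wrapper for
> this syscall, so a raw syscall(2) stub is used; the helper names, the
> PERF_COUNT_SW_CPU_CLOCK config and the value of M are illustrative,
> and error handling is omitted):
>
>     #include <string.h>
>     #include <unistd.h>
>     #include <sys/types.h>
>     #include <sys/syscall.h>
>     #include <linux/perf_event.h>
>
>     /* Thin stub: the kernel exposes no libc wrapper for this syscall. */
>     static int perf_event_open(struct perf_event_attr *attr, pid_t pid,
>                                int cpu, int group_fd, unsigned long flags)
>     {
>         return syscall(SYS_perf_event_open, attr, pid, cpu,
>                        group_fd, flags);
>     }
>
>     static int create_sw_group(pid_t M)
>     {
>         struct perf_event_attr attr;
>
>         memset(&attr, 0, sizeof(attr));
>         attr.size = sizeof(attr);
>         attr.type = PERF_TYPE_SOFTWARE;         /* pure software event */
>         attr.config = PERF_COUNT_SW_CPU_CLOCK;  /* any sw event works */
>
>         /* group_fd = -1: this event becomes the group leader, N. */
>         return perf_event_open(&attr, M, 0, -1, 0);
>     }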
>
> 2) Threads 2 and 3, each running on a different CPU, concurrently call
> sys_perf_event_open() with identical parameters, passing the fd
> created by thread 1 as group_fd.
>
> For example:
> perf_event_attr.type = PERF_TYPE_HARDWARE;
> pid = M;
> cpu = 0;
> group_fd = N;
> flags = 0;
> sys_perf_event_open(&perf_event_attr, pid, cpu, group_fd, flags);
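>
> A hedged reproduction sketch of this step (reusing the
> perf_event_open() stub from the sketch above; target_pid and group_fd
> stand in for M and N, and the PERF_COUNT_HW_CPU_CYCLES config is
> illustrative; no error handling):
>
>     #include <pthread.h>
>
>     struct race_args {
>         pid_t target_pid;   /* M */
>         int   group_fd;     /* N, from thread 1 */
>     };
>
>     static void *attach_hw_event(void *p)
>     {
>         struct race_args *a = p;
>         struct perf_event_attr attr;
>
>         memset(&attr, 0, sizeof(attr));
>         attr.size = sizeof(attr);
>         attr.type = PERF_TYPE_HARDWARE;
>         attr.config = PERF_COUNT_HW_CPU_CYCLES;
>
>         /* Both callers race to move the software group led by
>          * group_fd into the hardware context. */
>         perf_event_open(&attr, a->target_pid, 0, a->group_fd, 0);
>         return NULL;
>     }
>
>     static void race(struct race_args *args)
>     {
>         pthread_t t2, t3;
>
>         pthread_create(&t2, NULL, attach_hw_event, args);
>         pthread_create(&t3, NULL, attach_hw_event, args);
>         pthread_join(t2, NULL);   /* hangs if the deadlock triggers */
>         pthread_join(t3, NULL);
>     }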
>
> 3) In __perf_event_ctx_lock_double(), assume thread 2 acquires
> gctx->mutex and ctx->mutex first, so thread 3 has to wait. Meanwhile,
> thread 2 moves the pure software group into the hardware context and
> repoints group_leader->ctx at that hardware context.
>
> 4) When thread 2 releases gctx->mutex and ctx->mutex, thread 3 acquires
> them, finds that group_leader->ctx != gctx, drops both locks and
> retries with a reloaded gctx. On this retry gctx is equal to thread
> 3's own ctx, so taking "both" mutexes means locking the same mutex
> twice, which deadlocks.
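>
> For reference, the retry logic at the heart of the problem, paraphrased
> from __perf_event_ctx_lock_double() in kernel/events/core.c (simplified
> here; the exact code differs between kernel versions):
>
>     again:
>         rcu_read_lock();
>         gctx = READ_ONCE(group_leader->ctx);
>         if (!refcount_inc_not_zero(&gctx->refcount)) {
>             /* the context is being freed, reload it */
>             rcu_read_unlock();
>             goto again;
>         }
>         rcu_read_unlock();
>
>         mutex_lock_double(&gctx->mutex, &ctx->mutex);
>
>         if (group_leader->ctx != gctx) {
>             /* the leader moved while we slept on the mutexes */
>             mutex_unlock(&ctx->mutex);
>             mutex_unlock(&gctx->mutex);
>             put_ctx(gctx);
>             goto again;    /* on the retry gctx can now equal ctx, and
>                             * locking the same mutex twice through
>                             * mutex_lock_double() self-deadlocks */
>         }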
>
> Fixes: 321027c1fe77 ("perf/core: Fix concurrent sys_perf_event_open() vs. 'move_group' race")
> Signed-off-by: Bang Li <libang.li@antgroup.com>
> ---
> kernel/events/core.c | 10 ++++++++++
> 1 file changed, 10 insertions(+)
>
<formletter>
This is not the correct way to submit patches for inclusion in the
stable kernel tree. Please read:
https://www.kernel.org/doc/html/latest/process/stable-kernel-rules.html
for how to do this properly.
</formletter>