From: Oleg Nesterov <oleg@redhat.com>
To: Jiri Olsa <jolsa@redhat.com>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>,
Paul Mackerras <paulus@samba.org>, Ingo Molnar <mingo@elte.hu>,
linux-kernel@vger.kernel.org
Subject: Re: [PATCH,RFC] perf: panic due to inclied cpu context task_ctx value
Date: Fri, 25 Mar 2011 20:10:34 +0100
Message-ID: <20110325191034.GA26454@redhat.com>
In-Reply-To: <20110324164436.GC1930@jolsa.brq.redhat.com>
On 03/24, Jiri Olsa wrote:
>
> - close is called on event on CPU 0:
> - the task is scheduled on CPU 0
> - __perf_event_task_sched_in is called
> - cpuctx->task_ctx is set
> - perf_sched_events jump label is decremented and == 0
> - __perf_event_task_sched_out is not called
> - cpuctx->task_ctx on CPU 0 stays set
I think you are right.
And this is already wrong afaics, even if we add the workaround into
free_ctx/etc.
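To recap why the stale task_ctx survives: the sched-out hook is gated by
that jump label. Roughly (jump-label machinery simplified to a plain
counter check, details from memory, not the exact code):

static inline void
perf_event_task_sched_out(struct task_struct *task, struct task_struct *next)
{
        /*
         * Once the last counter is closed perf_sched_events drops to 0,
         * this test fails, and __perf_event_task_sched_out() - the only
         * path that clears cpuctx->task_ctx on schedule out - is skipped.
         */
        if (atomic_read(&perf_sched_events))
                __perf_event_task_sched_out(task, next);
}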
For example, suppose that we attach another PERF_ATTACH_TASK counter
to this task later. perf_sched_events will be incremented again, but
perf_install_in_context() will hang, retrying until this task runs
again, because ctx->is_active is still T.
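In case it helps, the retry loop I mean looks roughly like this
(paraphrased from memory, simplified, not the exact code):

retry:
        if (!task_function_call(task, __perf_install_in_context, event))
                return; /* the task was running, the cross-call did the work */

        raw_spin_lock_irq(&ctx->lock);
        /*
         * ctx->is_active was never cleared because the sched-out hook was
         * skipped when the task left CPU 0, so we keep looping back to
         * retry even though the task is not running anywhere.
         */
        if (ctx->is_active) {
                raw_spin_unlock_irq(&ctx->lock);
                goto retry;
        }
        add_event_to_ctx(event, ctx);
        raw_spin_unlock_irq(&ctx->lock);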
Or, even if sys_perf_event_open() succeeds, the next
perf_event_context_sched_in() will do nothing, because it only does the
work if cpuctx->task_ctx != ctx and the stale task_ctx still points to
this ctx (unless another task has a counter and schedules in on that
CPU).
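And the cpuctx->task_ctx check I am talking about, again paraphrased
and approximate (the rest of the body elided):

static void perf_event_context_sched_in(struct perf_event_context *ctx)
{
        struct perf_cpu_context *cpuctx = __get_cpu_context(ctx);

        /*
         * The stale cpuctx->task_ctx left behind by the missed sched-out
         * still points to this ctx, so we think the context is already
         * scheduled in and return without doing anything.
         */
        if (cpuctx->task_ctx == ctx)
                return;

        /* ... actually schedule the task context in ... */
}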
This should be fixed somehow, but so far I have no ideas.
Oleg.
Thread overview: 41+ messages
2011-03-24 16:44 [PATCH,RFC] perf: panic due to inclied cpu context task_ctx value Jiri Olsa
2011-03-25 19:10 ` Oleg Nesterov [this message]
2011-03-26 15:37 ` Peter Zijlstra
2011-03-26 16:13 ` Oleg Nesterov
2011-03-26 16:38 ` Peter Zijlstra
2011-03-26 17:09 ` Oleg Nesterov
2011-03-26 17:35 ` Oleg Nesterov
2011-03-26 18:29 ` Peter Zijlstra
2011-03-26 18:49 ` Oleg Nesterov
2011-03-28 13:30 ` Oleg Nesterov
2011-03-28 14:57 ` Peter Zijlstra
2011-03-28 15:00 ` Peter Zijlstra
2011-03-28 15:15 ` Oleg Nesterov
2011-03-28 16:27 ` Peter Zijlstra
2011-03-28 15:39 ` Oleg Nesterov
2011-03-28 15:49 ` Peter Zijlstra
2011-03-28 16:56 ` Oleg Nesterov
2011-03-29 8:32 ` Peter Zijlstra
2011-03-29 10:49 ` Peter Zijlstra
2011-03-29 16:28 ` Oleg Nesterov
2011-03-29 19:01 ` Peter Zijlstra
2011-03-30 13:09 ` Jiri Olsa
2011-03-30 14:51 ` Peter Zijlstra
2011-03-30 16:37 ` Oleg Nesterov
2011-03-30 18:30 ` Paul E. McKenney
2011-03-30 19:53 ` Oleg Nesterov
2011-03-30 21:26 ` Peter Zijlstra
2011-03-30 21:35 ` Oleg Nesterov
2011-03-31 10:32 ` Jiri Olsa
2011-03-31 12:41 ` [tip:perf/urgent] perf: Fix task context scheduling tip-bot for Peter Zijlstra
2011-03-31 13:28 ` [PATCH,RFC] perf: panic due to inclied cpu context task_ctx value Oleg Nesterov
2011-03-31 13:51 ` Peter Zijlstra
2011-03-31 14:10 ` Oleg Nesterov
2011-04-04 16:20 ` Oleg Nesterov
2011-03-30 15:32 ` Oleg Nesterov
2011-03-30 15:40 ` Peter Zijlstra
2011-03-30 15:52 ` Oleg Nesterov
2011-03-30 15:57 ` Peter Zijlstra
2011-03-30 16:11 ` Peter Zijlstra
2011-03-30 17:13 ` Oleg Nesterov
2011-03-26 17:09 ` Peter Zijlstra