public inbox for linux-kernel@vger.kernel.org
* [PATCH] kprobes,x86: disable irq durinr optimized callback
@ 2011-04-26 13:01 Jiri Olsa
  2011-04-26 13:46 ` Steven Rostedt
  2011-05-10 10:40 ` Ingo Molnar
  0 siblings, 2 replies; 10+ messages in thread
From: Jiri Olsa @ 2011-04-26 13:01 UTC (permalink / raw)
  To: masami.hiramatsu.pt; +Cc: linux-kernel, mingo, Jiri Olsa

hi,

The attached patch disables irqs during the optimized callback,
so we don't count any in-irq kprobes as missed.

Also, I think there's a small window where the current_kprobe
variable could be touched in a non-safe way, but I was not able
to hit any issue.

I'm not sure whether this is a bug or if it was intentional to have
irqs enabled during the pre_handler callback.

wbr,
jirka

---
Disable irqs during the optimized callback, so we don't count
any in-irq kprobes as missed.

Interrupts are also disabled during non-optimized kprobes callbacks.

Signed-off-by: Jiri Olsa <jolsa@redhat.com>
---
 arch/x86/kernel/kprobes.c |    3 +++
 1 files changed, 3 insertions(+), 0 deletions(-)

diff --git a/arch/x86/kernel/kprobes.c b/arch/x86/kernel/kprobes.c
index c969fd9..917cb31 100644
--- a/arch/x86/kernel/kprobes.c
+++ b/arch/x86/kernel/kprobes.c
@@ -1183,11 +1183,13 @@ static void __kprobes optimized_callback(struct optimized_kprobe *op,
 					 struct pt_regs *regs)
 {
 	struct kprobe_ctlblk *kcb = get_kprobe_ctlblk();
+	unsigned long flags;
 
 	/* This is possible if op is under delayed unoptimizing */
 	if (kprobe_disabled(&op->kp))
 		return;
 
+	local_irq_save(flags);
 	preempt_disable();
 	if (kprobe_running()) {
 		kprobes_inc_nmissed_count(&op->kp);
@@ -1208,6 +1210,7 @@ static void __kprobes optimized_callback(struct optimized_kprobe *op,
 		__this_cpu_write(current_kprobe, NULL);
 	}
 	preempt_enable_no_resched();
+	local_irq_restore(flags);
 }
 
 static int __kprobes copy_optimized_instructions(u8 *dest, u8 *src)
-- 
1.7.1


^ permalink raw reply related	[flat|nested] 10+ messages in thread

* Re: [PATCH] kprobes,x86: disable irq durinr optimized callback
  2011-04-26 13:01 [PATCH] kprobes,x86: disable irq durinr optimized callback Jiri Olsa
@ 2011-04-26 13:46 ` Steven Rostedt
  2011-04-26 14:19   ` Jiri Olsa
  2011-05-10 10:40 ` Ingo Molnar
  1 sibling, 1 reply; 10+ messages in thread
From: Steven Rostedt @ 2011-04-26 13:46 UTC (permalink / raw)
  To: Jiri Olsa; +Cc: masami.hiramatsu.pt, linux-kernel, mingo

On Tue, Apr 26, 2011 at 03:01:31PM +0200, Jiri Olsa wrote:
> hi,
> 
> The attached patch disables irqs during the optimized callback,
> so we don't count any in-irq kprobes as missed.
> 
> Also, I think there's a small window where the current_kprobe
> variable could be touched in a non-safe way, but I was not able
> to hit any issue.
> 
> I'm not sure whether this is a bug or if it was intentional to have
> irqs enabled during the pre_handler callback.

That's not very convincing. Did you see if we actually did miss events?
If that's the case, then it is a bug. The conversion to optimized
kprobes should not cause events to be missed.


> 
> wbr,
> jirka
> 
> ---
> Disable irqs during the optimized callback, so we don't count
> any in-irq kprobes as missed.
> 
> Interrupts are also disabled during non-optimized kprobes callbacks.
> 
> Signed-off-by: Jiri Olsa <jolsa@redhat.com>
> ---
>  arch/x86/kernel/kprobes.c |    3 +++
>  1 files changed, 3 insertions(+), 0 deletions(-)
> 
> diff --git a/arch/x86/kernel/kprobes.c b/arch/x86/kernel/kprobes.c
> index c969fd9..917cb31 100644
> --- a/arch/x86/kernel/kprobes.c
> +++ b/arch/x86/kernel/kprobes.c
> @@ -1183,11 +1183,13 @@ static void __kprobes optimized_callback(struct optimized_kprobe *op,
>  					 struct pt_regs *regs)
>  {
>  	struct kprobe_ctlblk *kcb = get_kprobe_ctlblk();
> +	unsigned long flags;
>  
>  	/* This is possible if op is under delayed unoptimizing */
>  	if (kprobe_disabled(&op->kp))
>  		return;
>  
> +	local_irq_save(flags);
>  	preempt_disable();

No reason to disable preemption if you disabled interrupts.

>  	if (kprobe_running()) {
>  		kprobes_inc_nmissed_count(&op->kp);
> @@ -1208,6 +1210,7 @@ static void __kprobes optimized_callback(struct optimized_kprobe *op,
>  		__this_cpu_write(current_kprobe, NULL);
>  	}
>  	preempt_enable_no_resched();

Remove the preempt_enable_no_resched() as well.

BTW, what's up with all these preempt_enable_no_resched()'s lying
around in the kprobe code? Looks to me like this can cause lots of
missed wakeups (preemption leaks), which would make this horrible
for real-time.

-- Steve

> +	local_irq_restore(flags);
>  }
>  
>  static int __kprobes copy_optimized_instructions(u8 *dest, u8 *src)


* Re: [PATCH] kprobes,x86: disable irq durinr optimized callback
  2011-04-26 13:46 ` Steven Rostedt
@ 2011-04-26 14:19   ` Jiri Olsa
  2011-04-27  0:51     ` Masami Hiramatsu
  0 siblings, 1 reply; 10+ messages in thread
From: Jiri Olsa @ 2011-04-26 14:19 UTC (permalink / raw)
  To: Steven Rostedt; +Cc: masami.hiramatsu.pt, linux-kernel, mingo

On Tue, Apr 26, 2011 at 09:46:25AM -0400, Steven Rostedt wrote:
> On Tue, Apr 26, 2011 at 03:01:31PM +0200, Jiri Olsa wrote:
> > hi,
> > 
> > The attached patch disables irqs during the optimized callback,
> > so we don't count any in-irq kprobes as missed.
> > 
> > Also, I think there's a small window where the current_kprobe
> > variable could be touched in a non-safe way, but I was not able
> > to hit any issue.
> > 
> > I'm not sure whether this is a bug or if it was intentional to have
> > irqs enabled during the pre_handler callback.
> 
> That's not very convincing. Did you see if we actually did miss events?
> If that's the case, then it is a bug. The conversion to optimized
> kprobes should not cause events to be missed.

yep, running the following:

# cd /debug/tracing/
# echo "p mutex_unlock" >> kprobe_events
# echo "p _raw_spin_lock" >> kprobe_events
# echo "p smp_apic_timer_interrupt" >> ./kprobe_events
# echo 1 > events/enable

causes the optimized kprobes to be missed. They are not missed in
the same testcase with non-optimized kprobes. I should have
mentioned that, sorry ;)

> 
> 
> > 
> > wbr,
> > jirka
> > 
> > ---
> > Disable irqs during the optimized callback, so we don't count
> > any in-irq kprobes as missed.
> > 
> > Interrupts are also disabled during non-optimized kprobes callbacks.
> > 
> > Signed-off-by: Jiri Olsa <jolsa@redhat.com>
> > ---
> >  arch/x86/kernel/kprobes.c |    3 +++
> >  1 files changed, 3 insertions(+), 0 deletions(-)
> > 
> > diff --git a/arch/x86/kernel/kprobes.c b/arch/x86/kernel/kprobes.c
> > index c969fd9..917cb31 100644
> > --- a/arch/x86/kernel/kprobes.c
> > +++ b/arch/x86/kernel/kprobes.c
> > @@ -1183,11 +1183,13 @@ static void __kprobes optimized_callback(struct optimized_kprobe *op,
> >  					 struct pt_regs *regs)
> >  {
> >  	struct kprobe_ctlblk *kcb = get_kprobe_ctlblk();
> > +	unsigned long flags;
> >  
> >  	/* This is possible if op is under delayed unoptimizing */
> >  	if (kprobe_disabled(&op->kp))
> >  		return;
> >  
> > +	local_irq_save(flags);
> >  	preempt_disable();
> 
> No reason to disable preemption if you disabled interrupts.

oops, missed that... attaching new patch

thanks,
jirka


---
Disable irqs during the optimized callback, so we don't count
any in-irq kprobes as missed.

Running the following:

# cd /debug/tracing/
# echo "p mutex_unlock" >> kprobe_events
# echo "p _raw_spin_lock" >> kprobe_events
# echo "p smp_apic_timer_interrupt" >> ./kprobe_events
# echo 1 > events/enable

causes the optimized kprobes to be missed. None are missed
if kprobe optimization is disabled.

Signed-off-by: Jiri Olsa <jolsa@redhat.com>
---
 arch/x86/kernel/kprobes.c |    5 +++--
 1 files changed, 3 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kernel/kprobes.c b/arch/x86/kernel/kprobes.c
index c969fd9..f1a6244 100644
--- a/arch/x86/kernel/kprobes.c
+++ b/arch/x86/kernel/kprobes.c
@@ -1183,12 +1183,13 @@ static void __kprobes optimized_callback(struct optimized_kprobe *op,
 					 struct pt_regs *regs)
 {
 	struct kprobe_ctlblk *kcb = get_kprobe_ctlblk();
+	unsigned long flags;
 
 	/* This is possible if op is under delayed unoptimizing */
 	if (kprobe_disabled(&op->kp))
 		return;
 
-	preempt_disable();
+	local_irq_save(flags);
 	if (kprobe_running()) {
 		kprobes_inc_nmissed_count(&op->kp);
 	} else {
@@ -1207,7 +1208,7 @@ static void __kprobes optimized_callback(struct optimized_kprobe *op,
 		opt_pre_handler(&op->kp, regs);
 		__this_cpu_write(current_kprobe, NULL);
 	}
-	preempt_enable_no_resched();
+	local_irq_restore(flags);
 }
 
 static int __kprobes copy_optimized_instructions(u8 *dest, u8 *src)
-- 
1.7.1



* Re: [PATCH] kprobes,x86: disable irq durinr optimized callback
  2011-04-26 14:19   ` Jiri Olsa
@ 2011-04-27  0:51     ` Masami Hiramatsu
  2011-05-09 11:11       ` Jiri Olsa
  0 siblings, 1 reply; 10+ messages in thread
From: Masami Hiramatsu @ 2011-04-27  0:51 UTC (permalink / raw)
  To: Jiri Olsa, mingo; +Cc: Steven Rostedt, linux-kernel

(2011/04/26 23:19), Jiri Olsa wrote:
> On Tue, Apr 26, 2011 at 09:46:25AM -0400, Steven Rostedt wrote:
>> On Tue, Apr 26, 2011 at 03:01:31PM +0200, Jiri Olsa wrote:
>>> hi,
>>>
>>> The attached patch disables irqs during the optimized callback,
>>> so we don't count any in-irq kprobes as missed.
>>>
>>> Also, I think there's a small window where the current_kprobe
>>> variable could be touched in a non-safe way, but I was not able
>>> to hit any issue.
>>>
>>> I'm not sure whether this is a bug or if it was intentional to have
>>> irqs enabled during the pre_handler callback.
>>
>> That's not very convincing. Did you see if we actually did miss events?
>> If that's the case, then it is a bug. The conversion to optimized
>> kprobes should not cause events to be missed.
> 
> yep, running following:
> 
> # cd /debug/tracing/
> # echo "p mutex_unlock" >> kprobe_events
> # echo "p _raw_spin_lock" >> kprobe_events
> # echo "p smp_apic_timer_interrupt" >> ./kprobe_events
> # echo 1 > events/enable
> 
> causes the optimized kprobes to be missed. They are not missed in
> the same testcase with non-optimized kprobes. I should have
> mentioned that, sorry ;)

Good catch, that's right! The kprobes int3 path automatically
disables irqs, but the optimized path doesn't, and that causes
unexpected event loss.


> 
>>
>>
>>>
>>> wbr,
>>> jirka
>>>
>>> ---
>>> Disable irqs during the optimized callback, so we don't count
>>> any in-irq kprobes as missed.
>>>
>>> Interrupts are also disabled during non-optimized kprobes callbacks.
>>>
>>> Signed-off-by: Jiri Olsa <jolsa@redhat.com>
>>> ---
>>>  arch/x86/kernel/kprobes.c |    3 +++
>>>  1 files changed, 3 insertions(+), 0 deletions(-)
>>>
>>> diff --git a/arch/x86/kernel/kprobes.c b/arch/x86/kernel/kprobes.c
>>> index c969fd9..917cb31 100644
>>> --- a/arch/x86/kernel/kprobes.c
>>> +++ b/arch/x86/kernel/kprobes.c
>>> @@ -1183,11 +1183,13 @@ static void __kprobes optimized_callback(struct optimized_kprobe *op,
>>>  					 struct pt_regs *regs)
>>>  {
>>>  	struct kprobe_ctlblk *kcb = get_kprobe_ctlblk();
>>> +	unsigned long flags;
>>>  
>>>  	/* This is possible if op is under delayed unoptimizing */
>>>  	if (kprobe_disabled(&op->kp))
>>>  		return;
>>>  
>>> +	local_irq_save(flags);
>>>  	preempt_disable();
>>
>> No reason to disable preemption if you disabled interrupts.

Right,

> ops, missed that..  attaching new patch


> ---
> Disable irqs during the optimized callback, so we don't count
> any in-irq kprobes as missed.
> 
> Running the following:
> 
> # cd /debug/tracing/
> # echo "p mutex_unlock" >> kprobe_events
> # echo "p _raw_spin_lock" >> kprobe_events
> # echo "p smp_apic_timer_interrupt" >> ./kprobe_events
> # echo 1 > events/enable
> 
> causes the optimized kprobes to be missed. None are missed
> if kprobe optimization is disabled.
> 
> Signed-off-by: Jiri Olsa <jolsa@redhat.com>

Acked-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>


Ingo, could you pull this as a bugfix?

Thank you!


> ---
>  arch/x86/kernel/kprobes.c |    5 +++--
>  1 files changed, 3 insertions(+), 2 deletions(-)
> 
> diff --git a/arch/x86/kernel/kprobes.c b/arch/x86/kernel/kprobes.c
> index c969fd9..f1a6244 100644
> --- a/arch/x86/kernel/kprobes.c
> +++ b/arch/x86/kernel/kprobes.c
> @@ -1183,12 +1183,13 @@ static void __kprobes optimized_callback(struct optimized_kprobe *op,
>  					 struct pt_regs *regs)
>  {
>  	struct kprobe_ctlblk *kcb = get_kprobe_ctlblk();
> +	unsigned long flags;
>  
>  	/* This is possible if op is under delayed unoptimizing */
>  	if (kprobe_disabled(&op->kp))
>  		return;
>  
> -	preempt_disable();
> +	local_irq_save(flags);
>  	if (kprobe_running()) {
>  		kprobes_inc_nmissed_count(&op->kp);
>  	} else {
> @@ -1207,7 +1208,7 @@ static void __kprobes optimized_callback(struct optimized_kprobe *op,
>  		opt_pre_handler(&op->kp, regs);
>  		__this_cpu_write(current_kprobe, NULL);
>  	}
> -	preempt_enable_no_resched();
> +	local_irq_restore(flags);
>  }
>  
>  static int __kprobes copy_optimized_instructions(u8 *dest, u8 *src)


-- 
Masami HIRAMATSU
Software Platform Research Dept. Linux Technology Center
Hitachi, Ltd., Yokohama Research Laboratory
E-mail: masami.hiramatsu.pt@hitachi.com


* Re: [PATCH] kprobes,x86: disable irq durinr optimized callback
  2011-04-27  0:51     ` Masami Hiramatsu
@ 2011-05-09 11:11       ` Jiri Olsa
  0 siblings, 0 replies; 10+ messages in thread
From: Jiri Olsa @ 2011-05-09 11:11 UTC (permalink / raw)
  To: mingo; +Cc: Steven Rostedt, linux-kernel, Masami Hiramatsu

On Wed, Apr 27, 2011 at 09:51:23AM +0900, Masami Hiramatsu wrote:
> (2011/04/26 23:19), Jiri Olsa wrote:
> > On Tue, Apr 26, 2011 at 09:46:25AM -0400, Steven Rostedt wrote:
> >> On Tue, Apr 26, 2011 at 03:01:31PM +0200, Jiri Olsa wrote:

SNIP

> >>> hi,
> > 
> > causes the optimized kprobes to be missed. None are missed
> > if kprobe optimization is disabled.
> > 
> > Signed-off-by: Jiri Olsa <jolsa@redhat.com>
> 
> Acked-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
> 
> 
> Ingo, could you pull this as a bugfix?

hi,
could this be pulled in?

thanks,
jirka

> 
> Thank you!
> 
> 
> > ---
> >  arch/x86/kernel/kprobes.c |    5 +++--
> >  1 files changed, 3 insertions(+), 2 deletions(-)
> > 
> > diff --git a/arch/x86/kernel/kprobes.c b/arch/x86/kernel/kprobes.c
> > index c969fd9..f1a6244 100644
> > --- a/arch/x86/kernel/kprobes.c
> > +++ b/arch/x86/kernel/kprobes.c
> > @@ -1183,12 +1183,13 @@ static void __kprobes optimized_callback(struct optimized_kprobe *op,
> >  					 struct pt_regs *regs)
> >  {
> >  	struct kprobe_ctlblk *kcb = get_kprobe_ctlblk();
> > +	unsigned long flags;
> >  
> >  	/* This is possible if op is under delayed unoptimizing */
> >  	if (kprobe_disabled(&op->kp))
> >  		return;
> >  
> > -	preempt_disable();
> > +	local_irq_save(flags);
> >  	if (kprobe_running()) {
> >  		kprobes_inc_nmissed_count(&op->kp);
> >  	} else {
> > @@ -1207,7 +1208,7 @@ static void __kprobes optimized_callback(struct optimized_kprobe *op,
> >  		opt_pre_handler(&op->kp, regs);
> >  		__this_cpu_write(current_kprobe, NULL);
> >  	}
> > -	preempt_enable_no_resched();
> > +	local_irq_restore(flags);
> >  }
> >  
> >  static int __kprobes copy_optimized_instructions(u8 *dest, u8 *src)
> 
> 
> -- 
> Masami HIRAMATSU
> Software Platform Research Dept. Linux Technology Center
> Hitachi, Ltd., Yokohama Research Laboratory
> E-mail: masami.hiramatsu.pt@hitachi.com


* Re: [PATCH] kprobes,x86: disable irq durinr optimized callback
  2011-04-26 13:01 [PATCH] kprobes,x86: disable irq durinr optimized callback Jiri Olsa
  2011-04-26 13:46 ` Steven Rostedt
@ 2011-05-10 10:40 ` Ingo Molnar
  2011-05-10 11:39   ` Jiri Olsa
  1 sibling, 1 reply; 10+ messages in thread
From: Ingo Molnar @ 2011-05-10 10:40 UTC (permalink / raw)
  To: Jiri Olsa; +Cc: masami.hiramatsu.pt, linux-kernel


* Jiri Olsa <jolsa@redhat.com> wrote:

> +	local_irq_save(flags);
>  	preempt_disable();
>  	if (kprobe_running()) {
>  		kprobes_inc_nmissed_count(&op->kp);
> @@ -1208,6 +1210,7 @@ static void __kprobes optimized_callback(struct optimized_kprobe *op,
>  		__this_cpu_write(current_kprobe, NULL);
>  	}
>  	preempt_enable_no_resched();
> +	local_irq_restore(flags);

irq-disable is synonymous with preempt-disable, so the
preempt_disable()/enable() pair looks like superfluous overhead.

Thanks,

	Ingo


* Re: [PATCH] kprobes,x86: disable irq durinr optimized callback
  2011-05-10 10:40 ` Ingo Molnar
@ 2011-05-10 11:39   ` Jiri Olsa
  2011-05-10 11:44     ` Ingo Molnar
  0 siblings, 1 reply; 10+ messages in thread
From: Jiri Olsa @ 2011-05-10 11:39 UTC (permalink / raw)
  To: Ingo Molnar; +Cc: masami.hiramatsu.pt, linux-kernel

On Tue, May 10, 2011 at 12:40:19PM +0200, Ingo Molnar wrote:
> 
> * Jiri Olsa <jolsa@redhat.com> wrote:
> 
> > +	local_irq_save(flags);
> >  	preempt_disable();
> >  	if (kprobe_running()) {
> >  		kprobes_inc_nmissed_count(&op->kp);
> > @@ -1208,6 +1210,7 @@ static void __kprobes optimized_callback(struct optimized_kprobe *op,
> >  		__this_cpu_write(current_kprobe, NULL);
> >  	}
> >  	preempt_enable_no_resched();
> > +	local_irq_restore(flags);
> 
> irq-disable is synonymous with preempt-disable, so the
> preempt_disable()/enable() pair looks like superfluous overhead.

yes, the correct patch is already on the list here:
http://marc.info/?l=linux-kernel&m=130382756829695&w=2

thanks,
jirka

> 
> Thanks,
> 
> 	Ingo


* Re: [PATCH] kprobes,x86: disable irq durinr optimized callback
  2011-05-10 11:39   ` Jiri Olsa
@ 2011-05-10 11:44     ` Ingo Molnar
  2011-05-11 11:06       ` [PATCH, v2] kprobes, x86: Disable irq during " Jiri Olsa
  0 siblings, 1 reply; 10+ messages in thread
From: Ingo Molnar @ 2011-05-10 11:44 UTC (permalink / raw)
  To: Jiri Olsa; +Cc: masami.hiramatsu.pt, linux-kernel


* Jiri Olsa <jolsa@redhat.com> wrote:

> On Tue, May 10, 2011 at 12:40:19PM +0200, Ingo Molnar wrote:
> > 
> > * Jiri Olsa <jolsa@redhat.com> wrote:
> > 
> > > +	local_irq_save(flags);
> > >  	preempt_disable();
> > >  	if (kprobe_running()) {
> > >  		kprobes_inc_nmissed_count(&op->kp);
> > > @@ -1208,6 +1210,7 @@ static void __kprobes optimized_callback(struct optimized_kprobe *op,
> > >  		__this_cpu_write(current_kprobe, NULL);
> > >  	}
> > >  	preempt_enable_no_resched();
> > > +	local_irq_restore(flags);
> > 
> > irq-disable is synonymous with preempt-disable, so the
> > preempt_disable()/enable() pair looks like superfluous overhead.
> 
> yes, the correct patch is already on the list here:
> http://marc.info/?l=linux-kernel&m=130382756829695&w=2

It helps to change the subject line when you think another patch should be 
considered, to something like:

  [PATCH, v2] kprobes, x86: Disable irq during optimized callback

(also note the other changes I made to the title, 3 altogether.)

Thanks,

	Ingo


* [PATCH, v2] kprobes, x86: Disable irq during optimized callback
  2011-05-10 11:44     ` Ingo Molnar
@ 2011-05-11 11:06       ` Jiri Olsa
  2011-05-11 13:10         ` [tip:perf/urgent] kprobes, x86: Disable irqs " tip-bot for Jiri Olsa
  0 siblings, 1 reply; 10+ messages in thread
From: Jiri Olsa @ 2011-05-11 11:06 UTC (permalink / raw)
  To: Ingo Molnar; +Cc: masami.hiramatsu.pt, linux-kernel

On Tue, May 10, 2011 at 01:44:18PM +0200, Ingo Molnar wrote:
> 
> * Jiri Olsa <jolsa@redhat.com> wrote:
> 
> > On Tue, May 10, 2011 at 12:40:19PM +0200, Ingo Molnar wrote:
> > > 
> > > * Jiri Olsa <jolsa@redhat.com> wrote:
> > > 
> > > > +	local_irq_save(flags);
> > > >  	preempt_disable();
> > > >  	if (kprobe_running()) {
> > > >  		kprobes_inc_nmissed_count(&op->kp);
> > > > @@ -1208,6 +1210,7 @@ static void __kprobes optimized_callback(struct optimized_kprobe *op,
> > > >  		__this_cpu_write(current_kprobe, NULL);
> > > >  	}
> > > >  	preempt_enable_no_resched();
> > > > +	local_irq_restore(flags);
> > > 
> > > irq-disable is synonymous with preempt-disable, so the
> > > preempt_disable()/enable() pair looks like superfluous overhead.
> > 
> > yes, the correct patch is already on the list here:
> > http://marc.info/?l=linux-kernel&m=130382756829695&w=2
> 
> It helps to change the subject line when you think another patch should be 
> considered, to something like:
> 
>   [PATCH, v2] kprobes, x86: Disable irq during optimized callback
> 
> (also note the other changes I made to the title, 3 altogether.)

sorry, here it is ;) thanks

jirka

---
Disable irqs during the optimized callback, so we don't count
any in-irq kprobes as missed.

Running the following:

# cd /debug/tracing/
# echo "p mutex_unlock" >> kprobe_events
# echo "p _raw_spin_lock" >> kprobe_events
# echo "p smp_apic_timer_interrupt" >> ./kprobe_events
# echo 1 > events/enable

causes the optimized kprobes to be missed. None are missed
if kprobe optimization is disabled.


Signed-off-by: Jiri Olsa <jolsa@redhat.com>
---
 arch/x86/kernel/kprobes.c |    5 +++--
 1 files changed, 3 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kernel/kprobes.c b/arch/x86/kernel/kprobes.c
index c969fd9..f1a6244 100644
--- a/arch/x86/kernel/kprobes.c
+++ b/arch/x86/kernel/kprobes.c
@@ -1183,12 +1183,13 @@ static void __kprobes optimized_callback(struct optimized_kprobe *op,
 					 struct pt_regs *regs)
 {
 	struct kprobe_ctlblk *kcb = get_kprobe_ctlblk();
+	unsigned long flags;
 
 	/* This is possible if op is under delayed unoptimizing */
 	if (kprobe_disabled(&op->kp))
 		return;
 
-	preempt_disable();
+	local_irq_save(flags);
 	if (kprobe_running()) {
 		kprobes_inc_nmissed_count(&op->kp);
 	} else {
@@ -1207,7 +1208,7 @@ static void __kprobes optimized_callback(struct optimized_kprobe *op,
 		opt_pre_handler(&op->kp, regs);
 		__this_cpu_write(current_kprobe, NULL);
 	}
-	preempt_enable_no_resched();
+	local_irq_restore(flags);
 }
 
 static int __kprobes copy_optimized_instructions(u8 *dest, u8 *src)
-- 
1.7.1



* [tip:perf/urgent] kprobes, x86: Disable irqs during optimized callback
  2011-05-11 11:06       ` [PATCH, v2] kprobes, x86: Disable irq during " Jiri Olsa
@ 2011-05-11 13:10         ` tip-bot for Jiri Olsa
  0 siblings, 0 replies; 10+ messages in thread
From: tip-bot for Jiri Olsa @ 2011-05-11 13:10 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: linux-kernel, hpa, mingo, masami.hiramatsu.pt, tglx, mingo, jolsa

Commit-ID:  9bbeacf52f66d165739a4bbe9c018d17493a74b5
Gitweb:     http://git.kernel.org/tip/9bbeacf52f66d165739a4bbe9c018d17493a74b5
Author:     Jiri Olsa <jolsa@redhat.com>
AuthorDate: Wed, 11 May 2011 13:06:13 +0200
Committer:  Ingo Molnar <mingo@elte.hu>
CommitDate: Wed, 11 May 2011 13:21:23 +0200

kprobes, x86: Disable irqs during optimized callback

Disable irqs during optimized callback, so we don't miss any in-irq kprobes.

The following commands:

 # cd /debug/tracing/
 # echo "p mutex_unlock" >> kprobe_events
 # echo "p _raw_spin_lock" >> kprobe_events
 # echo "p smp_apic_timer_interrupt" >> ./kprobe_events
 # echo 1 > events/enable

Cause the optimized kprobes to be missed. None are missed
with the fix applied.

Signed-off-by: Jiri Olsa <jolsa@redhat.com>
Acked-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
Link: http://lkml.kernel.org/r/20110511110613.GB2390@jolsa.brq.redhat.com
Signed-off-by: Ingo Molnar <mingo@elte.hu>
---
 arch/x86/kernel/kprobes.c |    5 +++--
 1 files changed, 3 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kernel/kprobes.c b/arch/x86/kernel/kprobes.c
index c969fd9..f1a6244 100644
--- a/arch/x86/kernel/kprobes.c
+++ b/arch/x86/kernel/kprobes.c
@@ -1183,12 +1183,13 @@ static void __kprobes optimized_callback(struct optimized_kprobe *op,
 					 struct pt_regs *regs)
 {
 	struct kprobe_ctlblk *kcb = get_kprobe_ctlblk();
+	unsigned long flags;
 
 	/* This is possible if op is under delayed unoptimizing */
 	if (kprobe_disabled(&op->kp))
 		return;
 
-	preempt_disable();
+	local_irq_save(flags);
 	if (kprobe_running()) {
 		kprobes_inc_nmissed_count(&op->kp);
 	} else {
@@ -1207,7 +1208,7 @@ static void __kprobes optimized_callback(struct optimized_kprobe *op,
 		opt_pre_handler(&op->kp, regs);
 		__this_cpu_write(current_kprobe, NULL);
 	}
-	preempt_enable_no_resched();
+	local_irq_restore(flags);
 }
 
 static int __kprobes copy_optimized_instructions(u8 *dest, u8 *src)


end of thread, other threads:[~2011-05-11 16:24 UTC | newest]

Thread overview: 10+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2011-04-26 13:01 [PATCH] kprobes,x86: disable irq durinr optimized callback Jiri Olsa
2011-04-26 13:46 ` Steven Rostedt
2011-04-26 14:19   ` Jiri Olsa
2011-04-27  0:51     ` Masami Hiramatsu
2011-05-09 11:11       ` Jiri Olsa
2011-05-10 10:40 ` Ingo Molnar
2011-05-10 11:39   ` Jiri Olsa
2011-05-10 11:44     ` Ingo Molnar
2011-05-11 11:06       ` [PATCH, v2] kprobes, x86: Disable irq during " Jiri Olsa
2011-05-11 13:10         ` [tip:perf/urgent] kprobes, x86: Disable irqs " tip-bot for Jiri Olsa
