* Re: [RFC] semantics of singlestepping vs. tracer exiting
2012-09-03 16:05 ` [RFC] semantics of singlestepping vs. tracer exiting Oleg Nesterov
@ 2012-09-03 17:02 ` Oleg Nesterov
2012-09-03 17:31 ` Al Viro
1 sibling, 0 replies; 6+ messages in thread
From: Oleg Nesterov @ 2012-09-03 17:02 UTC (permalink / raw)
To: Al Viro; +Cc: Linus Torvalds, Roland McGrath, linux-kernel
[-- Attachment #1: Type: text/plain, Size: 586 bytes --]
On 09/03, Oleg Nesterov wrote:
>
> This is another reason to move enable/disable step into ptrace_stop().
> And in fact I had the patches long ago, but we need to clean up
> the usage of PT_SINGLESTEP/PT_BLOCKSTEP first. The tracer should
> simply set/clear these PT_ flags and resume the tracee which should
> check them and do user_*_single_step() in response.
Found these patches; see the attachments. They also fix the
problems with DEBUGCTLMSR_BTF.
The patches need to be re-diffed, but first I need to recall why I
didn't send them; perhaps I noticed some problem...
Oleg.
[-- Attachment #2: 1_use_PT_STEP.patch --]
[-- Type: text/plain, Size: 2773 bytes --]
[PATCH 1/3] ptrace_resume: set/clear PT_SINGLESTEP/PT_BLOCKSTEP
Contrary to the comment in ptrace.h, PT_SINGLESTEP is only used on
arch/xtensa, and PT_BLOCKSTEP is not used at all.
Change the arch independent ptrace_resume() to set/clear these bits
before user_enable_*_step/user_disable_single_step(), and remove
this code from arch/xtensa/kernel/ptrace.c.
Also change ptrace_init_task() to prevent the copying of these bits.
This doesn't make any difference on xtensa, and other arches do not
use these flags so far.
But afterwards we can check task->ptrace & PT_*STEP to figure out
whether the tracer wants stepping; unlike TIF_SINGLESTEP this is
always correct in that sense, and it is not arch dependent.
Signed-off-by: Oleg Nesterov <oleg@redhat.com>
---
include/linux/ptrace.h | 4 ++--
kernel/ptrace.c | 3 +++
arch/xtensa/kernel/ptrace.c | 2 --
3 files changed, 5 insertions(+), 4 deletions(-)
--- ptrace/include/linux/ptrace.h~1_use_PT_STEP 2011-06-28 17:50:27.000000000 +0200
+++ ptrace/include/linux/ptrace.h 2011-07-03 21:55:17.000000000 +0200
@@ -104,7 +104,6 @@
#define PT_TRACE_MASK 0x000003f4
-/* single stepping state bits (used on ARM and PA-RISC) */
#define PT_SINGLESTEP_BIT 31
#define PT_SINGLESTEP (1<<PT_SINGLESTEP_BIT)
#define PT_BLOCKSTEP_BIT 30
@@ -220,7 +219,8 @@ static inline void ptrace_init_task(stru
child->parent = child->real_parent;
child->ptrace = 0;
if (unlikely(ptrace) && (current->ptrace & PT_PTRACED)) {
- child->ptrace = current->ptrace;
+ child->ptrace = current->ptrace &
+ ~(PT_SINGLESTEP | PT_BLOCKSTEP);
__ptrace_link(child, current->parent);
}
--- ptrace/kernel/ptrace.c~1_use_PT_STEP 2011-06-28 17:50:27.000000000 +0200
+++ ptrace/kernel/ptrace.c 2011-07-03 21:55:17.000000000 +0200
@@ -599,13 +599,16 @@ static int ptrace_resume(struct task_str
clear_tsk_thread_flag(child, TIF_SYSCALL_EMU);
#endif
+ child->ptrace &= ~(PT_SINGLESTEP | PT_BLOCKSTEP);
if (is_singleblock(request)) {
if (unlikely(!arch_has_block_step()))
return -EIO;
+ child->ptrace |= PT_BLOCKSTEP;
user_enable_block_step(child);
} else if (is_singlestep(request) || is_sysemu_singlestep(request)) {
if (unlikely(!arch_has_single_step()))
return -EIO;
+ child->ptrace |= PT_SINGLESTEP;
user_enable_single_step(child);
} else {
user_disable_single_step(child);
--- ptrace/arch/xtensa/kernel/ptrace.c~1_use_PT_STEP 2011-04-06 21:33:43.000000000 +0200
+++ ptrace/arch/xtensa/kernel/ptrace.c 2011-07-03 20:30:57.000000000 +0200
@@ -33,12 +33,10 @@
void user_enable_single_step(struct task_struct *child)
{
- child->ptrace |= PT_SINGLESTEP;
}
void user_disable_single_step(struct task_struct *child)
{
- child->ptrace &= ~PT_SINGLESTEP;
}
/*
[-- Attachment #3: 2_ptrace_finish_resume.patch --]
[-- Type: text/plain, Size: 2980 bytes --]
[PATCH 2/3] ptrace: shift user_*_step() from ptrace_resume() to ptrace_stop() path
Ignoring the buggy PTRACE_KILL, ptrace_resume() calls user_*_step()
when the tracee sleeps in ptrace_stop(). Now that ptrace_resume()
sets PT_SINGLE* flags, we can reassign user_*_step() from the tracer
to the tracee.
Introduce ptrace_finish_stop(), it is called by ptrace_stop() after
schedule(). Move user_*_step() call sites from ptrace_resume() to
ptrace_finish_stop().
This way:
- we can remove user_disable_single_step() from detach paths.
This is the main motivation, we can implement asynchronous
detach.
- this makes the detach-on-exit more correct, we do not leak
TIF_SINGLESTEP if the tracer dies.
- user_enable_*_step(tsk) can be implemented more efficiently
if tsk == current, we can avoid access_process_vm().
Signed-off-by: Oleg Nesterov <oleg@redhat.com>
---
include/linux/ptrace.h | 1 +
kernel/signal.c | 1 +
kernel/ptrace.c | 16 ++++++++++++----
3 files changed, 14 insertions(+), 4 deletions(-)
--- ptrace/include/linux/ptrace.h~2_ptrace_finish_resume 2011-07-03 21:55:17.000000000 +0200
+++ ptrace/include/linux/ptrace.h 2011-07-03 21:55:37.000000000 +0200
@@ -112,6 +112,7 @@
#include <linux/compiler.h> /* For unlikely. */
#include <linux/sched.h> /* For struct task_struct. */
+extern void ptrace_finish_stop(void);
extern long arch_ptrace(struct task_struct *child, long request,
unsigned long addr, unsigned long data);
--- ptrace/kernel/signal.c~2_ptrace_finish_resume 2011-07-03 21:55:17.000000000 +0200
+++ ptrace/kernel/signal.c 2011-07-03 21:55:37.000000000 +0200
@@ -1879,6 +1879,7 @@ static void ptrace_stop(int exit_code, i
*/
try_to_freeze();
+ ptrace_finish_stop();
/*
* We are back. Now reacquire the siglock before touching
* last_siginfo, so that we are sure to have synchronized with
--- ptrace/kernel/ptrace.c~2_ptrace_finish_resume 2011-07-03 21:55:17.000000000 +0200
+++ ptrace/kernel/ptrace.c 2011-07-03 21:55:37.000000000 +0200
@@ -581,6 +581,18 @@ static int ptrace_setsiginfo(struct task
#define is_sysemu_singlestep(request) 0
#endif
+void ptrace_finish_stop(void)
+{
+ struct task_struct *task = current;
+
+ if (task->ptrace & PT_BLOCKSTEP)
+ user_enable_block_step(task);
+ else if (task->ptrace & PT_SINGLESTEP)
+ user_enable_single_step(task);
+ else
+ user_disable_single_step(task);
+}
+
static int ptrace_resume(struct task_struct *child, long request,
unsigned long data)
{
@@ -604,14 +616,10 @@ static int ptrace_resume(struct task_str
if (unlikely(!arch_has_block_step()))
return -EIO;
child->ptrace |= PT_BLOCKSTEP;
- user_enable_block_step(child);
} else if (is_singlestep(request) || is_sysemu_singlestep(request)) {
if (unlikely(!arch_has_single_step()))
return -EIO;
child->ptrace |= PT_SINGLESTEP;
- user_enable_single_step(child);
- } else {
- user_disable_single_step(child);
}
child->exit_code = data;
[-- Attachment #4: 3_detach_dont_disable_step.patch --]
[-- Type: text/plain, Size: 593 bytes --]
[PATCH 3/3] x86: remove ptrace_disable()->user_disable_single_step()
Signed-off-by: Oleg Nesterov <oleg@redhat.com>
---
arch/x86/kernel/ptrace.c | 1 -
1 file changed, 1 deletion(-)
--- ptrace/arch/x86/kernel/ptrace.c~3_detach_dont_disable_step 2011-06-07 19:20:02.000000000 +0200
+++ ptrace/arch/x86/kernel/ptrace.c 2011-07-03 21:56:42.000000000 +0200
@@ -807,7 +807,6 @@ static int ioperm_get(struct task_struct
*/
void ptrace_disable(struct task_struct *child)
{
- user_disable_single_step(child);
#ifdef TIF_SYSCALL_EMU
clear_tsk_thread_flag(child, TIF_SYSCALL_EMU);
#endif
^ permalink raw reply [flat|nested] 6+ messages in thread
* Re: [RFC] semantics of singlestepping vs. tracer exiting
2012-09-03 16:05 ` [RFC] semantics of singlestepping vs. tracer exiting Oleg Nesterov
2012-09-03 17:02 ` Oleg Nesterov
@ 2012-09-03 17:31 ` Al Viro
2012-09-04 15:39 ` Oleg Nesterov
1 sibling, 1 reply; 6+ messages in thread
From: Al Viro @ 2012-09-03 17:31 UTC (permalink / raw)
To: Oleg Nesterov; +Cc: Linus Torvalds, linux-kernel, linux-alpha
On Mon, Sep 03, 2012 at 06:05:38PM +0200, Oleg Nesterov wrote:
> This is not easy to fix. ptrace_disable() and user_disable_single_step()
> are arch dependent, but at least on x86 they assume that the tracee is not
> running, so exit_ptrace() can't do this.
True (IOW, proposed fix is hopeless - we definitely want the detachees to be
in kernel space, and not only on x86).
> This is another reason to move enable/disable step into ptrace_stop().
> And in fact I had the patches long ago, but we need to clean up
> the usage of PT_SINGLESTEP/PT_BLOCKSTEP first. The tracer should
> simply set/clear these PT_ flags and resume the tracee which should
> check them and do user_*_single_step() in response.
> > Related question: should execve(2) clear (ptrace-inflicted)
> > singlestepping?
>
> Perhaps, but
>
> > Tracer
> > exit(), however, does *not* do that right now, so the state after
> > execve(2) is theoretically observable.
>
> ... why execve() is special?
Because that behaviour has changed over the years; for one thing:
commit e1f287735c1e58c653b516931b5d3dd899edcb77
Author: Roland McGrath <roland@redhat.com>
Date: Wed Jan 30 13:30:50 2008 +0100
x86 single_step: TIF_FORCED_TF
had done that for x86, unless I'm misreading something. BTW, now that
I've looked at that, alpha seems to have a really unpleasant bug with
single-stepping through execve() - it *must* reset ->bpt_nsaved to 0
in start_thread(), simply because the address space the breakpoints used
to be in is gone at that point. I don't see any place where that would
be done; suppose we single-step right into callsys insn and do PTRACE_CONT
when stopped on the way out. Won't that end up with ptrace_cancel_bpt()
done in *new* address space, silently buggering new .text contents?
BTW, speaking of alpha, what about PTRACE_SINGLESTEP when the task is stopped
on syscall entry/exit after a previous PTRACE_SYSCALL? Looks like it will
be like PTRACE_CONT until we hit the first signal, at which point it converts
to singlestepping mode; unless I'm seriously misreading that code, we rely
on ptrace_set_bpt() done shortly after returning from get_signal_to_deliver()
if we found that we'd been singlestepping. Fine, but in this case we
had been resumed *not* in get_signal_to_deliver()...
Cc'd linux-alpha, hoping to hear "you don't understand how single-stepping
works on alpha, you idiot, everything's fine because of $REASONS"...
^ permalink raw reply [flat|nested] 6+ messages in thread