linux-kernel.vger.kernel.org archive mirror
* cross memory attach && security check
@ 2012-01-05 15:10 Oleg Nesterov
  2012-01-09  7:11 ` Christopher Yeoh
  0 siblings, 1 reply; 6+ messages in thread
From: Oleg Nesterov @ 2012-01-05 15:10 UTC (permalink / raw)
  To: Chris Yeoh; +Cc: Andrew Morton, David Howells, linux-kernel

Hello,

Just noticed the new file in mm/ ;) A couple of questions.

process_vm_rw_core() does

	task_lock(task);
	if (__ptrace_may_access(task, PTRACE_MODE_ATTACH)) {
		task_unlock(task);
		rc = -EPERM;
		goto put_task_struct;
	}
	mm = task->mm;

this is racy, task_lock() can't help. And I don't think you should
use it directly.

execve() does exec_mmap() first, this switches to the new ->mm.
After that install_exec_creds() changes task->cred. The window
is not that small.

I guess you need ->cred_guard_mutex, please look at mm_for_maps().




Another question, process_vm_rw_pages() does get_user_pages() without
FOLL_FORCE. Is this on purpose? This limits the usage of the new
syscalls.
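For reference, the syscall pair under discussion can be exercised from userspace. A minimal sketch (assumes a Linux >= 3.2 kernel and the glibc wrappers; the helper name is illustrative), reading our own address space, where the ptrace-style access check trivially passes:

```c
#define _GNU_SOURCE
#include <assert.h>
#include <string.h>
#include <sys/uio.h>
#include <unistd.h>

/* Illustrative helper: copy len bytes from src to dst within our own
 * address space, but via the kernel's process_vm_readv() path
 * (pid == getpid()), going through the same access check that applies
 * to any other target pid. */
static ssize_t self_vm_read(void *dst, const void *src, size_t len)
{
	struct iovec local  = { .iov_base = dst,         .iov_len = len };
	struct iovec remote = { .iov_base = (void *)src, .iov_len = len };

	/* flags must be 0; returns bytes transferred, or -1 on error */
	return process_vm_readv(getpid(), &local, 1, &remote, 1, 0);
}
```

A process may always access itself, so no CAP_SYS_PTRACE is needed for this self-copy; for a foreign pid the ptrace_may_access() rules apply.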




Hmm. And could you please explain the change in rw_copy_check_uvector()?
Why process_vm_rw() does
rw_copy_check_uvector(READ, rvec, check_access => 0) ?

Oleg.


^ permalink raw reply	[flat|nested] 6+ messages in thread

* Re: cross memory attach && security check
  2012-01-05 15:10 cross memory attach && security check Oleg Nesterov
@ 2012-01-09  7:11 ` Christopher Yeoh
  2012-01-09 14:53   ` Oleg Nesterov
  0 siblings, 1 reply; 6+ messages in thread
From: Christopher Yeoh @ 2012-01-09  7:11 UTC (permalink / raw)
  To: Oleg Nesterov, linux-kernel; +Cc: Chris Yeoh, Andrew Morton, David Howells

On Thu, 5 Jan 2012 16:10:12 +0100
Oleg Nesterov <oleg@redhat.com> wrote:
 
> Just noticed the new file in mm/ ;) A couple of questions.
> 
> process_vm_rw_core() does
> 
> 	task_lock(task);
> 	if (__ptrace_may_access(task, PTRACE_MODE_ATTACH)) {
> 		task_unlock(task);
> 		rc = -EPERM;
> 		goto put_task_struct;
> 	}
> 	mm = task->mm;
> 
> this is racy, task_lock() can't help. And I don't think you should
> use it directly.
> 
> execve() does exec_mmap() first, this switches to the new ->mm.
> After that install_exec_creds() changes task->cred. The window
> is not that small.
> 
> I guess you need ->cred_guard_mutex, please look at mm_for_maps().
> 

Thanks, agreed this looks like it's a problem. Need to do a bit more
testing, but I think the following patch fixes the race?

Signed-off-by: Chris Yeoh <yeohc@au1.ibm.com>
 process_vm_access.c |    7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/mm/process_vm_access.c b/mm/process_vm_access.c
index e920aa3..207d7cc 100644
--- a/mm/process_vm_access.c
+++ b/mm/process_vm_access.c
@@ -298,9 +298,14 @@ static ssize_t process_vm_rw_core(pid_t pid, const struct iovec *lvec,
 		goto free_proc_pages;
 	}
 
+	rc = mutex_lock_interruptible(&task->signal->cred_guard_mutex);
+	if (rc)
+		goto put_task_struct;
+
 	task_lock(task);
 	if (__ptrace_may_access(task, PTRACE_MODE_ATTACH)) {
 		task_unlock(task);
+		mutex_unlock(&task->signal->cred_guard_mutex);
 		rc = -EPERM;
 		goto put_task_struct;
 	}
@@ -308,12 +313,14 @@ static ssize_t process_vm_rw_core(pid_t pid, const struct iovec *lvec,
 
 	if (!mm || (task->flags & PF_KTHREAD)) {
 		task_unlock(task);
+		mutex_unlock(&task->signal->cred_guard_mutex);
 		rc = -EINVAL;
 		goto put_task_struct;
 	}
 
 	atomic_inc(&mm->mm_users);
 	task_unlock(task);
+	mutex_unlock(&task->signal->cred_guard_mutex);
 
 	for (i = 0; i < riovcnt && iov_l_curr_idx < liovcnt; i++) {
 		rc = process_vm_rw_single_vec(

> Another question, process_vm_rw_pages() does get_user_pages() without
> FOLL_FORCE. Is this on purpose? This limits the usage of the new
> syscalls.

Other than reading the comment for get_user_pages saying that I don't want
to set the force flag, I didn't really consider it. The use cases I'm
interested in (intranode communication) have the cooperation of the target
process anyway, so it's not needed. Any downsides to having FOLL_FORCE enabled?

> Hmm. And could you please explain the change in
> rw_copy_check_uvector()? Why process_vm_rw() does
> rw_copy_check_uvector(READ, rvec, check_access => 0) ?

process_vm_readv/writev get passed an iovec for another process which 
describes where to read/write from/to. So we need to do part of what
rw_copy_check_uvector ordinarily does (validate and copy the iovec
data), but we don't want to check the memory it points to at this point
because it refers to memory in the other process (this check is done later).

So the change to rw_copy_check_uvector to optionally not do this second check
allows us to reuse the code rather than have another function which is almost
identical.
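The two-stage validation described above is visible from userspace. A sketch (hypothetical helper; assumes Linux >= 3.2): the remote iovec itself is copied and size-checked up front, but the memory it names is only faulted when get_user_pages() walks the target, so a bogus remote pointer surfaces as EFAULT at copy time rather than at iovec-validation time:

```c
#define _GNU_SOURCE
#include <assert.h>
#include <errno.h>
#include <sys/uio.h>
#include <unistd.h>

/* Illustrative probe: try to read one byte from addr in our own
 * address space via process_vm_readv(); return 0 on success or the
 * errno reported by the kernel. */
static int probe_remote(void *addr)
{
	char c;
	struct iovec local  = { .iov_base = &c,   .iov_len = 1 };
	struct iovec remote = { .iov_base = addr, .iov_len = 1 };

	if (process_vm_readv(getpid(), &local, 1, &remote, 1, 0) == 1)
		return 0;
	return errno;
}
```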

Regards,

Chris
-- 
cyeoh@au.ibm.com


* Re: cross memory attach && security check
  2012-01-09  7:11 ` Christopher Yeoh
@ 2012-01-09 14:53   ` Oleg Nesterov
  2012-01-10 13:14     ` Oleg Nesterov
  2012-01-11  1:17     ` Christopher Yeoh
  0 siblings, 2 replies; 6+ messages in thread
From: Oleg Nesterov @ 2012-01-09 14:53 UTC (permalink / raw)
  To: Christopher Yeoh; +Cc: linux-kernel, Chris Yeoh, Andrew Morton, David Howells

On 01/09, Christopher Yeoh wrote:
>
> On Thu, 5 Jan 2012 16:10:12 +0100
> Oleg Nesterov <oleg@redhat.com> wrote:
>
> > Just noticed the new file in mm/ ;) A couple of questions.
> >
> > process_vm_rw_core() does
> >
> > 	task_lock(task);
> > 	if (__ptrace_may_access(task, PTRACE_MODE_ATTACH)) {
> > 		task_unlock(task);
> > 		rc = -EPERM;
> > 		goto put_task_struct;
> > 	}
> > 	mm = task->mm;
> >
> > this is racy, task_lock() can't help. And I don't think you should
> > use it directly.
> >
> > execve() does exec_mmap() first, this switches to the new ->mm.
> > After that install_exec_creds() changes task->cred. The window
> > is not that small.
> >
> > I guess you need ->cred_guard_mutex, please look at mm_for_maps().
> >
>
> Thanks, agreed this looks like it's a problem. Need to do a bit more
> testing, but I think the following patch fixes the race?
>
> @@ -298,9 +298,14 @@ static ssize_t process_vm_rw_core(pid_t pid, const struct iovec *lvec,
>  		goto free_proc_pages;
>  	}
>
> +	rc = mutex_lock_interruptible(&task->signal->cred_guard_mutex);
> +	if (rc)
> +		goto put_task_struct;
> +
>  	task_lock(task);
>  	if (__ptrace_may_access(task, PTRACE_MODE_ATTACH)) {
>  		task_unlock(task);
> +		mutex_unlock(&task->signal->cred_guard_mutex);

Yes, I think this works, but I don't think you should play with task_lock()
or ->mm_users, just use get_task_mm(). Better yet, can't we do

	--- x/fs/proc/base.c
	+++ x/fs/proc/base.c
	@@ -254,22 +254,7 @@ static struct mm_struct *check_mem_permission(struct task_struct *task)
	 
	 struct mm_struct *mm_for_maps(struct task_struct *task)
	 {
	-	struct mm_struct *mm;
	-	int err;
	-
	-	err =  mutex_lock_killable(&task->signal->cred_guard_mutex);
	-	if (err)
	-		return ERR_PTR(err);
	-
	-	mm = get_task_mm(task);
	-	if (mm && mm != current->mm &&
	-			!ptrace_may_access(task, PTRACE_MODE_READ)) {
	-		mmput(mm);
	-		mm = ERR_PTR(-EACCES);
	-	}
	-	mutex_unlock(&task->signal->cred_guard_mutex);
	-
	-	return mm;
	+	return get_check_task_mm(task, PTRACE_MODE_READ);
	 }
	 
	 static int proc_pid_cmdline(struct task_struct *task, char * buffer)
	--- x/kernel/fork.c
	+++ x/kernel/fork.c
	@@ -644,6 +644,25 @@ struct mm_struct *get_task_mm(struct task_struct *task)
	 }
	 EXPORT_SYMBOL_GPL(get_task_mm);
	 
	+struct mm_struct *get_check_task_mm(struct task_struct *task, unsigned int mode)
	+{
	+	struct mm_struct *mm;
	+	int err;
	+
	+	err =  mutex_lock_killable(&task->signal->cred_guard_mutex);
	+	if (err)
	+		return ERR_PTR(err);
	+
	+	mm = get_task_mm(task);
	+	if (mm && mm != current->mm && !ptrace_may_access(task, mode)) {
	+		mmput(mm);
	+		mm = ERR_PTR(-EACCES);
	+	}
	+	mutex_unlock(&task->signal->cred_guard_mutex);
	+
	+	return mm;
	+}
	+
	 /* Please note the differences between mmput and mm_release.
	  * mmput is called whenever we stop holding onto a mm_struct,
	  * error success whatever.

?

Then process_vm_rw_core() can use get_check_task_mm(PTRACE_MODE_ATTACH).

> Other than reading the comment for get_user_pages saying that I don't want
> to set the force flag, I didn't really consider it. The use cases I'm
> interested in (intranode communication) have the cooperation of the target
> process anyway, so it's not needed. Any downsides to having FOLL_FORCE enabled?

Without FOLL_FORCE, say, gdb can't use the new syscall to set the breakpoint
or to read the !VM_READ mappings. OK, process_vm_rw() has flags, we can
add PROCESS_VM_FORCE if needed.
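The !VM_READ case is easy to demonstrate from userspace (hypothetical helper; assumes Linux >= 3.2): without FOLL_FORCE the kernel honours page protections even for our own PROT_NONE mapping, which is exactly what a gdb-style user would need to override:

```c
#define _GNU_SOURCE
#include <assert.h>
#include <errno.h>
#include <sys/mman.h>
#include <sys/uio.h>
#include <unistd.h>

/* Illustrative check: without FOLL_FORCE the kernel refuses to read
 * a mapping that lacks VM_READ, even in our own address space. */
static int vm_read_errno(void *addr)
{
	char c;
	struct iovec local  = { .iov_base = &c,   .iov_len = 1 };
	struct iovec remote = { .iov_base = addr, .iov_len = 1 };

	if (process_vm_readv(getpid(), &local, 1, &remote, 1, 0) == 1)
		return 0;
	return errno;
}
```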

> > Hmm. And could you please explain the change in
> > rw_copy_check_uvector()? Why process_vm_rw() does
> > rw_copy_check_uvector(READ, rvec, check_access => 0) ?
>
> process_vm_readv/writev get passed an iovec for another process

Ah. Thanks, I see. And I didn't realize that rvec means "remote vec".
Partly I was confused because (I guess) there is another minor bug in
process_vm_rw(), I think we need

	--- x/mm/process_vm_access.c
	+++ x/mm/process_vm_access.c
	@@ -375,10 +375,10 @@ static ssize_t process_vm_rw(pid_t pid,
	 
		/* Check iovecs */
		if (vm_write)
	-		rc = rw_copy_check_uvector(WRITE, lvec, liovcnt, UIO_FASTIOV,
	+		rc = rw_copy_check_uvector(READ, lvec, liovcnt, UIO_FASTIOV,
						   iovstack_l, &iov_l, 1);
		else
	-		rc = rw_copy_check_uvector(READ, lvec, liovcnt, UIO_FASTIOV,
	+		rc = rw_copy_check_uvector(WRITE, lvec, liovcnt, UIO_FASTIOV,
						   iovstack_l, &iov_l, 1);
		if (rc <= 0)
			goto free_iovecs;


However. Yes, this is subjective, but imho the new argument looks a bit
ugly. Please look at this code again,

	rw_copy_check_uvector(READ, rvec, check_access => 0);

what does this READ mean without check_access? Plus we need another
argument. Can't we do

	--- x/fs/read_write.c
	+++ x/fs/read_write.c
	@@ -633,8 +633,7 @@ ssize_t do_loop_readv_writev(struct file
	 ssize_t rw_copy_check_uvector(int type, const struct iovec __user * uvector,
				      unsigned long nr_segs, unsigned long fast_segs,
				      struct iovec *fast_pointer,
	-			      struct iovec **ret_pointer,
	-			      int check_access)
	+			      struct iovec **ret_pointer)
	 {
		unsigned long seg;
		ssize_t ret;
	@@ -690,8 +689,8 @@ ssize_t rw_copy_check_uvector(int type, 
				ret = -EINVAL;
				goto out;
			}
	-		if (check_access
	-		    && unlikely(!access_ok(vrfy_dir(type), buf, len))) {
	+		if (type >= 0 &&
	+		    unlikely(!access_ok(vrfy_dir(type), buf, len))) {
				ret = -EFAULT;
				goto out;
			}

and update the callers? In this case all callers just lose the unneeded
argument and the code above does

	rw_copy_check_uvector(-1, rvec);

Perhaps we can add another NOCHECK (or whatever) define near READ/WRITE.

What do you think?

Oleg.


* Re: cross memory attach && security check
  2012-01-09 14:53   ` Oleg Nesterov
@ 2012-01-10 13:14     ` Oleg Nesterov
  2012-01-11  1:17     ` Christopher Yeoh
  1 sibling, 0 replies; 6+ messages in thread
From: Oleg Nesterov @ 2012-01-10 13:14 UTC (permalink / raw)
  To: Christopher Yeoh; +Cc: linux-kernel, Chris Yeoh, Andrew Morton, David Howells

On 01/09, Oleg Nesterov wrote:
>
> Partly I was confused because (I guess) there is another minor bug in
> process_vm_rw(), I think we need
>
> 	--- x/mm/process_vm_access.c
> 	+++ x/mm/process_vm_access.c
> 	@@ -375,10 +375,10 @@ static ssize_t process_vm_rw(pid_t pid,
> 	 
> 		/* Check iovecs */
> 		if (vm_write)
> 	-		rc = rw_copy_check_uvector(WRITE, lvec, liovcnt, UIO_FASTIOV,
> 	+		rc = rw_copy_check_uvector(READ, lvec, liovcnt, UIO_FASTIOV,
> 						   iovstack_l, &iov_l, 1);
> 		else
> 	-		rc = rw_copy_check_uvector(READ, lvec, liovcnt, UIO_FASTIOV,
> 	+		rc = rw_copy_check_uvector(WRITE, lvec, liovcnt, UIO_FASTIOV,
> 						   iovstack_l, &iov_l, 1);

Argh. No, I was wrong, vrfy_dir() swaps READ/WRITE.

Oleg.


* Re: cross memory attach && security check
  2012-01-09 14:53   ` Oleg Nesterov
  2012-01-10 13:14     ` Oleg Nesterov
@ 2012-01-11  1:17     ` Christopher Yeoh
  2012-01-11 15:49       ` Oleg Nesterov
  1 sibling, 1 reply; 6+ messages in thread
From: Christopher Yeoh @ 2012-01-11  1:17 UTC (permalink / raw)
  To: Oleg Nesterov, linux-kernel; +Cc: Andrew Morton, David Howells

On Mon, 9 Jan 2012 15:53:42 +0100
Oleg Nesterov <oleg@redhat.com> wrote:
> On 01/09, Christopher Yeoh wrote:
> >
> > On Thu, 5 Jan 2012 16:10:12 +0100
> > Oleg Nesterov <oleg@redhat.com> wrote:
> >
> > >
> > > I guess you need ->cred_guard_mutex, please look at mm_for_maps().
> > >
> >
> > Thanks, agreed this looks like it's a problem. Need to do a bit more
> > testing, but I think the following patch fixes the race?
> >
> > @@ -298,9 +298,14 @@ static ssize_t process_vm_rw_core(pid_t pid,
> > const struct iovec *lvec, goto free_proc_pages;
> >  	}
> >
> > +	rc =
> > mutex_lock_interruptible(&task->signal->cred_guard_mutex);
> > +	if (rc)
> > +		goto put_task_struct;
> > +
> >  	task_lock(task);
> >  	if (__ptrace_may_access(task, PTRACE_MODE_ATTACH)) {
> >  		task_unlock(task);
> > +		mutex_unlock(&task->signal->cred_guard_mutex);
> 
> Yes, I think this works, but I don't think you should play with
> task_lock() or ->mm_users, just use get_task_mm(). Better yet, can't
> we do
> 

I agree with the consolidation with mm_for_maps (though we might need
to argue over EPERM vs EACCES). However, I originally broke out
get_task_mm and ptrace_may_access using __ptrace_may_access instead
because both these functions grab the task lock at the start and
release it at the end. Seemed better just to take it once.

So maybe something like this (untested) instead? 

diff --git a/kernel/fork.c b/kernel/fork.c
index f34f894..162562d 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -644,6 +644,37 @@ struct mm_struct *get_task_mm(struct task_struct *task)
 }
 EXPORT_SYMBOL_GPL(get_task_mm);
 
+struct mm_struct *get_check_task_mm(struct task_struct *task, unsigned int mode)
+{
+	struct mm_struct *mm;
+	int err;
+
+	err =  mutex_lock_killable(&task->signal->cred_guard_mutex);
+	if (err)
+		return ERR_PTR(err);
+
+	task_lock(task);
+	if (__ptrace_may_access(task, mode)) {
+		mm = ERR_PTR(-EPERM);
+		goto out;
+	}
+
+	mm = task->mm;
+	if (mm) {
+		if (task->flags & PF_KTHREAD)
+			mm = NULL;
+		else
+			atomic_inc(&mm->mm_users);
+	}
+
+out:
+	task_unlock(task);
+	mutex_unlock(&task->signal->cred_guard_mutex);
+
+	return mm;
+}
+EXPORT_SYMBOL_GPL(get_check_task_mm);
+
 /* Please note the differences between mmput and mm_release.
  * mmput is called whenever we stop holding onto a mm_struct,
  * error success whatever.

> > Other than reading the comment for get_user_pages saying that I
> > don't want to set the force flag, I didn't really consider it. The
> > use cases where I'm interested (intranode communication) has the
> > cooperation of the target process anyway so its not needed. Any
> > downsides to having FOLL_FORCE enabled?
> 
> Without FOLL_FORCE, say, gdb can't use the new syscall to set the
> breakpoint or to read the !VM_READ mappings. OK, process_vm_rw() has
> flags, we can add PROCESS_VM_FORCE if needed.
> 

It wasn't really intended for gdb, but perhaps it could be
used/consolidated with other ptrace stuff (I don't know). I'm not
really sure what the best thing to do here is. As I mentioned, there is
a level of cooperation where I am using it, and honouring mprotect may
help pick up inadvertent application errors. Perhaps a PROCESS_VM_FORCE
flag would be the more conservative option.

> However. Yes, this is subjective, but imho the new argument looks a
> bit ugly. Please look at this code again,
> 
> 	rw_copy_check_uvector(READ, rvec, check_access => 0);
> 
> what does this READ mean without check_access? Plus we need another
> argument. Can't we do
> 
> 	--- x/fs/read_write.c
> 	+++ x/fs/read_write.c
> 	@@ -633,8 +633,7 @@ ssize_t do_loop_readv_writev(struct file
> 	 ssize_t rw_copy_check_uvector(int type, const struct iovec
> __user * uvector, unsigned long nr_segs, unsigned long fast_segs,
> 				      struct iovec *fast_pointer,
> 	-			      struct iovec **ret_pointer,
> 	-			      int check_access)
> 	+			      struct iovec **ret_pointer)
> 	 {
> 		unsigned long seg;
> 		ssize_t ret;
> 	@@ -690,8 +689,8 @@ ssize_t rw_copy_check_uvector(int type, 
> 				ret = -EINVAL;
> 				goto out;
> 			}
> 	-		if (check_access
> 	-		    && unlikely(!access_ok(vrfy_dir(type),
> buf, len))) {
> 	+		if (type >= 0 &&
> 	+		    unlikely(!access_ok(vrfy_dir(type), buf,
> len))) { ret = -EFAULT;
> 				goto out;
> 			}
> 
> and update the callers? In this case all callers just lose the
> unneeded argument and the code above does
> 
> 	rw_copy_check_uvector(-1, rvec);
> 
> Perhaps we can add another NOCHECK (or whatever) define near
> READ/WRITE.
> 
> What do you think?

Yes, this is much better. I think for readability we do need a define.

rw_copy_check_uvector(NOCHECK, rvec) 

looks a bit odd (why am I passing NOCHECK to a function with the
word check in it?). So perhaps IOVEC_ONLY or something like that?

Regards,

Chris
-- 
cyeoh@au.ibm.com


* Re: cross memory attach && security check
  2012-01-11  1:17     ` Christopher Yeoh
@ 2012-01-11 15:49       ` Oleg Nesterov
  0 siblings, 0 replies; 6+ messages in thread
From: Oleg Nesterov @ 2012-01-11 15:49 UTC (permalink / raw)
  To: Christopher Yeoh; +Cc: linux-kernel, Andrew Morton, David Howells

On 01/11, Christopher Yeoh wrote:
>
> On Mon, 9 Jan 2012 15:53:42 +0100
> Oleg Nesterov <oleg@redhat.com> wrote:
> >
> > Yes, I think this works, but I don't think you should play with
> > task_lock() or ->mm_users, just use get_task_mm(). Better yet, can't
> > we do
> >
>
> I agree with the consolidation with mm_for_maps (though we might need
> to argue over EPERM vs EACCES). However, I originally broke out
> get_task_mm and ptrace_may_access using __ptrace_may_access instead
> because both these functions grab the task lock at the start and
> release it at the end. Seemed better just to take it once.
>
> +struct mm_struct *get_check_task_mm(struct task_struct *task, unsigned int mode)
> +{
> +	struct mm_struct *mm;
> +	int err;
> +
> +	err =  mutex_lock_killable(&task->signal->cred_guard_mutex);
> +	if (err)
> +		return ERR_PTR(err);
> +
> +	task_lock(task);
> +	if (__ptrace_may_access(task, mode)) {
> +		mm = ERR_PTR(-EPERM);
> +		goto out;
> +	}
> +
> +	mm = task->mm;
> +	if (mm) {
> +		if (task->flags & PF_KTHREAD)
> +			mm = NULL;
> +		else
> +			atomic_inc(&mm->mm_users);
> +	}
> +
> +out:
> +	task_unlock(task);

Well, this saves unlock+lock, but adds more copy-and-paste code.
Personally I'd prefer to consolidate.

> > Without FOLL_FORCE, say, gdb can't use the new syscall to set the
> > breakpoint or to read the !VM_READ mappings. OK, process_vm_rw() has
> > flags, we can add PROCESS_VM_FORCE if needed.
> >
>
> It wasn't really intended for gdb, but perhaps it could be
> used/consolidated with other ptrace stuff (I don't know). I'm not
> really sure what the best thing to do here is. As I mentioned, there is
> a level of cooperation where I am using it, and honouring mprotect may
> help pick up inadvertent application errors. Perhaps a PROCESS_VM_FORCE
> flag would be the more conservative option.

OK, agreed.

> > 	--- x/fs/read_write.c
> > 	+++ x/fs/read_write.c
> > 	@@ -633,8 +633,7 @@ ssize_t do_loop_readv_writev(struct file
> > 	 ssize_t rw_copy_check_uvector(int type, const struct iovec
> > __user * uvector, unsigned long nr_segs, unsigned long fast_segs,
> > 				      struct iovec *fast_pointer,
> > 	-			      struct iovec **ret_pointer,
> > 	-			      int check_access)
> > 	+			      struct iovec **ret_pointer)
> > 	 {
> > 		unsigned long seg;
> > 		ssize_t ret;
> > 	@@ -690,8 +689,8 @@ ssize_t rw_copy_check_uvector(int type,
> > 				ret = -EINVAL;
> > 				goto out;
> > 			}
> > 	-		if (check_access
> > 	-		    && unlikely(!access_ok(vrfy_dir(type),
> > buf, len))) {
> > 	+		if (type >= 0 &&
> > 	+		    unlikely(!access_ok(vrfy_dir(type), buf,
> > len))) { ret = -EFAULT;
> > 				goto out;
> > 			}
> >
> > and update the callers? In this case all callers just lose the
> > unneeded argument and the code above does
> >
> > 	rw_copy_check_uvector(-1, rvec);
> >
> > Perhaps we can add another NOCHECK (or whatever) define near
> > READ/WRITE.
> >
> > What do you think?
>
> Yes, this is much better. I think for readability we do need a define.
>
> rw_copy_check_uvector(NOCHECK, rvec)
>
> looks a bit odd (why am I passing NOCHECK to a function with the
> word check in it?).

But it also has "copy" in the name. However, I agree that NOCHECK doesn't
look very good, and

> So perhaps IOVEC_ONLY or something like that?

I agree with any naming ;)

Oleg.


end of thread, other threads:[~2012-01-11 15:55 UTC | newest]

Thread overview: 6+ messages
2012-01-05 15:10 cross memory attach && security check Oleg Nesterov
2012-01-09  7:11 ` Christopher Yeoh
2012-01-09 14:53   ` Oleg Nesterov
2012-01-10 13:14     ` Oleg Nesterov
2012-01-11  1:17     ` Christopher Yeoh
2012-01-11 15:49       ` Oleg Nesterov
