public inbox for linux-kernel@vger.kernel.org
From: Oleg Nesterov <oleg@redhat.com>
To: Andrew Morton <akpm@linux-foundation.org>,
	"Eric W. Biederman" <ebiederm@xmission.com>
Cc: Michal Hocko <mhocko@suse.cz>, Sameer Nanda <snanda@chromium.org>,
	Sergey Dyasly <dserrg@gmail.com>,
	linux-kernel@vger.kernel.org
Subject: [PATCH 1/4] proc: fix the potential use-after-free in first_tid()
Date: Wed, 20 Nov 2013 19:30:22 +0100	[thread overview]
Message-ID: <20131120183022.GA12209@redhat.com> (raw)
In-Reply-To: <20131120183009.GA12193@redhat.com>

proc_task_readdir() verifies that the result of get_proc_task()
is pid_alive(), and thus its ->group_leader is fine too. However,
this is not necessarily true after rcu_read_unlock(); we need
to recheck it after first_tid() does rcu_read_lock() again.
Otherwise leader->thread_group.next (used by next_thread()) can
be invalid if an RCU grace period expires in between.

The race is subtle and unlikely, but it is still possible as far
as I can see. To simplify, let's ignore the "likely" case when
tid != 0; f_version can be cleared by proc_task_operations->llseek().

Suppose we have a main thread M and its subthread T. Suppose that
f_pos == 3, iow first_tid() should return T. Now suppose that the
following happens between rcu_read_unlock() and rcu_read_lock():

	1. T execs and becomes the new leader. This removes M from
	   ->thread_group, but next_thread(M) is still T.

	2. T creates another thread X which does exec as well, T
	   goes away.

	3. X creates another subthread, this increments nr_threads.

	4. first_tid() does next_thread(M) and returns the already
	   dead T.

Note also that we need steps 2 and 3 only because of the
get_nr_threads() check, and this check was supposed to be an
optimization only.

Signed-off-by: Oleg Nesterov <oleg@redhat.com>
---
 fs/proc/base.c |    3 +++
 1 files changed, 3 insertions(+), 0 deletions(-)

diff --git a/fs/proc/base.c b/fs/proc/base.c
index 1485e38..da12c5c 100644
--- a/fs/proc/base.c
+++ b/fs/proc/base.c
@@ -3103,6 +3103,9 @@ static struct task_struct *first_tid(struct task_struct *leader,
 	pos = NULL;
 	if (nr && nr >= get_nr_threads(leader))
 		goto out;
+	/* It could be unhashed before we take rcu lock */
+	if (!pid_alive(leader))
+		goto out;
 
 	/* If we haven't found our starting place yet start
 	 * with the leader and walk nr threads forward.
-- 
1.5.5.1


Thread overview: 5+ messages
2013-11-20 18:30 [PATCH 0/4] proc: proc_task_readdir/first_tid fixes/cleanups Oleg Nesterov
2013-11-20 18:30 ` Oleg Nesterov [this message]
2013-11-20 18:30 ` [PATCH 2/4] proc: change first_tid() to use while_each_thread() rather than next_thread() Oleg Nesterov
2013-11-20 18:30 ` [PATCH 3/4] proc: don't (ab)use ->group_leader in proc_task_readdir() paths Oleg Nesterov
2013-11-20 18:30 ` [PATCH 4/4] proc: fix ->f_pos overflows in first_tid() Oleg Nesterov
