* [PATCH] Yama: remove locking from delete path
@ 2012-11-19 23:34 Kees Cook
2012-11-20 2:23 ` James Morris
` (2 more replies)
0 siblings, 3 replies; 9+ messages in thread
From: Kees Cook @ 2012-11-19 23:34 UTC (permalink / raw)
To: linux-kernel
Cc: James Morris, Kees Cook, John Johansen, Serge E. Hallyn,
Eric Paris, linux-security-module
Instead of locking the list during a delete, mark entries as invalid
and trigger a workqueue to clean them up. This lets us easily handle
task_free from interrupt context.
Cc: Sasha Levin <sasha.levin@oracle.com>
Signed-off-by: Kees Cook <keescook@chromium.org>
---
security/yama/yama_lsm.c | 43 ++++++++++++++++++++++++++++++++++++-------
1 file changed, 36 insertions(+), 7 deletions(-)
diff --git a/security/yama/yama_lsm.c b/security/yama/yama_lsm.c
index 17da6ca..1cba901 100644
--- a/security/yama/yama_lsm.c
+++ b/security/yama/yama_lsm.c
@@ -17,6 +17,7 @@
 #include <linux/ptrace.h>
 #include <linux/prctl.h>
 #include <linux/ratelimit.h>
+#include <linux/workqueue.h>
 
 #define YAMA_SCOPE_DISABLED	0
 #define YAMA_SCOPE_RELATIONAL	1
@@ -29,6 +30,7 @@ static int ptrace_scope = YAMA_SCOPE_RELATIONAL;
 struct ptrace_relation {
 	struct task_struct *tracer;
 	struct task_struct *tracee;
+	bool invalid;
 	struct list_head node;
 	struct rcu_head rcu;
 };
@@ -36,6 +38,27 @@ struct ptrace_relation {
 static LIST_HEAD(ptracer_relations);
 static DEFINE_SPINLOCK(ptracer_relations_lock);
 
+static void yama_relation_cleanup(struct work_struct *work);
+static DECLARE_WORK(yama_relation_work, yama_relation_cleanup);
+
+/**
+ * yama_relation_cleanup - remove invalid entries from the relation list
+ *
+ */
+static void yama_relation_cleanup(struct work_struct *work)
+{
+	struct ptrace_relation *relation;
+
+	spin_lock(&ptracer_relations_lock);
+	list_for_each_entry_rcu(relation, &ptracer_relations, node) {
+		if (relation->invalid) {
+			list_del_rcu(&relation->node);
+			kfree_rcu(relation, rcu);
+		}
+	}
+	spin_unlock(&ptracer_relations_lock);
+}
+
 /**
  * yama_ptracer_add - add/replace an exception for this tracer/tracee pair
  * @tracer: the task_struct of the process doing the ptrace
@@ -57,9 +80,12 @@ static int yama_ptracer_add(struct task_struct *tracer,
 
 	added->tracee = tracee;
 	added->tracer = tracer;
+	added->invalid = false;
 
-	spin_lock_bh(&ptracer_relations_lock);
+	spin_lock(&ptracer_relations_lock);
 	list_for_each_entry_rcu(relation, &ptracer_relations, node) {
+		if (relation->invalid)
+			continue;
 		if (relation->tracee == tracee) {
 			list_replace_rcu(&relation->node, &added->node);
 			kfree_rcu(relation, rcu);
 			goto out;
@@ -70,7 +96,7 @@ static int yama_ptracer_add(struct task_struct *tracer,
 	list_add_rcu(&added->node, &ptracer_relations);
 
 out:
-	spin_unlock_bh(&ptracer_relations_lock);
+	spin_unlock(&ptracer_relations_lock);
 	return 0;
 }
 
@@ -84,15 +110,15 @@ static void yama_ptracer_del(struct task_struct *tracer,
 {
 	struct ptrace_relation *relation;
 
-	spin_lock_bh(&ptracer_relations_lock);
 	list_for_each_entry_rcu(relation, &ptracer_relations, node) {
+		if (relation->invalid)
+			continue;
 		if (relation->tracee == tracee ||
 		    (tracer && relation->tracer == tracer)) {
-			list_del_rcu(&relation->node);
-			kfree_rcu(relation, rcu);
+			relation->invalid = true;
+			schedule_work(&yama_relation_work);
 		}
 	}
-	spin_unlock_bh(&ptracer_relations_lock);
 }
 
 /**
@@ -219,12 +245,15 @@ static int ptracer_exception_found(struct task_struct *tracer,
 	rcu_read_lock();
 	if (!thread_group_leader(tracee))
 		tracee = rcu_dereference(tracee->group_leader);
-	list_for_each_entry_rcu(relation, &ptracer_relations, node)
+	list_for_each_entry_rcu(relation, &ptracer_relations, node) {
+		if (relation->invalid)
+			continue;
 		if (relation->tracee == tracee) {
 			parent = relation->tracer;
 			found = true;
 			break;
 		}
+	}
 
 	if (found && (parent == NULL || task_is_descendant(parent, tracer)))
 		rc = 1;
--
1.7.9.5
--
Kees Cook
Chrome OS Security
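[Editorial sketch: the patch's core idea — mark entries dead in place on the hot path and reclaim them later from a single cleanup pass — can be illustrated in plain user-space C. This is a single-threaded stand-in with illustrative names (`relation_del`, `relation_cleanup`, etc.), not the kernel API; there is no RCU or locking here, only the lifecycle of the `invalid` flag.]

```c
#include <stdbool.h>
#include <stdlib.h>

struct relation {
	int tracer, tracee;
	bool invalid;
	struct relation *next;
};

static struct relation *relations;

static void relation_add(int tracer, int tracee)
{
	struct relation *r = malloc(sizeof(*r));
	r->tracer = tracer;
	r->tracee = tracee;
	r->invalid = false;
	r->next = relations;
	relations = r;
}

/* Delete path: cheap on the hot path; just mark the entry dead.
 * In the kernel this is where schedule_work() would kick the cleanup. */
static void relation_del(int tracee)
{
	for (struct relation *r = relations; r; r = r->next)
		if (r->tracee == tracee)
			r->invalid = true;
}

/* Deferred cleanup: the only place nodes are unlinked and freed,
 * analogous to the workqueue's yama_relation_cleanup(). */
static void relation_cleanup(void)
{
	struct relation **pp = &relations;
	while (*pp) {
		struct relation *r = *pp;
		if (r->invalid) {
			*pp = r->next;
			free(r);
		} else {
			pp = &r->next;
		}
	}
}

/* Readers skip invalid entries, like the patched lookup loops do. */
static int relation_count(void)
{
	int n = 0;
	for (struct relation *r = relations; r; r = r->next)
		if (!r->invalid)
			n++;
	return n;
}
```

Readers see a consistent view the moment `invalid` is set; the actual unlink and free can happen at any later, quieter time.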
* Re: [PATCH] Yama: remove locking from delete path

From: James Morris @ 2012-11-20  2:23 UTC (permalink / raw)
To: Kees Cook
Cc: linux-kernel, James Morris, John Johansen, Serge E. Hallyn,
	Eric Paris, linux-security-module

On Mon, 19 Nov 2012, Kees Cook wrote:

> Instead of locking the list during a delete, mark entries as invalid
> and trigger a workqueue to clean them up. This lets us easily handle
> task_free from interrupt context.
>
> Cc: Sasha Levin <sasha.levin@oracle.com>
> Signed-off-by: Kees Cook <keescook@chromium.org>

This doesn't apply cleanly to my -next branch.

Kees: can you think about using git to push changes to me for Yama?

--
James Morris
<jmorris@namei.org>
* Re: [PATCH] Yama: remove locking from delete path

From: Kees Cook @ 2012-11-20  3:01 UTC (permalink / raw)
To: James Morris
Cc: linux-kernel, James Morris, John Johansen, Serge E. Hallyn,
	Eric Paris, linux-security-module

On Mon, Nov 19, 2012 at 6:23 PM, James Morris <jmorris@namei.org> wrote:
> On Mon, 19 Nov 2012, Kees Cook wrote:
>
>> Instead of locking the list during a delete, mark entries as invalid
>> and trigger a workqueue to clean them up. This lets us easily handle
>> task_free from interrupt context.
>>
>> Cc: Sasha Levin <sasha.levin@oracle.com>
>> Signed-off-by: Kees Cook <keescook@chromium.org>
>
> This doesn't apply cleanly to my -next branch.

Did you already apply this from last week?
https://lkml.org/lkml/2012/11/13/808

> Kees: can you think about using git to push changes to me for Yama?

If this pace of changes keeps up, sure thing.

Thanks!

-Kees

--
Kees Cook
Chrome OS Security
* Re: [PATCH] Yama: remove locking from delete path

From: James Morris @ 2012-11-20  3:46 UTC (permalink / raw)
To: Kees Cook
Cc: linux-kernel, James Morris, John Johansen, Serge E. Hallyn,
	Eric Paris, linux-security-module

On Mon, 19 Nov 2012, Kees Cook wrote:
> On Mon, Nov 19, 2012 at 6:23 PM, James Morris <jmorris@namei.org> wrote:
>> [...]
>> This doesn't apply cleanly to my -next branch.
>
> Did you already apply this from last week?
> https://lkml.org/lkml/2012/11/13/808

Nope.

>> Kees: can you think about using git to push changes to me for Yama?
>
> If this pace of changes keeps up, sure thing.

In any case, please check that your patches apply cleanly to my -next
branch.

- James

--
James Morris <jmorris@namei.org>
* Re: [PATCH] Yama: remove locking from delete path

From: Serge Hallyn @ 2012-11-20  3:45 UTC (permalink / raw)
To: Kees Cook
Cc: linux-kernel, James Morris, John Johansen, Serge E. Hallyn,
	Eric Paris, linux-security-module

Quoting Kees Cook (keescook@chromium.org):
> Instead of locking the list during a delete, mark entries as invalid
> and trigger a workqueue to clean them up. This lets us easily handle
> task_free from interrupt context.
>
> [...]
>
> @@ -84,15 +110,15 @@ static void yama_ptracer_del(struct task_struct *tracer,
>  {
>  	struct ptrace_relation *relation;
>
> -	spin_lock_bh(&ptracer_relations_lock);

I don't understand - is there a patch I don't have sitting around
which puts the calls to yama_ptracer_del() under rcu_read_lock()?
If not, I don't see how it's safe to walk the list here and risk
racing against another yama_relation_cleanup() run.

I'm probably missing something really cool about the locking,
but it doesn't look right to me.  I would think you'd want to
do the loop under rcu_read_lock(), set a boolean if one is
changed, and call schedule_work() once at the end if the boolean
is set.

>  	list_for_each_entry_rcu(relation, &ptracer_relations, node) {
> +		if (relation->invalid)
> +			continue;
>  		if (relation->tracee == tracee ||
>  		    (tracer && relation->tracer == tracer)) {
> -			list_del_rcu(&relation->node);
> -			kfree_rcu(relation, rcu);
> +			relation->invalid = true;
> +			schedule_work(&yama_relation_work);
>  		}
>  	}
> -	spin_unlock_bh(&ptracer_relations_lock);
>  }
>
> [...]
* Re: [PATCH] Yama: remove locking from delete path

From: Kees Cook @ 2012-11-20  6:14 UTC (permalink / raw)
To: Serge Hallyn
Cc: linux-kernel, James Morris, John Johansen, Serge E. Hallyn,
	Eric Paris, linux-security-module

On Mon, Nov 19, 2012 at 7:45 PM, Serge Hallyn
<serge.hallyn@canonical.com> wrote:
> Quoting Kees Cook (keescook@chromium.org):
>> [...]
>> @@ -84,15 +110,15 @@ static void yama_ptracer_del(struct task_struct *tracer,
>>  {
>>  	struct ptrace_relation *relation;
>>
>> -	spin_lock_bh(&ptracer_relations_lock);
>
> I don't understand - is there a patch I don't have sitting around
> which puts the calls to yama_ptracer_del() under rcu_read_lock()?
> If not, I don't see how it's safe to walk the list here and risk
> racing against another yama_relation_cleanup() run.
>
> I'm probably missing something really cool about the locking,
> but it doesn't look right to me.  I would think you'd want to
> do the loop under rcu_read_lock(), set a boolean if one is
> changed, and call schedule_work() once at the end if the boolean
> is set.

Unless I'm mistaken and my lockdep tests are wrong,
list_for_each_entry_rcu runs under rcu_read_lock().

I could optimize it to only run schedule_work() once all the marking
is done at the end of the loop.

-Kees

--
Kees Cook
Chrome OS Security
* Re: [PATCH] Yama: remove locking from delete path

From: Kees Cook @ 2012-11-20  6:53 UTC (permalink / raw)
To: Serge Hallyn
Cc: linux-kernel, James Morris, John Johansen, Serge E. Hallyn,
	Eric Paris, linux-security-module

On Mon, Nov 19, 2012 at 10:14 PM, Kees Cook <keescook@chromium.org> wrote:
> On Mon, Nov 19, 2012 at 7:45 PM, Serge Hallyn
> <serge.hallyn@canonical.com> wrote:
>> [...]
>> I'm probably missing something really cool about the locking,
>> but it doesn't look right to me.  I would think you'd want to
>> do the loop under rcu_read_lock(), set a boolean if one is
>> changed, and call schedule_work() once at the end if the boolean
>> is set.
>
> Unless I'm mistaken and my lockdep tests are wrong,
> list_for_each_entry_rcu runs under rcu_read_lock().

Ew, yeah, no, the examples I was looking at are missing
rcu_read_lock(). That's sad. Anyway, I'll fix this up here and in the
other patch.

> I could optimize it to only run schedule_work() once all the marking
> is done at the end of the loop.

And I'll do this.

-Kees

--
Kees Cook
Chrome OS Security
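[Editorial sketch: the optimization agreed on in this exchange — mark matching entries inside the loop, but kick the deferred cleanup at most once afterwards — can be shown in a small user-space C stand-in. Names are illustrative; `deferred_cleanups` merely counts what schedule_work() calls would have happened, and there is no RCU here.]

```c
#include <stdbool.h>
#include <stddef.h>

struct entry {
	int key;
	bool invalid;
	struct entry *next;
};

static int deferred_cleanups;	/* counts schedule_work()-style triggers */

/* Mark every matching entry, but trigger the cleanup worker at most once. */
static void entries_del(struct entry *head, int key)
{
	bool marked = false;
	struct entry *e;

	for (e = head; e; e = e->next) {
		if (e->invalid)
			continue;
		if (e->key == key) {
			e->invalid = true;
			marked = true;
		}
	}
	if (marked)
		deferred_cleanups++;	/* one trigger per delete, not per match */
}
```

Hoisting the trigger out of the loop keeps the work-scheduling cost constant no matter how many entries a single delete invalidates.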
* Re: [PATCH] Yama: remove locking from delete path

From: Tetsuo Handa @ 2012-11-20 12:24 UTC (permalink / raw)
To: keescook, linux-kernel
Cc: james.l.morris, john.johansen, serge.hallyn, eparis,
	linux-security-module

Kees Cook wrote:
> Instead of locking the list during a delete, mark entries as invalid
> and trigger a workqueue to clean them up. This lets us easily handle
> task_free from interrupt context.

> @@ -57,9 +80,12 @@ static int yama_ptracer_add(struct task_struct *tracer,
>
>  	added->tracee = tracee;
>  	added->tracer = tracer;
> +	added->invalid = false;
>
> -	spin_lock_bh(&ptracer_relations_lock);
> +	spin_lock(&ptracer_relations_lock);

Can't you use

	spin_lock_irqsave(&ptracer_relations_lock, flags);
	spin_unlock_irqrestore(&ptracer_relations_lock, flags);

instead of adding ->invalid ?
* Re: [PATCH] Yama: remove locking from delete path

From: Kees Cook @ 2012-11-20 16:07 UTC (permalink / raw)
To: Tetsuo Handa
Cc: linux-kernel, james.l.morris, john.johansen, serge.hallyn,
	eparis, linux-security-module

On Tue, Nov 20, 2012 at 4:24 AM, Tetsuo Handa
<penguin-kernel@i-love.sakura.ne.jp> wrote:
> Kees Cook wrote:
>> [...]
>> -	spin_lock_bh(&ptracer_relations_lock);
>> +	spin_lock(&ptracer_relations_lock);
>
> Can't you use
> 	spin_lock_irqsave(&ptracer_relations_lock, flags);
> 	spin_unlock_irqrestore(&ptracer_relations_lock, flags);
> instead of adding ->invalid ?

The _bh was sufficient originally, but looking at Sasha's deadlock, it
seems like I should get rid of locking entirely on this path. What do
you think of this report:
https://lkml.org/lkml/2012/10/17/600

I'm concerned that blocking interrupts would be an even more expensive
solution, since every task_free() is forced to block interrupts
briefly. Most systems will have either an empty relations list, or a
very short one, so it seemed better to avoid any locking at all on the
task_free() path. Now the locking contention would be moved to being
between the workqueue and any add calls.

-Kees

--
Kees Cook
Chrome OS Security