* [PATCH 0/2] fix ->shm_file leak
From: Oleg Nesterov @ 2013-06-14 19:09 UTC (permalink / raw)
To: Andrew Morton
Cc: Al Viro, Andrey Vagin, Eric W. Biederman, David Howells,
linux-kernel
Andrew,
These two patches are completely orthogonal; either one is enough to
fix the problem reported by Andrey. However, I think they both
make sense.
The 2nd patch was already acked by Eric/Andrey. However, it is
not as trivial as it looks.
The 1st one looks more straightforward, and perhaps it is 3.10
material.
Oleg.
* [PATCH 1/2] fput: task_work_add() can fail if the caller has passed exit_task_work()
From: Oleg Nesterov @ 2013-06-14 19:09 UTC (permalink / raw)
To: Andrew Morton
Cc: Al Viro, Andrey Vagin, Eric W. Biederman, David Howells,
linux-kernel
fput() assumes that it can't be called after exit_task_work(), but
this is not true; for example, free_ipc_ns()->shm_destroy() can do
this. In this case fput() silently leaks the file.
Change it to fall back to delayed_fput_work if task_work_add() fails.
The patch looks complicated, but it is not; it changes the code from
	if (PF_KTHREAD) {
		schedule_work(...);
		return;
	}
	task_work_add(...)
to
	if (!PF_KTHREAD) {
		if (!task_work_add(...))
			return;
		/* fallback */
	}
	schedule_work(...);
As for shm_destroy() in particular, we could make another fix, but I
think this change makes sense anyway. There could be other similar
users; it is not safe to assume that task_work_add() can't fail.
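For readers outside the kernel, the control flow this patch installs can be modeled in plain userspace C. This is a hypothetical sketch, not the kernel code: the `fake_*` names, the `work_exited` flag, and the counters merely stand in for exit_task_work(), task_work_add(), and the delayed_fput_list fallback.

```c
#include <assert.h>

/* Userspace model of the fput() fallback in this patch. All names are
 * stand-ins: 'work_exited' models a task that has already run
 * exit_task_work(), 'queued' models a successful task_work_add(), and
 * 'global_delayed' models punting the file to delayed_fput_list. */

struct fake_task {
	int work_exited;	/* set once exit_task_work() has run */
	int queued;		/* works queued on this task */
};

static int global_delayed;	/* files handed to the workqueue */

/* Models task_work_add(): fails once the task has exited its works. */
static int fake_task_work_add(struct fake_task *t)
{
	if (t->work_exited)
		return -1;
	t->queued++;
	return 0;
}

/* Models the final-reference path of fput() after this patch:
 * try the per-task queue first, else fall back instead of leaking. */
static void fake_fput_final(struct fake_task *t)
{
	if (!fake_task_work_add(t))
		return;		/* common case: runs on return to usermode */
	global_delayed++;	/* fallback: the workqueue handles it */
}
```

The key point the model captures is that the failure return of task_work_add() is no longer ignored; the file always ends up on one of the two queues.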
Reported-by: Andrey Vagin <avagin@openvz.org>
Signed-off-by: Oleg Nesterov <oleg@redhat.com>
---
fs/file_table.c | 19 ++++++++++---------
1 files changed, 10 insertions(+), 9 deletions(-)
diff --git a/fs/file_table.c b/fs/file_table.c
index cd4d87a..485dc0e 100644
--- a/fs/file_table.c
+++ b/fs/file_table.c
@@ -306,17 +306,18 @@ void fput(struct file *file)
 {
 	if (atomic_long_dec_and_test(&file->f_count)) {
 		struct task_struct *task = current;
+		unsigned long flags;
+
 		file_sb_list_del(file);
-		if (unlikely(in_interrupt() || task->flags & PF_KTHREAD)) {
-			unsigned long flags;
-			spin_lock_irqsave(&delayed_fput_lock, flags);
-			list_add(&file->f_u.fu_list, &delayed_fput_list);
-			schedule_work(&delayed_fput_work);
-			spin_unlock_irqrestore(&delayed_fput_lock, flags);
-			return;
+		if (likely(!in_interrupt() && !(task->flags & PF_KTHREAD))) {
+			init_task_work(&file->f_u.fu_rcuhead, ____fput);
+			if (!task_work_add(task, &file->f_u.fu_rcuhead, true))
+				return;
 		}
-		init_task_work(&file->f_u.fu_rcuhead, ____fput);
-		task_work_add(task, &file->f_u.fu_rcuhead, true);
+		spin_lock_irqsave(&delayed_fput_lock, flags);
+		list_add(&file->f_u.fu_list, &delayed_fput_list);
+		schedule_work(&delayed_fput_work);
+		spin_unlock_irqrestore(&delayed_fput_lock, flags);
 	}
 }
--
1.5.5.1
* [PATCH 2/2] move exit_task_namespaces() outside of exit_notify()
From: Oleg Nesterov @ 2013-06-14 19:09 UTC (permalink / raw)
To: Andrew Morton
Cc: Al Viro, Andrey Vagin, Eric W. Biederman, David Howells,
linux-kernel
exit_notify() does exit_task_namespaces() after
forget_original_parent(). This was needed to ensure that ->nsproxy
can't be cleared prematurely: an exiting child we are going to
reparent can do do_notify_parent() and use the parent's (our) pid_ns.
However, after commit 32084504 ("pidns: use task_active_pid_ns in
do_notify_parent"), ->nsproxy != NULL is no longer needed; we rely
on task_active_pid_ns().
Move exit_task_namespaces() from exit_notify() to do_exit(), after
exit_fs() and before exit_task_work().
This solves the problem reported by Andrey: free_ipc_ns()->shm_destroy()
does fput(), which needs task_work_add().
Note: this particular problem could be fixed by changing fput(), and
that change makes sense anyway. But there is another reason to move
the callsite. The original reason for calling exit_task_namespaces()
from the middle of exit_notify() was subtle and has already gone away;
now this placement just looks confusing. And the move allows us to
simplify exit_notify(): we can avoid the unlock/lock of tasklist_lock
and use ->exit_state instead of PF_EXITING in forget_original_parent().
Reported-by: Andrey Vagin <avagin@openvz.org>
Signed-off-by: Oleg Nesterov <oleg@redhat.com>
Acked-by: "Eric W. Biederman" <ebiederm@xmission.com>
Acked-by: Andrey Vagin <avagin@openvz.org>
---
kernel/exit.c | 2 +-
1 files changed, 1 insertions(+), 1 deletions(-)
diff --git a/kernel/exit.c b/kernel/exit.c
index 8fc3c8f..c623cd3 100644
--- a/kernel/exit.c
+++ b/kernel/exit.c
@@ -644,7 +644,6 @@ static void exit_notify(struct task_struct *tsk, int group_dead)
 	 * jobs, send them a SIGHUP and then a SIGCONT.  (POSIX 3.2.2.2)
 	 */
 	forget_original_parent(tsk);
-	exit_task_namespaces(tsk);
 
 	write_lock_irq(&tasklist_lock);
 	if (group_dead)
@@ -790,6 +789,7 @@ void do_exit(long code)
 	exit_shm(tsk);
 	exit_files(tsk);
 	exit_fs(tsk);
+	exit_task_namespaces(tsk);
 	exit_task_work(tsk);
 	check_stack_usage();
 	exit_thread();
--
1.5.5.1
* Re: [PATCH 1/2] fput: task_work_add() can fail if the caller has passed exit_task_work()
From: Andrew Morton @ 2013-06-14 21:58 UTC (permalink / raw)
To: Oleg Nesterov
Cc: Al Viro, Andrey Vagin, Eric W. Biederman, David Howells,
linux-kernel
On Fri, 14 Jun 2013 21:09:47 +0200 Oleg Nesterov <oleg@redhat.com> wrote:
> fput() assumes that it can't be called after exit_task_work(), but
> this is not true; for example, free_ipc_ns()->shm_destroy() can do
> this. In this case fput() silently leaks the file.
>
> Change it to fall back to delayed_fput_work if task_work_add() fails.
> The patch looks complicated, but it is not; it changes the code from
>
> 	if (PF_KTHREAD) {
> 		schedule_work(...);
> 		return;
> 	}
> 	task_work_add(...)
>
> to
>
> 	if (!PF_KTHREAD) {
> 		if (!task_work_add(...))
> 			return;
> 		/* fallback */
> 	}
> 	schedule_work(...);
>
> As for shm_destroy() in particular, we could make another fix, but I
> think this change makes sense anyway. There could be other similar
> users; it is not safe to assume that task_work_add() can't fail.
>
> ...
>
> --- a/fs/file_table.c
> +++ b/fs/file_table.c
> @@ -306,17 +306,18 @@ void fput(struct file *file)
>  {
>  	if (atomic_long_dec_and_test(&file->f_count)) {
>  		struct task_struct *task = current;
> +		unsigned long flags;
> +
>  		file_sb_list_del(file);
> -		if (unlikely(in_interrupt() || task->flags & PF_KTHREAD)) {
> -			unsigned long flags;
> -			spin_lock_irqsave(&delayed_fput_lock, flags);
> -			list_add(&file->f_u.fu_list, &delayed_fput_list);
> -			schedule_work(&delayed_fput_work);
> -			spin_unlock_irqrestore(&delayed_fput_lock, flags);
> -			return;
> +		if (likely(!in_interrupt() && !(task->flags & PF_KTHREAD))) {
> +			init_task_work(&file->f_u.fu_rcuhead, ____fput);
> +			if (!task_work_add(task, &file->f_u.fu_rcuhead, true))
> +				return;
A comment here would be useful, explaining the circumstances under
which we fall through to the delayed fput. This is particularly needed
because kernel/task_work.c is such undocumented crap.
This?
--- a/fs/file_table.c~fput-task_work_add-can-fail-if-the-caller-has-passed-exit_task_work-fix
+++ a/fs/file_table.c
@@ -313,6 +313,12 @@ void fput(struct file *file)
 			init_task_work(&file->f_u.fu_rcuhead, ____fput);
 			if (!task_work_add(task, &file->f_u.fu_rcuhead, true))
 				return;
+			/*
+			 * After this task has run exit_task_work(),
+			 * task_work_add() will fail.  free_ipc_ns()->
+			 * shm_destroy() can do this.  Fall through to delayed
+			 * fput to avoid leaking *file.
+			 */
 		}
 		spin_lock_irqsave(&delayed_fput_lock, flags);
 		list_add(&file->f_u.fu_list, &delayed_fput_list);
>  		}
> -		init_task_work(&file->f_u.fu_rcuhead, ____fput);
> -		task_work_add(task, &file->f_u.fu_rcuhead, true);
> +		spin_lock_irqsave(&delayed_fput_lock, flags);
> +		list_add(&file->f_u.fu_list, &delayed_fput_list);
> +		schedule_work(&delayed_fput_work);
> +		spin_unlock_irqrestore(&delayed_fput_lock, flags);
OT: I don't think that schedule_work() needs to be inside the locked
region. Scalability improvements beckon!
>  	}
>  }
* [PATCH 0/3] (Was: fput: task_work_add() can fail if the caller has passed exit_task_work())
From: Oleg Nesterov @ 2013-06-15 17:29 UTC (permalink / raw)
To: Andrew Morton
Cc: Al Viro, Andrey Vagin, Eric W. Biederman, David Howells,
linux-kernel, Huang Ying, Peter Zijlstra
On 06/14, Andrew Morton wrote:
>
> On Fri, 14 Jun 2013 21:09:47 +0200 Oleg Nesterov <oleg@redhat.com> wrote:
>
> > +		if (likely(!in_interrupt() && !(task->flags & PF_KTHREAD))) {
> > +			init_task_work(&file->f_u.fu_rcuhead, ____fput);
> > +			if (!task_work_add(task, &file->f_u.fu_rcuhead, true))
> > +				return;
>
> A comment here would be useful, explaining the circumstances under
> which we fall through to the delayed fput.
Thanks!
> This is particularly needed
> because kernel/task_work.c is such undocumented crap.
It seems that you are trying to force me to make the doc patch ;)
OK, I'll try. task_work.c needs a couple of cosmetic cleanups anyway.
> > +		spin_lock_irqsave(&delayed_fput_lock, flags);
> > +		list_add(&file->f_u.fu_list, &delayed_fput_list);
> > +		schedule_work(&delayed_fput_work);
> > +		spin_unlock_irqrestore(&delayed_fput_lock, flags);
>
> OT: I don't think that schedule_work() needs to be inside the locked
> region. Scalability improvements beckon!
Yeees, I thought about this too.
Performance-wise this can't really help, this case is unlikely. But
I think this change makes this code a bit simpler, so please see 1/3.
2/3 fixes the (theoretical) bug in llist_add() and imho cleanups the
code.
3/3 comes as a separate change because I do not want to argue if
someone dislike the non-inline llist_add(). But once again, we can
make llist_add_batch() inline, and I believe it is never good to
duplicate the code even if it is simple.
Oleg.
* [PATCH 1/3] fput: turn "list_head delayed_fput_list" into llist_head
From: Oleg Nesterov @ 2013-06-15 17:30 UTC (permalink / raw)
To: Andrew Morton
Cc: Al Viro, Andrey Vagin, Eric W. Biederman, David Howells,
Huang Ying, Peter Zijlstra, linux-kernel
fput() and delayed_fput() can use llist and avoid the locking.
This is an unlikely path, so it is not that this change improves
the performance, but this way the code looks simpler.
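As a userspace illustration of why the llist conversion needs no lock: a lock-free push can report whether the list was empty beforehand, so only the caller that makes it non-empty has to schedule the work item. The `model_*` and `fake_*` names below are hypothetical stand-ins for llist_add() and schedule_work(), with GCC's `__sync_val_compare_and_swap` standing in for the kernel's cmpxchg(); this is a sketch of the idea, not the kernel API.

```c
#include <assert.h>
#include <stddef.h>

struct node { struct node *next; };
struct head { struct node *first; };

static int fake_schedule_work_calls;	/* models schedule_work() */

/* Models llist_add(): lock-free LIFO push via compare-and-swap.
 * Returns nonzero iff the list was empty before the push. */
static int model_push(struct node *n, struct head *h)
{
	struct node *first;

	do {
		n->next = first = h->first;
	} while (__sync_val_compare_and_swap(&h->first, first, n) != first);

	return first == NULL;
}

/* Models the new tail of fput(): schedule the work item only on the
 * empty -> non-empty transition, exactly as llist_add()'s return
 * value allows. */
static void model_delayed_fput(struct node *n, struct head *h)
{
	if (model_push(n, h))
		fake_schedule_work_calls++;
}
```

Until the work item drains the list, additional pushes piggyback on the already-scheduled work, which is why one schedule_work() call suffices.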
Suggested-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Oleg Nesterov <oleg@redhat.com>
---
fs/file_table.c | 25 ++++++++++---------------
include/linux/fs.h | 2 ++
2 files changed, 12 insertions(+), 15 deletions(-)
diff --git a/fs/file_table.c b/fs/file_table.c
index 3a2bbc5..94b1bfa 100644
--- a/fs/file_table.c
+++ b/fs/file_table.c
@@ -265,18 +265,15 @@ static void __fput(struct file *file)
 	mntput(mnt);
 }
 
-static DEFINE_SPINLOCK(delayed_fput_lock);
-static LIST_HEAD(delayed_fput_list);
+static LLIST_HEAD(delayed_fput_list);
 
 static void delayed_fput(struct work_struct *unused)
 {
-	LIST_HEAD(head);
-	spin_lock_irq(&delayed_fput_lock);
-	list_splice_init(&delayed_fput_list, &head);
-	spin_unlock_irq(&delayed_fput_lock);
-	while (!list_empty(&head)) {
-		struct file *f = list_first_entry(&head, struct file, f_u.fu_list);
-		list_del_init(&f->f_u.fu_list);
-		__fput(f);
+	struct llist_node *node = llist_del_all(&delayed_fput_list);
+	struct llist_node *next;
+
+	for (; node; node = next) {
+		next = llist_next(node);
+		__fput(llist_entry(node, struct file, f_u.fu_llist));
 	}
 }
@@ -306,7 +303,6 @@ void fput(struct file *file)
 {
 	if (atomic_long_dec_and_test(&file->f_count)) {
 		struct task_struct *task = current;
-		unsigned long flags;
 
 		file_sb_list_del(file);
 		if (likely(!in_interrupt() && !(task->flags & PF_KTHREAD))) {
@@ -320,10 +316,9 @@ void fput(struct file *file)
 			 * fput to avoid leaking *file.
 			 */
 		}
-		spin_lock_irqsave(&delayed_fput_lock, flags);
-		list_add(&file->f_u.fu_list, &delayed_fput_list);
-		schedule_work(&delayed_fput_work);
-		spin_unlock_irqrestore(&delayed_fput_lock, flags);
+
+		if (llist_add(&file->f_u.fu_llist, &delayed_fput_list))
+			schedule_work(&delayed_fput_work);
 	}
 }
diff --git a/include/linux/fs.h b/include/linux/fs.h
index 43db02e..8a60d99 100644
--- a/include/linux/fs.h
+++ b/include/linux/fs.h
@@ -10,6 +10,7 @@
 #include <linux/stat.h>
 #include <linux/cache.h>
 #include <linux/list.h>
+#include <linux/llist.h>
 #include <linux/radix-tree.h>
 #include <linux/rbtree.h>
 #include <linux/init.h>
@@ -767,6 +768,7 @@ struct file {
 	 */
 	union {
 		struct list_head	fu_list;
+		struct llist_node	fu_llist;
 		struct rcu_head		fu_rcuhead;
 	} f_u;
 	struct path		f_path;
--
1.5.5.1
* [PATCH 2/3] llist: fix/simplify llist_add() and llist_add_batch()
From: Oleg Nesterov @ 2013-06-15 17:30 UTC (permalink / raw)
To: Andrew Morton
Cc: Al Viro, Andrey Vagin, Eric W. Biederman, David Howells,
Huang Ying, Peter Zijlstra, linux-kernel
1. This is mostly theoretical, but llist_add*() need ACCESS_ONCE().
Otherwise it is not guaranteed that the first cmpxchg() uses the
same value for old_entry and new_last->next.
2. These helpers cache the result of cmpxchg() and read the initial
value of head->first before the main loop. I do not think this
makes sense: in the likely case cmpxchg() succeeds on the first
attempt, and otherwise it doesn't hurt to reload head->first.
I think it would be better to simplify the code and simply read
->first right before cmpxchg().
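The two loop shapes can be compared in a small userspace model. This is a hedged sketch, not the kernel code: `__sync_val_compare_and_swap` stands in for cmpxchg(), the volatile cast approximates ACCESS_ONCE(), and the `add_old`/`add_new` names are invented for the comparison.

```c
#include <assert.h>
#include <stddef.h>

struct lnode { struct lnode *next; };
struct lhead { struct lnode *first; };

/* Old shape: cache head->first once, then retry using the value
 * returned by the failed compare-and-swap. */
static int add_old(struct lnode *new, struct lhead *head)
{
	struct lnode *entry, *old_entry;

	entry = head->first;
	for (;;) {
		old_entry = entry;
		new->next = entry;
		entry = __sync_val_compare_and_swap(&head->first, old_entry, new);
		if (entry == old_entry)
			break;
	}
	return old_entry == NULL;
}

/* New shape from this patch: reread ->first right before every
 * compare-and-swap attempt. */
static int add_new(struct lnode *new, struct lhead *head)
{
	struct lnode *first;

	do {
		new->next = first = *(struct lnode * volatile *)&head->first;
	} while (__sync_val_compare_and_swap(&head->first, first, new) != first);

	return first == NULL;
}
```

Both report "list was empty" identically; the new shape is shorter and guarantees that the value published in new->next is the very value the compare-and-swap tests against.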
Signed-off-by: Oleg Nesterov <oleg@redhat.com>
---
include/linux/llist.h | 19 +++++++------------
lib/llist.c | 15 +++++----------
2 files changed, 12 insertions(+), 22 deletions(-)
diff --git a/include/linux/llist.h b/include/linux/llist.h
index a5199f6..3e2b969 100644
--- a/include/linux/llist.h
+++ b/include/linux/llist.h
@@ -151,18 +151,13 @@ static inline struct llist_node *llist_next(struct llist_node *node)
  */
 static inline bool llist_add(struct llist_node *new, struct llist_head *head)
 {
-	struct llist_node *entry, *old_entry;
-
-	entry = head->first;
-	for (;;) {
-		old_entry = entry;
-		new->next = entry;
-		entry = cmpxchg(&head->first, old_entry, new);
-		if (entry == old_entry)
-			break;
-	}
-
-	return old_entry == NULL;
+	struct llist_node *first;
+
+	do {
+		new->next = first = ACCESS_ONCE(head->first);
+	} while (cmpxchg(&head->first, first, new) != first);
+
+	return !first;
 }
 
 /**
diff --git a/lib/llist.c b/lib/llist.c
index 4a15115..4a70d12 100644
--- a/lib/llist.c
+++ b/lib/llist.c
@@ -39,18 +39,13 @@
 bool llist_add_batch(struct llist_node *new_first, struct llist_node *new_last,
 		     struct llist_head *head)
 {
-	struct llist_node *entry, *old_entry;
+	struct llist_node *first;
 
-	entry = head->first;
-	for (;;) {
-		old_entry = entry;
-		new_last->next = entry;
-		entry = cmpxchg(&head->first, old_entry, new_first);
-		if (entry == old_entry)
-			break;
-	}
+	do {
+		new_last->next = first = ACCESS_ONCE(head->first);
+	} while (cmpxchg(&head->first, first, new_first) != first);
 
-	return old_entry == NULL;
+	return !first;
 }
 
 EXPORT_SYMBOL_GPL(llist_add_batch);
--
1.5.5.1
* [PATCH 3/3] llist: llist_add() can use llist_add_batch()
From: Oleg Nesterov @ 2013-06-15 17:30 UTC (permalink / raw)
To: Andrew Morton
Cc: Al Viro, Andrey Vagin, Eric W. Biederman, David Howells,
Huang Ying, Peter Zijlstra, linux-kernel
llist_add(new, head) can simply use llist_add_batch(new, new, head);
there is no need to duplicate the code.
This obviously uninlines llist_add(), and to me this is a win. But we
can make llist_add_batch() inline if that is desirable; in this case
gcc can notice that new_first == new_last when the caller is llist_add().
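A userspace sketch of the deduplication: a single-element push is just a batch push whose first and last node coincide. The `model_*` names and the `bnode`/`bhead` types are hypothetical, and a GCC builtin models cmpxchg(); this illustrates the relationship, not the kernel implementation.

```c
#include <assert.h>
#include <stddef.h>

struct bnode { struct bnode *next; };
struct bhead { struct bnode *first; };

/* Pushes the pre-linked chain [new_first .. new_last] with one
 * compare-and-swap; returns nonzero iff the list was empty before. */
static int model_add_batch(struct bnode *new_first, struct bnode *new_last,
			   struct bhead *head)
{
	struct bnode *first;

	do {
		new_last->next = first = head->first;
	} while (__sync_val_compare_and_swap(&head->first, first, new_first) != first);

	return first == NULL;
}

/* The patch's point: a single add is the degenerate one-node batch. */
static int model_add(struct bnode *new, struct bhead *head)
{
	return model_add_batch(new, new, head);
}
```

When new_first == new_last, the batch code degenerates to exactly the single-add loop, which is why no behavior changes by delegating.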
Signed-off-by: Oleg Nesterov <oleg@redhat.com>
---
include/linux/llist.h | 14 ++++----------
1 files changed, 4 insertions(+), 10 deletions(-)
diff --git a/include/linux/llist.h b/include/linux/llist.h
index 3e2b969..cdaa7f0 100644
--- a/include/linux/llist.h
+++ b/include/linux/llist.h
@@ -142,6 +142,9 @@ static inline struct llist_node *llist_next(struct llist_node *node)
 	return node->next;
 }
 
+extern bool llist_add_batch(struct llist_node *new_first,
+			    struct llist_node *new_last,
+			    struct llist_head *head);
 /**
  * llist_add - add a new entry
  * @new:	new entry to be added
@@ -151,13 +154,7 @@ static inline struct llist_node *llist_next(struct llist_node *node)
  */
 static inline bool llist_add(struct llist_node *new, struct llist_head *head)
 {
-	struct llist_node *first;
-
-	do {
-		new->next = first = ACCESS_ONCE(head->first);
-	} while (cmpxchg(&head->first, first, new) != first);
-
-	return !first;
+	return llist_add_batch(new, new, head);
 }
 
 /**
@@ -173,9 +170,6 @@ static inline struct llist_node *llist_del_all(struct llist_head *head)
 	return xchg(&head->first, NULL);
 }
 
-extern bool llist_add_batch(struct llist_node *new_first,
-			    struct llist_node *new_last,
-			    struct llist_head *head);
 extern struct llist_node *llist_del_first(struct llist_head *head);
 
 #endif /* LLIST_H */
--
1.5.5.1
* Re: [PATCH 0/3] (Was: fput: task_work_add() can fail if the caller has passed exit_task_work())
From: Oleg Nesterov @ 2013-06-15 17:46 UTC (permalink / raw)
To: Andrew Morton
Cc: Al Viro, Andrey Vagin, Eric W. Biederman, David Howells,
linux-kernel, Huang Ying, Peter Zijlstra
sorry, forgot to mention...
On 06/15, Oleg Nesterov wrote:
>
> > OT: I don't think that schedule_work() needs to be inside the locked
> > region. Scalability improvements beckon!
>
> Yeees, I thought about this too.
>
> Performance-wise this can't really help, this case is unlikely. But
> I think this change makes this code a bit simpler, so please see 1/3.
This is on top of
fput-task_work_add-can-fail-if-the-caller-has-passed-exit_task_work-fix.patch;
it textually depends on the comment block in fput() added by that patch.
Oleg.