* [PATCHSET REPOST cgroup/for-3.14] cgroup: factor out css creation into create_css()
From: Tejun Heo @ 2013-12-06 20:27 UTC
To: lizefan-hv44wF8Li93QT0dZR+AlfA
Cc: vdavydov-bzQdu9zFT3WakBO8gow8eQ, cgroups-u79uwXL29TY76Z2rM5mHXA,
containers-cunTk1MwBs9QetFLy7KEm3xJsTq8ys+cHZ5vskTnxNA,
linux-kernel-u79uwXL29TY76Z2rM5mHXA
Hello,
This is a repost of the following.
http://thread.gmane.org/gmane.linux.kernel.cgroups/8981
It got reviewed and acked then, but I somehow forgot to apply it, and
Vladimir reporting the same bug that the first patch in the original
series fixed reminded me. The first patch is already applied to
cgroup/for-3.13-fixes, which is pulled into for-3.14 for this series.
While the patches are not completely identical, the adjustments are
trivial. css_id got ripped out in the meantime, so "[PATCH 4/9]
cgroup: move css_id commit from cgroup_populate_dir() to online_css()"
is dropped and the patches are refreshed to reflect the dropping of
css_id handling in cgroup_create(). I'm applying the series to
cgroup/for-3.14.
The original patchset description follows. Thanks and sorry about the
mess-up.
For unified hierarchy, a css's (cgroup_subsys_state) lifetime will be
different from that of the associated cgroup. css's may be created
and destroyed dynamically over the lifetime of a single cgroup. The
previous changes decoupled css destruction from cgroup's. This
patchset decouples css creation from cgroup's.
This patchset contains the following seven patches.
0001-cgroup-css-iterations-and-css_from_dir-are-safe-unde.patch
0002-cgroup-make-for_each_subsys-useable-under-cgroup_roo.patch
0003-cgroup-reorder-operations-in-cgroup_create.patch
0004-cgroup-combine-css-handling-loops-in-cgroup_create.patch
0005-cgroup-factor-out-cgroup_subsys_state-creation-into-.patch
0006-cgroup-implement-for_each_css.patch
0007-cgroup-remove-for_each_root_subsys.patch
0001-0002 are prep patches.
0003-0005 collect css creation operations into a single loop and
factor it out into create_css().
0006-0007 are somewhat tangential. As everything is css-based now and
the enabled set of css's might differ depending on the specific
cgroup in the future, they introduce for_each_css() and replace most
uses of for_each_root_subsys() with it. The two leftovers are
open-coded, and for_each_root_subsys() and the related logic are
removed.
This patchset shouldn't bring any userland-noticeable behavior
changes. It's on top of cgroup/for-3.12 d1625964da ("cgroup: fix
cgroup_css() invocation in css_from_id()") and available in the
following git branch.
include/linux/cgroup.h | 9 -
kernel/cgroup.c | 300 ++++++++++++++++++++++++++-----------------------
2 files changed, 161 insertions(+), 148 deletions(-)
--
tejun
* [PATCH 1/7] cgroup: css iterations and css_from_dir() are safe under cgroup_mutex
From: Tejun Heo @ 2013-12-06 20:27 UTC
To: lizefan-hv44wF8Li93QT0dZR+AlfA
Cc: vdavydov-bzQdu9zFT3WakBO8gow8eQ, cgroups-u79uwXL29TY76Z2rM5mHXA,
containers-cunTk1MwBs9QetFLy7KEm3xJsTq8ys+cHZ5vskTnxNA,
linux-kernel-u79uwXL29TY76Z2rM5mHXA, Tejun Heo
Currently, all css iterations and css_from_dir() require RCU read lock
whether the caller is holding cgroup_mutex or not, which is
unnecessarily restrictive. They are all safe to use under
cgroup_mutex without holding RCU read lock.
Factor out cgroup_assert_mutex_or_rcu_locked() from css_from_id() and
apply it to all css iteration functions and css_from_dir().
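For illustration, a minimal sketch of a caller this enables (a
hypothetical function, assumed to sit in kernel/cgroup.c where
cgroup_mutex is in scope):

static void walk_children_locked(struct cgroup_subsys_state *parent_css)
{
	struct cgroup_subsys_state *pos;

	lockdep_assert_held(&cgroup_mutex);

	/* after this patch, no rcu_read_lock() needed under cgroup_mutex */
	css_for_each_child(pos, parent_css)
		pr_info("visiting child css %p\n", pos);
}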
v2: cgroup_assert_mutex_or_rcu_locked() definition doesn't need to be
inside CONFIG_PROVE_RCU ifdef as rcu_lockdep_assert() is always
defined and conditionalized. Move it outside of the ifdef block.
Signed-off-by: Tejun Heo <tj-DgEjT+Ai2ygdnm+yROfE0A@public.gmane.org>
Acked-by: Li Zefan <lizefan-hv44wF8Li93QT0dZR+AlfA@public.gmane.org>
---
kernel/cgroup.c | 56 ++++++++++++++++++++++++++++++--------------------------
1 file changed, 30 insertions(+), 26 deletions(-)
diff --git a/kernel/cgroup.c b/kernel/cgroup.c
index 2e5fbf9..c22eecb 100644
--- a/kernel/cgroup.c
+++ b/kernel/cgroup.c
@@ -93,6 +93,11 @@ static DEFINE_MUTEX(cgroup_mutex);
static DEFINE_MUTEX(cgroup_root_mutex);
+#define cgroup_assert_mutex_or_rcu_locked() \
+ rcu_lockdep_assert(rcu_read_lock_held() || \
+ lockdep_is_held(&cgroup_mutex), \
+ "cgroup_mutex or RCU read lock required");
+
/*
* cgroup destruction makes heavy use of work items and there can be a lot
* of concurrent destructions. Use a separate workqueue so that cgroup
@@ -2897,9 +2902,9 @@ static void cgroup_enable_task_cg_lists(void)
* @parent_css: css whose children to walk
*
* This function returns the next child of @parent_css and should be called
- * under RCU read lock. The only requirement is that @parent_css and
- * @pos_css are accessible. The next sibling is guaranteed to be returned
- * regardless of their states.
+ * under either cgroup_mutex or RCU read lock. The only requirement is
+ * that @parent_css and @pos_css are accessible. The next sibling is
+ * guaranteed to be returned regardless of their states.
*/
struct cgroup_subsys_state *
css_next_child(struct cgroup_subsys_state *pos_css,
@@ -2909,7 +2914,7 @@ css_next_child(struct cgroup_subsys_state *pos_css,
struct cgroup *cgrp = parent_css->cgroup;
struct cgroup *next;
- WARN_ON_ONCE(!rcu_read_lock_held());
+ cgroup_assert_mutex_or_rcu_locked();
/*
* @pos could already have been removed. Once a cgroup is removed,
@@ -2956,10 +2961,10 @@ EXPORT_SYMBOL_GPL(css_next_child);
* to visit for pre-order traversal of @root's descendants. @root is
* included in the iteration and the first node to be visited.
*
- * While this function requires RCU read locking, it doesn't require the
- * whole traversal to be contained in a single RCU critical section. This
- * function will return the correct next descendant as long as both @pos
- * and @root are accessible and @pos is a descendant of @root.
+ * While this function requires cgroup_mutex or RCU read locking, it
+ * doesn't require the whole traversal to be contained in a single critical
+ * section. This function will return the correct next descendant as long
+ * as both @pos and @root are accessible and @pos is a descendant of @root.
*/
struct cgroup_subsys_state *
css_next_descendant_pre(struct cgroup_subsys_state *pos,
@@ -2967,7 +2972,7 @@ css_next_descendant_pre(struct cgroup_subsys_state *pos,
{
struct cgroup_subsys_state *next;
- WARN_ON_ONCE(!rcu_read_lock_held());
+ cgroup_assert_mutex_or_rcu_locked();
/* if first iteration, visit @root */
if (!pos)
@@ -2998,17 +3003,17 @@ EXPORT_SYMBOL_GPL(css_next_descendant_pre);
* is returned. This can be used during pre-order traversal to skip
* subtree of @pos.
*
- * While this function requires RCU read locking, it doesn't require the
- * whole traversal to be contained in a single RCU critical section. This
- * function will return the correct rightmost descendant as long as @pos is
- * accessible.
+ * While this function requires cgroup_mutex or RCU read locking, it
+ * doesn't require the whole traversal to be contained in a single critical
+ * section. This function will return the correct rightmost descendant as
+ * long as @pos is accessible.
*/
struct cgroup_subsys_state *
css_rightmost_descendant(struct cgroup_subsys_state *pos)
{
struct cgroup_subsys_state *last, *tmp;
- WARN_ON_ONCE(!rcu_read_lock_held());
+ cgroup_assert_mutex_or_rcu_locked();
do {
last = pos;
@@ -3044,10 +3049,11 @@ css_leftmost_descendant(struct cgroup_subsys_state *pos)
* to visit for post-order traversal of @root's descendants. @root is
* included in the iteration and the last node to be visited.
*
- * While this function requires RCU read locking, it doesn't require the
- * whole traversal to be contained in a single RCU critical section. This
- * function will return the correct next descendant as long as both @pos
- * and @cgroup are accessible and @pos is a descendant of @cgroup.
+ * While this function requires cgroup_mutex or RCU read locking, it
+ * doesn't require the whole traversal to be contained in a single critical
+ * section. This function will return the correct next descendant as long
+ * as both @pos and @cgroup are accessible and @pos is a descendant of
+ * @cgroup.
*/
struct cgroup_subsys_state *
css_next_descendant_post(struct cgroup_subsys_state *pos,
@@ -3055,7 +3061,7 @@ css_next_descendant_post(struct cgroup_subsys_state *pos,
{
struct cgroup_subsys_state *next;
- WARN_ON_ONCE(!rcu_read_lock_held());
+ cgroup_assert_mutex_or_rcu_locked();
/* if first iteration, visit leftmost descendant which may be @root */
if (!pos)
@@ -5217,16 +5223,16 @@ __setup("cgroup_disable=", cgroup_disable);
* @dentry: directory dentry of interest
* @ss: subsystem of interest
*
- * Must be called under RCU read lock. The caller is responsible for
- * pinning the returned css if it needs to be accessed outside the RCU
- * critical section.
+ * Must be called under cgroup_mutex or RCU read lock. The caller is
+ * responsible for pinning the returned css if it needs to be accessed
+ * outside the critical section.
*/
struct cgroup_subsys_state *css_from_dir(struct dentry *dentry,
struct cgroup_subsys *ss)
{
struct cgroup *cgrp;
- WARN_ON_ONCE(!rcu_read_lock_held());
+ cgroup_assert_mutex_or_rcu_locked();
/* is @dentry a cgroup dir? */
if (!dentry->d_inode ||
@@ -5249,9 +5255,7 @@ struct cgroup_subsys_state *css_from_id(int id, struct cgroup_subsys *ss)
{
struct cgroup *cgrp;
- rcu_lockdep_assert(rcu_read_lock_held() ||
- lockdep_is_held(&cgroup_mutex),
- "css_from_id() needs proper protection");
+ cgroup_assert_mutex_or_rcu_locked();
cgrp = idr_find(&ss->root->cgroup_idr, id);
if (cgrp)
--
1.8.4.2
* [PATCH 2/7] cgroup: make for_each_subsys() useable under cgroup_root_mutex
From: Tejun Heo @ 2013-12-06 20:27 UTC
To: lizefan-hv44wF8Li93QT0dZR+AlfA
Cc: containers-cunTk1MwBs9QetFLy7KEm3xJsTq8ys+cHZ5vskTnxNA,
cgroups-u79uwXL29TY76Z2rM5mHXA,
linux-kernel-u79uwXL29TY76Z2rM5mHXA,
vdavydov-bzQdu9zFT3WakBO8gow8eQ, Tejun Heo, kbuild test robot
We want to use for_each_subsys() in cgroupfs_root handling where only
cgroup_root_mutex is held. The only way cgroup_subsys[] can change is
through module load/unload, so make cgroup_[un]load_subsys() grab
cgroup_root_mutex too and update the lockdep annotation in
for_each_subsys() to allow either cgroup_mutex or cgroup_root_mutex.
* Lockdep annotation is moved from the inner 'if' condition to the
outer 'for' init clause. There's no reason to execute the assertion
on every iteration.
* Loop index @i is renamed to @ssid. Indices iterating through subsys
will be [re]named to @ssid gradually.
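As a minimal usage sketch (hypothetical helper, assumed to sit in
kernel/cgroup.c where cgroup_root_mutex is in scope), iterating with
only cgroup_root_mutex held is now legal:

static void dump_loaded_subsystems(void)
{
	struct cgroup_subsys *ss;
	int ssid;

	/* cgroup_root_mutex alone now satisfies the assertion */
	mutex_lock(&cgroup_root_mutex);
	for_each_subsys(ss, ssid)
		pr_info("subsys %d: %s\n", ssid, ss->name);
	mutex_unlock(&cgroup_root_mutex);
}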
v2: cgroup_assert_mutex_or_root_locked() caused a build failure if
!CONFIG_LOCKDEP. Conditionalize its definition. The build failure
was reported by the kbuild test robot.
Signed-off-by: Tejun Heo <tj-DgEjT+Ai2ygdnm+yROfE0A@public.gmane.org>
Acked-by: Li Zefan <lizefan-hv44wF8Li93QT0dZR+AlfA@public.gmane.org>
Cc: kbuild test robot <fengguang.wu-ral2JQCrhuEAvxtiuMwx3w@public.gmane.org>
---
kernel/cgroup.c | 26 ++++++++++++++++++++------
1 file changed, 20 insertions(+), 6 deletions(-)
diff --git a/kernel/cgroup.c b/kernel/cgroup.c
index c22eecb..4a7fb40 100644
--- a/kernel/cgroup.c
+++ b/kernel/cgroup.c
@@ -98,6 +98,14 @@ static DEFINE_MUTEX(cgroup_root_mutex);
lockdep_is_held(&cgroup_mutex), \
"cgroup_mutex or RCU read lock required");
+#ifdef CONFIG_LOCKDEP
+#define cgroup_assert_mutex_or_root_locked() \
+ WARN_ON_ONCE(debug_locks && (!lockdep_is_held(&cgroup_mutex) && \
+ !lockdep_is_held(&cgroup_root_mutex)))
+#else
+#define cgroup_assert_mutex_or_root_locked() do { } while (0)
+#endif
+
/*
* cgroup destruction makes heavy use of work items and there can be a lot
* of concurrent destructions. Use a separate workqueue so that cgroup
@@ -237,14 +245,15 @@ static int notify_on_release(const struct cgroup *cgrp)
/**
* for_each_subsys - iterate all loaded cgroup subsystems
* @ss: the iteration cursor
- * @i: the index of @ss, CGROUP_SUBSYS_COUNT after reaching the end
+ * @ssid: the index of @ss, CGROUP_SUBSYS_COUNT after reaching the end
*
- * Should be called under cgroup_mutex.
+ * Iterates through all loaded subsystems. Should be called under
+ * cgroup_mutex or cgroup_root_mutex.
*/
-#define for_each_subsys(ss, i) \
- for ((i) = 0; (i) < CGROUP_SUBSYS_COUNT; (i)++) \
- if (({ lockdep_assert_held(&cgroup_mutex); \
- !((ss) = cgroup_subsys[i]); })) { } \
+#define for_each_subsys(ss, ssid) \
+ for (({ cgroup_assert_mutex_or_root_locked(); (ssid) = 0; }); \
+ (ssid) < CGROUP_SUBSYS_COUNT; (ssid)++) \
+ if (!((ss) = cgroup_subsys[(ssid)])) { } \
else
/**
@@ -4592,6 +4601,7 @@ int __init_or_module cgroup_load_subsys(struct cgroup_subsys *ss)
cgroup_init_cftsets(ss);
mutex_lock(&cgroup_mutex);
+ mutex_lock(&cgroup_root_mutex);
cgroup_subsys[ss->subsys_id] = ss;
/*
@@ -4641,10 +4651,12 @@ int __init_or_module cgroup_load_subsys(struct cgroup_subsys *ss)
goto err_unload;
/* success! */
+ mutex_unlock(&cgroup_root_mutex);
mutex_unlock(&cgroup_mutex);
return 0;
err_unload:
+ mutex_unlock(&cgroup_root_mutex);
mutex_unlock(&cgroup_mutex);
/* @ss can't be mounted here as try_module_get() would fail */
cgroup_unload_subsys(ss);
@@ -4674,6 +4686,7 @@ void cgroup_unload_subsys(struct cgroup_subsys *ss)
BUG_ON(ss->root != &cgroup_dummy_root);
mutex_lock(&cgroup_mutex);
+ mutex_lock(&cgroup_root_mutex);
offline_css(cgroup_css(cgroup_dummy_top, ss));
@@ -4708,6 +4721,7 @@ void cgroup_unload_subsys(struct cgroup_subsys *ss)
ss->css_free(cgroup_css(cgroup_dummy_top, ss));
RCU_INIT_POINTER(cgroup_dummy_top->subsys[ss->subsys_id], NULL);
+ mutex_unlock(&cgroup_root_mutex);
mutex_unlock(&cgroup_mutex);
}
EXPORT_SYMBOL_GPL(cgroup_unload_subsys);
--
1.8.4.2
* [PATCH 3/7] cgroup: reorder operations in cgroup_create()
From: Tejun Heo @ 2013-12-06 20:27 UTC
To: lizefan-hv44wF8Li93QT0dZR+AlfA
Cc: vdavydov-bzQdu9zFT3WakBO8gow8eQ, cgroups-u79uwXL29TY76Z2rM5mHXA,
containers-cunTk1MwBs9QetFLy7KEm3xJsTq8ys+cHZ5vskTnxNA,
linux-kernel-u79uwXL29TY76Z2rM5mHXA, Tejun Heo
cgroup_create() currently does the following:
1. alloc cgroup
2. alloc css's
3. create the directory and commit to cgroup creation
4. online css's
5. create cgroup and css files
The sequence performs allocations before other operations, but that
doesn't buy anything because each of the above steps may fail and
should be unrollable. Reorganize the sequence such that cgroup
operations are done before css operations:
1. alloc cgroup
2. create the directory and files and commit to cgroup creation
3. alloc css's
4. create files for and online css's
This simplifies the code a bit and enables further simplification:
separating out css creation from cgroup creation, which is necessary
for the planned unified hierarchy where css's will be created and
destroyed dynamically across the lifetime of a cgroup.
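A condensed sketch of the reordered flow (a hypothetical
simplification of the diff below; the real allocation, naming and
locking details are in the patch):

static long cgroup_create_sketch(struct cgroup *parent,
				 struct dentry *dentry, umode_t mode)
{
	struct cgroupfs_root *root = parent->root;
	struct cgroup_subsys *ss;
	struct cgroup *cgrp;
	int err;

	cgrp = kzalloc(sizeof(*cgrp), GFP_KERNEL);	/* 1. alloc cgroup */
	if (!cgrp)
		return -ENOMEM;

	err = cgroup_create_file(dentry, S_IFDIR | mode, root->sb);
	if (err < 0)
		goto err_free;		/* 2. commit point: mkdir */

	/* @cgrp is live; later failures take the destruction path */
	err = cgroup_addrm_files(cgrp, cgroup_base_files, true);
	if (err)
		goto err_destroy;

	for_each_root_subsys(root, ss) {
		/* 3-4. css_alloc, percpu_ref_init, init_css,
		 * cgroup_populate_dir, online_css */
	}
	return 0;

err_destroy:
	cgroup_destroy_locked(cgrp);
	return err;
err_free:
	kfree(cgrp);
	return err;
}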
Signed-off-by: Tejun Heo <tj-DgEjT+Ai2ygdnm+yROfE0A@public.gmane.org>
Acked-by: Li Zefan <lizefan-hv44wF8Li93QT0dZR+AlfA@public.gmane.org>
---
kernel/cgroup.c | 70 +++++++++++++++++++++++++++------------------------------
1 file changed, 33 insertions(+), 37 deletions(-)
diff --git a/kernel/cgroup.c b/kernel/cgroup.c
index 4a7fb40..30a2670 100644
--- a/kernel/cgroup.c
+++ b/kernel/cgroup.c
@@ -4144,23 +4144,6 @@ static long cgroup_create(struct cgroup *parent, struct dentry *dentry,
if (test_bit(CGRP_CPUSET_CLONE_CHILDREN, &parent->flags))
set_bit(CGRP_CPUSET_CLONE_CHILDREN, &cgrp->flags);
- for_each_root_subsys(root, ss) {
- struct cgroup_subsys_state *css;
-
- css = ss->css_alloc(cgroup_css(parent, ss));
- if (IS_ERR(css)) {
- err = PTR_ERR(css);
- goto err_free_all;
- }
- css_ar[ss->subsys_id] = css;
-
- err = percpu_ref_init(&css->refcnt, css_release);
- if (err)
- goto err_free_all;
-
- init_css(css, ss, cgrp);
- }
-
/*
* Create directory. cgroup_create_file() returns with the new
* directory locked on success so that it can be populated without
@@ -4168,7 +4151,7 @@ static long cgroup_create(struct cgroup *parent, struct dentry *dentry,
*/
err = cgroup_create_file(dentry, S_IFDIR | mode, sb);
if (err < 0)
- goto err_free_all;
+ goto err_unlock;
lockdep_assert_held(&dentry->d_inode->i_mutex);
cgrp->serial_nr = cgroup_serial_nr_next++;
@@ -4180,10 +4163,41 @@ static long cgroup_create(struct cgroup *parent, struct dentry *dentry,
/* hold a ref to the parent's dentry */
dget(parent->dentry);
+ /*
+ * @cgrp is now fully operational. If something fails after this
+ * point, it'll be released via the normal destruction path.
+ */
+ idr_replace(&root->cgroup_idr, cgrp, cgrp->id);
+
+ err = cgroup_addrm_files(cgrp, cgroup_base_files, true);
+ if (err)
+ goto err_destroy;
+
+ for_each_root_subsys(root, ss) {
+ struct cgroup_subsys_state *css;
+
+ css = ss->css_alloc(cgroup_css(parent, ss));
+ if (IS_ERR(css)) {
+ err = PTR_ERR(css);
+ goto err_destroy;
+ }
+ css_ar[ss->subsys_id] = css;
+
+ err = percpu_ref_init(&css->refcnt, css_release);
+ if (err)
+ goto err_destroy;
+
+ init_css(css, ss, cgrp);
+ }
+
/* creation succeeded, notify subsystems */
for_each_root_subsys(root, ss) {
struct cgroup_subsys_state *css = css_ar[ss->subsys_id];
+ err = cgroup_populate_dir(cgrp, 1 << ss->subsys_id);
+ if (err)
+ goto err_destroy;
+
err = online_css(css);
if (err)
goto err_destroy;
@@ -4205,30 +4219,12 @@ static long cgroup_create(struct cgroup *parent, struct dentry *dentry,
}
}
- idr_replace(&root->cgroup_idr, cgrp, cgrp->id);
-
- err = cgroup_addrm_files(cgrp, cgroup_base_files, true);
- if (err)
- goto err_destroy;
-
- err = cgroup_populate_dir(cgrp, root->subsys_mask);
- if (err)
- goto err_destroy;
-
mutex_unlock(&cgroup_mutex);
mutex_unlock(&cgrp->dentry->d_inode->i_mutex);
return 0;
-err_free_all:
- for_each_root_subsys(root, ss) {
- struct cgroup_subsys_state *css = css_ar[ss->subsys_id];
-
- if (css) {
- percpu_ref_cancel_init(&css->refcnt);
- ss->css_free(css);
- }
- }
+err_unlock:
mutex_unlock(&cgroup_mutex);
/* Release the reference count that we took on the superblock */
deactivate_super(sb);
--
1.8.4.2
* [PATCH 4/7] cgroup: combine css handling loops in cgroup_create()
From: Tejun Heo @ 2013-12-06 20:27 UTC
To: lizefan-hv44wF8Li93QT0dZR+AlfA
Cc: vdavydov-bzQdu9zFT3WakBO8gow8eQ, cgroups-u79uwXL29TY76Z2rM5mHXA,
containers-cunTk1MwBs9QetFLy7KEm3xJsTq8ys+cHZ5vskTnxNA,
linux-kernel-u79uwXL29TY76Z2rM5mHXA, Tejun Heo
Now that css operations in cgroup_create() are back-to-back, there
isn't much point in allocating css's in one loop and onlining them in
another. Merge the two loops so that a css is allocated and onlined
on each iteration.
css_ar[] is no longer necessary and is replaced with a single
pointer. This also simplifies the error handling path.
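The pattern, condensed from the diff below with comments added: the
cursor is reset to NULL whenever nothing half-constructed remains, so
the error path has at most one css to free:

	struct cgroup_subsys_state *css = NULL;

	for_each_root_subsys(root, ss) {
		css = ss->css_alloc(cgroup_css(parent, ss));
		if (IS_ERR(css)) {
			err = PTR_ERR(css);
			css = NULL;	/* nothing to free */
			goto err_destroy;
		}
		/* init, populate, online; a failure leaves @css set
		 * and err_destroy frees it */
		css = NULL;		/* consumed: owned by @cgrp now */
	}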
Signed-off-by: Tejun Heo <tj-DgEjT+Ai2ygdnm+yROfE0A@public.gmane.org>
Acked-by: Li Zefan <lizefan-hv44wF8Li93QT0dZR+AlfA@public.gmane.org>
---
kernel/cgroup.c | 25 +++++++------------------
1 file changed, 7 insertions(+), 18 deletions(-)
diff --git a/kernel/cgroup.c b/kernel/cgroup.c
index 30a2670..39e2295 100644
--- a/kernel/cgroup.c
+++ b/kernel/cgroup.c
@@ -4084,7 +4084,7 @@ static void offline_css(struct cgroup_subsys_state *css)
static long cgroup_create(struct cgroup *parent, struct dentry *dentry,
umode_t mode)
{
- struct cgroup_subsys_state *css_ar[CGROUP_SUBSYS_COUNT] = { };
+ struct cgroup_subsys_state *css = NULL;
struct cgroup *cgrp;
struct cgroup_name *name;
struct cgroupfs_root *root = parent->root;
@@ -4173,26 +4173,20 @@ static long cgroup_create(struct cgroup *parent, struct dentry *dentry,
if (err)
goto err_destroy;
+ /* let's create and online css's */
for_each_root_subsys(root, ss) {
- struct cgroup_subsys_state *css;
-
css = ss->css_alloc(cgroup_css(parent, ss));
if (IS_ERR(css)) {
err = PTR_ERR(css);
+ css = NULL;
goto err_destroy;
}
- css_ar[ss->subsys_id] = css;
err = percpu_ref_init(&css->refcnt, css_release);
if (err)
goto err_destroy;
init_css(css, ss, cgrp);
- }
-
- /* creation succeeded, notify subsystems */
- for_each_root_subsys(root, ss) {
- struct cgroup_subsys_state *css = css_ar[ss->subsys_id];
err = cgroup_populate_dir(cgrp, 1 << ss->subsys_id);
if (err)
@@ -4202,12 +4196,11 @@ static long cgroup_create(struct cgroup *parent, struct dentry *dentry,
if (err)
goto err_destroy;
- /* each css holds a ref to the cgroup's dentry and parent css */
dget(dentry);
css_get(css->parent);
/* mark it consumed for error path */
- css_ar[ss->subsys_id] = NULL;
+ css = NULL;
if (ss->broken_hierarchy && !ss->warned_broken_hierarchy &&
parent->parent) {
@@ -4237,13 +4230,9 @@ err_free_cgrp:
return err;
err_destroy:
- for_each_root_subsys(root, ss) {
- struct cgroup_subsys_state *css = css_ar[ss->subsys_id];
-
- if (css) {
- percpu_ref_cancel_init(&css->refcnt);
- ss->css_free(css);
- }
+ if (css) {
+ percpu_ref_cancel_init(&css->refcnt);
+ css->ss->css_free(css);
}
cgroup_destroy_locked(cgrp);
mutex_unlock(&cgroup_mutex);
--
1.8.4.2
* [PATCH 5/7] cgroup: factor out cgroup_subsys_state creation into create_css()
From: Tejun Heo @ 2013-12-06 20:27 UTC
To: lizefan-hv44wF8Li93QT0dZR+AlfA
Cc: vdavydov-bzQdu9zFT3WakBO8gow8eQ, cgroups-u79uwXL29TY76Z2rM5mHXA,
containers-cunTk1MwBs9QetFLy7KEm3xJsTq8ys+cHZ5vskTnxNA,
linux-kernel-u79uwXL29TY76Z2rM5mHXA, Tejun Heo
Now that all operations to create a css (cgroup_subsys_state) are
collected into a single loop in cgroup_create(), it's easy to factor
them out into their own function. Factor out css creation into
create_css(). This makes the code easier to follow and will enable
decoupling css creation from cgroup creation, which is necessary for
the planned unified hierarchy.
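As a sketch of what this buys, a hypothetical future caller could
create a css on an already-existing cgroup (function name and locking
context are illustrative only):

static int cgroup_enable_subsys(struct cgroup *cgrp,
				struct cgroup_subsys *ss)
{
	int err;

	/* create_css() asserts both of these locks */
	mutex_lock(&cgrp->dentry->d_inode->i_mutex);
	mutex_lock(&cgroup_mutex);

	err = create_css(cgrp, ss);	/* alloc, init, populate, online */

	mutex_unlock(&cgroup_mutex);
	mutex_unlock(&cgrp->dentry->d_inode->i_mutex);
	return err;
}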
Signed-off-by: Tejun Heo <tj-DgEjT+Ai2ygdnm+yROfE0A@public.gmane.org>
Acked-by: Li Zefan <lizefan-hv44wF8Li93QT0dZR+AlfA@public.gmane.org>
---
kernel/cgroup.c | 95 ++++++++++++++++++++++++++++++++++-----------------------
1 file changed, 57 insertions(+), 38 deletions(-)
diff --git a/kernel/cgroup.c b/kernel/cgroup.c
index 39e2295..d12c29f 100644
--- a/kernel/cgroup.c
+++ b/kernel/cgroup.c
@@ -4073,6 +4073,62 @@ static void offline_css(struct cgroup_subsys_state *css)
RCU_INIT_POINTER(css->cgroup->subsys[ss->subsys_id], css);
}
+/**
+ * create_css - create a cgroup_subsys_state
+ * @cgrp: the cgroup new css will be associated with
+ * @ss: the subsys of new css
+ *
+ * Create a new css associated with @cgrp - @ss pair. On success, the new
+ * css is online and installed in @cgrp with all interface files created.
+ * Returns 0 on success, -errno on failure.
+ */
+static int create_css(struct cgroup *cgrp, struct cgroup_subsys *ss)
+{
+ struct cgroup *parent = cgrp->parent;
+ struct cgroup_subsys_state *css;
+ int err;
+
+ lockdep_assert_held(&cgrp->dentry->d_inode->i_mutex);
+ lockdep_assert_held(&cgroup_mutex);
+
+ css = ss->css_alloc(cgroup_css(parent, ss));
+ if (IS_ERR(css))
+ return PTR_ERR(css);
+
+ err = percpu_ref_init(&css->refcnt, css_release);
+ if (err)
+ goto err_free;
+
+ init_css(css, ss, cgrp);
+
+ err = cgroup_populate_dir(cgrp, 1 << ss->subsys_id);
+ if (err)
+ goto err_free;
+
+ err = online_css(css);
+ if (err)
+ goto err_free;
+
+ dget(cgrp->dentry);
+ css_get(css->parent);
+
+ if (ss->broken_hierarchy && !ss->warned_broken_hierarchy &&
+ parent->parent) {
+ pr_warning("cgroup: %s (%d) created nested cgroup for controller \"%s\" which has incomplete hierarchy support. Nested cgroups may change behavior in the future.\n",
+ current->comm, current->pid, ss->name);
+ if (!strcmp(ss->name, "memory"))
+ pr_warning("cgroup: \"memory\" requires setting use_hierarchy to 1 on the root.\n");
+ ss->warned_broken_hierarchy = true;
+ }
+
+ return 0;
+
+err_free:
+ percpu_ref_cancel_init(&css->refcnt);
+ ss->css_free(css);
+ return err;
+}
+
/*
* cgroup_create - create a cgroup
* @parent: cgroup that will be parent of the new cgroup
@@ -4084,7 +4140,6 @@ static void offline_css(struct cgroup_subsys_state *css)
static long cgroup_create(struct cgroup *parent, struct dentry *dentry,
umode_t mode)
{
- struct cgroup_subsys_state *css = NULL;
struct cgroup *cgrp;
struct cgroup_name *name;
struct cgroupfs_root *root = parent->root;
@@ -4175,41 +4230,9 @@ static long cgroup_create(struct cgroup *parent, struct dentry *dentry,
/* let's create and online css's */
for_each_root_subsys(root, ss) {
- css = ss->css_alloc(cgroup_css(parent, ss));
- if (IS_ERR(css)) {
- err = PTR_ERR(css);
- css = NULL;
- goto err_destroy;
- }
-
- err = percpu_ref_init(&css->refcnt, css_release);
+ err = create_css(cgrp, ss);
if (err)
goto err_destroy;
-
- init_css(css, ss, cgrp);
-
- err = cgroup_populate_dir(cgrp, 1 << ss->subsys_id);
- if (err)
- goto err_destroy;
-
- err = online_css(css);
- if (err)
- goto err_destroy;
-
- dget(dentry);
- css_get(css->parent);
-
- /* mark it consumed for error path */
- css = NULL;
-
- if (ss->broken_hierarchy && !ss->warned_broken_hierarchy &&
- parent->parent) {
- pr_warning("cgroup: %s (%d) created nested cgroup for controller \"%s\" which has incomplete hierarchy support. Nested cgroups may change behavior in the future.\n",
- current->comm, current->pid, ss->name);
- if (!strcmp(ss->name, "memory"))
- pr_warning("cgroup: \"memory\" requires setting use_hierarchy to 1 on the root.\n");
- ss->warned_broken_hierarchy = true;
- }
}
mutex_unlock(&cgroup_mutex);
@@ -4230,10 +4253,6 @@ err_free_cgrp:
return err;
err_destroy:
- if (css) {
- percpu_ref_cancel_init(&css->refcnt);
- css->ss->css_free(css);
- }
cgroup_destroy_locked(cgrp);
mutex_unlock(&cgroup_mutex);
mutex_unlock(&dentry->d_inode->i_mutex);
--
1.8.4.2
* [PATCH 6/7] cgroup: implement for_each_css()
From: Tejun Heo @ 2013-12-06 20:27 UTC
To: lizefan; +Cc: containers, cgroups, linux-kernel, vdavydov, Tejun Heo
There are enough places where the css's of a cgroup are iterated,
which currently use for_each_root_subsys() + an explicit
cgroup_css() call. This patch implements for_each_css() and replaces
that combination with it.
This patch doesn't introduce any behavior changes.
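A minimal usage sketch (hypothetical helper, assumed to sit in
kernel/cgroup.c):

static int count_css(struct cgroup *cgrp)
{
	struct cgroup_subsys_state *css;
	int ssid, n = 0;

	lockdep_assert_held(&cgroup_mutex);

	/* visits only the css's actually enabled on @cgrp */
	for_each_css(css, ssid, cgrp)
		n++;
	return n;
}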
v2: Updated to apply cleanly on top of v2 of "cgroup: fix css leaks on
online_css() failure"
Signed-off-by: Tejun Heo <tj@kernel.org>
Acked-by: Li Zefan <lizefan@huawei.com>
---
kernel/cgroup.c | 57 +++++++++++++++++++++++++++++++--------------------------
1 file changed, 31 insertions(+), 26 deletions(-)
diff --git a/kernel/cgroup.c b/kernel/cgroup.c
index d12c29f..329fde8 100644
--- a/kernel/cgroup.c
+++ b/kernel/cgroup.c
@@ -243,6 +243,21 @@ static int notify_on_release(const struct cgroup *cgrp)
}
/**
+ * for_each_css - iterate all css's of a cgroup
+ * @css: the iteration cursor
+ * @ssid: the index of the subsystem, CGROUP_SUBSYS_COUNT after reaching the end
+ * @cgrp: the target cgroup to iterate css's of
+ *
+ * Should be called under cgroup_mutex.
+ */
+#define for_each_css(css, ssid, cgrp) \
+ for ((ssid) = 0; (ssid) < CGROUP_SUBSYS_COUNT; (ssid)++) \
+ if (!((css) = rcu_dereference_check( \
+ (cgrp)->subsys[(ssid)], \
+ lockdep_is_held(&cgroup_mutex)))) { } \
+ else
+
+/**
* for_each_subsys - iterate all loaded cgroup subsystems
* @ss: the iteration cursor
* @ssid: the index of @ss, CGROUP_SUBSYS_COUNT after reaching the end
@@ -1942,8 +1957,8 @@ static int cgroup_attach_task(struct cgroup *cgrp, struct task_struct *tsk,
bool threadgroup)
{
int retval, i, group_size;
- struct cgroup_subsys *ss, *failed_ss = NULL;
struct cgroupfs_root *root = cgrp->root;
+ struct cgroup_subsys_state *css, *failed_css = NULL;
/* threadgroup list cursor and array */
struct task_struct *leader = tsk;
struct task_and_cgroup *tc;
@@ -2016,13 +2031,11 @@ static int cgroup_attach_task(struct cgroup *cgrp, struct task_struct *tsk,
/*
* step 1: check that we can legitimately attach to the cgroup.
*/
- for_each_root_subsys(root, ss) {
- struct cgroup_subsys_state *css = cgroup_css(cgrp, ss);
-
- if (ss->can_attach) {
- retval = ss->can_attach(css, &tset);
+ for_each_css(css, i, cgrp) {
+ if (css->ss->can_attach) {
+ retval = css->ss->can_attach(css, &tset);
if (retval) {
- failed_ss = ss;
+ failed_css = css;
goto out_cancel_attach;
}
}
@@ -2058,12 +2071,9 @@ static int cgroup_attach_task(struct cgroup *cgrp, struct task_struct *tsk,
/*
* step 4: do subsystem attach callbacks.
*/
- for_each_root_subsys(root, ss) {
- struct cgroup_subsys_state *css = cgroup_css(cgrp, ss);
-
- if (ss->attach)
- ss->attach(css, &tset);
- }
+ for_each_css(css, i, cgrp)
+ if (css->ss->attach)
+ css->ss->attach(css, &tset);
/*
* step 5: success! and cleanup
@@ -2080,13 +2090,11 @@ out_put_css_set_refs:
}
out_cancel_attach:
if (retval) {
- for_each_root_subsys(root, ss) {
- struct cgroup_subsys_state *css = cgroup_css(cgrp, ss);
-
- if (ss == failed_ss)
+ for_each_css(css, i, cgrp) {
+ if (css == failed_css)
break;
- if (ss->cancel_attach)
- ss->cancel_attach(css, &tset);
+ if (css->ss->cancel_attach)
+ css->ss->cancel_attach(css, &tset);
}
}
out_free_group_list:
@@ -4375,9 +4383,10 @@ static int cgroup_destroy_locked(struct cgroup *cgrp)
__releases(&cgroup_mutex) __acquires(&cgroup_mutex)
{
struct dentry *d = cgrp->dentry;
- struct cgroup_subsys *ss;
+ struct cgroup_subsys_state *css;
struct cgroup *child;
bool empty;
+ int ssid;
lockdep_assert_held(&d->d_inode->i_mutex);
lockdep_assert_held(&cgroup_mutex);
@@ -4413,12 +4422,8 @@ static int cgroup_destroy_locked(struct cgroup *cgrp)
* will be invoked to perform the rest of destruction once the
* percpu refs of all css's are confirmed to be killed.
*/
- for_each_root_subsys(cgrp->root, ss) {
- struct cgroup_subsys_state *css = cgroup_css(cgrp, ss);
-
- if (css)
- kill_css(css);
- }
+ for_each_css(css, ssid, cgrp)
+ kill_css(css);
/*
* Mark @cgrp dead. This prevents further task migration and child
--
1.8.4.2
* [PATCH 7/7] cgroup: remove for_each_root_subsys()
From: Tejun Heo @ 2013-12-06 20:27 UTC
To: lizefan-hv44wF8Li93QT0dZR+AlfA
Cc: vdavydov-bzQdu9zFT3WakBO8gow8eQ, cgroups-u79uwXL29TY76Z2rM5mHXA,
containers-cunTk1MwBs9QetFLy7KEm3xJsTq8ys+cHZ5vskTnxNA,
linux-kernel-u79uwXL29TY76Z2rM5mHXA, Tejun Heo
After the previous patch, which introduced for_each_css(),
for_each_root_subsys() has only two users left. This patch replaces
it with for_each_subsys() + explicit subsys_mask testing and removes
for_each_root_subsys() along with the cgroupfs_root->subsys_list
handling.
This patch doesn't introduce any behavior changes.
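A minimal sketch of the replacement idiom (hypothetical helper):

static void print_root_subsystems(struct cgroupfs_root *root)
{
	struct cgroup_subsys *ss;
	int ssid;

	cgroup_assert_mutex_or_root_locked();

	/* for_each_subsys() walks all loaded subsystems; the mask
	 * narrows it to those attached to @root */
	for_each_subsys(ss, ssid)
		if (root->subsys_mask & (1 << ssid))
			pr_info("attached: %s\n", ss->name);
}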
Signed-off-by: Tejun Heo <tj-DgEjT+Ai2ygdnm+yROfE0A@public.gmane.org>
Acked-by: Li Zefan <lizefan-hv44wF8Li93QT0dZR+AlfA@public.gmane.org>
---
include/linux/cgroup.h | 9 +--------
kernel/cgroup.c | 37 +++++++++++++++----------------------
2 files changed, 16 insertions(+), 30 deletions(-)
diff --git a/include/linux/cgroup.h b/include/linux/cgroup.h
index 8b9a594..cfaf416 100644
--- a/include/linux/cgroup.h
+++ b/include/linux/cgroup.h
@@ -319,9 +319,6 @@ struct cgroupfs_root {
/* Unique id for this hierarchy. */
int hierarchy_id;
- /* A list running through the attached subsystems */
- struct list_head subsys_list;
-
/* The root cgroup for this hierarchy */
struct cgroup top_cgroup;
@@ -617,12 +614,8 @@ struct cgroup_subsys {
#define MAX_CGROUP_TYPE_NAMELEN 32
const char *name;
- /*
- * Link to parent, and list entry in parent's children.
- * Protected by cgroup_lock()
- */
+ /* link to parent, protected by cgroup_lock() */
struct cgroupfs_root *root;
- struct list_head sibling;
/* list of cftype_sets */
struct list_head cftsets;
diff --git a/kernel/cgroup.c b/kernel/cgroup.c
index 329fde8..fb1193b 100644
--- a/kernel/cgroup.c
+++ b/kernel/cgroup.c
@@ -283,10 +283,6 @@ static int notify_on_release(const struct cgroup *cgrp)
for ((i) = 0; (i) < CGROUP_BUILTIN_SUBSYS_COUNT && \
(((ss) = cgroup_subsys[i]) || true); (i)++)
-/* iterate each subsystem attached to a hierarchy */
-#define for_each_root_subsys(root, ss) \
- list_for_each_entry((ss), &(root)->subsys_list, sibling)
-
/* iterate across the active hierarchies */
#define for_each_active_root(root) \
list_for_each_entry((root), &cgroup_roots, root_list)
@@ -1033,7 +1029,6 @@ static int rebind_subsystems(struct cgroupfs_root *root,
cgroup_css(cgroup_dummy_top, ss));
cgroup_css(cgrp, ss)->cgroup = cgrp;
- list_move(&ss->sibling, &root->subsys_list);
ss->root = root;
if (ss->bind)
ss->bind(cgroup_css(cgrp, ss));
@@ -1052,7 +1047,6 @@ static int rebind_subsystems(struct cgroupfs_root *root,
RCU_INIT_POINTER(cgrp->subsys[i], NULL);
cgroup_subsys[i]->root = &cgroup_dummy_root;
- list_move(&ss->sibling, &cgroup_dummy_root.subsys_list);
/* subsystem is now free - drop reference on module */
module_put(ss->module);
@@ -1079,10 +1073,12 @@ static int cgroup_show_options(struct seq_file *seq, struct dentry *dentry)
{
struct cgroupfs_root *root = dentry->d_sb->s_fs_info;
struct cgroup_subsys *ss;
+ int ssid;
mutex_lock(&cgroup_root_mutex);
- for_each_root_subsys(root, ss)
- seq_printf(seq, ",%s", ss->name);
+ for_each_subsys(ss, ssid)
+ if (root->subsys_mask & (1 << ssid))
+ seq_printf(seq, ",%s", ss->name);
if (root->flags & CGRP_ROOT_SANE_BEHAVIOR)
seq_puts(seq, ",sane_behavior");
if (root->flags & CGRP_ROOT_NOPREFIX)
@@ -1352,7 +1348,6 @@ static void init_cgroup_root(struct cgroupfs_root *root)
{
struct cgroup *cgrp = &root->top_cgroup;
- INIT_LIST_HEAD(&root->subsys_list);
INIT_LIST_HEAD(&root->root_list);
root->number_of_cgroups = 1;
cgrp->root = root;
@@ -4151,7 +4146,7 @@ static long cgroup_create(struct cgroup *parent, struct dentry *dentry,
struct cgroup *cgrp;
struct cgroup_name *name;
struct cgroupfs_root *root = parent->root;
- int err = 0;
+ int ssid, err = 0;
struct cgroup_subsys *ss;
struct super_block *sb = root->sb;
@@ -4237,10 +4232,12 @@ static long cgroup_create(struct cgroup *parent, struct dentry *dentry,
goto err_destroy;
/* let's create and online css's */
- for_each_root_subsys(root, ss) {
- err = create_css(cgrp, ss);
- if (err)
- goto err_destroy;
+ for_each_subsys(ss, ssid) {
+ if (root->subsys_mask & (1 << ssid)) {
+ err = create_css(cgrp, ss);
+ if (err)
+ goto err_destroy;
+ }
}
mutex_unlock(&cgroup_mutex);
@@ -4536,7 +4533,6 @@ static void __init cgroup_init_subsys(struct cgroup_subsys *ss)
cgroup_init_cftsets(ss);
/* Create the top cgroup state for this subsystem */
- list_add(&ss->sibling, &cgroup_dummy_root.subsys_list);
ss->root = &cgroup_dummy_root;
css = ss->css_alloc(cgroup_css(cgroup_dummy_top, ss));
/* We don't handle early failures gracefully */
@@ -4626,7 +4622,6 @@ int __init_or_module cgroup_load_subsys(struct cgroup_subsys *ss)
return PTR_ERR(css);
}
- list_add(&ss->sibling, &cgroup_dummy_root.subsys_list);
ss->root = &cgroup_dummy_root;
/* our new subsystem will be attached to the dummy hierarchy. */
@@ -4702,9 +4697,6 @@ void cgroup_unload_subsys(struct cgroup_subsys *ss)
/* deassign the subsys_id */
cgroup_subsys[ss->subsys_id] = NULL;
- /* remove subsystem from the dummy root's list of subsystems */
- list_del_init(&ss->sibling);
-
/*
* disentangle the css from all css_sets attached to the dummy
* top. as in loading, we need to pay our respects to the hashtable
@@ -4901,11 +4893,12 @@ int proc_cgroup_show(struct seq_file *m, void *v)
for_each_active_root(root) {
struct cgroup_subsys *ss;
struct cgroup *cgrp;
- int count = 0;
+ int ssid, count = 0;
seq_printf(m, "%d:", root->hierarchy_id);
- for_each_root_subsys(root, ss)
- seq_printf(m, "%s%s", count++ ? "," : "", ss->name);
+ for_each_subsys(ss, ssid)
+ if (root->subsys_mask & (1 << ssid))
+ seq_printf(m, "%s%s", count++ ? "," : "", ss->name);
if (strlen(root->name))
seq_printf(m, "%sname=%s", count ? "," : "",
root->name);
--
1.8.4.2
* Re: [PATCHSET REPOST cgroup/for-3.14] cgroup: factor out css creation into create_css()
From: Li Zefan @ 2013-12-09 9:15 UTC
To: Tejun Heo
Cc: vdavydov-bzQdu9zFT3WakBO8gow8eQ, cgroups-u79uwXL29TY76Z2rM5mHXA,
containers-cunTk1MwBs9QetFLy7KEm3xJsTq8ys+cHZ5vskTnxNA,
linux-kernel-u79uwXL29TY76Z2rM5mHXA
On 2013/12/7 4:27, Tejun Heo wrote:
> Hello,
>
> This is a repost of the following.
>
> http://thread.gmane.org/gmane.linux.kernel.cgroups/8981
>
> It got reviewed and acked then, but I somehow forgot to apply it, and
> Vladimir reporting the same bug that the first patch in the original
> series fixed reminded me. The first patch is already applied to
> cgroup/for-3.13-fixes, which is pulled into for-3.14 for this series.
Yeah, that's why I had a vivid memory of the bug having already been
fixed when I saw Vladimir's report. :)