* [PATCH 1/4] cgroup: remove some useless forward declarations
@ 2014-09-17 10:18 Li Zefan
[not found] ` <54195FE1.9030607-hv44wF8Li93QT0dZR+AlfA@public.gmane.org>
0 siblings, 1 reply; 6+ messages in thread
From: Li Zefan @ 2014-09-17 10:18 UTC (permalink / raw)
To: Tejun Heo; +Cc: cgroups, LKML
Signed-off-by: Zefan Li <lizefan-hv44wF8Li93QT0dZR+AlfA@public.gmane.org>
---
include/linux/cgroup.h | 1 -
kernel/cgroup.c | 2 --
2 files changed, 3 deletions(-)
diff --git a/include/linux/cgroup.h b/include/linux/cgroup.h
index b5223c5..f7898e0 100644
--- a/include/linux/cgroup.h
+++ b/include/linux/cgroup.h
@@ -27,7 +27,6 @@
struct cgroup_root;
struct cgroup_subsys;
-struct inode;
struct cgroup;
extern int cgroup_init_early(void);
diff --git a/kernel/cgroup.c b/kernel/cgroup.c
index 940aced..0ce9d9e 100644
--- a/kernel/cgroup.c
+++ b/kernel/cgroup.c
@@ -185,7 +185,6 @@ static int need_forkexit_callback __read_mostly;
static struct cftype cgroup_dfl_base_files[];
static struct cftype cgroup_legacy_base_files[];
-static void cgroup_put(struct cgroup *cgrp);
static int rebind_subsystems(struct cgroup_root *dst_root,
unsigned int ss_mask);
static int cgroup_destroy_locked(struct cgroup *cgrp);
@@ -195,7 +194,6 @@ static void css_release(struct percpu_ref *ref);
static void kill_css(struct cgroup_subsys_state *css);
static int cgroup_addrm_files(struct cgroup *cgrp, struct cftype cfts[],
bool is_add);
-static void cgroup_pidlist_destroy_all(struct cgroup *cgrp);
/* IDR wrappers which synchronize using cgroup_idr_lock */
static int cgroup_idr_alloc(struct idr *idr, void *ptr, int start, int end,
--
1.8.0.2
* [PATCH 2/4] cgroup: remove redundant code in cgroup_rmdir()
[not found] ` <54195FE1.9030607-hv44wF8Li93QT0dZR+AlfA@public.gmane.org>
@ 2014-09-17 10:18 ` Li Zefan
2014-09-17 10:19 ` [PATCH 3/4] cgroup: remove bogus comments Li Zefan
2014-09-17 10:20 ` [PATCH 4/4] cgroup: reuse css->destroy_work for release agent Li Zefan
2 siblings, 0 replies; 6+ messages in thread
From: Li Zefan @ 2014-09-17 10:18 UTC (permalink / raw)
To: Tejun Heo; +Cc: cgroups, LKML
We no longer clear kn->priv in cgroup_rmdir(), so we don't need
to get an extra refcnt.
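For reference, this is roughly what cgroup_rmdir() reduces to after the
change (reconstructed from the hunk below; the local declarations above
the hunk are assumed):

	static int cgroup_rmdir(struct kernfs_node *kn)
	{
		struct cgroup *cgrp;
		int ret = 0;

		/* cgroup_kn_lock_live() returns NULL if @kn is already being removed */
		cgrp = cgroup_kn_lock_live(kn);
		if (!cgrp)
			return 0;

		/* no extra get/put around the destroy is needed anymore */
		ret = cgroup_destroy_locked(cgrp);

		cgroup_kn_unlock(kn);
		return ret;
	}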
Signed-off-by: Zefan Li <lizefan-hv44wF8Li93QT0dZR+AlfA@public.gmane.org>
---
kernel/cgroup.c | 3 ---
1 file changed, 3 deletions(-)
diff --git a/kernel/cgroup.c b/kernel/cgroup.c
index 0ce9d9e..26b8cb9 100644
--- a/kernel/cgroup.c
+++ b/kernel/cgroup.c
@@ -4841,13 +4841,10 @@ static int cgroup_rmdir(struct kernfs_node *kn)
cgrp = cgroup_kn_lock_live(kn);
if (!cgrp)
return 0;
- cgroup_get(cgrp); /* for @kn->priv clearing */
ret = cgroup_destroy_locked(cgrp);
cgroup_kn_unlock(kn);
-
- cgroup_put(cgrp);
return ret;
}
--
1.8.0.2
* [PATCH 3/4] cgroup: remove bogus comments
[not found] ` <54195FE1.9030607-hv44wF8Li93QT0dZR+AlfA@public.gmane.org>
2014-09-17 10:18 ` [PATCH 2/4] cgroup: remove redundant code in cgroup_rmdir() Li Zefan
@ 2014-09-17 10:19 ` Li Zefan
[not found] ` <5419602C.4080303-hv44wF8Li93QT0dZR+AlfA@public.gmane.org>
2014-09-17 10:20 ` [PATCH 4/4] cgroup: reuse css->destroy_work for release agent Li Zefan
2 siblings, 1 reply; 6+ messages in thread
From: Li Zefan @ 2014-09-17 10:19 UTC (permalink / raw)
To: Tejun Heo; +Cc: cgroups, LKML
We never grab the cgroup mutex in the fork and exit paths, regardless
of whether notify_on_release is set.
Signed-off-by: Zefan Li <lizefan-hv44wF8Li93QT0dZR+AlfA@public.gmane.org>
---
kernel/cgroup.c | 8 --------
1 file changed, 8 deletions(-)
diff --git a/kernel/cgroup.c b/kernel/cgroup.c
index 26b8cb9..1abb554 100644
--- a/kernel/cgroup.c
+++ b/kernel/cgroup.c
@@ -967,14 +967,6 @@ static struct cgroup *task_cgroup_from_root(struct task_struct *task,
* knows that the cgroup won't be removed, as cgroup_rmdir()
* needs that mutex.
*
- * The fork and exit callbacks cgroup_fork() and cgroup_exit(), don't
- * (usually) take cgroup_mutex. These are the two most performance
- * critical pieces of code here. The exception occurs on cgroup_exit(),
- * when a task in a notify_on_release cgroup exits. Then cgroup_mutex
- * is taken, and if the cgroup count is zero, a usermode call made
- * to the release agent with the name of the cgroup (path relative to
- * the root of cgroup file system) as the argument.
- *
* A cgroup can only be deleted if both its 'count' of using tasks
* is zero, and its list of 'children' cgroups is empty. Since all
* tasks in the system use _some_ cgroup, and since there is always at
--
1.8.0.2
* [PATCH 4/4] cgroup: reuse css->destroy_work for release agent
[not found] ` <54195FE1.9030607-hv44wF8Li93QT0dZR+AlfA@public.gmane.org>
2014-09-17 10:18 ` [PATCH 2/4] cgroup: remove redundant code in cgroup_rmdir() Li Zefan
2014-09-17 10:19 ` [PATCH 3/4] cgroup: remove bogus comments Li Zefan
@ 2014-09-17 10:20 ` Li Zefan
[not found] ` <5419606D.4070707-hv44wF8Li93QT0dZR+AlfA@public.gmane.org>
2 siblings, 1 reply; 6+ messages in thread
From: Li Zefan @ 2014-09-17 10:20 UTC (permalink / raw)
To: Tejun Heo; +Cc: cgroups, LKML
Currently we use a single global work item to schedule the release
agent for removable cgroups. We can reuse css->destroy_work for this
instead, which saves a few lines of code.
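The core of the change is the queueing pattern in check_for_release():
instead of linking the cgroup onto the global release_list and kicking
the single release_agent_work, we pin the cgroup with cgroup_tryget()
and queue its own work item. Condensed from the diff below, the
function ends up looking like this:

	static void check_for_release(struct cgroup *cgrp)
	{
		if (cgroup_is_releasable(cgrp) && list_empty(&cgrp->cset_links) &&
		    !css_has_online_children(&cgrp->self)) {
			/*
			 * Pin the cgroup so it can only be freed after
			 * cgroup_release_agent() has run.
			 */
			if (!cgroup_tryget(cgrp))
				return;

			/* already queued - drop the reference we just took */
			if (!queue_work(cgroup_destroy_wq, &cgrp->self.destroy_work))
				cgroup_put(cgrp);
		}
	}

cgroup_release_agent() then drops that reference with cgroup_put() once
the usermode helper has been invoked.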
Signed-off-by: Zefan Li <lizefan-hv44wF8Li93QT0dZR+AlfA@public.gmane.org>
---
include/linux/cgroup.h | 7 ----
kernel/cgroup.c | 108 ++++++++++++++++++-------------------------------
2 files changed, 39 insertions(+), 76 deletions(-)
diff --git a/include/linux/cgroup.h b/include/linux/cgroup.h
index f7898e0..97da407 100644
--- a/include/linux/cgroup.h
+++ b/include/linux/cgroup.h
@@ -234,13 +234,6 @@ struct cgroup {
struct list_head e_csets[CGROUP_SUBSYS_COUNT];
/*
- * Linked list running through all cgroups that can
- * potentially be reaped by the release agent. Protected by
- * release_list_lock
- */
- struct list_head release_list;
-
- /*
* list of pidlists, up to two for each namespace (one for procs, one
* for tasks); created on demand.
*/
diff --git a/kernel/cgroup.c b/kernel/cgroup.c
index 1abb554..5b6566c 100644
--- a/kernel/cgroup.c
+++ b/kernel/cgroup.c
@@ -392,12 +392,7 @@ static int notify_on_release(const struct cgroup *cgrp)
; \
else
-/* the list of cgroups eligible for automatic release. Protected by
- * release_list_lock */
-static LIST_HEAD(release_list);
-static DEFINE_RAW_SPINLOCK(release_list_lock);
static void cgroup_release_agent(struct work_struct *work);
-static DECLARE_WORK(release_agent_work, cgroup_release_agent);
static void check_for_release(struct cgroup *cgrp);
/*
@@ -1577,7 +1572,6 @@ static void init_cgroup_housekeeping(struct cgroup *cgrp)
INIT_LIST_HEAD(&cgrp->self.sibling);
INIT_LIST_HEAD(&cgrp->self.children);
INIT_LIST_HEAD(&cgrp->cset_links);
- INIT_LIST_HEAD(&cgrp->release_list);
INIT_LIST_HEAD(&cgrp->pidlists);
mutex_init(&cgrp->pidlist_mutex);
cgrp->self.cgroup = cgrp;
@@ -1587,6 +1581,7 @@ static void init_cgroup_housekeeping(struct cgroup *cgrp)
INIT_LIST_HEAD(&cgrp->e_csets[ssid]);
init_waitqueue_head(&cgrp->offline_waitq);
+ INIT_WORK(&cgrp->self.destroy_work, cgroup_release_agent);
}
static void init_cgroup_root(struct cgroup_root *root,
@@ -4804,12 +4799,6 @@ static int cgroup_destroy_locked(struct cgroup *cgrp)
for_each_css(css, ssid, cgrp)
kill_css(css);
- /* CSS_ONLINE is clear, remove from ->release_list for the last time */
- raw_spin_lock(&release_list_lock);
- if (!list_empty(&cgrp->release_list))
- list_del_init(&cgrp->release_list);
- raw_spin_unlock(&release_list_lock);
-
/*
* Remove @cgrp directory along with the base files. @cgrp has an
* extra ref on its kn.
@@ -5274,21 +5263,14 @@ static void check_for_release(struct cgroup *cgrp)
if (cgroup_is_releasable(cgrp) && list_empty(&cgrp->cset_links) &&
!css_has_online_children(&cgrp->self)) {
/*
- * Control Group is currently removeable. If it's not
- * already queued for a userspace notification, queue
- * it now
+ * get a reference, so the cgroup can only be freed
+ * after the release work is done.
*/
- int need_schedule_work = 0;
+ if (!cgroup_tryget(cgrp))
+ return;
- raw_spin_lock(&release_list_lock);
- if (!cgroup_is_dead(cgrp) &&
- list_empty(&cgrp->release_list)) {
- list_add(&cgrp->release_list, &release_list);
- need_schedule_work = 1;
- }
- raw_spin_unlock(&release_list_lock);
- if (need_schedule_work)
- schedule_work(&release_agent_work);
+ if (!queue_work(cgroup_destroy_wq, &cgrp->self.destroy_work))
+ cgroup_put(cgrp);
}
}
@@ -5317,52 +5299,40 @@ static void check_for_release(struct cgroup *cgrp)
*/
static void cgroup_release_agent(struct work_struct *work)
{
- BUG_ON(work != &release_agent_work);
+ struct cgroup_subsys_state *css =
+ container_of(work, struct cgroup_subsys_state, destroy_work);
+ struct cgroup *cgrp = css->cgroup;
+ char *pathbuf = NULL, *agentbuf = NULL, *path;
+ char *argv[3], *envp[3];
+
mutex_lock(&cgroup_mutex);
- raw_spin_lock(&release_list_lock);
- while (!list_empty(&release_list)) {
- char *argv[3], *envp[3];
- int i;
- char *pathbuf = NULL, *agentbuf = NULL, *path;
- struct cgroup *cgrp = list_entry(release_list.next,
- struct cgroup,
- release_list);
- list_del_init(&cgrp->release_list);
- raw_spin_unlock(&release_list_lock);
- pathbuf = kmalloc(PATH_MAX, GFP_KERNEL);
- if (!pathbuf)
- goto continue_free;
- path = cgroup_path(cgrp, pathbuf, PATH_MAX);
- if (!path)
- goto continue_free;
- agentbuf = kstrdup(cgrp->root->release_agent_path, GFP_KERNEL);
- if (!agentbuf)
- goto continue_free;
-
- i = 0;
- argv[i++] = agentbuf;
- argv[i++] = path;
- argv[i] = NULL;
-
- i = 0;
- /* minimal command environment */
- envp[i++] = "HOME=/";
- envp[i++] = "PATH=/sbin:/bin:/usr/sbin:/usr/bin";
- envp[i] = NULL;
-
- /* Drop the lock while we invoke the usermode helper,
- * since the exec could involve hitting disk and hence
- * be a slow process */
- mutex_unlock(&cgroup_mutex);
- call_usermodehelper(argv[0], argv, envp, UMH_WAIT_EXEC);
- mutex_lock(&cgroup_mutex);
- continue_free:
- kfree(pathbuf);
- kfree(agentbuf);
- raw_spin_lock(&release_list_lock);
- }
- raw_spin_unlock(&release_list_lock);
+
+ pathbuf = kmalloc(PATH_MAX, GFP_KERNEL);
+ agentbuf = kstrdup(cgrp->root->release_agent_path, GFP_KERNEL);
+ if (!pathbuf || !agentbuf)
+ goto out;
+
+ path = cgroup_path(cgrp, pathbuf, PATH_MAX);
+ if (!path)
+ goto out;
+
+ argv[0] = agentbuf;
+ argv[1] = path;
+ argv[2] = NULL;
+
+ /* minimal command environment */
+ envp[0] = "HOME=/";
+ envp[1] = "PATH=/sbin:/bin:/usr/sbin:/usr/bin";
+ envp[2] = NULL;
+
mutex_unlock(&cgroup_mutex);
+ call_usermodehelper(argv[0], argv, envp, UMH_WAIT_EXEC);
+out:
+ kfree(agentbuf);
+ kfree(pathbuf);
+
+ /* drop the reference we got before we queue the work. */
+ cgroup_put(cgrp);
}
static int __init cgroup_disable(char *str)
--
1.8.0.2
* Re: [PATCH 3/4] cgroup: remove bogus comments
[not found] ` <5419602C.4080303-hv44wF8Li93QT0dZR+AlfA@public.gmane.org>
@ 2014-09-17 21:36 ` Tejun Heo
0 siblings, 0 replies; 6+ messages in thread
From: Tejun Heo @ 2014-09-17 21:36 UTC (permalink / raw)
To: Li Zefan; +Cc: cgroups, LKML
On Wed, Sep 17, 2014 at 06:19:24PM +0800, Li Zefan wrote:
> We never grab the cgroup mutex in the fork and exit paths, regardless
> of whether notify_on_release is set.
>
> Signed-off-by: Zefan Li <lizefan-hv44wF8Li93QT0dZR+AlfA@public.gmane.org>
Applied 1-3 to cgroup/for-3.18.
Thanks.
--
tejun
* Re: [PATCH 4/4] cgroup: reuse css->destroy_work for release agent
[not found] ` <5419606D.4070707-hv44wF8Li93QT0dZR+AlfA@public.gmane.org>
@ 2014-09-17 21:37 ` Tejun Heo
0 siblings, 0 replies; 6+ messages in thread
From: Tejun Heo @ 2014-09-17 21:37 UTC (permalink / raw)
To: Li Zefan; +Cc: cgroups, LKML
On Wed, Sep 17, 2014 at 06:20:29PM +0800, Li Zefan wrote:
> Currently we use a single global work item to schedule the release
> agent for removable cgroups. We can reuse css->destroy_work for this
> instead, which saves a few lines of code.
>
> Signed-off-by: Zefan Li <lizefan-hv44wF8Li93QT0dZR+AlfA@public.gmane.org>
...
> @@ -1587,6 +1581,7 @@ static void init_cgroup_housekeeping(struct cgroup *cgrp)
> INIT_LIST_HEAD(&cgrp->e_csets[ssid]);
>
> init_waitqueue_head(&cgrp->offline_waitq);
> + INIT_WORK(&cgrp->self.destroy_work, cgroup_release_agent);
It's weird to overload destroy_work like this as invoking
release_agent isn't necessarily related to destruction. I like the
change overall but can we please just use a separate work item?
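A rough sketch of what a dedicated work item could look like, just for
illustration; the field name release_agent_work is a placeholder here,
not something from this series:

	/* in struct cgroup */
	/* used to schedule release agent, see check_for_release() */
	struct work_struct release_agent_work;

	/* in init_cgroup_housekeeping() */
	INIT_WORK(&cgrp->release_agent_work, cgroup_release_agent);

	/* in check_for_release(), instead of reusing self.destroy_work */
	if (!queue_work(cgroup_destroy_wq, &cgrp->release_agent_work))
		cgroup_put(cgrp);

	/* in cgroup_release_agent() */
	struct cgroup *cgrp =
		container_of(work, struct cgroup, release_agent_work);

The tryget/queue/put dance from this patch would stay the same.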
Thanks.
--
tejun
end of thread
Thread overview: 6+ messages
2014-09-17 10:18 [PATCH 1/4] cgroup: remove some useless forward declarations Li Zefan
[not found] ` <54195FE1.9030607-hv44wF8Li93QT0dZR+AlfA@public.gmane.org>
2014-09-17 10:18 ` [PATCH 2/4] cgroup: remove redundant code in cgroup_rmdir() Li Zefan
2014-09-17 10:19 ` [PATCH 3/4] cgroup: remove bogus comments Li Zefan
[not found] ` <5419602C.4080303-hv44wF8Li93QT0dZR+AlfA@public.gmane.org>
2014-09-17 21:36 ` Tejun Heo
2014-09-17 10:20 ` [PATCH 4/4] cgroup: reuse css->destroy_work for release agent Li Zefan
[not found] ` <5419606D.4070707-hv44wF8Li93QT0dZR+AlfA@public.gmane.org>
2014-09-17 21:37 ` Tejun Heo