* [RFC PATCH 5/5] refresh VM committed space after a task migration
@ 2008-06-09 23:33 Andrea Righi
2008-06-10 17:41 ` Dave Hansen
0 siblings, 1 reply; 3+ messages in thread
From: Andrea Righi @ 2008-06-09 23:33 UTC (permalink / raw)
To: balbir
Cc: menage, kamezawa.hiroyu, kosaki.motohiro, xemul, linux-kernel,
containers, Andrea Righi
Update the VM committed space statistics when a task is migrated from one
cgroup to another. To implement this feature we must keep track of the space
committed by each task (accounted directly in the task_struct).
Signed-off-by: Andrea Righi <righi.andrea@gmail.com>
---
include/linux/sched.h | 3 +++
kernel/fork.c | 3 +++
mm/memcontrol.c | 6 ++++++
3 files changed, 12 insertions(+), 0 deletions(-)
diff --git a/include/linux/sched.h b/include/linux/sched.h
index ae0be3c..8b458df 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -1277,6 +1277,9 @@ struct task_struct {
/* cg_list protected by css_set_lock and tsk->alloc_lock */
struct list_head cg_list;
#endif
+#ifdef CONFIG_CGROUP_MEM_RES_CTLR
+ atomic_long_t vm_committed_space;
+#endif
#ifdef CONFIG_FUTEX
struct robust_list_head __user *robust_list;
#ifdef CONFIG_COMPAT
diff --git a/kernel/fork.c b/kernel/fork.c
index eaffa56..9fafbdb 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -219,6 +219,9 @@ static struct task_struct *dup_task_struct(struct task_struct *orig)
/* One for us, one for whoever does the "release_task()" (usually parent) */
atomic_set(&tsk->usage,2);
atomic_set(&tsk->fs_excl, 0);
+#ifdef CONFIG_CGROUP_MEM_RES_CTLR
+ atomic_long_set(&tsk->vm_committed_space, 0);
+#endif
#ifdef CONFIG_BLK_DEV_IO_TRACE
tsk->btrace_seq = 0;
#endif
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index e3e34e9..bc4923e 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -1334,6 +1334,7 @@ static void mem_cgroup_move_task(struct cgroup_subsys *ss,
{
struct mm_struct *mm;
struct mem_cgroup *mem, *old_mem;
+ long committed;
if (mem_cgroup_subsys.disabled)
return;
@@ -1355,6 +1356,11 @@ static void mem_cgroup_move_task(struct cgroup_subsys *ss,
if (!thread_group_leader(p))
goto out;
+ preempt_disable();
+ committed = atomic_long_read(&p->vm_committed_space);
+ atomic_long_sub(committed, &old_mem->vmacct.vm_committed_space);
+ atomic_long_add(committed, &mem->vmacct.vm_committed_space);
+ preempt_enable();
out:
mmput(mm);
}
--
1.5.4.3
* Re: [RFC PATCH 5/5] refresh VM committed space after a task migration
From: Dave Hansen @ 2008-06-10 17:41 UTC (permalink / raw)
To: Andrea Righi
Cc: balbir, linux-kernel, kosaki.motohiro, containers, menage, xemul
On Tue, 2008-06-10 at 01:33 +0200, Andrea Righi wrote:
> + preempt_disable();
> + committed = atomic_long_read(&p->vm_committed_space);
> + atomic_long_sub(committed, &old_mem->vmacct.vm_committed_space);
> + atomic_long_add(committed, &mem->vmacct.vm_committed_space);
> + preempt_enable();
> out:
> mmput(mm);
> }
Why bother with the preempt stuff here? What does that actually protect
against? I assume that you're trying to keep other tasks that might run
on this CPU from seeing weird, inconsistent numbers in here. Is there
some other lock that keeps *other* cpus from seeing this?
In any case, I think it needs a big, fat comment.
-- Dave
* Re: [RFC PATCH 5/5] refresh VM committed space after a task migration
From: Andrea Righi @ 2008-06-11 10:37 UTC (permalink / raw)
To: Dave Hansen
Cc: balbir, linux-kernel, kosaki.motohiro, containers, menage, xemul
Dave Hansen wrote:
> On Tue, 2008-06-10 at 01:33 +0200, Andrea Righi wrote:
>> + preempt_disable();
>> + committed = atomic_long_read(&p->vm_committed_space);
>> + atomic_long_sub(committed, &old_mem->vmacct.vm_committed_space);
>> + atomic_long_add(committed, &mem->vmacct.vm_committed_space);
>> + preempt_enable();
>> out:
>> mmput(mm);
>> }
>
> Why bother with the preempt stuff here? What does that actually protect
> against? I assume that you're trying to keep other tasks that might run
> on this CPU from seeing weird, inconsistent numbers in here. Is there
> some other lock that keeps *other* cpus from seeing this?
>
> In any case, I think it needs a big, fat comment.
Yes, true: mem_cgroup_move_task() is called after the task->cgroups
pointer has been changed. So even if the task changes its committed space
between the atomic_long_sub() and the atomic_long_add(), it will be
correctly accounted in the new cgroup.
-Andrea