* [PATCH -mm 1/2] memcg: remove unused mem_cgroup->oom_wakeups
@ 2015-06-03 2:38 Tejun Heo
2015-06-03 2:38 ` [PATCH -mm 2/2] memcg: convert mem_cgroup->under_oom from atomic_t to int Tejun Heo
` (2 more replies)
0 siblings, 3 replies; 9+ messages in thread
From: Tejun Heo @ 2015-06-03 2:38 UTC (permalink / raw)
To: Johannes Weiner, Michal Hocko; +Cc: cgroups, linux-mm, Andrew Morton
^ permalink raw reply [flat|nested] 9+ messages in thread

* [PATCH -mm 2/2] memcg: convert mem_cgroup->under_oom from atomic_t to int
2015-06-03 2:38 [PATCH -mm 1/2] memcg: remove unused mem_cgroup->oom_wakeups Tejun Heo
@ 2015-06-03 2:38 ` Tejun Heo
2015-06-03 13:55 ` Michal Hocko
2015-06-03 15:21 ` [PATCH v2 " Tejun Heo
2015-06-03 13:48 ` [PATCH -mm 1/2] memcg: remove unused mem_cgroup->oom_wakeups Michal Hocko
2015-06-03 15:19 ` [PATCH v2 " Tejun Heo
2 siblings, 2 replies; 9+ messages in thread
From: Tejun Heo @ 2015-06-03 2:38 UTC (permalink / raw)
To: Johannes Weiner, Michal Hocko; +Cc: cgroups, linux-mm, Andrew Morton

^ permalink raw reply [flat|nested] 9+ messages in thread
* Re: [PATCH -mm 2/2] memcg: convert mem_cgroup->under_oom from atomic_t to int 2015-06-03 2:38 ` [PATCH -mm 2/2] memcg: convert mem_cgroup->under_oom from atomic_t to int Tejun Heo @ 2015-06-03 13:55 ` Michal Hocko 2015-06-03 15:21 ` [PATCH v2 " Tejun Heo 1 sibling, 0 replies; 9+ messages in thread From: Michal Hocko @ 2015-06-03 13:55 UTC (permalink / raw) To: Tejun Heo; +Cc: Johannes Weiner, cgroups, linux-mm, Andrew Morton On Wed 03-06-15 11:38:59, Tejun Heo wrote: > From 5456f353297d6f10b45fd794674b09dd5ab502ca Mon Sep 17 00:00:00 2001 > From: Tejun Heo <tj@kernel.org> > Date: Tue, 2 Jun 2015 09:29:11 -0400 > > memcg->under_oom tracks whether the memcg is under OOM conditions and > is an atomic_t counter managed with mem_cgroup_[un]mark_under_oom(). > While atomic_t appears to be simple synchronization-wise, when used as > a synchronization construct like here, it's trickier and more > error-prone due to weak memory ordering rules, especially around > atomic_read(), and false sense of security. > > For example, both non-trivial read sites of memcg->under_oom are a bit > problematic although not being actually broken. > > * mem_cgroup_oom_register_event() > > It isn't explicit what guarantees the memory ordering between event > addition and memcg->under_oom check. This isn't broken only because > memcg_oom_lock is used for both event list and memcg->oom_lock. > > * memcg_oom_recover() > > The lockless test doesn't have any explanation why this would be > safe. > > mem_cgroup_[un]mark_under_oom() are very cold paths and there's no > point in avoiding locking memcg_oom_lock there. This patch converts > memcg->under_oom from atomic_t to int, puts their modifications under > memcg_oom_lock and documents why the lockless test in > memcg_oom_recover() is safe. > > Signed-off-by: Tejun Heo <tj@kernel.org> Acked-by: Michal Hocko <mhocko@suse.cz> > --- > mm/memcontrol.c | 29 +++++++++++++++++++++-------- > 1 file changed, 21 insertions(+), 8 deletions(-) > > diff --git a/mm/memcontrol.c b/mm/memcontrol.c > index 9f39647..4de6647 100644 > --- a/mm/memcontrol.c > +++ b/mm/memcontrol.c > @@ -285,8 +285,9 @@ struct mem_cgroup { > */ > bool use_hierarchy; > > + /* protected by memcg_oom_lock */ > bool oom_lock; > - atomic_t under_oom; > + int under_oom; > > int swappiness; > /* OOM-Killer disable */ > @@ -1809,8 +1810,10 @@ static void mem_cgroup_mark_under_oom(struct mem_cgroup *memcg) > { > struct mem_cgroup *iter; > > + spin_lock(&memcg_oom_lock); > for_each_mem_cgroup_tree(iter, memcg) > - atomic_inc(&iter->under_oom); > + iter->under_oom++; > + spin_unlock(&memcg_oom_lock); > } > > static void mem_cgroup_unmark_under_oom(struct mem_cgroup *memcg) > @@ -1819,11 +1822,13 @@ static void mem_cgroup_unmark_under_oom(struct mem_cgroup *memcg) > > /* > * When a new child is created while the hierarchy is under oom, > - * mem_cgroup_oom_lock() may not be called. We have to use > - * atomic_add_unless() here. > + * mem_cgroup_oom_lock() may not be called. Watch for underflow. 
> */ > + spin_lock(&memcg_oom_lock); > for_each_mem_cgroup_tree(iter, memcg) > - atomic_add_unless(&iter->under_oom, -1, 0); > + if (iter->under_oom > 0) > + iter->under_oom--; > + spin_unlock(&memcg_oom_lock); > } > > static DECLARE_WAIT_QUEUE_HEAD(memcg_oom_waitq); > @@ -1857,7 +1862,15 @@ static void memcg_wakeup_oom(struct mem_cgroup *memcg) > > static void memcg_oom_recover(struct mem_cgroup *memcg) > { > - if (memcg && atomic_read(&memcg->under_oom)) > + /* > + * For the following lockless ->under_oom test, the only required > + * guarantee is that it must see the state asserted by an OOM when > + * this function is called as a result of userland actions > + * triggered by the notification of the OOM. This is trivially > + * achieved by invoking mem_cgroup_mark_under_oom() before > + * triggering notification. > + */ > + if (memcg && memcg->under_oom) > memcg_wakeup_oom(memcg); > } > > @@ -3866,7 +3879,7 @@ static int mem_cgroup_oom_register_event(struct mem_cgroup *memcg, > list_add(&event->list, &memcg->oom_notify); > > /* already in OOM ? */ > - if (atomic_read(&memcg->under_oom)) > + if (memcg->under_oom) > eventfd_signal(eventfd, 1); > spin_unlock(&memcg_oom_lock); > > @@ -3895,7 +3908,7 @@ static int mem_cgroup_oom_control_read(struct seq_file *sf, void *v) > struct mem_cgroup *memcg = mem_cgroup_from_css(seq_css(sf)); > > seq_printf(sf, "oom_kill_disable %d\n", memcg->oom_kill_disable); > - seq_printf(sf, "under_oom %d\n", (bool)atomic_read(&memcg->under_oom)); > + seq_printf(sf, "under_oom %d\n", (bool)memcg->under_oom); > return 0; > } > > -- > 2.4.2 > -- Michal Hocko SUSE Labs -- To unsubscribe, send a message with 'unsubscribe linux-mm' in the body to majordomo@kvack.org. For more info on Linux MM, see: http://www.linux-mm.org/ . Don't email: <a href=mailto:"dont@kvack.org"> email@kvack.org </a> ^ permalink raw reply [flat|nested] 9+ messages in thread
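
A side note on the primitive being replaced: atomic_add_unless(&iter->under_oom, -1, 0) means "decrement unless the value is already zero", which is exactly the underflow guard the locked version now spells out explicitly. The userspace sketch below contrasts the two patterns; C11 atomics and a pthread mutex merely stand in for the kernel's atomic_t and memcg_oom_lock, so treat it as an illustration, not code from the patch.

#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

/* Lock-free flavour, mirroring atomic_add_unless(&v, -1, 0):
 * decrement only when the current value is non-zero. */
static void dec_unless_zero(atomic_int *v)
{
	int old = atomic_load(v);

	/* a failed compare-exchange refreshes 'old' with the current value */
	while (old > 0 && !atomic_compare_exchange_weak(v, &old, old - 1))
		;
}

/* Locked flavour, mirroring the patch: a plain int guarded by a lock,
 * with an explicit underflow check. */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static int under_oom;

static void dec_locked(void)
{
	pthread_mutex_lock(&lock);
	if (under_oom > 0)
		under_oom--;
	pthread_mutex_unlock(&lock);
}

int main(void)
{
	atomic_int v = 1;

	dec_unless_zero(&v);
	dec_unless_zero(&v);	/* must not go below zero */

	under_oom = 1;
	dec_locked();
	dec_locked();		/* ditto */

	printf("atomic: %d  locked: %d\n", atomic_load(&v), under_oom);
	return 0;
}

Both variants end at zero. The point of the patch is that once the (cold) writers take memcg_oom_lock anyway, the plain-int form leaves fewer memory-ordering questions at the read sites than the bare atomic_read() did.
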
* [PATCH v2 -mm 2/2] memcg: convert mem_cgroup->under_oom from atomic_t to int 2015-06-03 2:38 ` [PATCH -mm 2/2] memcg: convert mem_cgroup->under_oom from atomic_t to int Tejun Heo 2015-06-03 13:55 ` Michal Hocko @ 2015-06-03 15:21 ` Tejun Heo 2015-06-03 20:32 ` Michal Hocko 1 sibling, 1 reply; 9+ messages in thread From: Tejun Heo @ 2015-06-03 15:21 UTC (permalink / raw) To: Johannes Weiner, Michal Hocko; +Cc: cgroups, linux-mm, Andrew Morton memcg->under_oom tracks whether the memcg is under OOM conditions and is an atomic_t counter managed with mem_cgroup_[un]mark_under_oom(). While atomic_t appears to be simple synchronization-wise, when used as a synchronization construct like here, it's trickier and more error-prone due to weak memory ordering rules, especially around atomic_read(), and false sense of security. For example, both non-trivial read sites of memcg->under_oom are a bit problematic although not being actually broken. * mem_cgroup_oom_register_event() It isn't explicit what guarantees the memory ordering between event addition and memcg->under_oom check. This isn't broken only because memcg_oom_lock is used for both event list and memcg->oom_lock. * memcg_oom_recover() The lockless test doesn't have any explanation why this would be safe. mem_cgroup_[un]mark_under_oom() are very cold paths and there's no point in avoiding locking memcg_oom_lock there. This patch converts memcg->under_oom from atomic_t to int, puts their modifications under memcg_oom_lock and documents why the lockless test in memcg_oom_recover() is safe. Signed-off-by: Tejun Heo <tj@kernel.org> --- Update of the 1/2 patch causes a trivial context conflict. Refreshed. Thanks. mm/memcontrol.c | 29 +++++++++++++++++++++-------- 1 file changed, 21 insertions(+), 8 deletions(-) --- a/mm/memcontrol.c +++ b/mm/memcontrol.c @@ -285,8 +285,9 @@ struct mem_cgroup { */ bool use_hierarchy; + /* protected by memcg_oom_lock */ bool oom_lock; - atomic_t under_oom; + int under_oom; int swappiness; /* OOM-Killer disable */ @@ -1809,8 +1810,10 @@ static void mem_cgroup_mark_under_oom(st { struct mem_cgroup *iter; + spin_lock(&memcg_oom_lock); for_each_mem_cgroup_tree(iter, memcg) - atomic_inc(&iter->under_oom); + iter->under_oom++; + spin_unlock(&memcg_oom_lock); } static void mem_cgroup_unmark_under_oom(struct mem_cgroup *memcg) @@ -1819,11 +1822,13 @@ static void mem_cgroup_unmark_under_oom( /* * When a new child is created while the hierarchy is under oom, - * mem_cgroup_oom_lock() may not be called. We have to use - * atomic_add_unless() here. + * mem_cgroup_oom_lock() may not be called. Watch for underflow. */ + spin_lock(&memcg_oom_lock); for_each_mem_cgroup_tree(iter, memcg) - atomic_add_unless(&iter->under_oom, -1, 0); + if (iter->under_oom > 0) + iter->under_oom--; + spin_unlock(&memcg_oom_lock); } static DECLARE_WAIT_QUEUE_HEAD(memcg_oom_waitq); @@ -1851,7 +1856,15 @@ static int memcg_oom_wake_function(wait_ static void memcg_oom_recover(struct mem_cgroup *memcg) { - if (memcg && atomic_read(&memcg->under_oom)) + /* + * For the following lockless ->under_oom test, the only required + * guarantee is that it must see the state asserted by an OOM when + * this function is called as a result of userland actions + * triggered by the notification of the OOM. This is trivially + * achieved by invoking mem_cgroup_mark_under_oom() before + * triggering notification. 
+ */ + if (memcg && memcg->under_oom) __wake_up(&memcg_oom_waitq, TASK_NORMAL, 0, memcg); } @@ -3860,7 +3873,7 @@ static int mem_cgroup_oom_register_event list_add(&event->list, &memcg->oom_notify); /* already in OOM ? */ - if (atomic_read(&memcg->under_oom)) + if (memcg->under_oom) eventfd_signal(eventfd, 1); spin_unlock(&memcg_oom_lock); @@ -3889,7 +3902,7 @@ static int mem_cgroup_oom_control_read(s struct mem_cgroup *memcg = mem_cgroup_from_css(seq_css(sf)); seq_printf(sf, "oom_kill_disable %d\n", memcg->oom_kill_disable); - seq_printf(sf, "under_oom %d\n", (bool)atomic_read(&memcg->under_oom)); + seq_printf(sf, "under_oom %d\n", (bool)memcg->under_oom); return 0; } -- To unsubscribe, send a message with 'unsubscribe linux-mm' in the body to majordomo@kvack.org. For more info on Linux MM, see: http://www.linux-mm.org/ . Don't email: <a href=mailto:"dont@kvack.org"> email@kvack.org </a> ^ permalink raw reply [flat|nested] 9+ messages in thread
* Re: [PATCH v2 -mm 2/2] memcg: convert mem_cgroup->under_oom from atomic_t to int 2015-06-03 15:21 ` [PATCH v2 " Tejun Heo @ 2015-06-03 20:32 ` Michal Hocko 0 siblings, 0 replies; 9+ messages in thread From: Michal Hocko @ 2015-06-03 20:32 UTC (permalink / raw) To: Tejun Heo; +Cc: Johannes Weiner, cgroups, linux-mm, Andrew Morton On Thu 04-06-15 00:21:01, Tejun Heo wrote: > memcg->under_oom tracks whether the memcg is under OOM conditions and > is an atomic_t counter managed with mem_cgroup_[un]mark_under_oom(). > While atomic_t appears to be simple synchronization-wise, when used as > a synchronization construct like here, it's trickier and more > error-prone due to weak memory ordering rules, especially around > atomic_read(), and false sense of security. > > For example, both non-trivial read sites of memcg->under_oom are a bit > problematic although not being actually broken. > > * mem_cgroup_oom_register_event() > > It isn't explicit what guarantees the memory ordering between event > addition and memcg->under_oom check. This isn't broken only because > memcg_oom_lock is used for both event list and memcg->oom_lock. > > * memcg_oom_recover() > > The lockless test doesn't have any explanation why this would be > safe. > > mem_cgroup_[un]mark_under_oom() are very cold paths and there's no > point in avoiding locking memcg_oom_lock there. This patch converts > memcg->under_oom from atomic_t to int, puts their modifications under > memcg_oom_lock and documents why the lockless test in > memcg_oom_recover() is safe. > > Signed-off-by: Tejun Heo <tj@kernel.org> Acked-by: Michal Hocko <mhocko@suse.cz> Thanks! > --- > Update of the 1/2 patch causes a trivial context conflict. Refreshed. > > Thanks. > > mm/memcontrol.c | 29 +++++++++++++++++++++-------- > 1 file changed, 21 insertions(+), 8 deletions(-) > > --- a/mm/memcontrol.c > +++ b/mm/memcontrol.c > @@ -285,8 +285,9 @@ struct mem_cgroup { > */ > bool use_hierarchy; > > + /* protected by memcg_oom_lock */ > bool oom_lock; > - atomic_t under_oom; > + int under_oom; > > int swappiness; > /* OOM-Killer disable */ > @@ -1809,8 +1810,10 @@ static void mem_cgroup_mark_under_oom(st > { > struct mem_cgroup *iter; > > + spin_lock(&memcg_oom_lock); > for_each_mem_cgroup_tree(iter, memcg) > - atomic_inc(&iter->under_oom); > + iter->under_oom++; > + spin_unlock(&memcg_oom_lock); > } > > static void mem_cgroup_unmark_under_oom(struct mem_cgroup *memcg) > @@ -1819,11 +1822,13 @@ static void mem_cgroup_unmark_under_oom( > > /* > * When a new child is created while the hierarchy is under oom, > - * mem_cgroup_oom_lock() may not be called. We have to use > - * atomic_add_unless() here. > + * mem_cgroup_oom_lock() may not be called. Watch for underflow. > */ > + spin_lock(&memcg_oom_lock); > for_each_mem_cgroup_tree(iter, memcg) > - atomic_add_unless(&iter->under_oom, -1, 0); > + if (iter->under_oom > 0) > + iter->under_oom--; > + spin_unlock(&memcg_oom_lock); > } > > static DECLARE_WAIT_QUEUE_HEAD(memcg_oom_waitq); > @@ -1851,7 +1856,15 @@ static int memcg_oom_wake_function(wait_ > > static void memcg_oom_recover(struct mem_cgroup *memcg) > { > - if (memcg && atomic_read(&memcg->under_oom)) > + /* > + * For the following lockless ->under_oom test, the only required > + * guarantee is that it must see the state asserted by an OOM when > + * this function is called as a result of userland actions > + * triggered by the notification of the OOM. This is trivially > + * achieved by invoking mem_cgroup_mark_under_oom() before > + * triggering notification. 
> + */ > + if (memcg && memcg->under_oom) > __wake_up(&memcg_oom_waitq, TASK_NORMAL, 0, memcg); > } > > @@ -3860,7 +3873,7 @@ static int mem_cgroup_oom_register_event > list_add(&event->list, &memcg->oom_notify); > > /* already in OOM ? */ > - if (atomic_read(&memcg->under_oom)) > + if (memcg->under_oom) > eventfd_signal(eventfd, 1); > spin_unlock(&memcg_oom_lock); > > @@ -3889,7 +3902,7 @@ static int mem_cgroup_oom_control_read(s > struct mem_cgroup *memcg = mem_cgroup_from_css(seq_css(sf)); > > seq_printf(sf, "oom_kill_disable %d\n", memcg->oom_kill_disable); > - seq_printf(sf, "under_oom %d\n", (bool)atomic_read(&memcg->under_oom)); > + seq_printf(sf, "under_oom %d\n", (bool)memcg->under_oom); > return 0; > } > -- Michal Hocko SUSE Labs -- To unsubscribe, send a message with 'unsubscribe linux-mm' in the body to majordomo@kvack.org. For more info on Linux MM, see: http://www.linux-mm.org/ . Don't email: <a href=mailto:"dont@kvack.org"> email@kvack.org </a> ^ permalink raw reply [flat|nested] 9+ messages in thread
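
Since both read sites the patch discusses feed the cgroup v1 OOM notification, here is what that notification looks like from userland. This is a minimal sketch of the documented memory.oom_control / cgroup.event_control / eventfd interface, not code from this thread; the cgroup path is a made-up example and error handling is kept to a minimum.

#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/eventfd.h>
#include <unistd.h>

/* Hypothetical v1 memory cgroup -- adjust to your mount point. */
#define CG "/sys/fs/cgroup/memory/test"

int main(void)
{
	char buf[64];
	uint64_t count;
	ssize_t n;
	int efd = eventfd(0, 0);
	int ocfd = open(CG "/memory.oom_control", O_RDONLY);
	int ecfd = open(CG "/cgroup.event_control", O_WRONLY);

	if (efd < 0 || ocfd < 0 || ecfd < 0) {
		perror("setup");
		return 1;
	}

	/* Register "<eventfd> <fd of memory.oom_control>".  This lands in
	 * mem_cgroup_oom_register_event(); if the group is already under
	 * OOM, the eventfd is signalled right away. */
	snprintf(buf, sizeof(buf), "%d %d", efd, ocfd);
	if (write(ecfd, buf, strlen(buf)) < 0) {
		perror("register");
		return 1;
	}

	/* Block until an OOM notification arrives. */
	if (read(efd, &count, sizeof(count)) == (ssize_t)sizeof(count))
		printf("OOM notifications: %llu\n", (unsigned long long)count);

	/* memory.oom_control reports oom_kill_disable and the under_oom
	 * flag that the patch converts to a plain int. */
	n = pread(ocfd, buf, sizeof(buf) - 1, 0);
	if (n > 0) {
		buf[n] = '\0';
		fputs(buf, stdout);
	}
	return 0;
}

The final read returns the same two lines the seq_printf() calls at the bottom of the diff produce, e.g. "oom_kill_disable 0" and "under_oom 0".
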
* Re: [PATCH -mm 1/2] memcg: remove unused mem_cgroup->oom_wakeups 2015-06-03 2:38 [PATCH -mm 1/2] memcg: remove unused mem_cgroup->oom_wakeups Tejun Heo 2015-06-03 2:38 ` [PATCH -mm 2/2] memcg: convert mem_cgroup->under_oom from atomic_t to int Tejun Heo @ 2015-06-03 13:48 ` Michal Hocko 2015-06-03 13:56 ` Michal Hocko 2015-06-03 15:19 ` [PATCH v2 " Tejun Heo 2 siblings, 1 reply; 9+ messages in thread From: Michal Hocko @ 2015-06-03 13:48 UTC (permalink / raw) To: Tejun Heo; +Cc: Johannes Weiner, cgroups, linux-mm, Andrew Morton On Wed 03-06-15 11:38:24, Tejun Heo wrote: > From 92c2a5d90ecc5eeed0224a8f6ba533c621ac3ffa Mon Sep 17 00:00:00 2001 > From: Tejun Heo <tj@kernel.org> > Date: Tue, 2 Jun 2015 09:29:11 -0400 > > Since 4942642080ea ("mm: memcg: handle non-error OOM situations more > gracefully"), nobody uses mem_cgroup->oom_wakeups. Remove it. > > Signed-off-by: Tejun Heo <tj@kernel.org> Acked-by: Michal Hocko <mhocko@suse.cz> > --- > mm/memcontrol.c | 2 -- > 1 file changed, 2 deletions(-) > > diff --git a/mm/memcontrol.c b/mm/memcontrol.c > index 86648a7..9f39647 100644 > --- a/mm/memcontrol.c > +++ b/mm/memcontrol.c > @@ -287,7 +287,6 @@ struct mem_cgroup { > > bool oom_lock; > atomic_t under_oom; > - atomic_t oom_wakeups; > > int swappiness; > /* OOM-Killer disable */ > @@ -1852,7 +1851,6 @@ static int memcg_oom_wake_function(wait_queue_t *wait, > > static void memcg_wakeup_oom(struct mem_cgroup *memcg) > { > - atomic_inc(&memcg->oom_wakeups); > /* for filtering, pass "memcg" as argument. */ > __wake_up(&memcg_oom_waitq, TASK_NORMAL, 0, memcg); > } > -- > 2.4.2 > -- Michal Hocko SUSE Labs -- To unsubscribe, send a message with 'unsubscribe linux-mm' in the body to majordomo@kvack.org. For more info on Linux MM, see: http://www.linux-mm.org/ . Don't email: <a href=mailto:"dont@kvack.org"> email@kvack.org </a> ^ permalink raw reply [flat|nested] 9+ messages in thread
* Re: [PATCH -mm 1/2] memcg: remove unused mem_cgroup->oom_wakeups 2015-06-03 13:48 ` [PATCH -mm 1/2] memcg: remove unused mem_cgroup->oom_wakeups Michal Hocko @ 2015-06-03 13:56 ` Michal Hocko 0 siblings, 0 replies; 9+ messages in thread From: Michal Hocko @ 2015-06-03 13:56 UTC (permalink / raw) To: Tejun Heo; +Cc: Johannes Weiner, cgroups, linux-mm, Andrew Morton On Wed 03-06-15 15:48:30, Michal Hocko wrote: > On Wed 03-06-15 11:38:24, Tejun Heo wrote: > > From 92c2a5d90ecc5eeed0224a8f6ba533c621ac3ffa Mon Sep 17 00:00:00 2001 > > From: Tejun Heo <tj@kernel.org> > > Date: Tue, 2 Jun 2015 09:29:11 -0400 > > > > Since 4942642080ea ("mm: memcg: handle non-error OOM situations more > > gracefully"), nobody uses mem_cgroup->oom_wakeups. Remove it. > > > > Signed-off-by: Tejun Heo <tj@kernel.org> > > Acked-by: Michal Hocko <mhocko@suse.cz> Could you also inline __wake_up from memcg_wakeup_oom into its only caller while you are touching that code, please? > > > --- > > mm/memcontrol.c | 2 -- > > 1 file changed, 2 deletions(-) > > > > diff --git a/mm/memcontrol.c b/mm/memcontrol.c > > index 86648a7..9f39647 100644 > > --- a/mm/memcontrol.c > > +++ b/mm/memcontrol.c > > @@ -287,7 +287,6 @@ struct mem_cgroup { > > > > bool oom_lock; > > atomic_t under_oom; > > - atomic_t oom_wakeups; > > > > int swappiness; > > /* OOM-Killer disable */ > > @@ -1852,7 +1851,6 @@ static int memcg_oom_wake_function(wait_queue_t *wait, > > > > static void memcg_wakeup_oom(struct mem_cgroup *memcg) > > { > > - atomic_inc(&memcg->oom_wakeups); > > /* for filtering, pass "memcg" as argument. */ > > __wake_up(&memcg_oom_waitq, TASK_NORMAL, 0, memcg); > > } > > -- > > 2.4.2 > > > > -- > Michal Hocko > SUSE Labs -- Michal Hocko SUSE Labs -- To unsubscribe, send a message with 'unsubscribe linux-mm' in the body to majordomo@kvack.org. For more info on Linux MM, see: http://www.linux-mm.org/ . Don't email: <a href=mailto:"dont@kvack.org"> email@kvack.org </a> ^ permalink raw reply [flat|nested] 9+ messages in thread
* [PATCH v2 -mm 1/2] memcg: remove unused mem_cgroup->oom_wakeups 2015-06-03 2:38 [PATCH -mm 1/2] memcg: remove unused mem_cgroup->oom_wakeups Tejun Heo 2015-06-03 2:38 ` [PATCH -mm 2/2] memcg: convert mem_cgroup->under_oom from atomic_t to int Tejun Heo 2015-06-03 13:48 ` [PATCH -mm 1/2] memcg: remove unused mem_cgroup->oom_wakeups Michal Hocko @ 2015-06-03 15:19 ` Tejun Heo 2015-06-03 20:31 ` Michal Hocko 2 siblings, 1 reply; 9+ messages in thread From: Tejun Heo @ 2015-06-03 15:19 UTC (permalink / raw) To: Johannes Weiner, Michal Hocko; +Cc: cgroups, linux-mm, Andrew Morton Since 4942642080ea ("mm: memcg: handle non-error OOM situations more gracefully"), nobody uses mem_cgroup->oom_wakeups. Remove it. While at it, also fold memcg_wakeup_oom() into memcg_oom_recover() which is its only user. This cleanup was suggested by Michal. Signed-off-by: Tejun Heo <tj@kernel.org> Cc: Michal Hocko <mhocko@suse.cz> --- Patch updated. I dropped the comment as it's kinda obvious from the context and the use of __wake_up(). Thanks. mm/memcontrol.c | 10 +--------- 1 file changed, 1 insertion(+), 9 deletions(-) --- a/mm/memcontrol.c +++ b/mm/memcontrol.c @@ -287,7 +287,6 @@ struct mem_cgroup { bool oom_lock; atomic_t under_oom; - atomic_t oom_wakeups; int swappiness; /* OOM-Killer disable */ @@ -1850,17 +1849,10 @@ static int memcg_oom_wake_function(wait_ return autoremove_wake_function(wait, mode, sync, arg); } -static void memcg_wakeup_oom(struct mem_cgroup *memcg) -{ - atomic_inc(&memcg->oom_wakeups); - /* for filtering, pass "memcg" as argument. */ - __wake_up(&memcg_oom_waitq, TASK_NORMAL, 0, memcg); -} - static void memcg_oom_recover(struct mem_cgroup *memcg) { if (memcg && atomic_read(&memcg->under_oom)) - memcg_wakeup_oom(memcg); + __wake_up(&memcg_oom_waitq, TASK_NORMAL, 0, memcg); } static void mem_cgroup_oom(struct mem_cgroup *memcg, gfp_t mask, int order) -- To unsubscribe, send a message with 'unsubscribe linux-mm' in the body to majordomo@kvack.org. For more info on Linux MM, see: http://www.linux-mm.org/ . Don't email: <a href=mailto:"dont@kvack.org"> email@kvack.org </a> ^ permalink raw reply [flat|nested] 9+ messages in thread
* Re: [PATCH v2 -mm 1/2] memcg: remove unused mem_cgroup->oom_wakeups 2015-06-03 15:19 ` [PATCH v2 " Tejun Heo @ 2015-06-03 20:31 ` Michal Hocko 0 siblings, 0 replies; 9+ messages in thread From: Michal Hocko @ 2015-06-03 20:31 UTC (permalink / raw) To: Tejun Heo; +Cc: Johannes Weiner, cgroups, linux-mm, Andrew Morton On Thu 04-06-15 00:19:53, Tejun Heo wrote: > Since 4942642080ea ("mm: memcg: handle non-error OOM situations more > gracefully"), nobody uses mem_cgroup->oom_wakeups. Remove it. > > While at it, also fold memcg_wakeup_oom() into memcg_oom_recover() > which is its only user. This cleanup was suggested by Michal. > > Signed-off-by: Tejun Heo <tj@kernel.org> > Cc: Michal Hocko <mhocko@suse.cz> Acked-by: Michal Hocko <mhocko@suse.cz> Thanks! > --- > Patch updated. I dropped the comment as it's kinda obvious from the > context and the use of __wake_up(). > > Thanks. > > mm/memcontrol.c | 10 +--------- > 1 file changed, 1 insertion(+), 9 deletions(-) > > --- a/mm/memcontrol.c > +++ b/mm/memcontrol.c > @@ -287,7 +287,6 @@ struct mem_cgroup { > > bool oom_lock; > atomic_t under_oom; > - atomic_t oom_wakeups; > > int swappiness; > /* OOM-Killer disable */ > @@ -1850,17 +1849,10 @@ static int memcg_oom_wake_function(wait_ > return autoremove_wake_function(wait, mode, sync, arg); > } > > -static void memcg_wakeup_oom(struct mem_cgroup *memcg) > -{ > - atomic_inc(&memcg->oom_wakeups); > - /* for filtering, pass "memcg" as argument. */ > - __wake_up(&memcg_oom_waitq, TASK_NORMAL, 0, memcg); > -} > - > static void memcg_oom_recover(struct mem_cgroup *memcg) > { > if (memcg && atomic_read(&memcg->under_oom)) > - memcg_wakeup_oom(memcg); > + __wake_up(&memcg_oom_waitq, TASK_NORMAL, 0, memcg); > } > > static void mem_cgroup_oom(struct mem_cgroup *memcg, gfp_t mask, int order) > -- > To unsubscribe from this list: send the line "unsubscribe cgroups" in > the body of a message to majordomo@vger.kernel.org > More majordomo info at http://vger.kernel.org/majordomo-info.html -- Michal Hocko SUSE Labs -- To unsubscribe, send a message with 'unsubscribe linux-mm' in the body to majordomo@kvack.org. For more info on Linux MM, see: http://www.linux-mm.org/ . Don't email: <a href=mailto:"dont@kvack.org"> email@kvack.org </a> ^ permalink raw reply [flat|nested] 9+ messages in thread
end of thread, other threads:[~2015-06-03 20:32 UTC | newest]

Thread overview: 9+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2015-06-03 2:38 [PATCH -mm 1/2] memcg: remove unused mem_cgroup->oom_wakeups Tejun Heo
2015-06-03 2:38 ` [PATCH -mm 2/2] memcg: convert mem_cgroup->under_oom from atomic_t to int Tejun Heo
2015-06-03 13:55 ` Michal Hocko
2015-06-03 15:21 ` [PATCH v2 " Tejun Heo
2015-06-03 20:32 ` Michal Hocko
2015-06-03 13:48 ` [PATCH -mm 1/2] memcg: remove unused mem_cgroup->oom_wakeups Michal Hocko
2015-06-03 13:56 ` Michal Hocko
2015-06-03 15:19 ` [PATCH v2 " Tejun Heo
2015-06-03 20:31 ` Michal Hocko