From: Vladimir Davydov
To: Roman Gushchin
Cc: Johannes Weiner, Tejun Heo, Li Zefan, Michal Hocko, Tetsuo Handa,
	kernel-team@fb.com, cgroups@vger.kernel.org, linux-doc@vger.kernel.org,
	linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: Re: [RFC PATCH] mm, oom: cgroup-aware OOM-killer
Date: Sat, 20 May 2017 21:37:29 +0300
Message-ID: <20170520183729.GA3195@esperanza>
In-Reply-To: <1495124884-28974-1-git-send-email-guro@fb.com>
References: <1495124884-28974-1-git-send-email-guro@fb.com>

Hello Roman,

On Thu, May 18, 2017 at 05:28:04PM +0100, Roman Gushchin wrote:
...
> +5-2-4. Cgroup-aware OOM Killer
> +
> +The cgroup v2 memory controller implements a cgroup-aware OOM killer.
> +This means that it treats memory cgroups as memory consumers
> +rather than individual processes. Under OOM conditions it tries
> +to find an eligible leaf memory cgroup and kill all processes
> +in this cgroup. If that is not possible (e.g. all processes belong
> +to the root cgroup), it falls back to the traditional per-process
> +behaviour.

I agree that the current OOM victim selection algorithm is totally
unfair in a system using containers, and it has been crying for rework
for the last few years now, so it's great to see this finally coming.

However, I don't think that killing a whole leaf cgroup is always the
best policy. It does make sense when cgroups are used for
containerizing services or applications, because a service is unlikely
to remain operational after one of its processes is gone, but one can
also use cgroups to containerize processes started by a user. Kicking a
user out because one of her processes has gone mad doesn't sound right
to me.

Another case where the policy you're suggesting fails, in my opinion,
is a service (cgroup) that consists of sub-services (sub-cgroups)
running processes. The main service may stop working normally if one of
its sub-services is killed. So it might make sense to kill not just an
individual process or a leaf cgroup, but the whole main service with
all its sub-services.
And both kinds of workloads (services/applications and individual
processes run by users) can co-exist on the same host - consider the
default systemd setup, for instance.

IMHO it would be better to give users a choice regarding what they
really want for a particular cgroup in case of OOM - killing the whole
cgroup or one of its descendants. For example, we could introduce a
per-cgroup flag that would tell the kernel whether the cgroup can
tolerate killing a descendant or not. If it can, the kernel will pick
the fattest sub-cgroup or process and check it in the same way,
recursively. If it cannot, the kernel will kill the whole cgroup with
all its processes and sub-cgroups. A rough sketch of what I mean is at
the end of this mail.

> +
> +The memory controller tries to make the best choice of a victim cgroup.
> +In general, it tries to select the largest cgroup matching the given
> +node/zone requirements, but the concrete algorithm is not defined
> +and may be changed later.
> +
> +This affects both system- and cgroup-wide OOMs. For a cgroup-wide OOM
> +the memory controller considers only cgroups belonging to the sub-tree
> +of the OOM-ing cgroup, including itself.

...

> diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> index c131f7e..8d07481 100644
> --- a/mm/memcontrol.c
> +++ b/mm/memcontrol.c
> @@ -2625,6 +2625,75 @@ static inline bool memcg_has_children(struct mem_cgroup *memcg)
>  	return ret;
>  }
>  
> +bool mem_cgroup_select_oom_victim(struct oom_control *oc)
> +{
> +	struct mem_cgroup *iter;
> +	unsigned long chosen_memcg_points;
> +
> +	oc->chosen_memcg = NULL;
> +
> +	if (mem_cgroup_disabled())
> +		return false;
> +
> +	if (!cgroup_subsys_on_dfl(memory_cgrp_subsys))
> +		return false;
> +
> +	pr_info("Choosing a victim memcg because of %s",
> +		oc->memcg ?
> +		"memory limit reached of cgroup " :
> +		"out of memory\n");
> +	if (oc->memcg) {
> +		pr_cont_cgroup_path(oc->memcg->css.cgroup);
> +		pr_cont("\n");
> +	}
> +
> +	chosen_memcg_points = 0;
> +
> +	for_each_mem_cgroup_tree(iter, oc->memcg) {
> +		unsigned long points;
> +		int nid;
> +
> +		if (mem_cgroup_is_root(iter))
> +			continue;
> +
> +		if (memcg_has_children(iter))
> +			continue;
> +
> +		points = 0;
> +		for_each_node_state(nid, N_MEMORY) {
> +			if (oc->nodemask && !node_isset(nid, *oc->nodemask))
> +				continue;
> +			points += mem_cgroup_node_nr_lru_pages(iter, nid,
> +					LRU_ALL_ANON | BIT(LRU_UNEVICTABLE));
> +		}
> +		points += mem_cgroup_get_nr_swap_pages(iter);

I guess we should also take kmem into account here (unreclaimable
slabs, kernel stacks, socket buffers) - see the second sketch at the
end of this mail.

> +
> +		pr_info("Memcg ");
> +		pr_cont_cgroup_path(iter->css.cgroup);
> +		pr_cont(": %lu\n", points);
> +
> +		if (points > chosen_memcg_points) {
> +			if (oc->chosen_memcg)
> +				css_put(&oc->chosen_memcg->css);
> +
> +			oc->chosen_memcg = iter;
> +			css_get(&iter->css);
> +
> +			chosen_memcg_points = points;
> +		}
> +	}
> +
> +	if (oc->chosen_memcg) {
> +		pr_info("Kill memcg ");
> +		pr_cont_cgroup_path(oc->chosen_memcg->css.cgroup);
> +		pr_cont(" (%lu)\n", chosen_memcg_points);
> +	} else {
> +		pr_info("No eligible memory cgroup found\n");
> +	}
> +
> +	return !!oc->chosen_memcg;
> +}
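
Here is the rough, completely untested sketch of the per-cgroup flag
idea I mentioned above, just to illustrate what I mean. The
"oom_kill_descendant" field and the pick_fattest_child() helper don't
exist anywhere - they are made up for this example:

/*
 * Hypothetical: "oom_kill_descendant" would be a boolean on
 * struct mem_cgroup set via a new cgroup interface file, and
 * pick_fattest_child() would return the child cgroup with the
 * highest badness points (or NULL for a leaf).
 */
static struct mem_cgroup *select_victim_memcg(struct mem_cgroup *root)
{
	struct mem_cgroup *memcg = root;

	while (memcg->oom_kill_descendant) {
		struct mem_cgroup *child = pick_fattest_child(memcg);

		/*
		 * A leaf that tolerates killing a descendant: stop
		 * here and let the caller pick the fattest process
		 * inside it, as the OOM killer does today.
		 */
		if (!child)
			break;

		/* Descend and apply the same check recursively. */
		memcg = child;
	}

	/*
	 * Either this cgroup cannot tolerate losing a descendant, in
	 * which case it is killed as a whole, or it is a leaf where
	 * per-process selection takes over.
	 */
	return memcg;
}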
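
And regarding the kmem remark above, I mean something along these lines
(again untested; the three helpers are placeholders - I haven't checked
which per-memcg counters we actually have for unreclaimable slabs,
kernel stacks and socket buffers):

/*
 * Placeholder helpers: stand-ins for whatever per-memcg statistics
 * end up being used for slab, stack and socket buffer pages.
 */
static unsigned long memcg_kmem_points(struct mem_cgroup *memcg)
{
	unsigned long pages = 0;

	pages += memcg_nr_unreclaimable_slab_pages(memcg);	/* placeholder */
	pages += memcg_nr_kernel_stack_pages(memcg);		/* placeholder */
	pages += memcg_nr_sock_pages(memcg);			/* placeholder */

	return pages;
}

which would then be folded into the badness calculation in the
selection loop, right after the swap pages:

	points += mem_cgroup_get_nr_swap_pages(iter);
	points += memcg_kmem_points(iter);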