Date: Tue, 17 Mar 2020 08:52:12 +0100
From: Michal Hocko
To: Roman Gushchin
Cc: Andrew Morton, linux-mm@kvack.org, kernel-team@fb.com, linux-kernel@vger.kernel.org
Subject: Re: [PATCH] mm: memcg: make memory.oom.group tolerable to task migration
Message-ID: <20200317075212.GC26018@dhcp22.suse.cz>
In-Reply-To: <20200316223510.3176148-1-guro@fb.com>

On Mon 16-03-20 15:35:10, Roman Gushchin wrote:
> If a task is getting moved out of the OOMing cgroup, it might
> result in unexpected OOM killings if memory.oom.group is used
> anywhere in the cgroup tree.
>
> Imagine the following example:
>
>          A (oom.group = 1)
>         / \
>  (OOM) B   C
>
> Let's say B's memory.max is exceeded and it's OOMing. The OOM killer
> selects a task in B as a victim, but someone asynchronously moves
> the task into C.

I can see a Reported-by here; does that mean the race really happened
in real workloads? If yes, I would be really curious. Mostly because
moving tasks outside of the oom domain is quite questionable without
charge migration.

> mem_cgroup_get_oom_group() will iterate over all
> ancestors of C up to the root cgroup. In theory it should stop
> at the oom_domain level - the memory cgroup which is OOMing.
> But because B is not an ancestor of C, this doesn't happen.
> Instead it chooses A (because its oom.group is set), and kills
> all tasks in A. This behavior is wrong because the OOM happened
> in B, so there is no reason to kill anything outside of it.
>
> Fix this by checking if the memory cgroup to which the task belongs
> is a descendant of the oom_domain. If not, memory.oom.group should
> be ignored, and the OOM killer should kill only the victim task.

I was about to suggest storing the memcg in oom_evaluate_task, but
then I realized that this would be more complex, and I am not sure it
would be so much better after all.

The thing is that killing the selected task makes a lot of sense,
because it was the largest consumer, no matter that it has run away.

On the other hand, if your B had oom.group = 1, then one could expect
any OOM killer event in that group to result in the whole group being
torn down. This is, however, a gray zone, because we do emit the
MEMCG_OOM event, but the MEMCG_OOM_KILL event will go to the victim's
at-the-time memcg. So observer B could think that the oom was
resolved without a kill, while observer C would see a kill event
without an oom.

That being said, please try to think about the above. I will give it
some more time as well. Killing the selected victim is the obviously
correct thing to do, and your patch does that, so it is correct in
that regard; but I believe that the group oom behavior in the
original oom domain remains an open question.
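To make the failure mode concrete, here is a toy userspace model of
the walk in mem_cgroup_get_oom_group() (a sketch only: struct memcg
and get_oom_group() are made-up names, and locking, refcounting and
the root-memcg shortcut are left out):

	#include <stdbool.h>
	#include <stddef.h>
	#include <stdio.h>

	struct memcg {
		const char *name;
		struct memcg *parent;
		bool oom_group;
	};

	/*
	 * Walk from the victim's memcg towards the root, remembering
	 * the highest ancestor with oom_group set; stop early only
	 * when passing through oom_domain. A victim moved from B to
	 * its sibling C never passes through B, so the walk runs all
	 * the way up to A.
	 */
	static struct memcg *get_oom_group(struct memcg *victim_memcg,
					   struct memcg *oom_domain)
	{
		struct memcg *group = NULL;
		struct memcg *m;

		for (m = victim_memcg; m; m = m->parent) {
			if (m->oom_group)
				group = m;
			if (m == oom_domain)	/* never true after the move */
				break;
		}
		return group;
	}

	int main(void)
	{
		struct memcg A = { "A", NULL, true };
		struct memcg B = { "B", &A, false };
		struct memcg C = { "C", &A, false };

		/* OOM in B, but the victim was moved into C first. */
		struct memcg *g = get_oom_group(&C, &B);

		printf("oom group: %s\n", g ? g->name : "(none)");
		return 0;
	}

Running this prints "oom group: A" - exactly the over-kill described
above, even though the OOM happened entirely within B.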
> Fixes: 3d8b38eb81ca ("mm, oom: introduce memory.oom.group")
> Signed-off-by: Roman Gushchin
> Reported-by: Dan Schatzberg
> ---
>  mm/memcontrol.c | 8 ++++++++
>  1 file changed, 8 insertions(+)
>
> diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> index daa399be4688..d8c4b7aa4e73 100644
> --- a/mm/memcontrol.c
> +++ b/mm/memcontrol.c
> @@ -1930,6 +1930,14 @@ struct mem_cgroup *mem_cgroup_get_oom_group(struct task_struct *victim,
>  	if (memcg == root_mem_cgroup)
>  		goto out;
>
> +	/*
> +	 * If the victim task has been asynchronously moved to a different
> +	 * memory cgroup, we might end up killing tasks outside oom_domain.
> +	 * In this case it's better to ignore memory.oom.group.
> +	 */
> +	if (unlikely(!mem_cgroup_is_descendant(memcg, oom_domain)))
> +		goto out;
> +
>  	/*
>  	 * Traverse the memory cgroup hierarchy from the victim task's
>  	 * cgroup up to the OOMing cgroup (or root) to find the
> --
> 2.24.1
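With the hunk above applied, the lookup bails out once the victim's
memcg is no longer a descendant of oom_domain. In terms of the toy
model earlier in this mail (is_descendant() being my stand-in for
mem_cgroup_is_descendant(), not the kernel function itself):

	static bool is_descendant(struct memcg *m, struct memcg *ancestor)
	{
		for (; m; m = m->parent)
			if (m == ancestor)
				return true;
		return false;
	}

	/*
	 * Adding this check before the walk in get_oom_group():
	 *
	 *	if (!is_descendant(victim_memcg, oom_domain))
	 *		return NULL;
	 *
	 * makes get_oom_group(&C, &B) return no group, so only the
	 * selected victim task is killed.
	 */

-- 
Michal Hocko
SUSE Labs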