Date: Wed, 18 Dec 2024 11:22:42 +0100
From: Michal Hocko
To: Chen Ridong
Cc: Tejun Heo, akpm@linux-foundation.org, hannes@cmpxchg.org,
	yosryahmed@google.com, roman.gushchin@linux.dev, shakeel.butt@linux.dev,
	muchun.song@linux.dev, davidf@vimeo.com, vbabka@suse.cz,
	handai.szj@taobao.com, rientjes@google.com, kamezawa.hiroyu@jp.fujitsu.com,
	linux-mm@kvack.org, linux-kernel@vger.kernel.org, cgroups@vger.kernel.org,
	chenridong@huawei.com, wangweiyang2@huawei.com
Subject: Re: [PATCH v1] memcg: fix soft lockup in the OOM process
Message-ID:
References: <20241217121828.3219752-1-chenridong@huaweicloud.com>
	<872c5042-01d6-4ff3-94bc-8df94e1e941c@huaweicloud.com>
	<02f7d744-f123-4523-b170-c2062b5746c8@huaweicloud.com>
In-Reply-To: <02f7d744-f123-4523-b170-c2062b5746c8@huaweicloud.com>

On Wed 18-12-24 17:00:38, Chen Ridong wrote:
> 
> 
> On 2024/12/18 15:56, Michal Hocko wrote:
> > On Wed 18-12-24 15:44:34, Chen Ridong wrote:
> >>
> >>
> >> On 2024/12/17 20:54, Michal Hocko wrote:
> >>> On Tue 17-12-24 12:18:28, Chen Ridong wrote:
> >>> [...]
> >>>> diff --git a/mm/oom_kill.c b/mm/oom_kill.c
> >>>> index 1c485beb0b93..14260381cccc 100644
> >>>> --- a/mm/oom_kill.c
> >>>> +++ b/mm/oom_kill.c
> >>>> @@ -390,6 +390,7 @@ static int dump_task(struct task_struct *p, void *arg)
> >>>>  	if (!is_memcg_oom(oc) && !oom_cpuset_eligible(p, oc))
> >>>>  		return 0;
> >>>>  
> >>>> +	cond_resched();
> >>>>  	task = find_lock_task_mm(p);
> >>>>  	if (!task) {
> >>>>  		/*
> >>>
> >>> This is called under the RCU read lock for the global OOM killer path,
> >>> and I do not think you can schedule there. I do not remember the
> >>> specifics of task traversal for the cgroup path, but I guess that you
> >>> might need to silence the soft lockup detector instead, or come up
> >>> with a different iteration scheme.
> >>
> >> Thank you, Michal.
> >>
> >> I made a mistake. I added cond_resched in the mem_cgroup_scan_tasks
> >> function below the fn, but after reconsideration, it may cause
> >> unnecessary scheduling for other callers of mem_cgroup_scan_tasks.
> >> Therefore, I moved it into the dump_task function. However, I missed
> >> the RCU lock from the global OOM.
> >>
> >> I think we can use touch_nmi_watchdog in place of cond_resched, which
> >> can silence the soft lockup detector. Do you think that is acceptable?
> > 
> > It is certainly a way to go, though not the best one. Maybe we need
> > different solutions for the global and the memcg OOMs. During a global
> > OOM we rarely care about latency, as the whole system is likely to
> > struggle. Memcg OOMs are much more likely. Having that many tasks in a
> > memcg certainly requires further partitioning, so if configured
> > properly the OOM latency shouldn't be very visible. But I am wondering
> > whether the cgroup task iteration could use cond_resched while the
> > global one would touch_nmi_watchdog every N iterations. I might be
> > missing something, but I do not see any locking required outside of
> > css_task_iter_*.
> 
> Do you mean something like this:

I've had something like this (untested) in mind:

diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 7b3503d12aaf..37abc94abd2e 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -1167,10 +1167,14 @@ void mem_cgroup_scan_tasks(struct mem_cgroup *memcg,
 	for_each_mem_cgroup_tree(iter, memcg) {
 		struct css_task_iter it;
 		struct task_struct *task;
+		unsigned int i = 0;
 
 		css_task_iter_start(&iter->css, CSS_TASK_ITER_PROCS, &it);
-		while (!ret && (task = css_task_iter_next(&it)))
+		while (!ret && (task = css_task_iter_next(&it))) {
 			ret = fn(task, arg);
+			if (!(++i % 1000))
+				cond_resched();
+		}
 		css_task_iter_end(&it);
 		if (ret) {
 			mem_cgroup_iter_break(memcg, iter);
diff --git a/mm/oom_kill.c b/mm/oom_kill.c
index 1c485beb0b93..3bf2304ed20c 100644
--- a/mm/oom_kill.c
+++ b/mm/oom_kill.c
@@ -430,10 +430,14 @@ static void dump_tasks(struct oom_control *oc)
 		mem_cgroup_scan_tasks(oc->memcg, dump_task, oc);
 	else {
 		struct task_struct *p;
+		unsigned int i = 0;
 
 		rcu_read_lock();
-		for_each_process(p)
+		for_each_process(p) {
+			if (!(++i % 1000))
+				touch_softlockup_watchdog();
 			dump_task(p, oc);
+		}
 		rcu_read_unlock();
 	}
 }
-- 
Michal Hocko
SUSE Labs