Date: Tue, 12 Mar 2024 23:22:52 -0400
From: Johannes Weiner
To: Yu Zhao
Cc: Axel Rasmussen, Yafang Shao, Chris Down, cgroups@vger.kernel.org,
	kernel-team@fb.com, linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: Re: MGLRU premature memcg OOM on slow writes
Message-ID: <20240313032252.GC65481@cmpxchg.org>
References: <20240229235134.2447718-1-axelrasmussen@google.com>
	<20240312210822.GB65481@cmpxchg.org>

On Tue, Mar 12, 2024 at 10:08:13PM -0400, Yu Zhao wrote:
> On Tue, Mar 12, 2024 at 5:08 PM Johannes Weiner wrote:
> >
> > On Tue, Mar 12, 2024 at 02:07:04PM -0600, Yu Zhao wrote:
> > > Yes, these two are among the differences between the active/inactive
> > > LRU and MGLRU, but their roles, IMO, are not as important as the LRU
> > > positions of dirty pages. The active/inactive LRU moves dirty pages
> > > all the way to the end of the line (reclaim happens at the front)
> > > whereas MGLRU moves them into the middle, during direct reclaim. The
> > > rationale for MGLRU was that this way those dirty pages would still
> > > be counted as "inactive" (or cold).
> >
> > Note that activating the page is not a statement on the page's
> > hotness. It's simply to park it away from the scanner. We could as
> > well have moved it to the unevictable list - this is just easier.
> >
> > folio_end_writeback() will call folio_rotate_reclaimable() and move it
> > back to the inactive tail, to make it the very next reclaim target as
> > soon as it's clean.
> >
> > > This theory can be quickly verified by comparing how much
> > > nr_vmscan_immediate_reclaim grows, i.e.,
> > >
> > > Before the copy
> > > grep nr_vmscan_immediate_reclaim /proc/vmstat
> > > And then after the copy
> > > grep nr_vmscan_immediate_reclaim /proc/vmstat
> > >
> > > The growth should be trivial for MGLRU and nontrivial for the
> > > active/inactive LRU.
> > >
> > > If this is indeed the case, I'd appreciate very much if anyone could
> > > try the following (I'll try it myself too later next week).
> > >
> > > diff --git a/mm/vmscan.c b/mm/vmscan.c
> > > index 4255619a1a31..020f5d98b9a1 100644
> > > --- a/mm/vmscan.c
> > > +++ b/mm/vmscan.c
> > > @@ -4273,10 +4273,13 @@ static bool sort_folio(struct lruvec *lruvec, struct folio *folio, struct scan_c
> > >  	}
> > >
> > >  	/* waiting for writeback */
> > > -	if (folio_test_locked(folio) || folio_test_writeback(folio) ||
> > > -	    (type == LRU_GEN_FILE && folio_test_dirty(folio))) {
> > > -		gen = folio_inc_gen(lruvec, folio, true);
> > > -		list_move(&folio->lru, &lrugen->folios[gen][type][zone]);
> > > +	if (folio_test_writeback(folio) || (type == LRU_GEN_FILE && folio_test_dirty(folio))) {
> > > +		DEFINE_MAX_SEQ(lruvec);
> > > +		int old_gen, new_gen = lru_gen_from_seq(max_seq);
> > > +
> > > +		old_gen = folio_update_gen(folio, new_gen);
> > > +		lru_gen_update_size(lruvec, folio, old_gen, new_gen);
> > > +		list_move(&folio->lru, &lrugen->folios[new_gen][type][zone]);
> > >  		return true;
> >
> > Right, because MGLRU sorts these pages out before calling the scanner,
> > so they never get marked for immediate reclaim.
> >
> > But that also implies they won't get rotated back to the tail when
> > writeback finishes.
>
> Those dirty pages are marked by PG_reclaim either by
>
> folio_inc_gen()
> {
>   ...
>   if (reclaiming)
>     new_flags |= BIT(PG_reclaim);
>   ...
> }
>
> or [1], which I missed initially. So they should be rotated on writeback
> finishing up.
>
> [1] https://lore.kernel.org/linux-mm/ZfC2612ZYwwxpOmR@google.com/

Ah, I missed that! Thanks.
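For anyone following along without mm/ open, here's a rough user-space
sketch of that rotate-on-writeback-completion flow; the names only mirror
folio_end_writeback() / folio_rotate_reclaimable(), it's a simplified
model of the control flow, not the actual kernel code:

#include <stdbool.h>
#include <stdio.h>

struct folio {
	bool reclaim;	/* PG_reclaim: "reclaim me as soon as I'm clean" */
	bool writeback;	/* PG_writeback: IO still in flight */
	const char *name;
};

/* stand-in for folio_rotate_reclaimable(): put the now-clean folio at
 * the spot reclaim will look at next (inactive tail / oldest gen) */
static void rotate_reclaimable(struct folio *folio)
{
	printf("%s: clean, moved to the next reclaim position\n", folio->name);
}

/* stand-in for folio_end_writeback(): if PG_reclaim was set while the
 * folio waited for IO, rotate it instead of leaving it parked away
 * from the scanner */
static void end_writeback(struct folio *folio)
{
	folio->writeback = false;
	if (folio->reclaim) {
		folio->reclaim = false;
		rotate_reclaimable(folio);
	}
}

int main(void)
{
	struct folio dirty = { .reclaim = true, .writeback = true,
			       .name = "deferred dirty folio" };

	/* IO completes; the folio becomes the very next reclaim target */
	end_writeback(&dirty);
	return 0;
}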
> > Doesn't that mean that you now have pages that
> >
> > a) came from the oldest generation and were only deferred due to their
> >    writeback state, and
> >
> > b) are now clean and should be reclaimed. But since they're
> >    permanently advanced to the next gen, you'll instead reclaim pages
> >    that were originally ahead of them, and likely hotter.
> >
> > Isn't that an age inversion?
> >
> > Back to the broader question though: if reclaim demand outstrips clean
> > pages and the only viable candidates are dirty ones (e.g. an
> > allocation spike in the presence of dirty/writeback pages), there only
> > seem to be 3 options:
> >
> > 1) sleep-wait for writeback
> > 2) continue scanning, aka busy-wait for writeback + age inversions
> > 3) find nothing and declare OOM
> >
> > Since you're not doing 1), it must be one of the other two, no? One
> > way or another it has to either pace-match to IO completions, or OOM.
>
> Yes, and in this case, 2) is possible but 3) is very likely.
>
> MGLRU doesn't do 1) for sure (in the reclaim path of course). I didn't
> find any throttling on dirty pages for cgroup v2 either in the
> active/inactive LRU -- I assume Chris was on v2, and hence my take on
> throttling on dirty pages in the reclaim path not being the key for
> his case.

It's kind of spread out, but it's there: shrink_folio_list() will bump
nr_dirty on dirty pages, and nr_congested if immediate reclaim folios
cycle back around. shrink_inactive_list() will wake the flushers if all
the dirty pages it encountered are still unqueued. shrink_node() will
set LRUVEC_CGROUP_CONGESTED, and then call reclaim_throttle() on it.

(As Chris points out, though, the throttle call was not long ago changed
from VMSCAN_THROTTLE_WRITEBACK to VMSCAN_THROTTLE_CONGESTED, and appears
a bit more fragile now than it used to be. Probably worth following up
on this.)
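A crude user-space model of that shrink_folio_list() ->
shrink_inactive_list() -> shrink_node() chain, just to make the ordering
concrete; the counters, conditions and the congested bit are simplified
stand-ins, not what vmscan actually computes:

#include <stdbool.h>
#include <stdio.h>

struct scan_stats {
	unsigned long nr_dirty;		/* dirty file folios seen this scan */
	unsigned long nr_unqueued_dirty;/* dirty but not yet queued for IO */
	unsigned long nr_congested;	/* immediate-reclaim folios seen again */
};

static bool lruvec_congested;

/* shrink_folio_list()-style accounting for one scanned folio */
static void account_folio(struct scan_stats *st, bool dirty, bool queued,
			  bool was_immediate_reclaim)
{
	if (dirty) {
		st->nr_dirty++;
		if (!queued)
			st->nr_unqueued_dirty++;
	}
	if (was_immediate_reclaim)
		st->nr_congested++;
}

/* shrink_inactive_list()-style step: kick the flushers if every dirty
 * folio we found is still unqueued */
static void maybe_wake_flushers(const struct scan_stats *st)
{
	if (st->nr_unqueued_dirty && st->nr_unqueued_dirty == st->nr_dirty)
		printf("wake flusher threads\n");
}

/* shrink_node()-style step: mark the lruvec congested and stall the
 * reclaimer until writeback makes progress */
static void maybe_throttle(const struct scan_stats *st)
{
	if (st->nr_dirty && st->nr_congested == st->nr_dirty) {
		lruvec_congested = true;
		printf("throttle reclaim (congested)\n");
	}
}

int main(void)
{
	struct scan_stats st = { 0 };

	/* one dirty, unqueued folio that had already been deferred once */
	account_folio(&st, true, false, true);
	maybe_wake_flushers(&st);
	maybe_throttle(&st);
	return 0;
}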