Date: Tue, 31 Mar 2026 17:29:55 +0800
From: Kairui Song
To: Baolin Wang
Cc: kasong@tencent.com, linux-mm@kvack.org, Andrew Morton,
	Axel Rasmussen, Yuanchu Xie, Wei Xu, Johannes Weiner,
	David Hildenbrand, Michal Hocko, Qi Zheng, Shakeel Butt,
	Lorenzo Stoakes, Barry Song, David Stevens, Chen Ridong,
	Leno Hou, Yafang Shao, Yu Zhao, Zicheng Wang, Kalesh Singh,
	Suren Baghdasaryan, Chris Li, Vernon Yang,
	linux-kernel@vger.kernel.org, Qi Zheng
Subject: Re: [PATCH v2 12/12] mm/vmscan: unify writeback reclaim statistic and throttling
X-Mailing-List: linux-kernel@vger.kernel.org
References: <20260329-mglru-reclaim-v2-0-b53a3678513c@tencent.com>
	<20260329-mglru-reclaim-v2-12-b53a3678513c@tencent.com>
	<052ae271-509c-42c3-877e-ac8822b314e5@linux.alibaba.com>
In-Reply-To: <052ae271-509c-42c3-877e-ac8822b314e5@linux.alibaba.com>

On Tue, Mar 31, 2026 at 05:24:39PM +0800, Baolin Wang wrote:
> 
> 
> On 3/29/26 3:52 AM, Kairui Song via B4 Relay wrote:
> > From: Kairui Song
> > 
> > Currently MGLRU and non-MGLRU handle the reclaim statistic and
> > writeback handling very differently, especially throttling.
> > Basically MGLRU just ignored the throttling part.
> > 
> > Let's just unify this part, use a helper to deduplicate the code
> > so both setups will share the same behavior.
> > Also remove the
> > folio_clear_reclaim in isolate_folio which was actively invalidating
> > the congestion control. PG_reclaim is now handled by shrink_folio_list,
> > keeping it in isolate_folio is not helpful.
> > 
> > Test using following reproducer using bash:
> > 
> > echo "Setup a slow device using dm delay"
> > dd if=/dev/zero of=/var/tmp/backing bs=1M count=2048
> > LOOP=$(losetup --show -f /var/tmp/backing)
> > mkfs.ext4 -q $LOOP
> > echo "0 $(blockdev --getsz $LOOP) delay $LOOP 0 0 $LOOP 0 1000" | \
> >     dmsetup create slow_dev
> > mkdir -p /mnt/slow && mount /dev/mapper/slow_dev /mnt/slow
> > 
> > echo "Start writeback pressure"
> > sync && echo 3 > /proc/sys/vm/drop_caches
> > mkdir /sys/fs/cgroup/test_wb
> > echo 128M > /sys/fs/cgroup/test_wb/memory.max
> > (echo $BASHPID > /sys/fs/cgroup/test_wb/cgroup.procs && \
> >     dd if=/dev/zero of=/mnt/slow/testfile bs=1M count=192)
> > 
> > echo "Clean up"
> > echo "0 $(blockdev --getsz $LOOP) error" | dmsetup load slow_dev
> > dmsetup resume slow_dev
> > umount -l /mnt/slow && sync
> > dmsetup remove slow_dev
> > 
> > Before this commit, `dd` will get OOM killed immediately if
> > MGLRU is enabled. Classic LRU is fine.
> > 
> > After this commit, congestion control is now effective and no more
> > spin on LRU or premature OOM.
> > 
> > Stress test on other workloads also looking good.
> > 
> > Suggested-by: Chen Ridong
> > Signed-off-by: Kairui Song
> > ---
> >  mm/vmscan.c | 93 +++++++++++++++++++++++++++----------------------------------
> >  1 file changed, 41 insertions(+), 52 deletions(-)
> > 
> > diff --git a/mm/vmscan.c b/mm/vmscan.c
> > index 1783da54ada1..83c8fdf8fdc4 100644
> > --- a/mm/vmscan.c
> > +++ b/mm/vmscan.c
> > @@ -1942,6 +1942,44 @@ static int current_may_throttle(void)
> >  	return !(current->flags & PF_LOCAL_THROTTLE);
> >  }
> > +static void handle_reclaim_writeback(unsigned long nr_taken,
> > +				     struct pglist_data *pgdat,
> > +				     struct scan_control *sc,
> > +				     struct reclaim_stat *stat)
> > +{
> > +	/*
> > +	 * If dirty folios are scanned that are not queued for IO, it
> > +	 * implies that flushers are not doing their job. This can
> > +	 * happen when memory pressure pushes dirty folios to the end of
> > +	 * the LRU before the dirty limits are breached and the dirty
> > +	 * data has expired. It can also happen when the proportion of
> > +	 * dirty folios grows not through writes but through memory
> > +	 * pressure reclaiming all the clean cache. And in some cases,
> > +	 * the flushers simply cannot keep up with the allocation
> > +	 * rate. Nudge the flusher threads in case they are asleep.
> > +	 */
> > +	if (stat->nr_unqueued_dirty == nr_taken && nr_taken) {
> > +		wakeup_flusher_threads(WB_REASON_VMSCAN);
> > +		/*
> > +		 * For cgroupv1 dirty throttling is achieved by waking up
> > +		 * the kernel flusher here and later waiting on folios
> > +		 * which are in writeback to finish (see shrink_folio_list()).
> > +		 *
> > +		 * Flusher may not be able to issue writeback quickly
> > +		 * enough for cgroupv1 writeback throttling to work
> > +		 * on a large system.
> > +		 */
> > +		if (!writeback_throttling_sane(sc))
> > +			reclaim_throttle(pgdat, VMSCAN_THROTTLE_WRITEBACK);
> > +	}
> > +
> > +	sc->nr.dirty += stat->nr_dirty;
> > +	sc->nr.congested += stat->nr_congested;
> > +	sc->nr.writeback += stat->nr_writeback;
> > +	sc->nr.immediate += stat->nr_immediate;
> > +	sc->nr.taken += nr_taken;
> > +}
> > +
> >  /*
> >   * shrink_inactive_list() is a helper for shrink_node(). It returns the number
> >   * of reclaimed pages
> > @@ -2005,39 +2043,7 @@ static unsigned long shrink_inactive_list(unsigned long nr_to_scan,
> >  	lruvec_lock_irq(lruvec);
> >  	lru_note_cost_unlock_irq(lruvec, file, stat.nr_pageout,
> >  				 nr_scanned - nr_reclaimed);
> > -
> > -	/*
> > -	 * If dirty folios are scanned that are not queued for IO, it
> > -	 * implies that flushers are not doing their job. This can
> > -	 * happen when memory pressure pushes dirty folios to the end of
> > -	 * the LRU before the dirty limits are breached and the dirty
> > -	 * data has expired. It can also happen when the proportion of
> > -	 * dirty folios grows not through writes but through memory
> > -	 * pressure reclaiming all the clean cache. And in some cases,
> > -	 * the flushers simply cannot keep up with the allocation
> > -	 * rate. Nudge the flusher threads in case they are asleep.
> > -	 */
> > -	if (stat.nr_unqueued_dirty == nr_taken) {
> > -		wakeup_flusher_threads(WB_REASON_VMSCAN);
> > -		/*
> > -		 * For cgroupv1 dirty throttling is achieved by waking up
> > -		 * the kernel flusher here and later waiting on folios
> > -		 * which are in writeback to finish (see shrink_folio_list()).
> > -		 *
> > -		 * Flusher may not be able to issue writeback quickly
> > -		 * enough for cgroupv1 writeback throttling to work
> > -		 * on a large system.
> > -		 */
> > -		if (!writeback_throttling_sane(sc))
> > -			reclaim_throttle(pgdat, VMSCAN_THROTTLE_WRITEBACK);
> > -	}
> > -
> > -	sc->nr.dirty += stat.nr_dirty;
> > -	sc->nr.congested += stat.nr_congested;
> > -	sc->nr.writeback += stat.nr_writeback;
> > -	sc->nr.immediate += stat.nr_immediate;
> > -	sc->nr.taken += nr_taken;
> > -
> > +	handle_reclaim_writeback(nr_taken, pgdat, sc, &stat);
> >  	trace_mm_vmscan_lru_shrink_inactive(pgdat->node_id,
> >  			nr_scanned, nr_reclaimed, &stat, sc->priority, file);
> >  	return nr_reclaimed;
> > @@ -4651,9 +4657,6 @@ static bool isolate_folio(struct lruvec *lruvec, struct folio *folio, struct sca
> >  	if (!folio_test_referenced(folio))
> >  		set_mask_bits(&folio->flags.f, LRU_REFS_MASK, 0);
> > -	/* for shrink_folio_list() */
> > -	folio_clear_reclaim(folio);
> 
> IMO, moving this change into patch 8 would make more sense. Otherwise LGTM.

Thanks for the review! I made it a separate patch so we can better
identify which part brought the performance gain, and patch 8 can keep
its Reviewed-by.

Patch 8 is still good without this: a few counters get updated with no
user, which is a bit wasteful, but harmless.