From mboxrd@z Thu Jan  1 00:00:00 1970
Date: Tue, 14 Jun 2022 01:16:46 -0600
From: Yu Zhao <yuzhao@google.com>
To: Andrew Morton
Cc: Andi Kleen, Aneesh Kumar, Catalin Marinas, Dave Hansen, Hillf Danton,
 Jens Axboe, Johannes Weiner, Jonathan Corbet, Linus Torvalds,
 Matthew Wilcox, Mel Gorman, Michael Larabel, Michal Hocko, Mike Rapoport,
 Peter Zijlstra, Tejun Heo, Vlastimil Babka, Will Deacon,
 linux-arm-kernel@lists.infradead.org, linux-doc@vger.kernel.org,
 linux-kernel@vger.kernel.org, linux-mm@kvack.org, x86@kernel.org,
 page-reclaim@google.com, Brian Geffon, Jan Alexander Steffens,
 Oleksandr Natalenko, Steven Barrett, Suleiman Souhlal, Daniel Byrne,
 Donald Carr, Holger Hoffstätte, Konstantin Kharlamov, Shuang Zhai,
 Sofia Trinh, Vaibhav Jain
Subject: [PATCH v12 09/14] mm: multi-gen LRU: optimize multiple memcgs
Message-Id: <20220614071650.206064-10-yuzhao@google.com>
In-Reply-To: <20220614071650.206064-1-yuzhao@google.com>
References: <20220614071650.206064-1-yuzhao@google.com>
Mime-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
List-ID: linux-doc@vger.kernel.org

When
multiple memcgs are available, it is possible to make better choices
based on generations and tiers and therefore improve the overall
performance under global memory pressure. This patch adds a rudimentary
optimization to select memcgs that can drop single-use unmapped clean
pages first. Doing so reduces the chance of going into the aging path or
swapping. These two decisions can be costly.

A typical example that benefits from this optimization is a server
running mixed types of workloads, e.g., heavy anon workload in one memcg
and heavy buffered I/O workload in the other.

Though this optimization can be applied to both kswapd and direct
reclaim, it is only added to kswapd to keep the patchset manageable.
Later improvements will cover the direct reclaim path.

Server benchmark results:
  Mixed workloads:
    fio (buffered I/O): +[19, 21]%
                IOPS         BW
      patch1-8: 1880k        7343MiB/s
      patch1-9: 2252k        8796MiB/s

    memcached (anon): +[119, 123]%
                Ops/sec      KB/sec
      patch1-8: 862768.65    33514.68
      patch1-9: 1911022.12   74234.54

  Mixed workloads:
    fio (buffered I/O): +[75, 77]%
                IOPS         BW
      5.19-rc1: 1279k        4996MiB/s
      patch1-9: 2252k        8796MiB/s

    memcached (anon): +[13, 15]%
                Ops/sec      KB/sec
      5.19-rc1: 1673524.04   65008.87
      patch1-9: 1911022.12   74234.54

  Configurations:
    (changes since patch 6)

    cat mixed.sh
    modprobe brd rd_nr=2 rd_size=56623104

    swapoff -a
    mkswap /dev/ram0
    swapon /dev/ram0

    mkfs.ext4 /dev/ram1
    mount -t ext4 /dev/ram1 /mnt

    memtier_benchmark -S /var/run/memcached/memcached.sock \
      -P memcache_binary -n allkeys --key-minimum=1 \
      --key-maximum=50000000 --key-pattern=P:P -c 1 -t 36 \
      --ratio 1:0 --pipeline 8 -d 2000

    fio -name=mglru --numjobs=36 --directory=/mnt --size=1408m \
      --buffered=1 --ioengine=io_uring --iodepth=128 \
      --iodepth_batch_submit=32 --iodepth_batch_complete=32 \
      --rw=randread --random_distribution=random --norandommap \
      --time_based --ramp_time=10m --runtime=90m --group_reporting &
    pid=$!
    sleep 200

    memtier_benchmark -S /var/run/memcached/memcached.sock \
      -P memcache_binary -n allkeys --key-minimum=1 \
      --key-maximum=50000000 --key-pattern=R:R -c 1 -t 36 \
      --ratio 0:1 --pipeline 8 --randomize --distinct-client-seed

    kill -INT $pid
    wait

Client benchmark results:
  no change (CONFIG_MEMCG=n)

Signed-off-by: Yu Zhao <yuzhao@google.com>
Acked-by: Brian Geffon
Acked-by: Jan Alexander Steffens (heftig)
Acked-by: Oleksandr Natalenko
Acked-by: Steven Barrett
Acked-by: Suleiman Souhlal
Tested-by: Daniel Byrne
Tested-by: Donald Carr
Tested-by: Holger Hoffstätte
Tested-by: Konstantin Kharlamov
Tested-by: Shuang Zhai
Tested-by: Sofia Trinh
Tested-by: Vaibhav Jain
---
 mm/vmscan.c | 55 ++++++++++++++++++++++++++++++++++++++++++++---------
 1 file changed, 46 insertions(+), 9 deletions(-)

diff --git a/mm/vmscan.c b/mm/vmscan.c
index b086105d485d..4746c4874795 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -129,6 +129,13 @@ struct scan_control {
 	/* Always discard instead of demoting to lower tier memory */
 	unsigned int no_demotion:1;
 
+#ifdef CONFIG_LRU_GEN
+	/* help make better choices when multiple memcgs are available */
+	unsigned int memcgs_need_aging:1;
+	unsigned int memcgs_need_swapping:1;
+	unsigned int memcgs_avoid_swapping:1;
+#endif
+
 	/* Allocation order */
 	s8 order;
 
@@ -4370,6 +4377,22 @@ static void lru_gen_age_node(struct pglist_data *pgdat, struct scan_control *sc)
 
 	VM_WARN_ON_ONCE(!current_is_kswapd());
 
+	/*
+	 * To reduce the chance of going into the aging path or swapping, which
+	 * can be costly, optimistically skip them unless their corresponding
+	 * flags were cleared in the eviction path. This improves the overall
+	 * performance when multiple memcgs are available.
+	 */
+	if (!sc->memcgs_need_aging) {
+		sc->memcgs_need_aging = true;
+		sc->memcgs_avoid_swapping = !sc->memcgs_need_swapping;
+		sc->memcgs_need_swapping = true;
+		return;
+	}
+
+	sc->memcgs_need_swapping = true;
+	sc->memcgs_avoid_swapping = true;
+
 	set_mm_walk(pgdat);
 
 	memcg = mem_cgroup_iter(NULL, NULL, NULL);
@@ -4775,7 +4798,8 @@ static int isolate_folios(struct lruvec *lruvec, struct scan_control *sc, int sw
 	return scanned;
 }
 
-static int evict_folios(struct lruvec *lruvec, struct scan_control *sc, int swappiness)
+static int evict_folios(struct lruvec *lruvec, struct scan_control *sc, int swappiness,
+			bool *need_swapping)
 {
 	int type;
 	int scanned;
@@ -4837,14 +4861,16 @@ static int evict_folios(struct lruvec *lruvec, struct scan_control *sc, int swap
 
 	sc->nr_reclaimed += reclaimed;
 
+	if (type == LRU_GEN_ANON && need_swapping)
+		*need_swapping = true;
+
 	return scanned;
 }
 
 static unsigned long get_nr_to_scan(struct lruvec *lruvec, struct scan_control *sc,
-				    bool can_swap, unsigned long reclaimed)
+				    bool can_swap, unsigned long reclaimed, bool *need_aging)
 {
 	int priority;
-	bool need_aging;
 	unsigned long nr_to_scan;
 	struct mem_cgroup *memcg = lruvec_memcg(lruvec);
 	DEFINE_MAX_SEQ(lruvec);
@@ -4854,7 +4880,7 @@ static unsigned long get_nr_to_scan(struct lruvec *lruvec, struct scan_control *
 	    (mem_cgroup_below_low(memcg) && !sc->memcg_low_reclaim))
 		return 0;
 
-	nr_to_scan = get_nr_evictable(lruvec, max_seq, min_seq, can_swap, &need_aging);
+	nr_to_scan = get_nr_evictable(lruvec, max_seq, min_seq, can_swap, need_aging);
 	if (!nr_to_scan)
 		return 0;
 
@@ -4870,7 +4896,7 @@ static unsigned long get_nr_to_scan(struct lruvec *lruvec, struct scan_control *
 	if (!nr_to_scan)
 		return 0;
 
-	if (!need_aging)
+	if (!*need_aging)
 		return nr_to_scan;
 
 	/* skip the aging path at the default priority */
@@ -4890,6 +4916,8 @@ static unsigned long get_nr_to_scan(struct lruvec *lruvec, struct scan_control *
 static void lru_gen_shrink_lruvec(struct lruvec *lruvec, struct scan_control *sc)
 {
 	struct blk_plug plug;
+	bool need_aging = false;
+	bool need_swapping = false;
 	unsigned long scanned = 0;
 	unsigned long reclaimed = sc->nr_reclaimed;
 
@@ -4911,21 +4939,30 @@ static void lru_gen_shrink_lruvec(struct lruvec *lruvec, struct scan_control *sc
 		else
 			swappiness = 0;
 
-		nr_to_scan = get_nr_to_scan(lruvec, sc, swappiness, reclaimed);
+		nr_to_scan = get_nr_to_scan(lruvec, sc, swappiness, reclaimed, &need_aging);
 		if (!nr_to_scan)
-			break;
+			goto done;
 
-		delta = evict_folios(lruvec, sc, swappiness);
+		delta = evict_folios(lruvec, sc, swappiness, &need_swapping);
 		if (!delta)
-			break;
+			goto done;
 
 		scanned += delta;
 		if (scanned >= nr_to_scan)
 			break;
 
+		if (sc->memcgs_avoid_swapping && swappiness < 200 && need_swapping)
+			break;
+
 		cond_resched();
 	}
 
	/* see the comment in lru_gen_age_node() */
+	if (!need_aging)
+		sc->memcgs_need_aging = false;
+	if (!need_swapping)
+		sc->memcgs_need_swapping = false;
+done:
 	clear_mm_walk();
 
 	blk_finish_plug(&plug);
-- 
2.36.1.476.g0c4daa206d-goog