From: Baolin Wang <baolin.wang@linux.alibaba.com>
To: akpm@linux-foundation.org
Cc: mgorman@techsingularity.net, hughd@google.com, vbabka@suse.cz,
    ying.huang@intel.com, ziy@nvidia.com, fengwei.yin@intel.com,
    baolin.wang@linux.alibaba.com, linux-mm@kvack.org,
    linux-kernel@vger.kernel.org
Subject: [PATCH v2] mm: migrate: record the mlocked page status to remove unnecessary lru drain
Date: Fri, 20 Oct 2023 11:33:47 +0800
When doing compaction, I found that lru_add_drain() is an obvious hotspot
when migrating pages. The distribution of this hotspot is as follows:
   - 18.75% compact_zone
      - 17.39% migrate_pages
         - 13.79% migrate_pages_batch
            - 11.66% migrate_folio_move
               - 7.02% lru_add_drain
                  + 7.02% lru_add_drain_cpu
               + 3.00% move_to_new_folio
                 1.23% rmap_walk
            + 1.92% migrate_folio_unmap
         + 3.20% migrate_pages_sync
      + 0.90% isolate_migratepages

The lru_add_drain() was added by commit c3096e6782b7 ("mm/migrate:
__unmap_and_move() push good newpage to LRU") to drain the newpage to the
LRU immediately, to help build up the correct newpage->mlock_count in
remove_migration_ptes() for mlocked pages.
However, if no mlocked pages are being migrated, we can avoid this lru
drain operation, especially in heavily concurrent scenarios. So we can
record the source pages' mlocked status in migrate_folio_unmap(), and only
drain the lru list when the mlocked status is set in migrate_folio_move().
In addition, the page is already isolated from the lru when migrating, so
checking the mlocked status via folio_test_mlocked() in
migrate_folio_unmap() is stable.

After this patch, the lru_add_drain() hotspot is gone:
   - 9.41% migrate_pages_batch
      - 6.15% migrate_folio_move
         - 3.64% move_to_new_folio
            + 1.80% migrate_folio_extra
            + 1.70% buffer_migrate_folio
         + 1.41% rmap_walk
         + 0.62% folio_add_lru
      + 3.07% migrate_folio_unmap

Meanwhile, the compaction latency shows some improvements when running
thpscale:
                                  base                   patched
Amean   fault-both-1       1131.22 (   0.00%)     1112.55 *   1.65%*
Amean   fault-both-3       2489.75 (   0.00%)     2324.15 *   6.65%*
Amean   fault-both-5       3257.37 (   0.00%)     3183.18 *   2.28%*
Amean   fault-both-7       4257.99 (   0.00%)     4079.04 *   4.20%*
Amean   fault-both-12      6614.02 (   0.00%)     6075.60 *   8.14%*
Amean   fault-both-18     10607.78 (   0.00%)     8978.86 *  15.36%*
Amean   fault-both-24     14911.65 (   0.00%)    11619.55 *  22.08%*
Amean   fault-both-30     14954.67 (   0.00%)    14925.66 *   0.19%*
Amean   fault-both-32     16654.87 (   0.00%)    15580.31 *   6.45%*

Signed-off-by: Baolin Wang <baolin.wang@linux.alibaba.com>
---
Changes from v1:
 - Use separate flags in __migrate_folio_record() to avoid packing flags
   in each call site, per Ying.
---
 mm/migrate.c | 47 +++++++++++++++++++++++++++++++++++------------
 1 file changed, 35 insertions(+), 12 deletions(-)

diff --git a/mm/migrate.c b/mm/migrate.c
index 125194f5af0f..fac96139dbba 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -1027,22 +1027,39 @@ union migration_ptr {
 	struct anon_vma *anon_vma;
 	struct address_space *mapping;
 };
+
+enum {
+	PAGE_WAS_MAPPED = 1 << 0,
+	PAGE_WAS_MLOCKED = 1 << 1,
+};
+
 static void __migrate_folio_record(struct folio *dst,
-				   unsigned long page_was_mapped,
+				   unsigned int page_was_mapped,
+				   unsigned int page_was_mlocked,
 				   struct anon_vma *anon_vma)
 {
 	union migration_ptr ptr = { .anon_vma = anon_vma };
+	unsigned long page_flags = 0;
+
+	if (page_was_mapped)
+		page_flags |= PAGE_WAS_MAPPED;
+	if (page_was_mlocked)
+		page_flags |= PAGE_WAS_MLOCKED;
 	dst->mapping = ptr.mapping;
-	dst->private = (void *)page_was_mapped;
+	dst->private = (void *)page_flags;
 }
 
 static void __migrate_folio_extract(struct folio *dst,
 				   int *page_was_mappedp,
+				   int *page_was_mlocked,
 				   struct anon_vma **anon_vmap)
 {
 	union migration_ptr ptr = { .mapping = dst->mapping };
+	unsigned long page_flags = (unsigned long)dst->private;
+
 	*anon_vmap = ptr.anon_vma;
-	*page_was_mappedp = (unsigned long)dst->private;
+	*page_was_mappedp = page_flags & PAGE_WAS_MAPPED ? 1 : 0;
+	*page_was_mlocked = page_flags & PAGE_WAS_MLOCKED ? 1 : 0;
 	dst->mapping = NULL;
 	dst->private = NULL;
 }
@@ -1103,7 +1120,7 @@ static int migrate_folio_unmap(new_folio_t get_new_folio,
 {
 	struct folio *dst;
 	int rc = -EAGAIN;
-	int page_was_mapped = 0;
+	int page_was_mapped = 0, page_was_mlocked = 0;
 	struct anon_vma *anon_vma = NULL;
 	bool is_lru = !__folio_test_movable(src);
 	bool locked = false;
@@ -1157,6 +1174,7 @@ static int migrate_folio_unmap(new_folio_t get_new_folio,
 		folio_lock(src);
 	}
 	locked = true;
+	page_was_mlocked = folio_test_mlocked(src);
 
 	if (folio_test_writeback(src)) {
 		/*
@@ -1206,7 +1224,8 @@ static int migrate_folio_unmap(new_folio_t get_new_folio,
 	dst_locked = true;
 
 	if (unlikely(!is_lru)) {
-		__migrate_folio_record(dst, page_was_mapped, anon_vma);
+		__migrate_folio_record(dst, page_was_mapped,
+				       page_was_mlocked, anon_vma);
 		return MIGRATEPAGE_UNMAP;
 	}
 
@@ -1236,7 +1255,8 @@ static int migrate_folio_unmap(new_folio_t get_new_folio,
 	}
 
 	if (!folio_mapped(src)) {
-		__migrate_folio_record(dst, page_was_mapped, anon_vma);
+		__migrate_folio_record(dst, page_was_mapped,
+				       page_was_mlocked, anon_vma);
 		return MIGRATEPAGE_UNMAP;
 	}
 
@@ -1261,12 +1281,13 @@ static int migrate_folio_move(free_folio_t put_new_folio, unsigned long private,
 			      struct list_head *ret)
 {
 	int rc;
-	int page_was_mapped = 0;
+	int page_was_mapped = 0, page_was_mlocked = 0;
 	struct anon_vma *anon_vma = NULL;
 	bool is_lru = !__folio_test_movable(src);
 	struct list_head *prev;
 
-	__migrate_folio_extract(dst, &page_was_mapped, &anon_vma);
+	__migrate_folio_extract(dst, &page_was_mapped,
+				&page_was_mlocked, &anon_vma);
 	prev = dst->lru.prev;
 	list_del(&dst->lru);
 
@@ -1287,7 +1308,7 @@ static int migrate_folio_move(free_folio_t put_new_folio, unsigned long private,
 	 * isolated from the unevictable LRU: but this case is the easiest.
 	 */
 	folio_add_lru(dst);
-	if (page_was_mapped)
+	if (page_was_mlocked)
 		lru_add_drain();
 
 	if (page_was_mapped)
@@ -1322,7 +1343,8 @@ static int migrate_folio_move(free_folio_t put_new_folio, unsigned long private,
 	 */
 	if (rc == -EAGAIN) {
 		list_add(&dst->lru, prev);
-		__migrate_folio_record(dst, page_was_mapped, anon_vma);
+		__migrate_folio_record(dst, page_was_mapped,
+				       page_was_mlocked, anon_vma);
 		return rc;
 	}
 
@@ -1799,10 +1821,11 @@ static int migrate_pages_batch(struct list_head *from,
 		dst = list_first_entry(&dst_folios, struct folio, lru);
 		dst2 = list_next_entry(dst, lru);
 		list_for_each_entry_safe(folio, folio2, &unmap_folios, lru) {
-			int page_was_mapped = 0;
+			int page_was_mapped = 0, page_was_mlocked = 0;
 			struct anon_vma *anon_vma = NULL;
 
-			__migrate_folio_extract(dst, &page_was_mapped, &anon_vma);
+			__migrate_folio_extract(dst, &page_was_mapped,
+						&page_was_mlocked, &anon_vma);
 			migrate_folio_undo_src(folio, page_was_mapped,
 					       anon_vma, true, ret_folios);
 			list_del(&dst->lru);
-- 
2.39.3