From: Kairui Song <ryncsn@gmail.com>
To: linux-mm@kvack.org
Cc: Andrew Morton, Hugh Dickins, Baolin Wang, Matthew Wilcox, Kemeng Shi,
	Chris Li, Nhat Pham, Baoquan He, Barry Song, linux-kernel@vger.kernel.org,
	Kairui Song
Subject: [PATCH v5 7/8] mm/shmem, swap: rework swap entry and index calculation for large swapin
Date: Thu, 10 Jul 2025 11:37:05 +0800
Message-ID: <20250710033706.71042-8-ryncsn@gmail.com>
X-Mailer: git-send-email 2.50.0
In-Reply-To: <20250710033706.71042-1-ryncsn@gmail.com>
References: <20250710033706.71042-1-ryncsn@gmail.com>
Reply-To: Kairui Song <ryncsn@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Kairui Song <ryncsn@gmail.com>

Instead of calculating the swap entry differently in different swapin
paths, calculate it early, before the swap cache lookup, and use that
value for both the lookup and the later swapin. After swapin has
brought in a folio, simply round the entry down to the size of the
folio.

This is simple and effective enough to verify the swap value. A folio's
swap entry is always aligned to its size. Any kind of parallel split or
race is acceptable, because the final shmem_add_to_page_cache ensures
that all entries covered by the folio are correct, so there will be no
data corruption.

This also prevents false positive cache lookups. If a shmem read
request's index points to the middle of a large swap entry, shmem
previously attempted the swap cache lookup using the large swap entry's
starting value (the first sub swap entry of this large entry). That
leads to false positive results when only the first few swap entries
are cached but the swap entry actually requested by the index is
uncached. This is not a rare event, as swap readahead always tries to
cache order 0 folios when possible.

This should not cause any increase in repeated faults: no matter how
the shmem mapping is split in parallel, as long as the mapping still
contains the right entries, the swapin will succeed.

The final object size and stack usage are also reduced due to
simplified code:

./scripts/bloat-o-meter mm/shmem.o.old mm/shmem.o
add/remove: 0/0 grow/shrink: 0/1 up/down: 0/-233 (-233)
Function                                     old     new   delta
shmem_swapin_folio                          4040    3807    -233
Total: Before=33152, After=32919, chg -0.70%

Stack usage (Before vs After):
mm/shmem.c:2277:12:shmem_swapin_folio   264     static
mm/shmem.c:2277:12:shmem_swapin_folio   256     static

While at it, also round down the index if the swap entry is rounded
down. The index is used either for folio reallocation or for confirming
the mapping content, and in either case it should be aligned with the
swap folio.
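As a rough illustration of the alignment arithmetic described above,
the sketch below mimics it in plain userspace C. It is not part of the
patch: round_down() is a simplified stand-in for the kernel macro, the
swap entry is reduced to a bare offset, and the order/index values are
made up.

#include <assert.h>
#include <stdio.h>

/* Simplified stand-in for the kernel's round_down(); y must be a power of two. */
#define round_down(x, y)	((x) & ~((y) - 1UL))

int main(void)
{
	unsigned long order = 4;		/* hypothetical order-4 (16-page) swap entry */
	unsigned long nr_pages = 1UL << order;
	unsigned long entry_start = 32;		/* swap offset of sub entry 0, size aligned */
	unsigned long index = 35;		/* faulting index, points mid-entry */

	/* Pick the sub entry that actually backs 'index' for the cache lookup. */
	unsigned long offset = index - round_down(index, nr_pages);
	unsigned long swap_off = entry_start + offset;

	/* After a large folio is swapped in, round both back down by folio size. */
	unsigned long folio_index = round_down(index, nr_pages);
	unsigned long folio_swap = round_down(swap_off, nr_pages);

	printf("lookup: index %lu -> sub entry offset %lu (swap offset %lu)\n",
	       index, offset, swap_off);
	printf("insert: index %lu, swap offset %lu\n", folio_index, folio_swap);

	assert(offset == 3 && swap_off == 35);
	assert(folio_index == 32 && folio_swap == 32);
	return 0;
}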
Signed-off-by: Kairui Song <ryncsn@gmail.com>
---
 mm/shmem.c | 66 ++++++++++++++++++++++++++----------------------
 1 file changed, 32 insertions(+), 34 deletions(-)

diff --git a/mm/shmem.c b/mm/shmem.c
index 80f5b8c73eb8..9c50607ac455 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -2265,7 +2265,7 @@ static int shmem_split_large_entry(struct inode *inode, pgoff_t index,
 	if (xas_error(&xas))
 		return xas_error(&xas);
 
-	return entry_order;
+	return 0;
 }
 
 /*
@@ -2286,7 +2286,7 @@ static int shmem_swapin_folio(struct inode *inode, pgoff_t index,
 	struct swap_info_struct *si;
 	struct folio *folio = NULL;
 	bool skip_swapcache = false;
-	int error, nr_pages, order, split_order;
+	int error, nr_pages, order;
 	pgoff_t offset;
 
 	VM_BUG_ON(!*foliop || !xa_is_value(*foliop));
@@ -2294,11 +2294,11 @@ static int shmem_swapin_folio(struct inode *inode, pgoff_t index,
 	swap = index_entry;
 	*foliop = NULL;
 
-	if (is_poisoned_swp_entry(swap))
+	if (is_poisoned_swp_entry(index_entry))
 		return -EIO;
 
-	si = get_swap_device(swap);
-	order = shmem_confirm_swap(mapping, index, swap);
+	si = get_swap_device(index_entry);
+	order = shmem_confirm_swap(mapping, index, index_entry);
 	if (unlikely(!si)) {
 		if (order < 0)
 			return -EEXIST;
@@ -2310,6 +2310,12 @@ static int shmem_swapin_folio(struct inode *inode, pgoff_t index,
 		return -EEXIST;
 	}
 
+	/* index may point to the middle of a large entry, get the sub entry */
+	if (order) {
+		offset = index - round_down(index, 1 << order);
+		swap = swp_entry(swp_type(swap), swp_offset(swap) + offset);
+	}
+
 	/* Look it up and read it in.. */
 	folio = swap_cache_get_folio(swap, NULL, 0);
 	if (!folio) {
@@ -2322,7 +2328,8 @@ static int shmem_swapin_folio(struct inode *inode, pgoff_t index,
 
 		if (data_race(si->flags & SWP_SYNCHRONOUS_IO)) {
 			/* Direct mTHP swapin skipping swap cache & readhaed */
-			folio = shmem_swap_alloc_folio(inode, vma, index, swap, order, gfp);
+			folio = shmem_swap_alloc_folio(inode, vma, index,
+						       index_entry, order, gfp);
 			if (IS_ERR(folio)) {
 				error = PTR_ERR(folio);
 				folio = NULL;
@@ -2330,16 +2337,7 @@ static int shmem_swapin_folio(struct inode *inode, pgoff_t index,
 			}
 			skip_swapcache = true;
 		} else {
-			/*
-			 * Cached swapin only supports order 0 folio, it is
-			 * necessary to recalculate the new swap entry based on
-			 * the offset, as the swapin index might be unalgined.
-			 */
-			if (order) {
-				offset = index - round_down(index, 1 << order);
-				swap = swp_entry(swp_type(swap), swp_offset(swap) + offset);
-			}
-
+			/* Cached swapin only supports order 0 folio */
 			folio = shmem_swapin_cluster(swap, gfp, info, index);
 			if (!folio) {
 				error = -ENOMEM;
@@ -2356,23 +2354,25 @@ static int shmem_swapin_folio(struct inode *inode, pgoff_t index,
 		 * large swap entries. In such cases, we should split the
 		 * large swap entry to prevent possible data corruption.
 		 */
-		split_order = shmem_split_large_entry(inode, index, index_entry, gfp);
-		if (split_order < 0) {
-			error = split_order;
+		error = shmem_split_large_entry(inode, index, index_entry, gfp);
+		if (error)
 			goto failed_nolock;
-		}
+	}
 
-		/*
-		 * If the large swap entry has already been split, it is
-		 * necessary to recalculate the new swap entry based on
-		 * the old order alignment.
-		 */
-		if (split_order > 0) {
-			offset = index - round_down(index, 1 << split_order);
-			swap = swp_entry(swp_type(swap), swp_offset(index_entry) + offset);
-		}
-	} else if (order < folio_order(folio)) {
-		swap.val = round_down(swap.val, 1 << folio_order(folio));
+	/*
+	 * If the folio is large, round down swap and index by folio size.
+	 * No matter what race occurs, the swap layer ensures we either get
+	 * a valid folio that has its swap entry aligned by size, or a
+	 * temporarily invalid one which we'll abort very soon and retry.
+	 *
+	 * shmem_add_to_page_cache ensures the whole range contains expected
+	 * entries and prevents any corruption, so any race split is fine
+	 * too, it will succeed as long as the entries are still there.
+	 */
+	nr_pages = folio_nr_pages(folio);
+	if (nr_pages > 1) {
+		swap.val = round_down(swap.val, nr_pages);
+		index = round_down(index, nr_pages);
 	}
 
 	/* We have to do this with folio locked to prevent races */
@@ -2387,7 +2387,6 @@ static int shmem_swapin_folio(struct inode *inode, pgoff_t index,
 		goto failed;
 	}
 	folio_wait_writeback(folio);
-	nr_pages = folio_nr_pages(folio);
 
 	/*
 	 * Some architectures may have to restore extra metadata to the
@@ -2401,8 +2400,7 @@ static int shmem_swapin_folio(struct inode *inode, pgoff_t index,
 		goto failed;
 	}
 
-	error = shmem_add_to_page_cache(folio, mapping,
-					round_down(index, nr_pages),
+	error = shmem_add_to_page_cache(folio, mapping, index,
 					swp_to_radix_entry(swap), gfp);
 	if (error)
 		goto failed;
-- 
2.50.0