Date: Thu, 14 May 2026 02:26:29 +0900
From: YoungJun Park <youngjun.park@lge.com>
To: kasong@tencent.com
Cc: linux-mm@kvack.org, Andrew Morton, David Hildenbrand, Zi Yan,
	Baolin Wang, Barry Song, Hugh Dickins, Chris Li, Kemeng Shi,
	Nhat Pham, Baoquan He, Johannes Weiner, Chengming Zhou,
	Roman Gushchin, Shakeel Butt,
	Muchun Song, Qi Zheng, linux-kernel@vger.kernel.org,
	cgroups@vger.kernel.org, Yosry Ahmed, Lorenzo Stoakes, Dev Jain,
	Lance Yang, Michal Hocko, Suren Baghdasaryan, Axel Rasmussen
Subject: Re: [PATCH v3 04/12] mm, swap: add support for stable large allocation in swap cache directly
References: <20260421-swap-table-p4-v3-0-2f23759a76bc@tencent.com>
 <20260421-swap-table-p4-v3-4-2f23759a76bc@tencent.com>
In-Reply-To: <20260421-swap-table-p4-v3-4-2f23759a76bc@tencent.com>

On Tue, Apr 21, 2026 at 02:16:48PM +0800, Kairui Song via B4 Relay wrote:

...

>  static struct folio *swap_cache_read_folio(swp_entry_t entry, gfp_t gfp,
>  				struct mempolicy *mpol, pgoff_t ilx,
>  				struct swap_iocb **plug, bool readahead)
>  {
> -	struct swap_info_struct *si = __swap_entry_to_info(entry);
>  	struct folio *folio;
>
>  	/* Check the swap cache again for readahead path. */
> @@ -594,16 +700,12 @@ static struct folio *swap_cache_read_folio(swp_entry_t entry, gfp_t gfp,
>  	if (folio)
>  		return folio;
>
> -	/* Skip allocation for unused and bad swap slot for readahead. */
> -	if (!swap_entry_swapped(si, entry))
> -		return NULL;
> -

Hello Kairui,

With the swap_entry_swapped() check gone, the swap_cache_get_folio() call
above the do-while is now just a duplicate of the loop's first iteration.
Might as well drop it here (along with the now-stale "again for readahead
path" comment).

Best regards,
Youngjun Park

>  	do {
>  		folio = swap_cache_get_folio(entry);
>  		if (folio)
>  			return folio;
>
> -		folio = swap_cache_alloc_folio(entry, gfp, mpol, ilx);
> +		folio = swap_cache_alloc_folio(entry, gfp, 0, NULL, mpol, ilx);
>  	} while (IS_ERR(folio) && PTR_ERR(folio) == -EEXIST);
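
That is, something like the below on top of this patch (untested, written
against the hunks quoted above, and assuming the pre-loop lookup has no
side effect beyond the cache check), leaving only the loop:

-	/* Check the swap cache again for readahead path. */
-	folio = swap_cache_get_folio(entry);
-	if (folio)
-		return folio;
-
 	do {
 		folio = swap_cache_get_folio(entry);
 		if (folio)
 			return folio;

 		folio = swap_cache_alloc_folio(entry, gfp, 0, NULL, mpol, ilx);
 	} while (IS_ERR(folio) && PTR_ERR(folio) == -EEXIST);

The first loop iteration performs exactly the lookup the deleted lines did,
so behavior should be unchanged for both the fault and readahead paths.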