From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
To: linux-mm@kvack.org
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>, hughd@google.com
Subject: [PATCH 15/59] mm/swap: Convert add_to_swap_cache() to take a folio
Date: Mon, 8 Aug 2022 20:33:43 +0100
Message-Id: <20220808193430.3378317-16-willy@infradead.org>
X-Mailer: git-send-email 2.37.1
In-Reply-To: <20220808193430.3378317-1-willy@infradead.org>
References: <20220808193430.3378317-1-willy@infradead.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

With all callers using folios, we can convert add_to_swap_cache()
to take a folio and use it throughout.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
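A note below the cut, so it stays out of the commit message: here is a
minimal before/after sketch of the calling convention this patch
changes. It is an illustration only, modelled on the add_to_swap()
call site in the diff, not an additional caller introduced by the
patch.

	/* Before: callers passed the folio's head page. */
	err = add_to_swap_cache(&folio->page, entry,
			__GFP_HIGH|__GFP_NOMEMALLOC|__GFP_NOWARN, NULL);

	/* After: callers pass the folio itself; the page count and
	 * order are derived internally via folio_nr_pages() and
	 * folio_order().
	 */
	err = add_to_swap_cache(folio, entry,
			__GFP_HIGH|__GFP_NOMEMALLOC|__GFP_NOWARN, NULL);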
 mm/shmem.c      |  2 +-
 mm/swap.h       |  4 ++--
 mm/swap_state.c | 34 +++++++++++++++++-----------------
 3 files changed, 20 insertions(+), 20 deletions(-)

diff --git a/mm/shmem.c b/mm/shmem.c
index eec32307984d..49c3f59b5b76 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -1406,7 +1406,7 @@ static int shmem_writepage(struct page *page, struct writeback_control *wbc)
 	if (list_empty(&info->swaplist))
 		list_add(&info->swaplist, &shmem_swaplist);
 
-	if (add_to_swap_cache(&folio->page, swap,
+	if (add_to_swap_cache(folio, swap,
 			__GFP_HIGH | __GFP_NOMEMALLOC | __GFP_NOWARN,
 			NULL) == 0) {
 		spin_lock_irq(&info->lock);
diff --git a/mm/swap.h b/mm/swap.h
index 17936e068c1c..0e023765e110 100644
--- a/mm/swap.h
+++ b/mm/swap.h
@@ -34,7 +34,7 @@ extern struct address_space *swapper_spaces[];
 void show_swap_cache_info(void);
 bool add_to_swap(struct folio *folio);
 void *get_shadow_from_swap_cache(swp_entry_t entry);
-int add_to_swap_cache(struct page *page, swp_entry_t entry,
+int add_to_swap_cache(struct folio *folio, swp_entry_t entry,
 		      gfp_t gfp, void **shadowp);
 void __delete_from_swap_cache(struct folio *folio,
 			      swp_entry_t entry, void *shadow);
@@ -124,7 +124,7 @@ static inline void *get_shadow_from_swap_cache(swp_entry_t entry)
 	return NULL;
 }
 
-static inline int add_to_swap_cache(struct page *page, swp_entry_t entry,
+static inline int add_to_swap_cache(struct folio *folio, swp_entry_t entry,
 					gfp_t gfp_mask, void **shadowp)
 {
 	return -1;
diff --git a/mm/swap_state.c b/mm/swap_state.c
index b1e181fc5268..ecf1accc2fb1 100644
--- a/mm/swap_state.c
+++ b/mm/swap_state.c
@@ -85,21 +85,21 @@ void *get_shadow_from_swap_cache(swp_entry_t entry)
  * add_to_swap_cache resembles filemap_add_folio on swapper_space,
  * but sets SwapCache flag and private instead of mapping and index.
  */
-int add_to_swap_cache(struct page *page, swp_entry_t entry,
+int add_to_swap_cache(struct folio *folio, swp_entry_t entry,
 			gfp_t gfp, void **shadowp)
 {
 	struct address_space *address_space = swap_address_space(entry);
 	pgoff_t idx = swp_offset(entry);
-	XA_STATE_ORDER(xas, &address_space->i_pages, idx, compound_order(page));
-	unsigned long i, nr = thp_nr_pages(page);
+	XA_STATE_ORDER(xas, &address_space->i_pages, idx, folio_order(folio));
+	unsigned long i, nr = folio_nr_pages(folio);
 	void *old;
 
-	VM_BUG_ON_PAGE(!PageLocked(page), page);
-	VM_BUG_ON_PAGE(PageSwapCache(page), page);
-	VM_BUG_ON_PAGE(!PageSwapBacked(page), page);
+	VM_BUG_ON_FOLIO(!folio_test_locked(folio), folio);
+	VM_BUG_ON_FOLIO(folio_test_swapcache(folio), folio);
+	VM_BUG_ON_FOLIO(!folio_test_swapbacked(folio), folio);
 
-	page_ref_add(page, nr);
-	SetPageSwapCache(page);
+	folio_ref_add(folio, nr);
+	folio_set_swapcache(folio);
 
 	do {
 		xas_lock_irq(&xas);
@@ -107,19 +107,19 @@ int add_to_swap_cache(struct page *page, swp_entry_t entry,
 		if (xas_error(&xas))
 			goto unlock;
 		for (i = 0; i < nr; i++) {
-			VM_BUG_ON_PAGE(xas.xa_index != idx + i, page);
+			VM_BUG_ON_FOLIO(xas.xa_index != idx + i, folio);
 			old = xas_load(&xas);
 			if (xa_is_value(old)) {
 				if (shadowp)
 					*shadowp = old;
 			}
-			set_page_private(page + i, entry.val + i);
-			xas_store(&xas, page);
+			set_page_private(folio_page(folio, i), entry.val + i);
+			xas_store(&xas, folio);
 			xas_next(&xas);
 		}
 		address_space->nrpages += nr;
-		__mod_node_page_state(page_pgdat(page), NR_FILE_PAGES, nr);
-		__mod_lruvec_page_state(page, NR_SWAPCACHE, nr);
+		__node_stat_mod_folio(folio, NR_FILE_PAGES, nr);
+		__lruvec_stat_mod_folio(folio, NR_SWAPCACHE, nr);
 unlock:
 		xas_unlock_irq(&xas);
 	} while (xas_nomem(&xas, gfp));
@@ -127,8 +127,8 @@ int add_to_swap_cache(struct page *page, swp_entry_t entry,
 	if (!xas_error(&xas))
 		return 0;
 
-	ClearPageSwapCache(page);
-	page_ref_sub(page, nr);
+	folio_clear_swapcache(folio);
+	folio_ref_sub(folio, nr);
 	return xas_error(&xas);
 }
 
@@ -194,7 +194,7 @@ bool add_to_swap(struct folio *folio)
 	/*
 	 * Add it to the swap cache.
 	 */
-	err = add_to_swap_cache(&folio->page, entry,
+	err = add_to_swap_cache(folio, entry,
 			__GFP_HIGH|__GFP_NOMEMALLOC|__GFP_NOWARN, NULL);
 	if (err)
 		/*
@@ -484,7 +484,7 @@ struct page *__read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
 		goto fail_unlock;
 
 	/* May fail (-ENOMEM) if XArray node allocation failed. */
-	if (add_to_swap_cache(&folio->page, entry, gfp_mask & GFP_RECLAIM_MASK, &shadow))
+	if (add_to_swap_cache(folio, entry, gfp_mask & GFP_RECLAIM_MASK, &shadow))
 		goto fail_unlock;
 
 	mem_cgroup_swapin_uncharge_swap(entry);
-- 
2.35.1