From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
To: linux-mm@kvack.org, cgroups@vger.kernel.org
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>,
Johannes Weiner <hannes@cmpxchg.org>,
Michal Hocko <mhocko@kernel.org>,
Vladimir Davydov <vdavydov.dev@gmail.com>
Subject: [PATCH v3 10/18] mm/memcg: Convert mem_cgroup_uncharge() to take a folio
Date: Wed, 30 Jun 2021 05:00:26 +0100
Message-ID: <20210630040034.1155892-11-willy@infradead.org>
In-Reply-To: <20210630040034.1155892-1-willy@infradead.org>
Convert all the callers to call page_folio().
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 include/linux/memcontrol.h |  4 ++--
 mm/filemap.c               |  2 +-
 mm/khugepaged.c            |  4 ++--
 mm/memcontrol.c            | 14 +++++++-------
 mm/memory-failure.c        |  2 +-
 mm/memremap.c              |  2 +-
 mm/page_alloc.c            |  2 +-
 mm/swap.c                  |  2 +-
 8 files changed, 16 insertions(+), 16 deletions(-)
diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index 90d48b0e3191..d6386a2b9d7a 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -710,7 +710,7 @@ int mem_cgroup_swapin_charge_page(struct page *page, struct mm_struct *mm,
gfp_t gfp, swp_entry_t entry);
void mem_cgroup_swapin_uncharge_swap(swp_entry_t entry);
-void mem_cgroup_uncharge(struct page *page);
+void mem_cgroup_uncharge(struct folio *folio);
void mem_cgroup_uncharge_list(struct list_head *page_list);
void mem_cgroup_migrate(struct page *oldpage, struct page *newpage);
@@ -1202,7 +1202,7 @@ static inline void mem_cgroup_swapin_uncharge_swap(swp_entry_t entry)
{
}
-static inline void mem_cgroup_uncharge(struct page *page)
+static inline void mem_cgroup_uncharge(struct folio *folio)
{
}
diff --git a/mm/filemap.c b/mm/filemap.c
index 9600bca84162..0008ada132c4 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -923,7 +923,7 @@ noinline int __add_to_page_cache_locked(struct page *page,
if (xas_error(&xas)) {
error = xas_error(&xas);
if (charged)
- mem_cgroup_uncharge(page);
+ mem_cgroup_uncharge(page_folio(page));
goto error;
}
diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index 0daa21fbdd71..988a230c7a41 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -1212,7 +1212,7 @@ static void collapse_huge_page(struct mm_struct *mm,
mmap_write_unlock(mm);
out_nolock:
if (!IS_ERR_OR_NULL(*hpage))
- mem_cgroup_uncharge(*hpage);
+ mem_cgroup_uncharge(page_folio(*hpage));
trace_mm_collapse_huge_page(mm, isolated, result);
return;
}
@@ -1963,7 +1963,7 @@ static void collapse_file(struct mm_struct *mm,
out:
VM_BUG_ON(!list_empty(&pagelist));
if (!IS_ERR_OR_NULL(*hpage))
- mem_cgroup_uncharge(*hpage);
+ mem_cgroup_uncharge(page_folio(*hpage));
/* TODO: tracepoints */
}
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 21b791935957..90a53f554371 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -6892,24 +6892,24 @@ static void uncharge_folio(struct folio *folio, struct uncharge_gather *ug)
}
/**
- * mem_cgroup_uncharge - uncharge a page
- * @page: page to uncharge
+ * mem_cgroup_uncharge - Uncharge a folio.
+ * @folio: Folio to uncharge.
*
- * Uncharge a page previously charged with mem_cgroup_charge().
+ * Uncharge a folio previously charged with mem_cgroup_charge().
*/
-void mem_cgroup_uncharge(struct page *page)
+void mem_cgroup_uncharge(struct folio *folio)
{
struct uncharge_gather ug;
if (mem_cgroup_disabled())
return;
- /* Don't touch page->lru of any random page, pre-check: */
- if (!page_memcg(page))
+ /* Don't touch folio->lru of any random folio, pre-check: */
+ if (!folio_memcg(folio))
return;
uncharge_gather_clear(&ug);
- uncharge_folio(page_folio(page), &ug);
+ uncharge_folio(folio, &ug);
uncharge_batch(&ug);
}
diff --git a/mm/memory-failure.c b/mm/memory-failure.c
index e5a1531f7f4e..7ada5959b5ad 100644
--- a/mm/memory-failure.c
+++ b/mm/memory-failure.c
@@ -750,7 +750,7 @@ static int delete_from_lru_cache(struct page *p)
* Poisoned page might never drop its ref count to 0 so we have
* to uncharge it manually from its memcg.
*/
- mem_cgroup_uncharge(p);
+ mem_cgroup_uncharge(page_folio(p));
/*
* drop the page count elevated by isolate_lru_page()
diff --git a/mm/memremap.c b/mm/memremap.c
index 15a074ffb8d7..6eac40f9f62a 100644
--- a/mm/memremap.c
+++ b/mm/memremap.c
@@ -508,7 +508,7 @@ void free_devmap_managed_page(struct page *page)
__ClearPageWaiters(page);
- mem_cgroup_uncharge(page);
+ mem_cgroup_uncharge(page_folio(page));
/*
* When a device_private page is freed, the page->mapping field
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 0817d88383d5..5a5fcd4f21a8 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -737,7 +737,7 @@ static inline void free_the_page(struct page *page, unsigned int order)
void free_compound_page(struct page *page)
{
- mem_cgroup_uncharge(page);
+ mem_cgroup_uncharge(page_folio(page));
free_the_page(page, compound_order(page));
}
diff --git a/mm/swap.c b/mm/swap.c
index 6954cfebab4f..8ba62a930370 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -94,7 +94,7 @@ static void __page_cache_release(struct page *page)
static void __put_single_page(struct page *page)
{
__page_cache_release(page);
- mem_cgroup_uncharge(page);
+ mem_cgroup_uncharge(page_folio(page));
free_unref_page(page, 0);
}
--
2.30.2