From: Dave Airlie <airlied@gmail.com>
To: dri-devel@lists.freedesktop.org, tj@kernel.org,
christian.koenig@amd.com, Johannes Weiner <hannes@cmpxchg.org>,
Michal Hocko <mhocko@kernel.org>,
Roman Gushchin <roman.gushchin@linux.dev>,
Shakeel Butt <shakeel.butt@linux.dev>,
Muchun Song <muchun.song@linux.dev>
Cc: cgroups@vger.kernel.org, Dave Chinner <david@fromorbit.com>,
Waiman Long <longman@redhat.com>,
simona@ffwll.ch
Subject: [PATCH 07/15] memcg: add support for GPU page counters. (v3)
Date: Tue, 2 Sep 2025 14:06:46 +1000
Message-ID: <20250902041024.2040450-8-airlied@gmail.com>
In-Reply-To: <20250902041024.2040450-1-airlied@gmail.com>
From: Dave Airlie <airlied@redhat.com>
This introduces two new statistics and three new memcontrol APIs for
dealing with GPU system memory allocations.
The stats correspond to the equivalent counters in the global vmstat:
the number of active GPU pages and the number of pages in pools that
can be reclaimed.
The first API charges an order of pages to an objcg, sets the objcg on
the pages as kmem does, and updates the active or reclaim statistic.
The second API uncharges a page from the obj cgroup it is currently
charged to.
The third API allows moving a page to/from reclaim and between obj
cgroups. When pages are added to the pool LRU, this just updates the
accounting. When pages are removed from a pool LRU, they may still be
charged to a parent objcg, so this allows them to be uncharged from
there and charged to a new child objcg.
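As a rough usage sketch (purely illustrative, not part of this patch;
the my_gpu_* helpers below are hypothetical), a driver-side page pool
could drive the three APIs roughly like this:

  static struct page *my_gpu_alloc(struct obj_cgroup *objcg, unsigned int order)
  {
	struct page *page = alloc_pages(GFP_KERNEL, order);

	if (!page)
		return NULL;

	/* Charge the pages to the objcg as active GPU memory. */
	if (!mem_cgroup_charge_gpu_page(objcg, page, order, GFP_KERNEL, false)) {
		__free_pages(page, order);
		return NULL;
	}
	return page;
  }

  static void my_gpu_return_to_pool(struct page *page, unsigned int order)
  {
	/* Keep the existing charge but account the pages as reclaimable. */
	mem_cgroup_move_gpu_page_reclaim(NULL, page, order, true);
  }

  static void my_gpu_free(struct page *page, unsigned int order, bool pooled)
  {
	/* Drop the charge from whichever counter currently holds it. */
	mem_cgroup_uncharge_gpu_page(page, order, pooled);
	__free_pages(page, order);
  }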
Signed-off-by: Dave Airlie <airlied@redhat.com>
---
v2: use memcg_node_stat_items
v3: fix null ptr dereference in uncharge
---
Documentation/admin-guide/cgroup-v2.rst | 6 ++
include/linux/memcontrol.h | 12 +++
mm/memcontrol.c | 107 ++++++++++++++++++++++++
3 files changed, 125 insertions(+)
diff --git a/Documentation/admin-guide/cgroup-v2.rst b/Documentation/admin-guide/cgroup-v2.rst
index 51c0bc4c2dc5..2b5f778fa00d 100644
--- a/Documentation/admin-guide/cgroup-v2.rst
+++ b/Documentation/admin-guide/cgroup-v2.rst
@@ -1551,6 +1551,12 @@ The following nested keys are defined.
vmalloc (npn)
Amount of memory used for vmap backed memory.
+ gpu_active (npn)
+ Amount of system memory in active use by GPU devices.
+
+ gpu_reclaim (npn)
+ Amount of reclaimable system memory cached for GPU devices.
+
shmem
Amount of cached filesystem data that is swap-backed,
such as tmpfs, shm segments, shared anonymous mmap()s
diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index 785173aa0739..d7cfb9925db5 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -1599,6 +1599,18 @@ struct sock;
bool mem_cgroup_charge_skmem(struct mem_cgroup *memcg, unsigned int nr_pages,
gfp_t gfp_mask);
void mem_cgroup_uncharge_skmem(struct mem_cgroup *memcg, unsigned int nr_pages);
+
+bool mem_cgroup_charge_gpu_page(struct obj_cgroup *objcg, struct page *page,
+ unsigned int order,
+ gfp_t gfp_mask, bool reclaim);
+void mem_cgroup_uncharge_gpu_page(struct page *page,
+ unsigned int order,
+ bool reclaim);
+bool mem_cgroup_move_gpu_page_reclaim(struct obj_cgroup *objcg,
+ struct page *page,
+ unsigned int order,
+ bool to_reclaim);
+
#ifdef CONFIG_MEMCG
extern struct static_key_false memcg_sockets_enabled_key;
#define mem_cgroup_sockets_enabled static_branch_unlikely(&memcg_sockets_enabled_key)
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 8dd7fbed5a94..3d637c7e10cf 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -329,6 +329,8 @@ static const unsigned int memcg_node_stat_items[] = {
#ifdef CONFIG_HUGETLB_PAGE
NR_HUGETLB,
#endif
+ NR_GPU_ACTIVE,
+ NR_GPU_RECLAIM,
};
static const unsigned int memcg_stat_items[] = {
@@ -1340,6 +1342,8 @@ static const struct memory_stat memory_stats[] = {
{ "percpu", MEMCG_PERCPU_B },
{ "sock", MEMCG_SOCK },
{ "vmalloc", MEMCG_VMALLOC },
+ { "gpu_active", NR_GPU_ACTIVE },
+ { "gpu_reclaim", NR_GPU_RECLAIM },
{ "shmem", NR_SHMEM },
#ifdef CONFIG_ZSWAP
{ "zswap", MEMCG_ZSWAP_B },
@@ -5064,6 +5068,109 @@ void mem_cgroup_uncharge_skmem(struct mem_cgroup *memcg, unsigned int nr_pages)
refill_stock(memcg, nr_pages);
}
+/**
+ * mem_cgroup_charge_gpu_page - charge a page to GPU memory tracking
+ * @objcg: objcg to charge, NULL charges root memcg
+ * @page: page to charge
+ * @order: page allocation order
+ * @gfp_mask: gfp mode
+ * @reclaim: charge the reclaim counter instead of the active one.
+ *
+ * Charge the order-sized @page to the objcg. Returns %true if the charge fits
+ * within @objcg's configured limit, %false if it does not.
+ */
+bool mem_cgroup_charge_gpu_page(struct obj_cgroup *objcg, struct page *page,
+ unsigned int order, gfp_t gfp_mask, bool reclaim)
+{
+ unsigned int nr_pages = 1 << order;
+ struct mem_cgroup *memcg = NULL;
+ struct lruvec *lruvec;
+ int ret;
+
+ if (objcg) {
+ memcg = get_mem_cgroup_from_objcg(objcg);
+
+ ret = try_charge_memcg(memcg, gfp_mask, nr_pages);
+ if (ret) {
+ mem_cgroup_put(memcg);
+ return false;
+ }
+
+ obj_cgroup_get(objcg);
+ page_set_objcg(page, objcg);
+ }
+
+ lruvec = mem_cgroup_lruvec(memcg, page_pgdat(page));
+ mod_lruvec_state(lruvec, reclaim ? NR_GPU_RECLAIM : NR_GPU_ACTIVE, nr_pages);
+
+ mem_cgroup_put(memcg);
+ return true;
+}
+EXPORT_SYMBOL_GPL(mem_cgroup_charge_gpu_page);
+
+/**
+ * mem_cgroup_uncharge_gpu_page - uncharge a page from GPU memory tracking
+ * @page: page to uncharge
+ * @order: order of the page allocation
+ * @reclaim: uncharge the reclaim counter instead of the active one.
+ */
+void mem_cgroup_uncharge_gpu_page(struct page *page,
+ unsigned int order, bool reclaim)
+{
+ struct obj_cgroup *objcg = page_objcg(page);
+ struct mem_cgroup *memcg;
+ struct lruvec *lruvec;
+ int nr_pages = 1 << order;
+
+ memcg = objcg ? get_mem_cgroup_from_objcg(objcg) : NULL;
+
+ lruvec = mem_cgroup_lruvec(memcg, page_pgdat(page));
+ mod_lruvec_state(lruvec, reclaim ? NR_GPU_RECLAIM : NR_GPU_ACTIVE, -nr_pages);
+
+ if (memcg && !mem_cgroup_is_root(memcg))
+ refill_stock(memcg, nr_pages);
+ page->memcg_data = 0;
+ obj_cgroup_put(objcg);
+ mem_cgroup_put(memcg);
+}
+EXPORT_SYMBOL_GPL(mem_cgroup_uncharge_gpu_page);
+
+/**
+ * mem_cgroup_move_gpu_page_reclaim - move a gpu page between active and reclaim
+ * @new_objcg: objcg to move the page to, NULL for a stats-only update.
+ * @order: order of the page allocation to move
+ * @to_reclaim: true moves the page into reclaim, false moves it back
+ */
+bool mem_cgroup_move_gpu_page_reclaim(struct obj_cgroup *new_objcg,
+ struct page *page,
+ unsigned int order,
+ bool to_reclaim)
+{
+ struct obj_cgroup *objcg = page_objcg(page);
+
+ if (!objcg)
+ return false;
+
+ if (!new_objcg || objcg == new_objcg) {
+ struct mem_cgroup *memcg = get_mem_cgroup_from_objcg(objcg);
+ struct lruvec *lruvec;
+ unsigned long flags;
+ int nr_pages = 1 << order;
+
+ lruvec = mem_cgroup_lruvec(memcg, page_pgdat(page));
+ local_irq_save(flags);
+ __mod_lruvec_state(lruvec, to_reclaim ? NR_GPU_RECLAIM : NR_GPU_ACTIVE, nr_pages);
+ __mod_lruvec_state(lruvec, to_reclaim ? NR_GPU_ACTIVE : NR_GPU_RECLAIM, -nr_pages);
+ local_irq_restore(flags);
+ mem_cgroup_put(memcg);
+ return true;
+ } else {
+ mem_cgroup_uncharge_gpu_page(page, order, true);
+ return mem_cgroup_charge_gpu_page(new_objcg, page, order, 0, false);
+ }
+}
+EXPORT_SYMBOL_GPL(mem_cgroup_move_gpu_page_reclaim);
+
static int __init cgroup_memory(char *s)
{
char *token;
--
2.50.1