Subject: [PATCH v4 14/16] memcg: Convert mem_cgroup_from_obj_folio() to mem_cgroup_from_obj_slab()
From: Matthew Wilcox (Oracle) <willy@infradead.org>
Date: 2025-11-13 0:09 UTC
To: Vlastimil Babka, Andrew Morton
Cc: Matthew Wilcox (Oracle), Christoph Lameter, David Rientjes, Roman Gushchin, Harry Yoo, linux-mm, Johannes Weiner, Michal Hocko, Shakeel Butt, Muchun Song, cgroups

In preparation for splitting struct slab from struct page and struct
folio, pass this function a pointer to a slab rather than a folio.
This means we can end up passing a NULL slab pointer to
mem_cgroup_from_obj_slab() if the pointer is not to a slab-allocated
page, and we handle that appropriately by returning NULL.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Michal Hocko <mhocko@kernel.org>
Cc: Roman Gushchin <roman.gushchin@linux.dev>
Cc: Shakeel Butt <shakeel.butt@linux.dev>
Cc: Muchun Song <muchun.song@linux.dev>
Cc: cgroups@vger.kernel.org
---
 mm/memcontrol.c | 36 +++++++++++++-----------------------
 1 file changed, 13 insertions(+), 23 deletions(-)

diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 025da46d9959..b239d8ad511a 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -2589,38 +2589,28 @@ static inline void mod_objcg_mlstate(struct obj_cgroup *objcg,
 }
 
 static __always_inline
-struct mem_cgroup *mem_cgroup_from_obj_folio(struct folio *folio, void *p)
+struct mem_cgroup *mem_cgroup_from_obj_slab(struct slab *slab, void *p)
 {
 	/*
 	 * Slab objects are accounted individually, not per-page.
 	 * Memcg membership data for each individual object is saved in
 	 * slab->obj_exts.
 	 */
-	if (folio_test_slab(folio)) {
-		struct slabobj_ext *obj_exts;
-		struct slab *slab;
-		unsigned int off;
-
-		slab = folio_slab(folio);
-		obj_exts = slab_obj_exts(slab);
-		if (!obj_exts)
-			return NULL;
+	struct slabobj_ext *obj_exts;
+	unsigned int off;
 
-		off = obj_to_index(slab->slab_cache, slab, p);
-		if (obj_exts[off].objcg)
-			return obj_cgroup_memcg(obj_exts[off].objcg);
+	if (!slab)
+		return NULL;
 
+	obj_exts = slab_obj_exts(slab);
+	if (!obj_exts)
 		return NULL;
-	}
 
-	/*
-	 * folio_memcg_check() is used here, because in theory we can encounter
-	 * a folio where the slab flag has been cleared already, but
-	 * slab->obj_exts has not been freed yet
-	 * folio_memcg_check() will guarantee that a proper memory
-	 * cgroup pointer or NULL will be returned.
-	 */
-	return folio_memcg_check(folio);
+	off = obj_to_index(slab->slab_cache, slab, p);
+	if (obj_exts[off].objcg)
+		return obj_cgroup_memcg(obj_exts[off].objcg);
+
+	return NULL;
 }
 
 /*
@@ -2637,7 +2627,7 @@ struct mem_cgroup *mem_cgroup_from_slab_obj(void *p)
 	if (mem_cgroup_disabled())
 		return NULL;
 
-	return mem_cgroup_from_obj_folio(virt_to_folio(p), p);
+	return mem_cgroup_from_obj_slab(virt_to_slab(p), p);
 }
 
 static struct obj_cgroup *__get_obj_cgroup_from_memcg(struct mem_cgroup *memcg)
-- 
2.47.2

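[Context for the NULL handling above: virt_to_slab() resolves an address to its folio and hands back NULL when that folio is not a slab, which is how a NULL slab pointer can reach the new helper at all. A sketch approximating the mm/slab.h definition of that helper; treat the exact body as illustrative rather than authoritative:]

	/* Sketch approximating the kernel's virt_to_slab() from mm/slab.h. */
	static inline struct slab *virt_to_slab(const void *addr)
	{
		struct folio *folio = virt_to_folio(addr);

		/* Non-slab memory (e.g. a plain page allocation) has no struct slab. */
		if (!folio_test_slab(folio))
			return NULL;

		return folio_slab(folio);
	}
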
Subject: Re: [PATCH v4 14/16] memcg: Convert mem_cgroup_from_obj_folio() to mem_cgroup_from_obj_slab()
From: Johannes Weiner
Date: 2025-11-13 16:14 UTC
To: Matthew Wilcox (Oracle)
Cc: Vlastimil Babka, Andrew Morton, Christoph Lameter, David Rientjes, Roman Gushchin, Harry Yoo, linux-mm, Michal Hocko, Shakeel Butt, Muchun Song, cgroups

On Thu, Nov 13, 2025 at 12:09:28AM +0000, Matthew Wilcox (Oracle) wrote:
> -	/*
> -	 * folio_memcg_check() is used here, because in theory we can encounter
> -	 * a folio where the slab flag has been cleared already, but
> -	 * slab->obj_exts has not been freed yet
> -	 * folio_memcg_check() will guarantee that a proper memory
> -	 * cgroup pointer or NULL will be returned.
> -	 */
> -	return folio_memcg_check(folio);
> +	off = obj_to_index(slab->slab_cache, slab, p);
> +	if (obj_exts[off].objcg)
> +		return obj_cgroup_memcg(obj_exts[off].objcg);
> +
> +	return NULL;
>  }
> 
>  /*
> @@ -2637,7 +2627,7 @@ struct mem_cgroup *mem_cgroup_from_slab_obj(void *p)
>  	if (mem_cgroup_disabled())
>  		return NULL;
> 
> -	return mem_cgroup_from_obj_folio(virt_to_folio(p), p);
> +	return mem_cgroup_from_obj_slab(virt_to_slab(p), p);

The name undoubtedly sucks, but there is a comment above this function
that this can be used on non-slab kernel pages as well.

E.g. !vmap kernel stack pages -> mod_lruvec_kmem_state -> mem_cgroup_from_obj_slab

How about:

	if ((slab = virt_to_slap(p)))
		return mem_cgroup_from_obj_slab(slab, p);
	return folio_memcg_check(virt_to_folio(p), p);

Subject: Re: [PATCH v4 14/16] memcg: Convert mem_cgroup_from_obj_folio() to mem_cgroup_from_obj_slab()
From: Vlastimil Babka
Date: 2025-11-13 16:28 UTC
To: Johannes Weiner, Matthew Wilcox (Oracle)
Cc: Andrew Morton, Christoph Lameter, David Rientjes, Roman Gushchin, Harry Yoo, linux-mm, Michal Hocko, Shakeel Butt, Muchun Song, cgroups

On 11/13/25 17:14, Johannes Weiner wrote:
> On Thu, Nov 13, 2025 at 12:09:28AM +0000, Matthew Wilcox (Oracle) wrote:
>> -	/*
>> -	 * folio_memcg_check() is used here, because in theory we can encounter
>> -	 * a folio where the slab flag has been cleared already, but
>> -	 * slab->obj_exts has not been freed yet
>> -	 * folio_memcg_check() will guarantee that a proper memory
>> -	 * cgroup pointer or NULL will be returned.
>> -	 */

I had only noticed this comment and thought it described something that
can no longer happen (maybe it was possible in the past), so deleting it
seemed fine.

>> -	return folio_memcg_check(folio);
>> +	off = obj_to_index(slab->slab_cache, slab, p);
>> +	if (obj_exts[off].objcg)
>> +		return obj_cgroup_memcg(obj_exts[off].objcg);
>> +
>> +	return NULL;
>>  }
>> 
>>  /*
>> @@ -2637,7 +2627,7 @@ struct mem_cgroup *mem_cgroup_from_slab_obj(void *p)
>>  	if (mem_cgroup_disabled())
>>  		return NULL;
>> 
>> -	return mem_cgroup_from_obj_folio(virt_to_folio(p), p);
>> +	return mem_cgroup_from_obj_slab(virt_to_slab(p), p);
> 
> The name undoubtedly sucks, but there is a comment above this function
> that this can be used on non-slab kernel pages as well.

Didn't notice this one.

> E.g. !vmap kernel stack pages -> mod_lruvec_kmem_state -> mem_cgroup_from_obj_slab
> 
> How about:
> 
> 	if ((slab = virt_to_slap(p)))
> 		return mem_cgroup_from_obj_slab(slab, p);
> 	return folio_memcg_check(virt_to_folio(p), p);

page_memcg_check() maybe instead? We shouldn't get a tail page here, no?

Subject: Re: [PATCH v4 14/16] memcg: Convert mem_cgroup_from_obj_folio() to mem_cgroup_from_obj_slab()
From: Shakeel Butt
Date: 2025-11-13 19:42 UTC
To: Vlastimil Babka
Cc: Johannes Weiner, Matthew Wilcox (Oracle), Andrew Morton, Christoph Lameter, David Rientjes, Roman Gushchin, Harry Yoo, linux-mm, Michal Hocko, Muchun Song, cgroups

On Thu, Nov 13, 2025 at 05:28:59PM +0100, Vlastimil Babka wrote:
> > E.g. !vmap kernel stack pages -> mod_lruvec_kmem_state -> mem_cgroup_from_obj_slab
> > 
> > How about:
> > 
> > 	if ((slab = virt_to_slap(p)))
> > 		return mem_cgroup_from_obj_slab(slab, p);
> > 	return folio_memcg_check(virt_to_folio(p), p);
> 
> page_memcg_check() maybe instead? We shouldn't get a tail page here, no?

Do you mean page_memcg_check(virt_to_page(p), p)?  But virt_to_page(p)
can return a tail page, right?

Subject: Re: [PATCH v4 14/16] memcg: Convert mem_cgroup_from_obj_folio() to mem_cgroup_from_obj_slab()
From: Matthew Wilcox
Date: 2025-11-13 20:33 UTC
To: Shakeel Butt
Cc: Vlastimil Babka, Johannes Weiner, Andrew Morton, Christoph Lameter, David Rientjes, Roman Gushchin, Harry Yoo, linux-mm, Michal Hocko, Muchun Song, cgroups

On Thu, Nov 13, 2025 at 11:42:01AM -0800, Shakeel Butt wrote:
> On Thu, Nov 13, 2025 at 05:28:59PM +0100, Vlastimil Babka wrote:
> > > E.g. !vmap kernel stack pages -> mod_lruvec_kmem_state -> mem_cgroup_from_obj_slab
> > > 
> > > How about:
> > > 
> > > 	if ((slab = virt_to_slap(p)))
> > > 		return mem_cgroup_from_obj_slab(slab, p);
> > > 	return folio_memcg_check(virt_to_folio(p), p);
> > 
> > page_memcg_check() maybe instead? We shouldn't get a tail page here, no?
> 
> Do you mean page_memcg_check(virt_to_page(p), p)?  But virt_to_page(p)
> can return a tail page, right?

Only if it's legitimate to call mod_lruvec_kmem_state() with "a pointer
to somewhere inside the object" rather than "a pointer to the object".

For example, it's legitimate to call copy_to_user() with a pointer
somewhere inside the object, so the usercopy code has to handle that
case.  But it's only legitimate to call kfree() with a pointer that's
at the start of an object, so kfree() can cheerfully BUG_ON()
PageTail(virt_to_page(p)).

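[To make the tail-page point concrete, a small illustrative sketch, not from the thread: the function name and the assumption of a compound (__GFP_COMP) allocation spanning at least two pages are made up for the example. For an interior pointer, virt_to_page() lands on a tail page while virt_to_folio() still resolves to the head, which is why folio_memcg_check() tolerates pointers into the middle of an object:]

	/*
	 * Illustration only: 'obj' is assumed to be the start of a compound
	 * (__GFP_COMP) kernel allocation of at least two pages.
	 */
	static void interior_pointer_example(void *obj)
	{
		void *inner = obj + PAGE_SIZE;		/* an address inside the object */
		struct page *page = virt_to_page(inner);
		struct folio *folio = virt_to_folio(inner);

		/* virt_to_page() of the interior address is a tail page here... */
		WARN_ON(!PageTail(page));
		/* ...while the folio is always the head, where memcg state lives. */
		WARN_ON(folio != page_folio(page));
	}
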
Subject: Re: [PATCH v4 14/16] memcg: Convert mem_cgroup_from_obj_folio() to mem_cgroup_from_obj_slab()
From: Shakeel Butt
Date: 2025-11-13 21:54 UTC
To: Matthew Wilcox
Cc: Vlastimil Babka, Johannes Weiner, Andrew Morton, Christoph Lameter, David Rientjes, Roman Gushchin, Harry Yoo, linux-mm, Michal Hocko, Muchun Song, cgroups

On Thu, Nov 13, 2025 at 08:33:50PM +0000, Matthew Wilcox wrote:
> On Thu, Nov 13, 2025 at 11:42:01AM -0800, Shakeel Butt wrote:
> > On Thu, Nov 13, 2025 at 05:28:59PM +0100, Vlastimil Babka wrote:
> > > > E.g. !vmap kernel stack pages -> mod_lruvec_kmem_state -> mem_cgroup_from_obj_slab
> > > > 
> > > > How about:
> > > > 
> > > > 	if ((slab = virt_to_slap(p)))
> > > > 		return mem_cgroup_from_obj_slab(slab, p);
> > > > 	return folio_memcg_check(virt_to_folio(p), p);
> > > 
> > > page_memcg_check() maybe instead? We shouldn't get a tail page here, no?
> > 
> > Do you mean page_memcg_check(virt_to_page(p), p)?  But virt_to_page(p)
> > can return a tail page, right?
> 
> Only if it's legitimate to call mod_lruvec_kmem_state() with "a pointer
> to somewhere inside the object" rather than "a pointer to the object".
> 
> For example, it's legitimate to call copy_to_user() with a pointer
> somewhere inside the object, so the usercopy code has to handle that
> case.  But it's only legitimate to call kfree() with a pointer that's
> at the start of an object, so kfree() can cheerfully BUG_ON()
> PageTail(virt_to_page(p)).

Yes, that makes sense.  Though for slab objects we allow passing any
address within the object (e.g. the address of the list_head for
list_lru) to mem_cgroup_from_slab_obj() (and thus
mod_lruvec_kmem_state()), while for normal kernel memory, using
virt_to_page() restricts callers to the starting address.  Anyway, I
just noticed this peculiarity; I'm not suggesting we do anything
differently.

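[The asymmetry noted here comes from how the slab path locates the owning object: the object index is the offset within the slab divided by the object size, so any interior address maps to the same obj_exts slot. A simplified sketch of that calculation; the kernel's obj_to_index() actually uses a precomputed reciprocal divide and KASAN tag stripping, and the function name below is made up:]

	/*
	 * Simplified stand-in for obj_to_index(): the division rounds down,
	 * so an interior pointer and the object's start yield the same index.
	 */
	static unsigned int obj_index_sketch(const struct kmem_cache *cache,
					     const struct slab *slab, const void *p)
	{
		const void *base = slab_address(slab);	/* base address of the slab */

		return (p - base) / cache->size;	/* cache->size includes metadata */
	}
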
Subject: Re: [PATCH v4 14/16] memcg: Convert mem_cgroup_from_obj_folio() to mem_cgroup_from_obj_slab()
From: Matthew Wilcox
Date: 2025-11-13 16:39 UTC
To: Johannes Weiner
Cc: Vlastimil Babka, Andrew Morton, Christoph Lameter, David Rientjes, Roman Gushchin, Harry Yoo, linux-mm, Michal Hocko, Shakeel Butt, Muchun Song, cgroups

On Thu, Nov 13, 2025 at 11:14:24AM -0500, Johannes Weiner wrote:
> On Thu, Nov 13, 2025 at 12:09:28AM +0000, Matthew Wilcox (Oracle) wrote:
> > -	/*
> > -	 * folio_memcg_check() is used here, because in theory we can encounter
> > -	 * a folio where the slab flag has been cleared already, but
> > -	 * slab->obj_exts has not been freed yet
> > -	 * folio_memcg_check() will guarantee that a proper memory
> > -	 * cgroup pointer or NULL will be returned.
> > -	 */
> > -	return folio_memcg_check(folio);
> > +	off = obj_to_index(slab->slab_cache, slab, p);
> > +	if (obj_exts[off].objcg)
> > +		return obj_cgroup_memcg(obj_exts[off].objcg);
> > +
> > +	return NULL;
> >  }
> > 
> >  /*
> > @@ -2637,7 +2627,7 @@ struct mem_cgroup *mem_cgroup_from_slab_obj(void *p)
> >  	if (mem_cgroup_disabled())
> >  		return NULL;
> > 
> > -	return mem_cgroup_from_obj_folio(virt_to_folio(p), p);
> > +	return mem_cgroup_from_obj_slab(virt_to_slab(p), p);
> 
> The name undoubtedly sucks, but there is a comment above this function
> that this can be used on non-slab kernel pages as well.

Oh, I see.  Usercopy calls this kind of thing a 'heap object', so
perhaps eventually rename this function to mem_cgroup_from_heap_obj()?

> E.g. !vmap kernel stack pages -> mod_lruvec_kmem_state -> mem_cgroup_from_obj_slab

That actually seems to be the only user ... and I wasn't testing on a
!VMAP_STACK build, so I wouldn't have caught it.

> How about:
> 
> 	if ((slab = virt_to_slap(p)))
> 		return mem_cgroup_from_obj_slab(slab, p);
> 	return folio_memcg_check(virt_to_folio(p), p);

Mild updates, here's my counteroffer:

commit 6ca8243530e4
Author: Matthew Wilcox (Oracle) <willy@infradead.org>
Date:   Thu Nov 13 11:30:59 2025 -0500

    fix-memcg-slab

diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 597673e272a9..a2e6f409c5e8 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -2599,9 +2599,6 @@ struct mem_cgroup *mem_cgroup_from_obj_slab(struct slab *slab, void *p)
 	struct slabobj_ext *obj_exts;
 	unsigned int off;
 
-	if (!slab)
-		return NULL;
-
 	obj_exts = slab_obj_exts(slab);
 	if (!obj_exts)
 		return NULL;
@@ -2624,10 +2621,15 @@ struct mem_cgroup *mem_cgroup_from_obj_slab(struct slab *slab, void *p)
 	 */
 struct mem_cgroup *mem_cgroup_from_slab_obj(void *p)
 {
+	struct slab *slab;
+
 	if (mem_cgroup_disabled())
 		return NULL;
 
-	return mem_cgroup_from_obj_slab(virt_to_slab(p), p);
+	slab = virt_to_slab(p);
+	if (slab)
+		return mem_cgroup_from_obj_slab(slab, p);
+	return folio_memcg_check(virt_to_folio(p));
 }
 
 static struct obj_cgroup *__get_obj_cgroup_from_memcg(struct mem_cgroup *memcg)

Subject: Re: [PATCH v4 14/16] memcg: Convert mem_cgroup_from_obj_folio() to mem_cgroup_from_obj_slab()
From: Johannes Weiner
Date: 2025-11-13 19:16 UTC
To: Matthew Wilcox
Cc: Vlastimil Babka, Andrew Morton, Christoph Lameter, David Rientjes, Roman Gushchin, Harry Yoo, linux-mm, Michal Hocko, Shakeel Butt, Muchun Song, cgroups

On Thu, Nov 13, 2025 at 04:39:41PM +0000, Matthew Wilcox wrote:
> --- a/mm/memcontrol.c
> +++ b/mm/memcontrol.c
> @@ -2599,9 +2599,6 @@ struct mem_cgroup *mem_cgroup_from_obj_slab(struct slab *slab, void *p)
>  	struct slabobj_ext *obj_exts;
>  	unsigned int off;
> 
> -	if (!slab)
> -		return NULL;
> -
>  	obj_exts = slab_obj_exts(slab);
>  	if (!obj_exts)
>  		return NULL;
> @@ -2624,10 +2621,15 @@ struct mem_cgroup *mem_cgroup_from_obj_slab(struct slab *slab, void *p)
>  	 */
>  struct mem_cgroup *mem_cgroup_from_slab_obj(void *p)
>  {
> +	struct slab *slab;
> +
>  	if (mem_cgroup_disabled())
>  		return NULL;
> 
> -	return mem_cgroup_from_obj_slab(virt_to_slab(p), p);
> +	slab = virt_to_slab(p);
> +	if (slab)
> +		return mem_cgroup_from_obj_slab(slab, p);
> +	return folio_memcg_check(virt_to_folio(p));

Looks good to me, thanks!

With that folded in, for the combined patch:

Acked-by: Johannes Weiner <hannes@cmpxchg.org>

Subject: Re: [PATCH v4 14/16] memcg: Convert mem_cgroup_from_obj_folio() to mem_cgroup_from_obj_slab()
From: Vlastimil Babka
Date: 2025-11-13 19:26 UTC
To: Johannes Weiner, Matthew Wilcox
Cc: Andrew Morton, Christoph Lameter, David Rientjes, Roman Gushchin, Harry Yoo, linux-mm, Michal Hocko, Shakeel Butt, Muchun Song, cgroups

On 11/13/25 20:16, Johannes Weiner wrote:
> On Thu, Nov 13, 2025 at 04:39:41PM +0000, Matthew Wilcox wrote:
>> --- a/mm/memcontrol.c
>> +++ b/mm/memcontrol.c
>> @@ -2599,9 +2599,6 @@ struct mem_cgroup *mem_cgroup_from_obj_slab(struct slab *slab, void *p)
>>  	struct slabobj_ext *obj_exts;
>>  	unsigned int off;
>> 
>> -	if (!slab)
>> -		return NULL;
>> -
>>  	obj_exts = slab_obj_exts(slab);
>>  	if (!obj_exts)
>>  		return NULL;
>> @@ -2624,10 +2621,15 @@ struct mem_cgroup *mem_cgroup_from_obj_slab(struct slab *slab, void *p)
>>  	 */
>>  struct mem_cgroup *mem_cgroup_from_slab_obj(void *p)
>>  {
>> +	struct slab *slab;
>> +
>>  	if (mem_cgroup_disabled())
>>  		return NULL;
>> 
>> -	return mem_cgroup_from_obj_slab(virt_to_slab(p), p);
>> +	slab = virt_to_slab(p);
>> +	if (slab)
>> +		return mem_cgroup_from_obj_slab(slab, p);
>> +	return folio_memcg_check(virt_to_folio(p));
> 
> Looks good to me, thanks!
> 
> With that folded in, for the combined patch:
> 
> Acked-by: Johannes Weiner <hannes@cmpxchg.org>

Thanks, folded.

Subject: Re: [PATCH v4 14/16] memcg: Convert mem_cgroup_from_obj_folio() to mem_cgroup_from_obj_slab()
From: Harry Yoo
Date: 2025-11-24 6:44 UTC
To: Matthew Wilcox
Cc: Johannes Weiner, Vlastimil Babka, Andrew Morton, Christoph Lameter, David Rientjes, Roman Gushchin, linux-mm, Michal Hocko, Shakeel Butt, Muchun Song, cgroups

On Thu, Nov 13, 2025 at 04:39:41PM +0000, Matthew Wilcox wrote:
> On Thu, Nov 13, 2025 at 11:14:24AM -0500, Johannes Weiner wrote:
> > On Thu, Nov 13, 2025 at 12:09:28AM +0000, Matthew Wilcox (Oracle) wrote:
> > > -	/*
> > > -	 * folio_memcg_check() is used here, because in theory we can encounter
> > > -	 * a folio where the slab flag has been cleared already, but
> > > -	 * slab->obj_exts has not been freed yet
> > > -	 * folio_memcg_check() will guarantee that a proper memory
> > > -	 * cgroup pointer or NULL will be returned.
> > > -	 */
> > > -	return folio_memcg_check(folio);
> > > +	off = obj_to_index(slab->slab_cache, slab, p);
> > > +	if (obj_exts[off].objcg)
> > > +		return obj_cgroup_memcg(obj_exts[off].objcg);
> > > +
> > > +	return NULL;
> > >  }
> > > 
> > >  /*
> > > @@ -2637,7 +2627,7 @@ struct mem_cgroup *mem_cgroup_from_slab_obj(void *p)
> > >  	if (mem_cgroup_disabled())
> > >  		return NULL;
> > > 
> > > -	return mem_cgroup_from_obj_folio(virt_to_folio(p), p);
> > > +	return mem_cgroup_from_obj_slab(virt_to_slab(p), p);
> > 
> > The name undoubtedly sucks, but there is a comment above this function
> > that this can be used on non-slab kernel pages as well.
> 
> Oh, I see.  Usercopy calls this kind of thing a 'heap object', so
> perhaps eventually rename this function to mem_cgroup_from_heap_obj()?
> 
> > E.g. !vmap kernel stack pages -> mod_lruvec_kmem_state -> mem_cgroup_from_obj_slab
> 
> That actually seems to be the only user ... and I wasn't testing on a
> !VMAP_STACK build, so I wouldn't have caught it.
> 
> > How about:
> > 
> > 	if ((slab = virt_to_slap(p)))
> > 		return mem_cgroup_from_obj_slab(slab, p);
> > 	return folio_memcg_check(virt_to_folio(p), p);
> 
> Mild updates, here's my counteroffer:
> 
> commit 6ca8243530e4
> Author: Matthew Wilcox (Oracle) <willy@infradead.org>
> Date:   Thu Nov 13 11:30:59 2025 -0500
> 
>     fix-memcg-slab

With this folded,

Reviewed-by: Harry Yoo <harry.yoo@oracle.com>

-- 
Cheers,
Harry / Hyeonggon

> diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> index 597673e272a9..a2e6f409c5e8 100644
> --- a/mm/memcontrol.c
> +++ b/mm/memcontrol.c
> @@ -2599,9 +2599,6 @@ struct mem_cgroup *mem_cgroup_from_obj_slab(struct slab *slab, void *p)
>  	struct slabobj_ext *obj_exts;
>  	unsigned int off;
> 
> -	if (!slab)
> -		return NULL;
> -
>  	obj_exts = slab_obj_exts(slab);
>  	if (!obj_exts)
>  		return NULL;
> @@ -2624,10 +2621,15 @@ struct mem_cgroup *mem_cgroup_from_obj_slab(struct slab *slab, void *p)
>  	 */
>  struct mem_cgroup *mem_cgroup_from_slab_obj(void *p)
>  {
> +	struct slab *slab;
> +
>  	if (mem_cgroup_disabled())
>  		return NULL;
> 
> -	return mem_cgroup_from_obj_slab(virt_to_slab(p), p);
> +	slab = virt_to_slab(p);
> +	if (slab)
> +		return mem_cgroup_from_obj_slab(slab, p);
> +	return folio_memcg_check(virt_to_folio(p));
>  }
> 
>  static struct obj_cgroup *__get_obj_cgroup_from_memcg(struct mem_cgroup *memcg)