From: Dave Airlie <airlied@gmail.com>
To: dri-devel@lists.freedesktop.org, linux-mm@kvack.org, Johannes Weiner, Christian Koenig
Cc: Dave Chinner, Kairui Song, Dave Airlie
Subject: [PATCH 11/15] ttm/pool: enable memcg tracking and shrinker. (v2)
Date: Tue, 22 Jul 2025 11:43:24 +1000
Message-ID: <20250722014942.1878844-12-airlied@gmail.com>
In-Reply-To: <20250722014942.1878844-1-airlied@gmail.com>
References: <20250722014942.1878844-1-airlied@gmail.com>

From: Dave Airlie

This enables all the backend code to use the list_lru in memcg mode and
sets the shrinker to be memcg aware. It also adds a lookup loop for the
case where pooled pages end up reparented to a higher memcg group, so
that a newer memcg can search for them there and take them back.

Signed-off-by: Dave Airlie
---
v2: just use the proper stats.
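Reviewer note: a minimal userspace sketch of the lookup order this patch
introduces — try the allocating cgroup's own pool first, then walk up
through its ancestors, which is how pages that were reparented after their
original cgroup went away can still be reused. The names here (toy_memcg,
toy_pool_take) are made up purely for illustration; the real implementation
is ttm_pool_type_take()/pool_lru_get_page() in the diff below.

/* Toy model only -- not kernel code. */
#include <stdio.h>

struct toy_memcg {
	const char *name;
	struct toy_memcg *parent;
	int pooled_pages;		/* pages sitting in this level's pool */
};

/* Take a pooled page from @cg, or from the nearest ancestor that has one. */
static struct toy_memcg *toy_pool_take(struct toy_memcg *cg)
{
	while (cg) {
		if (cg->pooled_pages > 0) {
			cg->pooled_pages--;
			return cg;	/* found at this level */
		}
		cg = cg->parent;	/* reparented pages live higher up */
	}
	return NULL;			/* fall back to the system allocator */
}

int main(void)
{
	struct toy_memcg root   = { "root",   NULL,    0 };
	struct toy_memcg parent = { "parent", &root,   2 };	/* reparented pages */
	struct toy_memcg app    = { "app",    &parent, 0 };	/* freshly started cgroup */
	struct toy_memcg *src   = toy_pool_take(&app);

	printf("page served from: %s\n", src ? src->name : "system allocator");
	return 0;
}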
---
 drivers/gpu/drm/ttm/ttm_pool.c | 127 ++++++++++++++++++++++++++-------
 mm/list_lru.c                  |   1 +
 2 files changed, 104 insertions(+), 24 deletions(-)

diff --git a/drivers/gpu/drm/ttm/ttm_pool.c b/drivers/gpu/drm/ttm/ttm_pool.c
index 2c9969de7517..1e6da2cc1f06 100644
--- a/drivers/gpu/drm/ttm/ttm_pool.c
+++ b/drivers/gpu/drm/ttm/ttm_pool.c
@@ -142,7 +142,9 @@ static int ttm_pool_nid(struct ttm_pool *pool) {
 }
 
 /* Allocate pages of size 1 << order with the given gfp_flags */
-static struct page *ttm_pool_alloc_page(struct ttm_pool *pool, gfp_t gfp_flags,
+static struct page *ttm_pool_alloc_page(struct ttm_pool *pool,
+					struct obj_cgroup *objcg,
+					gfp_t gfp_flags,
 					unsigned int order)
 {
 	unsigned long attr = DMA_ATTR_FORCE_CONTIGUOUS;
@@ -162,7 +164,10 @@ static struct page *ttm_pool_alloc_page(struct ttm_pool *pool, gfp_t gfp_flags,
 		p = alloc_pages_node(pool->nid, gfp_flags, order);
 		if (p) {
 			p->private = order;
-			mod_lruvec_page_state(p, NR_GPU_ACTIVE, 1 << order);
+			if (!mem_cgroup_charge_gpu_page(objcg, p, order, gfp_flags, false)) {
+				__free_pages(p, order);
+				return NULL;
+			}
 		}
 		return p;
 	}
@@ -213,8 +218,7 @@ static void ttm_pool_free_page(struct ttm_pool *pool, enum ttm_caching caching,
 #endif
 
 	if (!pool || !pool->use_dma_alloc) {
-		mod_lruvec_page_state(p, reclaim ? NR_GPU_RECLAIM : NR_GPU_ACTIVE,
-				      -(1 << order));
+		mem_cgroup_uncharge_gpu_page(p, order, reclaim);
 		__free_pages(p, order);
 		return;
 	}
@@ -301,12 +305,11 @@ static void ttm_pool_type_give(struct ttm_pool_type *pt, struct page *p)
 
 	INIT_LIST_HEAD(&p->lru);
 	rcu_read_lock();
-	list_lru_add(&pt->pages, &p->lru, nid, NULL);
+	list_lru_add(&pt->pages, &p->lru, nid, page_memcg_check(p));
 	rcu_read_unlock();
 
-	atomic_long_add(num_pages, &allocated_pages[nid]);	
-	mod_lruvec_page_state(p, NR_GPU_ACTIVE, -num_pages);
-	mod_lruvec_page_state(p, NR_GPU_RECLAIM, num_pages);
+	atomic_long_add(num_pages, &allocated_pages[nid]);
+	mem_cgroup_move_gpu_page_reclaim(NULL, p, pt->order, true);
 }
 
 static enum lru_status take_one_from_lru(struct list_head *item,
@@ -321,20 +324,56 @@ static enum lru_status take_one_from_lru(struct list_head *item,
 	return LRU_REMOVED;
 }
 
-/* Take pages from a specific pool_type, return NULL when nothing available */
-static struct page *ttm_pool_type_take(struct ttm_pool_type *pt, int nid)
+static int pool_lru_get_page(struct ttm_pool_type *pt, int nid,
+			     struct page **page_out,
+			     struct obj_cgroup *objcg,
+			     struct mem_cgroup *memcg)
 {
 	int ret;
 	struct page *p = NULL;
 	unsigned long nr_to_walk = 1;
+	unsigned int num_pages = 1 << pt->order;
 
-	ret = list_lru_walk_node(&pt->pages, nid, take_one_from_lru, (void *)&p, &nr_to_walk);
+	ret = list_lru_walk_one(&pt->pages, nid, memcg, take_one_from_lru, (void *)&p, &nr_to_walk);
 	if (ret == 1 && p) {
-		atomic_long_sub(1 << pt->order, &allocated_pages[nid]);
-		mod_lruvec_page_state(p, NR_GPU_ACTIVE, (1 << pt->order));
-		mod_lruvec_page_state(p, NR_GPU_RECLAIM, -(1 << pt->order));
+		atomic_long_sub(num_pages, &allocated_pages[nid]);
+
+		if (!mem_cgroup_move_gpu_page_reclaim(objcg, p, pt->order, false)) {
+			__free_pages(p, pt->order);
+			p = NULL;
+		}
 	}
-	return p;
+	*page_out = p;
+	return ret;
+}
+
+/* Take pages from a specific pool_type, return NULL when nothing available */
+static struct page *ttm_pool_type_take(struct ttm_pool_type *pt, int nid,
+				       struct obj_cgroup *orig_objcg)
+{
+	struct page *page_out = NULL;
+	int ret;
+	struct mem_cgroup *orig_memcg = orig_objcg ? get_mem_cgroup_from_objcg(orig_objcg) : NULL;
+	struct mem_cgroup *memcg = orig_memcg;
+
+	/*
+	 * Attempt to get a page from the current memcg, but if it hasn't got any in it's level,
+	 * go up to the parent and check there. This helps the scenario where multiple apps get
+	 * started into their own cgroup from a common parent and want to reuse the pools.
+	 */
+	while (!page_out) {
+		ret = pool_lru_get_page(pt, nid, &page_out, orig_objcg, memcg);
+		if (ret == 1)
+			break;
+		if (!memcg)
+			break;
+		memcg = parent_mem_cgroup(memcg);
+		if (!memcg)
+			break;
+	}
+
+	mem_cgroup_put(orig_memcg);
+	return page_out;
 }
 
 /* Initialize and add a pool type to the global shrinker list */
@@ -344,7 +383,7 @@ static void ttm_pool_type_init(struct ttm_pool_type *pt, struct ttm_pool *pool,
 	pt->pool = pool;
 	pt->caching = caching;
 	pt->order = order;
-	list_lru_init(&pt->pages);
+	list_lru_init_memcg(&pt->pages, mm_shrinker);
 
 	spin_lock(&shrinker_lock);
 	list_add_tail(&pt->shrinker_list, &shrinker_list);
@@ -387,6 +426,30 @@ static void ttm_pool_type_fini(struct ttm_pool_type *pt)
 	ttm_pool_dispose_list(pt, &dispose);
 }
 
+static int ttm_pool_check_objcg(struct obj_cgroup *objcg)
+{
+#ifdef CONFIG_MEMCG
+	int r = 0;
+	struct mem_cgroup *memcg;
+	if (!objcg)
+		return 0;
+
+	memcg = get_mem_cgroup_from_objcg(objcg);
+	for (unsigned i = 0; i < NR_PAGE_ORDERS; i++) {
+		r = memcg_list_lru_alloc(memcg, &global_write_combined[i].pages, GFP_KERNEL);
+		if (r) {
+			break;
+		}
+		r = memcg_list_lru_alloc(memcg, &global_uncached[i].pages, GFP_KERNEL);
+		if (r) {
+			break;
+		}
+	}
+	mem_cgroup_put(memcg);
+#endif
+	return 0;
+}
+
 /* Return the pool_type to use for the given caching and order */
 static struct ttm_pool_type *ttm_pool_select_type(struct ttm_pool *pool,
 						  enum ttm_caching caching,
@@ -416,7 +479,9 @@ static struct ttm_pool_type *ttm_pool_select_type(struct ttm_pool *pool,
 }
 
 /* Free pages using the per-node shrinker list */
-static unsigned int ttm_pool_shrink(int nid, unsigned long num_to_free)
+static unsigned int ttm_pool_shrink(int nid,
+				    struct mem_cgroup *memcg,
+				    unsigned long num_to_free)
 {
 	LIST_HEAD(dispose);
 	struct ttm_pool_type *pt;
@@ -428,7 +493,11 @@ static unsigned int ttm_pool_shrink(int nid, unsigned long num_to_free)
 	list_move_tail(&pt->shrinker_list, &shrinker_list);
 	spin_unlock(&shrinker_lock);
 
-	num_pages = list_lru_walk_node(&pt->pages, nid, pool_move_to_dispose_list, &dispose, &num_to_free);
+	if (!memcg) {
+		num_pages = list_lru_walk_node(&pt->pages, nid, pool_move_to_dispose_list, &dispose, &num_to_free);
+	} else {
+		num_pages = list_lru_walk_one(&pt->pages, nid, memcg, pool_move_to_dispose_list, &dispose, &num_to_free);
+	}
 	num_pages *= 1 << pt->order;
 
 	ttm_pool_dispose_list(pt, &dispose);
@@ -593,6 +662,7 @@ static int ttm_pool_restore_commit(struct ttm_pool_tt_restore *restore,
 			 */
 			ttm_pool_split_for_swap(restore->pool, p);
 			copy_highpage(restore->alloced_page + i, p);
+			p->memcg_data = 0;
 			__free_pages(p, 0);
 		}
 
@@ -754,6 +824,7 @@ static int __ttm_pool_alloc(struct ttm_pool *pool, struct ttm_tt *tt,
 	bool allow_pools;
 	struct page *p;
 	int r;
+	struct obj_cgroup *objcg = memcg_account ? tt->objcg : NULL;
 
 	WARN_ON(!alloc->remaining_pages || ttm_tt_is_populated(tt));
 	WARN_ON(alloc->dma_addr && !pool->dev);
@@ -771,6 +842,9 @@ static int __ttm_pool_alloc(struct ttm_pool *pool, struct ttm_tt *tt,
 
 	page_caching = tt->caching;
 	allow_pools = true;
+
+	ttm_pool_check_objcg(objcg);
+
 	for (order = ttm_pool_alloc_find_order(MAX_PAGE_ORDER, alloc);
 	     alloc->remaining_pages;
 	     order = ttm_pool_alloc_find_order(order, alloc)) {
@@ -780,7 +854,7 @@ static int __ttm_pool_alloc(struct ttm_pool *pool, struct ttm_tt *tt,
 		p = NULL;
 		pt = ttm_pool_select_type(pool, page_caching, order);
 		if (pt && allow_pools)
-			p = ttm_pool_type_take(pt, ttm_pool_nid(pool));
+			p = ttm_pool_type_take(pt, ttm_pool_nid(pool), objcg);
 
 		/*
 		 * If that fails or previously failed, allocate from system.
@@ -791,7 +865,7 @@ static int __ttm_pool_alloc(struct ttm_pool *pool, struct ttm_tt *tt,
 		if (!p) {
 			page_caching = ttm_cached;
 			allow_pools = false;
-			p = ttm_pool_alloc_page(pool, gfp_flags, order);
+			p = ttm_pool_alloc_page(pool, objcg, gfp_flags, order);
 		}
 		/* If that fails, lower the order if possible and retry. */
 		if (!p) {
@@ -935,7 +1009,7 @@ void ttm_pool_free(struct ttm_pool *pool, struct ttm_tt *tt)
 
 	while (atomic_long_read(&allocated_pages[nid]) > pool_node_limit[nid]) {
 		unsigned long diff = pool_node_limit[nid] - atomic_long_read(&allocated_pages[nid]);
-		ttm_pool_shrink(nid, diff);
+		ttm_pool_shrink(nid, NULL, diff);
 	}
 }
 EXPORT_SYMBOL(ttm_pool_free);
@@ -1055,6 +1129,7 @@ long ttm_pool_backup(struct ttm_pool *pool, struct ttm_tt *tt,
 			if (flags->purge) {
 				shrunken += num_pages;
 				page->private = 0;
+				page->memcg_data = 0;
 				__free_pages(page, order);
 				memset(tt->pages + i, 0,
 				       num_pages * sizeof(*tt->pages));
@@ -1191,10 +1266,14 @@ static unsigned long ttm_pool_shrinker_scan(struct shrinker *shrink,
 					    struct shrink_control *sc)
 {
 	unsigned long num_freed = 0;
+	int num_pools;
+	spin_lock(&shrinker_lock);
+	num_pools = list_count_nodes(&shrinker_list);
+	spin_unlock(&shrinker_lock);
 
 	do
-		num_freed += ttm_pool_shrink(sc->nid, sc->nr_to_scan);
-	while (num_freed < sc->nr_to_scan &&
+		num_freed += ttm_pool_shrink(sc->nid, sc->memcg, sc->nr_to_scan);
+	while (num_pools-- >= 0 && num_freed < sc->nr_to_scan &&
 	       atomic_long_read(&allocated_pages[sc->nid]));
 
 	sc->nr_scanned = num_freed;
@@ -1381,7 +1460,7 @@ int ttm_pool_mgr_init(unsigned long num_pages)
 	spin_lock_init(&shrinker_lock);
 	INIT_LIST_HEAD(&shrinker_list);
 
-	mm_shrinker = shrinker_alloc(SHRINKER_NUMA_AWARE, "drm-ttm_pool");
+	mm_shrinker = shrinker_alloc(SHRINKER_MEMCG_AWARE | SHRINKER_NUMA_AWARE, "drm-ttm_pool");
 	if (!mm_shrinker)
 		return -ENOMEM;
 
diff --git a/mm/list_lru.c b/mm/list_lru.c
index 315362e3df3d..6d0f277a9c57 100644
--- a/mm/list_lru.c
+++ b/mm/list_lru.c
@@ -558,6 +558,7 @@ int memcg_list_lru_alloc(struct mem_cgroup *memcg, struct list_lru *lru,
 
 	return xas_error(&xas);
 }
+EXPORT_SYMBOL_GPL(memcg_list_lru_alloc);
 #else
 static inline void memcg_init_list_lru(struct list_lru *lru, bool memcg_aware)
 {
-- 
2.49.0