From: Dave Airlie <airlied@gmail.com>
To: dri-devel@lists.freedesktop.org, linux-mm@kvack.org,
	Johannes Weiner, Christian Koenig
Cc: Dave Chinner, Kairui Song, Dave Airlie
Subject: [PATCH 04/17] ttm/pool: port to list_lru. (v2)
Date: Mon, 30 Jun 2025 14:49:23 +1000
Message-ID: <20250630045005.1337339-5-airlied@gmail.com>
In-Reply-To: <20250630045005.1337339-1-airlied@gmail.com>
References: <20250630045005.1337339-1-airlied@gmail.com>
MIME-Version: 1.0

From: Dave Airlie

This is an initial port of the TTM pools for write-combined and
uncached pages to use the list_lru. This makes the pools more NUMA
aware and avoids needing separate NUMA pools (a later commit enables
this).

Cc: Christian Koenig
Cc: Johannes Weiner
Cc: Dave Chinner
Signed-off-by: Dave Airlie
---
v2: drop the pt->lock; the lru list has its own lock, which is
sufficient. Rearrange the list isolates to fix bad locking orders.
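
For reviewers new to list_lru: the add/take pattern the patch adopts is
roughly the sketch below. This is illustrative only -- the demo_* names
are hypothetical and not part of the patch; the list_lru calls mirror
the ones used in the diff (<linux/list_lru.h>).

#include <linux/list_lru.h>
#include <linux/mm.h>

static struct list_lru demo_lru;

static int __init demo_init(void)
{
	return list_lru_init(&demo_lru);
}

/* isolate callback: detach the first item walked and hand it back */
static enum lru_status demo_take_one(struct list_head *item,
				     struct list_lru_one *list,
				     void *cb_arg)
{
	struct page **out = cb_arg;

	list_lru_isolate(list, item);
	*out = container_of(item, struct page, lru);
	return LRU_REMOVED;
}

/* give: insert on the node the page actually belongs to */
static void demo_give(struct page *p)
{
	INIT_LIST_HEAD(&p->lru);
	list_lru_add(&demo_lru, &p->lru, page_to_nid(p), NULL);
}

/* take: isolate at most one page from the given node, NULL when empty */
static struct page *demo_take(int nid)
{
	struct page *p = NULL;
	unsigned long nr_to_walk = 1;

	list_lru_walk_node(&demo_lru, nid, demo_take_one, &p, &nr_to_walk);
	return p;
}

The list_lru keeps one list (and one lock) per NUMA node internally,
which is what lets the pool drop pt->lock and become node-aware without
maintaining separate per-node pools.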
---
 drivers/gpu/drm/ttm/tests/ttm_device_test.c |  2 +-
 drivers/gpu/drm/ttm/tests/ttm_pool_test.c   | 32 ++++----
 drivers/gpu/drm/ttm/ttm_pool.c              | 83 +++++++++++++--------
 include/drm/ttm/ttm_pool.h                  |  6 +-
 4 files changed, 72 insertions(+), 51 deletions(-)

diff --git a/drivers/gpu/drm/ttm/tests/ttm_device_test.c b/drivers/gpu/drm/ttm/tests/ttm_device_test.c
index 1621903818e5..1f207fd222bc 100644
--- a/drivers/gpu/drm/ttm/tests/ttm_device_test.c
+++ b/drivers/gpu/drm/ttm/tests/ttm_device_test.c
@@ -183,7 +183,7 @@ static void ttm_device_init_pools(struct kunit *test)
 
 				if (params->use_dma_alloc)
 					KUNIT_ASSERT_FALSE(test,
-							   list_empty(&pt.pages));
+							   !list_lru_count(&pt.pages));
 			}
 		}
 	}
diff --git a/drivers/gpu/drm/ttm/tests/ttm_pool_test.c b/drivers/gpu/drm/ttm/tests/ttm_pool_test.c
index 8ade53371f72..39234a3e98c4 100644
--- a/drivers/gpu/drm/ttm/tests/ttm_pool_test.c
+++ b/drivers/gpu/drm/ttm/tests/ttm_pool_test.c
@@ -248,7 +248,7 @@ static void ttm_pool_alloc_order_caching_match(struct kunit *test)
 	pool = ttm_pool_pre_populated(test, size, caching);
 
 	pt = &pool->caching[caching].orders[order];
-	KUNIT_ASSERT_FALSE(test, list_empty(&pt->pages));
+	KUNIT_ASSERT_FALSE(test, !list_lru_count(&pt->pages));
 
 	tt = ttm_tt_kunit_init(test, 0, caching, size);
 	KUNIT_ASSERT_NOT_NULL(test, tt);
@@ -256,7 +256,7 @@ static void ttm_pool_alloc_order_caching_match(struct kunit *test)
 	err = ttm_pool_alloc(pool, tt, &simple_ctx);
 	KUNIT_ASSERT_EQ(test, err, 0);
 
-	KUNIT_ASSERT_TRUE(test, list_empty(&pt->pages));
+	KUNIT_ASSERT_TRUE(test, !list_lru_count(&pt->pages));
 
 	ttm_pool_free(pool, tt);
 	ttm_tt_fini(tt);
@@ -282,8 +282,8 @@ static void ttm_pool_alloc_caching_mismatch(struct kunit *test)
 	tt = ttm_tt_kunit_init(test, 0, tt_caching, size);
 	KUNIT_ASSERT_NOT_NULL(test, tt);
 
-	KUNIT_ASSERT_FALSE(test, list_empty(&pt_pool->pages));
-	KUNIT_ASSERT_TRUE(test, list_empty(&pt_tt->pages));
+	KUNIT_ASSERT_FALSE(test, !list_lru_count(&pt_pool->pages));
+	KUNIT_ASSERT_TRUE(test, !list_lru_count(&pt_tt->pages));
 
 	err = ttm_pool_alloc(pool, tt, &simple_ctx);
 	KUNIT_ASSERT_EQ(test, err, 0);
@@ -291,8 +291,8 @@ static void ttm_pool_alloc_caching_mismatch(struct kunit *test)
 	ttm_pool_free(pool, tt);
 	ttm_tt_fini(tt);
 
-	KUNIT_ASSERT_FALSE(test, list_empty(&pt_pool->pages));
-	KUNIT_ASSERT_FALSE(test, list_empty(&pt_tt->pages));
+	KUNIT_ASSERT_FALSE(test, !list_lru_count(&pt_pool->pages));
+	KUNIT_ASSERT_FALSE(test, !list_lru_count(&pt_tt->pages));
 
 	ttm_pool_fini(pool);
 }
@@ -316,8 +316,8 @@ static void ttm_pool_alloc_order_mismatch(struct kunit *test)
 	tt = ttm_tt_kunit_init(test, 0, caching, snd_size);
 	KUNIT_ASSERT_NOT_NULL(test, tt);
 
-	KUNIT_ASSERT_FALSE(test, list_empty(&pt_pool->pages));
-	KUNIT_ASSERT_TRUE(test, list_empty(&pt_tt->pages));
+	KUNIT_ASSERT_FALSE(test, !list_lru_count(&pt_pool->pages));
+	KUNIT_ASSERT_TRUE(test, !list_lru_count(&pt_tt->pages));
 
 	err = ttm_pool_alloc(pool, tt, &simple_ctx);
 	KUNIT_ASSERT_EQ(test, err, 0);
@@ -325,8 +325,8 @@ static void ttm_pool_alloc_order_mismatch(struct kunit *test)
 	ttm_pool_free(pool, tt);
 	ttm_tt_fini(tt);
 
-	KUNIT_ASSERT_FALSE(test, list_empty(&pt_pool->pages));
-	KUNIT_ASSERT_FALSE(test, list_empty(&pt_tt->pages));
+	KUNIT_ASSERT_FALSE(test, !list_lru_count(&pt_pool->pages));
+	KUNIT_ASSERT_FALSE(test, !list_lru_count(&pt_tt->pages));
 
 	ttm_pool_fini(pool);
 }
@@ -352,12 +352,12 @@ static void ttm_pool_free_dma_alloc(struct kunit *test)
 	ttm_pool_alloc(pool, tt, &simple_ctx);
 
 	pt = &pool->caching[caching].orders[order];
-	KUNIT_ASSERT_TRUE(test, list_empty(&pt->pages));
+	KUNIT_ASSERT_TRUE(test, !list_lru_count(&pt->pages));
 
 	ttm_pool_free(pool, tt);
 	ttm_tt_fini(tt);
 
-	KUNIT_ASSERT_FALSE(test, list_empty(&pt->pages));
+	KUNIT_ASSERT_FALSE(test, !list_lru_count(&pt->pages));
 
 	ttm_pool_fini(pool);
 }
@@ -383,12 +383,12 @@ static void ttm_pool_free_no_dma_alloc(struct kunit *test)
 	ttm_pool_alloc(pool, tt, &simple_ctx);
 
 	pt = &pool->caching[caching].orders[order];
-	KUNIT_ASSERT_TRUE(test, list_is_singular(&pt->pages));
+	KUNIT_ASSERT_TRUE(test, list_lru_count(&pt->pages) == 1);
 
 	ttm_pool_free(pool, tt);
 	ttm_tt_fini(tt);
 
-	KUNIT_ASSERT_TRUE(test, list_is_singular(&pt->pages));
+	KUNIT_ASSERT_TRUE(test, list_lru_count(&pt->pages) == 1);
 
 	ttm_pool_fini(pool);
 }
@@ -404,11 +404,11 @@ static void ttm_pool_fini_basic(struct kunit *test)
 	pool = ttm_pool_pre_populated(test, size, caching);
 	pt = &pool->caching[caching].orders[order];
 
-	KUNIT_ASSERT_FALSE(test, list_empty(&pt->pages));
+	KUNIT_ASSERT_FALSE(test, !list_lru_count(&pt->pages));
 
 	ttm_pool_fini(pool);
 
-	KUNIT_ASSERT_TRUE(test, list_empty(&pt->pages));
+	KUNIT_ASSERT_TRUE(test, !list_lru_count(&pt->pages));
 }
 
 static struct kunit_case ttm_pool_test_cases[] = {
diff --git a/drivers/gpu/drm/ttm/ttm_pool.c b/drivers/gpu/drm/ttm/ttm_pool.c
index 11a5777b4a85..4372f0cc4a57 100644
--- a/drivers/gpu/drm/ttm/ttm_pool.c
+++ b/drivers/gpu/drm/ttm/ttm_pool.c
@@ -291,7 +291,7 @@ static void ttm_pool_unmap(struct ttm_pool *pool, dma_addr_t dma_addr,
 static void ttm_pool_type_give(struct ttm_pool_type *pt, struct page *p)
 {
 	unsigned int i, num_pages = 1 << pt->order;
-	int nid = ttm_pool_nid(pt->pool);
+	int nid = page_to_nid(p);
 
 	for (i = 0; i < num_pages; ++i) {
 		if (PageHighMem(p))
@@ -300,31 +300,41 @@ static void ttm_pool_type_give(struct ttm_pool_type *pt, struct page *p)
 			clear_page(page_address(p + i));
 	}
 
-	spin_lock(&pt->lock);
-	list_add(&p->lru, &pt->pages);
-	spin_unlock(&pt->lock);
+	INIT_LIST_HEAD(&p->lru);
+	rcu_read_lock();
+	list_lru_add(&pt->pages, &p->lru, nid, NULL);
+	rcu_read_unlock();
 	atomic_long_add(1 << pt->order, &allocated_pages);
 
 	mod_node_page_state(NODE_DATA(nid), NR_GPU_ACTIVE, -num_pages);
 	mod_node_page_state(NODE_DATA(nid), NR_GPU_RECLAIM, num_pages);
 }
 
+static enum lru_status take_one_from_lru(struct list_head *item,
+					 struct list_lru_one *list,
+					 void *cb_arg)
+{
+	struct page **out_page = cb_arg;
+	struct page *p = container_of(item, struct page, lru);
+	list_lru_isolate(list, item);
+
+	*out_page = p;
+	return LRU_REMOVED;
+}
+
 /* Take pages from a specific pool_type, return NULL when nothing available */
-static struct page *ttm_pool_type_take(struct ttm_pool_type *pt)
+static struct page *ttm_pool_type_take(struct ttm_pool_type *pt, int nid)
 {
-	struct page *p;
-	int nid = ttm_pool_nid(pt->pool);
+	int ret;
+	struct page *p = NULL;
+	unsigned long nr_to_walk = 1;
 
-	spin_lock(&pt->lock);
-	p = list_first_entry_or_null(&pt->pages, typeof(*p), lru);
-	if (p) {
+	ret = list_lru_walk_node(&pt->pages, nid, take_one_from_lru, (void *)&p,
+				 &nr_to_walk);
+	if (ret == 1 && p) {
 		atomic_long_sub(1 << pt->order, &allocated_pages);
 		mod_node_page_state(NODE_DATA(nid), NR_GPU_ACTIVE, (1 << pt->order));
 		mod_node_page_state(NODE_DATA(nid), NR_GPU_RECLAIM, -(1 << pt->order));
-		list_del(&p->lru);
 	}
-	spin_unlock(&pt->lock);
-
 	return p;
 }
 
@@ -335,25 +345,47 @@ static void ttm_pool_type_init(struct ttm_pool_type *pt, struct ttm_pool *pool,
 	pt->pool = pool;
 	pt->caching = caching;
 	pt->order = order;
-	spin_lock_init(&pt->lock);
-	INIT_LIST_HEAD(&pt->pages);
+	list_lru_init(&pt->pages);
 
 	spin_lock(&shrinker_lock);
 	list_add_tail(&pt->shrinker_list, &shrinker_list);
 	spin_unlock(&shrinker_lock);
 }
 
+static enum lru_status pool_move_to_dispose_list(struct list_head *item,
+						 struct list_lru_one *list,
+						 void *cb_arg)
+{
+	struct list_head *dispose = cb_arg;
+
+	list_lru_isolate_move(list, item, dispose);
+
+	return LRU_REMOVED;
+}
+
+static void ttm_pool_dispose_list(struct ttm_pool_type *pt,
+				  struct list_head *dispose)
+{
+	while (!list_empty(dispose)) {
+		struct page *p;
+		p = list_first_entry(dispose, struct page, lru);
+		list_del_init(&p->lru);
+		atomic_long_sub(1 << pt->order, &allocated_pages);
+		ttm_pool_free_page(pt->pool, pt->caching, pt->order, p, true);
+	}
+}
+
 /* Remove a pool_type from the global shrinker list and free all pages */
 static void ttm_pool_type_fini(struct ttm_pool_type *pt)
 {
-	struct page *p;
+	LIST_HEAD(dispose);
 
 	spin_lock(&shrinker_lock);
 	list_del(&pt->shrinker_list);
 	spin_unlock(&shrinker_lock);
 
-	while ((p = ttm_pool_type_take(pt)))
-		ttm_pool_free_page(pt->pool, pt->caching, pt->order, p, true);
+	list_lru_walk(&pt->pages, pool_move_to_dispose_list, &dispose, LONG_MAX);
+	ttm_pool_dispose_list(pt, &dispose);
 }
 
 /* Return the pool_type to use for the given caching and order */
@@ -403,7 +435,7 @@ static unsigned int ttm_pool_shrink(void)
 	list_move_tail(&pt->shrinker_list, &shrinker_list);
 	spin_unlock(&shrinker_lock);
 
-	p = ttm_pool_type_take(pt);
+	p = ttm_pool_type_take(pt, ttm_pool_nid(pt->pool));
 	if (p) {
 		ttm_pool_free_page(pt->pool, pt->caching, pt->order, p, true);
 		num_pages = 1 << pt->order;
@@ -757,7 +789,7 @@ static int __ttm_pool_alloc(struct ttm_pool *pool, struct ttm_tt *tt,
 		p = NULL;
 		pt = ttm_pool_select_type(pool, page_caching, order);
 		if (pt && allow_pools)
-			p = ttm_pool_type_take(pt);
+			p = ttm_pool_type_take(pt, ttm_pool_nid(pool));
 		/*
 		 * If that fails or previously failed, allocate from system.
 		 * Note that this also disallows additional pool allocations using
@@ -1186,16 +1218,7 @@ static unsigned long ttm_pool_shrinker_count(struct shrinker *shrink,
 /* Count the number of pages available in a pool_type */
 static unsigned int ttm_pool_type_count(struct ttm_pool_type *pt)
 {
-	unsigned int count = 0;
-	struct page *p;
-
-	spin_lock(&pt->lock);
-	/* Only used for debugfs, the overhead doesn't matter */
-	list_for_each_entry(p, &pt->pages, lru)
-		++count;
-	spin_unlock(&pt->lock);
-
-	return count;
+	return list_lru_count(&pt->pages);
 }
 
 /* Print a nice header for the order */
diff --git a/include/drm/ttm/ttm_pool.h b/include/drm/ttm/ttm_pool.h
index 54cd34a6e4c0..df56527c4853 100644
--- a/include/drm/ttm/ttm_pool.h
+++ b/include/drm/ttm/ttm_pool.h
@@ -45,8 +45,7 @@ struct ttm_tt;
  * @order: the allocation order our pages have
  * @caching: the caching type our pages have
  * @shrinker_list: our place on the global shrinker list
- * @lock: protection of the page list
- * @pages: the list of pages in the pool
+ * @pages: the lru_list of pages in the pool
  */
 struct ttm_pool_type {
 	struct ttm_pool *pool;
@@ -55,8 +54,7 @@ struct ttm_pool_type {
 
 	struct list_head shrinker_list;
 
-	spinlock_t lock;
-	struct list_head pages;
+	struct list_lru pages;
 };
 
 /**
-- 
2.49.0