From: Dave Airlie
To: dri-devel@lists.freedesktop.org, linux-mm@kvack.org, Johannes Weiner, Christian Koenig
Cc: Dave Chinner, Kairui Song, Dave Airlie, Matthew Brost, Andrew Morton
Subject: [PATCH 02/17] drm/ttm: use gpu mm stats to track gpu memory allocations. (v2)
Date: Mon, 30 Jun 2025 14:49:21 +1000
Message-ID: <20250630045005.1337339-3-airlied@gmail.com>
In-Reply-To: <20250630045005.1337339-1-airlied@gmail.com>
References: <20250630045005.1337339-1-airlied@gmail.com>

From: Dave Airlie

This uses the newly introduced per-node GPU tracking stats to account
for GPU memory allocated via TTM and for reclaimable memory held in
the TTM page pools. These stats will be useful for exposing system
information, and later when memory cgroups are integrated.

Cc: Christian Koenig
Cc: Matthew Brost
Cc: Johannes Weiner
Cc: linux-mm@kvack.org
Cc: Andrew Morton
Signed-off-by: Dave Airlie
---
v2: add reclaim parameters and adjust the right counters.
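[ Illustration, not part of the patch: the accounting below reduces to
  adjusting one per-node counter at each page lifecycle transition. A
  minimal sketch of the pattern, assuming the NR_GPU_ACTIVE and
  NR_GPU_RECLAIM node_stat_item entries introduced earlier in this
  series; the helper names are hypothetical, the patch open-codes
  these calls at the relevant sites:

  #include <linux/mmzone.h>
  #include <linux/vmstat.h>

  /* allocation: charge the pages to the node's "active" GPU counter */
  static inline void gpu_stat_alloc(int nid, unsigned int order)
  {
  	mod_node_page_state(NODE_DATA(nid), NR_GPU_ACTIVE, 1 << order);
  }

  /* page parked in a pool: move the charge from active to reclaimable */
  static inline void gpu_stat_pool_give(int nid, unsigned int order)
  {
  	mod_node_page_state(NODE_DATA(nid), NR_GPU_ACTIVE, -(1 << order));
  	mod_node_page_state(NODE_DATA(nid), NR_GPU_RECLAIM, 1 << order);
  } ]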
---
 drivers/gpu/drm/ttm/ttm_pool.c | 34 ++++++++++++++++++++++++++++------
 1 file changed, 28 insertions(+), 6 deletions(-)

diff --git a/drivers/gpu/drm/ttm/ttm_pool.c b/drivers/gpu/drm/ttm/ttm_pool.c
index baf27c70a419..11a5777b4a85 100644
--- a/drivers/gpu/drm/ttm/ttm_pool.c
+++ b/drivers/gpu/drm/ttm/ttm_pool.c
@@ -131,6 +131,16 @@ static struct list_head shrinker_list;
 static struct shrinker *mm_shrinker;
 static DECLARE_RWSEM(pool_shrink_rwsem);
 
+/* helper to get a current valid node id from a pool */
+static int ttm_pool_nid(struct ttm_pool *pool) {
+	int nid = NUMA_NO_NODE;
+	if (pool)
+		nid = pool->nid;
+	if (nid == NUMA_NO_NODE)
+		nid = numa_node_id();
+	return nid;
+}
+
 /* Allocate pages of size 1 << order with the given gfp_flags */
 static struct page *ttm_pool_alloc_page(struct ttm_pool *pool, gfp_t gfp_flags,
 					unsigned int order)
@@ -150,8 +160,10 @@ static struct page *ttm_pool_alloc_page(struct ttm_pool *pool, gfp_t gfp_flags,
 
 	if (!pool->use_dma_alloc) {
 		p = alloc_pages_node(pool->nid, gfp_flags, order);
-		if (p)
+		if (p) {
 			p->private = order;
+			mod_node_page_state(NODE_DATA(ttm_pool_nid(pool)), NR_GPU_ACTIVE, (1 << order));
+		}
 		return p;
 	}
 
@@ -186,7 +198,7 @@ static struct page *ttm_pool_alloc_page(struct ttm_pool *pool, gfp_t gfp_flags,
 
 /* Reset the caching and pages of size 1 << order */
 static void ttm_pool_free_page(struct ttm_pool *pool, enum ttm_caching caching,
-			       unsigned int order, struct page *p)
+			       unsigned int order, struct page *p, bool reclaim)
 {
 	unsigned long attr = DMA_ATTR_FORCE_CONTIGUOUS;
 	struct ttm_pool_dma *dma;
@@ -201,6 +213,9 @@ static void ttm_pool_free_page(struct ttm_pool *pool, enum ttm_caching caching,
 #endif
 
 	if (!pool || !pool->use_dma_alloc) {
+		mod_node_page_state(NODE_DATA(ttm_pool_nid(pool)),
+				    reclaim ? NR_GPU_RECLAIM : NR_GPU_ACTIVE,
+				    -(1 << order));
 		__free_pages(p, order);
 		return;
 	}
@@ -276,6 +291,7 @@ static void ttm_pool_unmap(struct ttm_pool *pool, dma_addr_t dma_addr,
 static void ttm_pool_type_give(struct ttm_pool_type *pt, struct page *p)
 {
 	unsigned int i, num_pages = 1 << pt->order;
+	int nid = ttm_pool_nid(pt->pool);
 
 	for (i = 0; i < num_pages; ++i) {
 		if (PageHighMem(p))
@@ -288,17 +304,23 @@ static void ttm_pool_type_give(struct ttm_pool_type *pt, struct page *p)
 	list_add(&p->lru, &pt->pages);
 	spin_unlock(&pt->lock);
 	atomic_long_add(1 << pt->order, &allocated_pages);
+
+	mod_node_page_state(NODE_DATA(nid), NR_GPU_ACTIVE, -num_pages);
+	mod_node_page_state(NODE_DATA(nid), NR_GPU_RECLAIM, num_pages);
 }
 
 /* Take pages from a specific pool_type, return NULL when nothing available */
 static struct page *ttm_pool_type_take(struct ttm_pool_type *pt)
 {
 	struct page *p;
+	int nid = ttm_pool_nid(pt->pool);
 
 	spin_lock(&pt->lock);
 	p = list_first_entry_or_null(&pt->pages, typeof(*p), lru);
 	if (p) {
 		atomic_long_sub(1 << pt->order, &allocated_pages);
+		mod_node_page_state(NODE_DATA(nid), NR_GPU_ACTIVE, (1 << pt->order));
+		mod_node_page_state(NODE_DATA(nid), NR_GPU_RECLAIM, -(1 << pt->order));
 		list_del(&p->lru);
 	}
 	spin_unlock(&pt->lock);
@@ -331,7 +353,7 @@ static void ttm_pool_type_fini(struct ttm_pool_type *pt)
 	spin_unlock(&shrinker_lock);
 
 	while ((p = ttm_pool_type_take(pt)))
-		ttm_pool_free_page(pt->pool, pt->caching, pt->order, p);
+		ttm_pool_free_page(pt->pool, pt->caching, pt->order, p, true);
 }
 
 /* Return the pool_type to use for the given caching and order */
@@ -383,7 +405,7 @@ static unsigned int ttm_pool_shrink(void)
 
 	p = ttm_pool_type_take(pt);
 	if (p) {
-		ttm_pool_free_page(pt->pool, pt->caching, pt->order, p);
+		ttm_pool_free_page(pt->pool, pt->caching, pt->order, p, true);
 		num_pages = 1 << pt->order;
 	} else {
 		num_pages = 0;
@@ -475,7 +497,7 @@ static pgoff_t ttm_pool_unmap_and_free(struct ttm_pool *pool, struct page *page,
 	if (pt)
 		ttm_pool_type_give(pt, page);
 	else
-		ttm_pool_free_page(pool, caching, order, page);
+		ttm_pool_free_page(pool, caching, order, page, false);
 
 	return nr;
 }
@@ -780,7 +802,7 @@ static int __ttm_pool_alloc(struct ttm_pool *pool, struct ttm_tt *tt,
 	return 0;
 
 error_free_page:
-	ttm_pool_free_page(pool, page_caching, order, p);
+	ttm_pool_free_page(pool, page_caching, order, p, false);
 
 error_free_all:
 	if (tt->restore)
-- 
2.49.0
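[ Illustration, not part of the patch: how the reclaim flag keeps the
  two counters balanced, condensed from the hunks above:

  /*
   * alloc page:           NR_GPU_ACTIVE  += 1 << order
   * give page to pool:    NR_GPU_ACTIVE  -= 1 << order
   *                       NR_GPU_RECLAIM += 1 << order
   * take page from pool:  NR_GPU_RECLAIM -= 1 << order
   *                       NR_GPU_ACTIVE  += 1 << order
   * free, reclaim=true:   NR_GPU_RECLAIM -= 1 << order  (shrink/fini:
   *                       the page sat in a pool)
   * free, reclaim=false:  NR_GPU_ACTIVE  -= 1 << order  (error/direct
   *                       path: the page never entered a pool)
   */

  So every page is charged to exactly one of the two counters from
  allocation until it is finally freed. ]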