From mboxrd@z Thu Jan 1 00:00:00 1970
From: Matthew Brost
To: intel-xe@lists.freedesktop.org, dri-devel@lists.freedesktop.org
Cc: Andrew Morton, Dave Chinner, Qi Zheng, Roman Gushchin, Muchun Song,
 David Hildenbrand, Lorenzo Stoakes, "Liam R. Howlett", Vlastimil Babka,
 Mike Rapoport, Suren Baghdasaryan, Michal Hocko, Johannes Weiner,
 Shakeel Butt, Kairui Song, Barry Song, Axel Rasmussen, Yuanchu Xie,
 Wei Xu, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH v5 5/5] drm/xe: Make use of shrink_control::opportunistic_compaction hint
Date: Tue, 5 May 2026 20:33:00 -0700
Message-Id: <20260506033300.3534883-6-matthew.brost@intel.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20260506033300.3534883-1-matthew.brost@intel.com>
References: <20260506033300.3534883-1-matthew.brost@intel.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Xe/TTM backup reclaim can be extremely expensive under fragmentation
pressure, as reclaim may migrate or destroy actively used GPU working
sets even while the system still has substantial free memory available.

Under high-order opportunistic reclaim, repeatedly backing up GPU
memory can lead to reclaim/rebind ping-pong behavior in which active
GPU working sets are continuously torn down and reconstructed without
materially improving allocation success.

Use the new shrink_control::opportunistic_compaction hint to avoid Xe
backup reclaim during fragmentation-driven high-order reclaim attempts.
In this mode the shrinker skips advertising backup-backed reclaimable
memory and avoids initiating backup operations entirely. Order-0 and
non-opportunistic reclaim behavior remains unchanged, so Xe backup
reclaim still participates normally during genuine memory pressure.

Cc: Andrew Morton
Cc: Dave Chinner
Cc: Qi Zheng
Cc: Roman Gushchin
Cc: Muchun Song
Cc: David Hildenbrand
Cc: Lorenzo Stoakes
Cc: "Liam R. Howlett"
Cc: Vlastimil Babka
Cc: Mike Rapoport
Cc: Suren Baghdasaryan
Cc: Michal Hocko
Cc: Johannes Weiner
Cc: Shakeel Butt
Cc: Kairui Song
Cc: Barry Song
Cc: Axel Rasmussen
Cc: Yuanchu Xie
Cc: Wei Xu
Cc: linux-mm@kvack.org
Cc: linux-kernel@vger.kernel.org
Assisted-by: Claude:claude-opus-4.6
Signed-off-by: Matthew Brost
---
 drivers/gpu/drm/xe/xe_shrinker.c | 20 +++++++++++++++++---
 1 file changed, 17 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/xe/xe_shrinker.c b/drivers/gpu/drm/xe/xe_shrinker.c
index 83374cd57660..4646b0f5b82b 100644
--- a/drivers/gpu/drm/xe/xe_shrinker.c
+++ b/drivers/gpu/drm/xe/xe_shrinker.c
@@ -139,10 +139,17 @@ static unsigned long xe_shrinker_count(struct shrinker *shrink,
 				       struct shrink_control *sc)
 {
 	struct xe_shrinker *shrinker = to_xe_shrinker(shrink);
-	unsigned long num_pages;
+	unsigned long num_pages = 0;
 	bool can_backup = !!(sc->gfp_mask & __GFP_FS);
 
-	num_pages = ttm_backup_bytes_avail() >> PAGE_SHIFT;
+	/*
+	 * Skip accounting backup-able pages when this is an opportunistic
+	 * high-order pass: TTM backup work shrinks at native page granularity
+	 * and is unlikely to produce the contiguous block the caller wants,
+	 * so don't advertise it as reclaimable for this hint.
+	 */
+	if (!sc->order || !sc->opportunistic_compaction)
+		num_pages = ttm_backup_bytes_avail() >> PAGE_SHIFT;
 
 	read_lock(&shrinker->lock);
 	if (can_backup)
@@ -233,7 +240,14 @@ static unsigned long xe_shrinker_scan(struct shrinker *shrink, struct shrink_con
 	}
 	sc->nr_scanned = nr_scanned;
 
-	if (nr_scanned >= nr_to_scan || !can_backup)
+	/*
+	 * Stop after the purge pass for opportunistic high-order reclaim:
+	 * the subsequent backup/writeback pass works at native page order
+	 * and is unlikely to free a contiguous high-order block, so doing
+	 * it here would just churn working sets for no compaction benefit.
+	 */
+	if (nr_scanned >= nr_to_scan || !can_backup ||
+	    (sc->order && sc->opportunistic_compaction))
 		goto out;
 
 	/* If we didn't wake before, try to do it now if needed. */
-- 
2.34.1