From mboxrd@z Thu Jan 1 00:00:00 1970
From: Thomas Hellström
To: intel-xe@lists.freedesktop.org
Cc: Thomas Hellström, Alistair Popple, Ralph Campbell, Christoph Hellwig,
	Jason Gunthorpe, Leon Romanovsky, Andrew Morton, Matthew Brost,
	John Hubbard, linux-mm@kvack.org, dri-devel@lists.freedesktop.org,
	stable@vger.kernel.org
Subject: [PATCH v2] mm: Fix a hmm_range_fault() livelock / starvation problem
Date: Tue, 3 Feb 2026 11:45:32 +0100
Message-ID: <20260203104532.98534-1-thomas.hellstrom@linux.intel.com>
X-Mailer: git-send-email 2.52.0
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

If hmm_range_fault() fails a folio_trylock() in do_swap_page(), trying
to acquire the lock of a device-private folio for migration to RAM, the
function will spin until it succeeds in grabbing the lock.
However, if the process holding the lock depends on a work item
completing, and that work item is scheduled on the same CPU as the
spinning hmm_range_fault(), the work item may be starved and we end up
in a livelock / starvation situation that is never resolved. This can
happen, for example, if the process holding the device-private folio
lock is stuck in migrate_device_unmap()->lru_add_drain_all(). The
lru_add_drain_all() function requires a short work item to run on all
online CPUs before it completes.

The prerequisites for this to happen are:

a) Both zone device and system memory folios are considered in
   migrate_device_unmap(), so that there is a reason to call
   lru_add_drain_all() for a system memory folio while a folio lock is
   held on a zone device folio.

b) The zone device folio has an initial mapcount > 1, which causes at
   least one migration PTE entry insertion to be deferred to
   try_to_migrate(), which can happen after the call to
   lru_add_drain_all().

c) No preemption, or voluntary preemption only.

This all seems pretty unlikely to happen, but it is indeed hit by the
"xe_exec_system_allocator" igt test.

Resolve this by waiting for the folio to be unlocked if the
folio_trylock() fails in the do_swap_page() function.

Future code improvements might consider moving the lru_add_drain_all()
call in migrate_device_unmap() so that it is called *after* all pages
have migration entries inserted. That would also eliminate b) above.
v2:
- Instead of a cond_resched() in the hmm_range_fault() function,
  eliminate the problem by waiting for the folio to be unlocked in
  do_swap_page() (Alistair Popple, Andrew Morton)

Suggested-by: Alistair Popple
Fixes: 1afaeb8293c9 ("mm/migrate: Trylock device page in do_swap_page")
Cc: Ralph Campbell
Cc: Christoph Hellwig
Cc: Jason Gunthorpe
Cc: Leon Romanovsky
Cc: Andrew Morton
Cc: Matthew Brost
Cc: John Hubbard
Cc: Alistair Popple
Cc: linux-mm@kvack.org
Cc:
Signed-off-by: Thomas Hellström
Cc: # v6.15+
---
 mm/memory.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/mm/memory.c b/mm/memory.c
index da360a6eb8a4..ed20da5570d5 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -4684,7 +4684,8 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
 			unlock_page(vmf->page);
 			put_page(vmf->page);
 		} else {
-			pte_unmap_unlock(vmf->pte, vmf->ptl);
+			pte_unmap(vmf->pte);
+			migration_entry_wait_on_locked(entry, vmf->ptl);
 		}
 	} else if (softleaf_is_hwpoison(entry)) {
 		ret = VM_FAULT_HWPOISON;
--
2.52.0