From: Lance Yang
To: david@kernel.org
Cc: lance.yang@linux.dev, kartikey406@gmail.com, usama.arif@linux.dev,
	Liam.Howlett@oracle.com, ziy@nvidia.com,
	syzbot+a7067a757858ac8eb085@syzkaller.appspotmail.com,
	akpm@linux-foundation.org, baohua@kernel.org,
	baolin.wang@linux.alibaba.com, dev.jain@arm.com,
	linux-kernel@vger.kernel.org, linux-mm@kvack.org, ljs@kernel.org,
	npache@redhat.com, ryan.roberts@arm.com,
	syzkaller-bugs@googlegroups.com
Subject: Re: [syzbot] [mm?] WARNING in deferred_split_folio
Date: Wed, 1 Apr 2026 19:20:49 +0800
Message-Id: <20260401112049.18634-1-lance.yang@linux.dev>

On Wed, Apr 01, 2026 at 01:00:13PM +0200, David Hildenbrand (Arm) wrote:
>On 4/1/26 12:53, Lance Yang wrote:
>>
>> +Cc Deepanshu
>>
>> On Wed, Apr 01, 2026 at 12:16:43PM +0200, David Hildenbrand (Arm) wrote:
>>> On 4/1/26 10:59, Lance Yang wrote:
>>>>
>>>> >from another sharer can then remove some of those mappings and reach
>>>>
>>>> Perhaps the WARN is simply too strict there :)
>>>>
>>>> Migration already holds the folio lock on dst, while the competing
>>>> rmap-removal path runs under the page-table lock. So once
>>>> remove_migration_ptes(src, dst, 0) makes dst visible again, this race
>>>> looks hard to avoid.
>>>>
>>>> So maybe the simplest fix is just to drop the WARN in the
>>>> !partially_mapped path:
>>>>
>>>> ---8<---
>>>> Subject: [PATCH 1/1] mm/thp: avoid false warning in deferred_split_folio()
>>>>
>>>> From: Lance Yang
>>>>
>>>> migrate_folio_move() snapshots src_partially_mapped from src before
>>>> migration and later requeues dst after remove_migration_ptes(src, dst, 0).
>>>>
>>>> Once dst is visible again, a competing rmap-removal path can legally set
>>>> PG_partially_mapped before the migration path reaches
>>>> deferred_split_folio(dst, src_partially_mapped).
>>>>
>>>> Migration already holds the folio lock on dst, while the competing
>>>> rmap-removal path runs under the page-table lock. So once
>>>> remove_migration_ptes(src, dst, 0) makes dst visible again, this race
>>>> looks hard to avoid.
>>>>
>>>> So just drop the WARN in the !partially_mapped path and preserve an
>>>> already-set PG_partially_mapped bit.
>>>>
>>>> Link: https://lore.kernel.org/linux-mm/69ccb65b.050a0220.183828.003a.GAE@google.com/
>>>> Fixes: 8a8ca142a488 ("mm: migrate: requeue destination folio on deferred split queue")
>>>> Reported-by: syzbot+a7067a757858ac8eb085@syzkaller.appspotmail.com
>>>> Signed-off-by: Lance Yang
>>>> ---
>>>>  mm/huge_memory.c | 3 ---
>>>>  1 file changed, 3 deletions(-)
>>>>
>>>> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
>>>> index 745eb3d0d4a7..8ea8e293dc7c 100644
>>>> --- a/mm/huge_memory.c
>>>> +++ b/mm/huge_memory.c
>>>> @@ -4433,9 +4433,6 @@ void deferred_split_folio(struct folio *folio, bool partially_mapped)
>>>>  			mod_mthp_stat(folio_order(folio),
>>>>  				      MTHP_STAT_NR_ANON_PARTIALLY_MAPPED, 1);
>>>>  		}
>>>> -	} else {
>>>> -		/* partially mapped folios cannot become non-partially mapped */
>>>> -		VM_WARN_ON_FOLIO(folio_test_partially_mapped(folio), folio);
>>>>  	}
>>>
>>> Can't we simply move the setting before restoring migration ptes?
>>
>> Afraid not, it closes the remove_migration_ptes() ->
>> deferred_split_folio() race, but opens a new one with the shrinker, IIUC
>>
>> Once dst is on the deferred split queue, deferred_split_scan() can
>> pick it up immediately. The shrinker unconditionally dequeues every
>> folio it visits:
>>
>> 	list_del_init(&folio->_deferred_list); /* always */
>>
>> Then for a non-partially-mapped folio, if folio_trylock() fails
>> (dst is still locked by migration), it falls through to:
>>
>> next:
>> 	if (did_split || !folio_test_partially_mapped(folio))
>> 		continue; /* not requeued, dst silently lost */
>>
>> so it is *not* requeued.
>
>How is that different to the shrinker just trying to lock the folio
>before we unlock it and failing? The race already exists?

Ouch, you're right, I was wrong - the trylock drop is a pre-existing
issue, not caused by the reorder ;)

>
>To sort out that race a trylock must not result in the folio getting
>discarded.

Nice, LGTM! Given that the "trylock -> drop" behavior seems to exist
already today, do you think it's worth fixing that together with the
reorder?

>diff --git a/mm/huge_memory.c b/mm/huge_memory.c
>index ff9a42abd1b6..521989517cd1 100644
>--- a/mm/huge_memory.c
>+++ b/mm/huge_memory.c
>@@ -4558,7 +4558,7 @@ static unsigned long deferred_split_scan(struct shrinker *shrink,
> 			goto next;
> 		}
> 		if (!folio_trylock(folio))
>-			goto next;
>+			goto requeue;
> 		if (!split_folio(folio)) {
> 			did_split = true;
> 			if (underused)
>@@ -4569,6 +4569,7 @@ static unsigned long deferred_split_scan(struct shrinker *shrink,
> next:
> 		if (did_split || !folio_test_partially_mapped(folio))
> 			continue;
>+requeue:
> 		/*
> 		 * Only add back to the queue if folio is partially mapped.
> 		 * If thp_underused returns false, or if split_folio fails
>