From mboxrd@z Thu Jan  1 00:00:00 1970
From: Sasha Levin
To: stable@vger.kernel.org, kartikey406@gmail.com
Cc: syzbot+d8d4c31d40f868eaea30@syzkaller.appspotmail.com,
	Uladzislau Rezki,
	Hillf Danton,
	Andrew Morton,
	linux-mm@kvack.org
Subject: FAILED: Patch "mm/vmalloc: prevent RCU stalls in kasan_release_vmalloc_node" failed to apply to 6.12-stable tree
Date: Sat, 28 Feb 2026 20:24:04 -0500
Message-ID: <20260301012404.1680833-1-sashal@kernel.org>
X-Mailer: git-send-email 2.51.0
X-stable: review
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The patch below does not apply to the 6.12-stable tree. If someone wants
it applied there, or to any other stable or longterm tree, then please
email the backport, including the original git commit id, to .

Thanks,
Sasha

------------------ original commit in Linus's tree ------------------

>From 5747435e0fd474c24530ef1a6822f47e7d264b27 Mon Sep 17 00:00:00 2001
From: Deepanshu Kartikey
Date: Mon, 12 Jan 2026 16:06:12 +0530
Subject: [PATCH] mm/vmalloc: prevent RCU stalls in kasan_release_vmalloc_node

When CONFIG_PAGE_OWNER is enabled, freeing KASAN shadow pages during
vmalloc cleanup triggers expensive stack unwinding that acquires RCU
read locks.  Processing a large purge_list without rescheduling can
cause the task to hold the CPU for extended periods (10+ seconds),
leading to RCU stalls and potential OOM conditions.

The issue manifests in purge_vmap_node() -> kasan_release_vmalloc_node(),
where iterating through hundreds or thousands of vmap_area entries and
freeing their associated shadow pages causes:

  rcu: INFO: rcu_preempt detected stalls on CPUs/tasks:
  rcu: Tasks blocked on level-0 rcu_node (CPUs 0-1): P6229/1:b..l
  ...
  task:kworker/0:17 state:R running task stack:28840 pid:6229 ...
  kasan_release_vmalloc_node+0x1ba/0xad0 mm/vmalloc.c:2299
  purge_vmap_node+0x1ba/0xad0 mm/vmalloc.c:2299

Each call to kasan_release_vmalloc() can free many pages, and with
page_owner tracking, each free triggers save_stack(), which performs
stack unwinding under an RCU read lock.  Without yielding, this creates
an unbounded RCU critical section.

Add periodic cond_resched() calls within the loop to allow:
 - RCU grace periods to complete
 - other tasks to run
 - the scheduler to preempt when needed

The fix uses need_resched() for immediate response under load, with a
batch count of 32 as a guaranteed upper bound to prevent worst-case
stalls even under light load.

Link: https://lkml.kernel.org/r/20260112103612.627247-1-kartikey406@gmail.com
Signed-off-by: Deepanshu Kartikey
Reported-by: syzbot+d8d4c31d40f868eaea30@syzkaller.appspotmail.com
Closes: https://syzkaller.appspot.com/bug?extid=d8d4c31d40f868eaea30
Link: https://lore.kernel.org/all/20260112084723.622910-1-kartikey406@gmail.com/T/ [v1]
Suggested-by: Uladzislau Rezki
Reviewed-by: Uladzislau Rezki (Sony)
Cc: Hillf Danton
Cc:
Signed-off-by: Andrew Morton
---
 mm/vmalloc.c | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 32d6ee92d4ff8..ca4c653286870 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -2273,11 +2273,14 @@ decay_va_pool_node(struct vmap_node *vn, bool full_decay)
 	reclaim_list_global(&decay_list);
 }
 
+#define KASAN_RELEASE_BATCH_SIZE	32
+
 static void
 kasan_release_vmalloc_node(struct vmap_node *vn)
 {
 	struct vmap_area *va;
 	unsigned long start, end;
+	unsigned int batch_count = 0;
 
 	start = list_first_entry(&vn->purge_list, struct vmap_area, list)->va_start;
 	end = list_last_entry(&vn->purge_list, struct vmap_area, list)->va_end;
@@ -2287,6 +2290,11 @@ kasan_release_vmalloc_node(struct vmap_node *vn)
 		kasan_release_vmalloc(va->va_start, va->va_end,
 				      va->va_start, va->va_end,
 				      KASAN_VMALLOC_PAGE_RANGE);
+
+		if (need_resched() || (++batch_count >= KASAN_RELEASE_BATCH_SIZE)) {
+			cond_resched();
+			batch_count = 0;
+		}
 	}
 
 	kasan_release_vmalloc(start, end, start, end, KASAN_VMALLOC_TLB_FLUSH);
-- 
2.51.0