From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Mon, 23 Feb 2026 19:17:32 -0800 (PST)
In-Reply-To: <69984159.050a0220.21cd75.01bb.GAE@google.com>
Message-ID: <699d184c.050a0220.cdd3c.03e2.GAE@google.com>
Subject: Forwarded: Re: [syzbot] [kernel?] INFO: task hung in restrict_one_thread_callback
From: syzbot
To: linux-kernel@vger.kernel.org, syzkaller-bugs@googlegroups.com
Content-Type: text/plain; charset="UTF-8"

For archival purposes, forwarding an incoming command email to linux-kernel@vger.kernel.org, syzkaller-bugs@googlegroups.com.

***

Subject: Re: [syzbot] [kernel?]
INFO: task hung in restrict_one_thread_callback
Author: dingyihan@uniontech.com

#syz test: git://git.kernel.org/pub/scm/linux/kernel/git/next/linux-next.git master

--- a/security/landlock/tsync.c
+++ b/security/landlock/tsync.c
@@ -447,6 +447,12 @@ int landlock_restrict_sibling_threads(const struct cred *old_cred,
 	shared_ctx.new_cred = new_cred;
 	shared_ctx.set_no_new_privs = task_no_new_privs(current);
 
+	/*
+	 * Serialize concurrent TSYNC operations to prevent deadlocks
+	 * when multiple threads call landlock_restrict_self() simultaneously.
+	 */
+	down_write(&current->signal->exec_update_lock);
+
 	/*
 	 * We schedule a pseudo-signal task_work for each of the calling task's
 	 * sibling threads. In the task work, each thread:
@@ -527,14 +533,17 @@ int landlock_restrict_sibling_threads(const struct cred *old_cred,
 				-ERESTARTNOINTR);
 
 		/*
-		 * Cancel task works for tasks that did not start running yet,
-		 * and decrement all_prepared and num_unfinished accordingly.
+		 * Opportunistic improvement: try to cancel task works
+		 * for tasks that did not start running yet. We do not
+		 * have a guarantee that it cancels any of the enqueued
+		 * task works (because task_work_run() might already have
+		 * dequeued them).
 		 */
 		cancel_tsync_works(&works, &shared_ctx);
 
 		/*
-		 * The remaining task works have started running, so waiting for
-		 * their completion will finish.
+		 * We must wait for the remaining task works to finish to
+		 * prevent a use-after-free of the local shared_ctx.
 		 */
 		wait_for_completion(&shared_ctx.all_prepared);
 	}
@@ -557,5 +566,7 @@ int landlock_restrict_sibling_threads(const struct cred *old_cred,
 
 	tsync_works_release(&works);
 
+	up_write(&current->signal->exec_update_lock);
+
 	return atomic_read(&shared_ctx.preparation_error);
 }

On 2026/2/24 11:03, syzbot wrote:
>> Hi Günther,
>>
>> Thank you for the detailed analysis! I completely agree that serializing
>> the TSYNC operations is the right way to prevent this deadlock.
>> I have drafted a patch using `exec_update_lock` (similar to how seccomp
>> uses `cred_guard_mutex`).
>>
>> Regarding your proposal to split this into two patches (one for the
>> cleanup path and one for the lock): maybe combining them into a single
>> patch is a better choice. Here is why:
>>
>> We actually *cannot* remove `wait_for_completion(&shared_ctx.all_prepared)`
>> in the interrupt recovery path. Since `shared_ctx` is allocated on the
>> local stack of the caller, removing the wait would cause a severe
>> use-after-free (UAF) if the thread returns to userspace while sibling
>> task_works are still executing and dereferencing `ctx`.
>>
>> By adding the lock, we inherently resolve the deadlock, meaning the
>> sibling task_works will never get stuck. Thus, `wait_for_completion`
>> becomes perfectly safe to keep, and it remains strictly necessary to
>> protect the stack memory. Therefore, the "fix" for the cleanup path is
>> simply updating the comments to reflect this reality, which is tightly
>> coupled with the locking fix. It felt more cohesive as a single patch.
>>
>> I have tested the patch on my laptop, and it does not trigger the issue.
>> Let's have syzbot test this combined logic:
>>
>> #syz test:
>
> "---" does not look like a valid git repo address.
>