From: Greg Kroah-Hartman
To: stable@vger.kernel.org
Cc: Greg Kroah-Hartman, patches@lists.linux.dev,
 syzbot+9c0268252b8ef967c62e@syzkaller.appspotmail.com,
 Eric Dumazet, Jakub Kicinski
Subject: [PATCH 5.15 124/136] net: tls: avoid hanging tasks on the tx_lock
Date: Fri, 10 Mar 2023 14:44:06 +0100
Message-Id: <20230310133710.972136927@linuxfoundation.org>
In-Reply-To: <20230310133706.811226272@linuxfoundation.org>
References: <20230310133706.811226272@linuxfoundation.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

From: Jakub Kicinski

commit f3221361dc85d4de22586ce8441ec2c67b454f5d upstream.

syzbot sent a hung task report, and Eric explains that an adversarial
receiver may keep RWIN at 0 for a long time, so we are not guaranteed
to make forward progress. A thread which took the tx_lock and went to
sleep may not release it for hours. Use interruptible sleep where
possible and reschedule the work if it can't take the lock.
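The sendmsg()/sendpage() half of this is the switch from mutex_lock()
to mutex_lock_interruptible() in the diff below, so a task waiting for
the tx_lock can be interrupted by a signal instead of hanging. A rough
userspace analogy only, built around a polling loop on
pthread_mutex_timedlock(); the real mutex_lock_interruptible() sleeps
and is woken directly rather than polling, and got_signal and the
100 ms interval are illustrative assumptions, not part of the patch:

#define _POSIX_C_SOURCE 200809L
#include <errno.h>
#include <pthread.h>
#include <signal.h>
#include <stdio.h>
#include <time.h>

static volatile sig_atomic_t got_signal;

static void on_signal(int sig)
{
        (void)sig;
        got_signal = 1;
}

/* Return 0 once the lock is held, or -EINTR if a signal arrived first. */
static int lock_interruptible(pthread_mutex_t *lock)
{
        for (;;) {
                struct timespec deadline;
                int err;

                if (got_signal)
                        return -EINTR;

                clock_gettime(CLOCK_REALTIME, &deadline);
                deadline.tv_nsec += 100 * 1000 * 1000;  /* re-check every 100 ms */
                if (deadline.tv_nsec >= 1000000000L) {
                        deadline.tv_sec++;
                        deadline.tv_nsec -= 1000000000L;
                }

                err = pthread_mutex_timedlock(lock, &deadline);
                if (err == 0)
                        return 0;               /* lock acquired */
                if (err != ETIMEDOUT)
                        return -err;            /* unexpected failure */
                /* timed out: loop around and re-check got_signal */
        }
}

int main(void)
{
        pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

        signal(SIGINT, on_signal);

        if (lock_interruptible(&lock) == 0) {
                puts("lock taken, doing the send");
                pthread_mutex_unlock(&lock);
        } else {
                puts("interrupted, returning the error to the caller");
        }
        return 0;
}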
Testing: existing selftest passes

Reported-by: syzbot+9c0268252b8ef967c62e@syzkaller.appspotmail.com
Fixes: 79ffe6087e91 ("net/tls: add a TX lock")
Link: https://lore.kernel.org/all/000000000000e412e905f5b46201@google.com/
Cc: stable@vger.kernel.org # wait 4 weeks
Reviewed-by: Eric Dumazet
Link: https://lore.kernel.org/r/20230301002857.2101894-1-kuba@kernel.org
Signed-off-by: Jakub Kicinski
Signed-off-by: Greg Kroah-Hartman
---
 net/tls/tls_sw.c |   26 +++++++++++++++++++-------
 1 file changed, 19 insertions(+), 7 deletions(-)

--- a/net/tls/tls_sw.c
+++ b/net/tls/tls_sw.c
@@ -950,7 +950,9 @@ int tls_sw_sendmsg(struct sock *sk, stru
                               MSG_CMSG_COMPAT))
                return -EOPNOTSUPP;
 
-       mutex_lock(&tls_ctx->tx_lock);
+       ret = mutex_lock_interruptible(&tls_ctx->tx_lock);
+       if (ret)
+               return ret;
        lock_sock(sk);
 
        if (unlikely(msg->msg_controllen)) {
@@ -1284,7 +1286,9 @@ int tls_sw_sendpage(struct sock *sk, str
                      MSG_SENDPAGE_NOTLAST | MSG_SENDPAGE_NOPOLICY))
                return -EOPNOTSUPP;
 
-       mutex_lock(&tls_ctx->tx_lock);
+       ret = mutex_lock_interruptible(&tls_ctx->tx_lock);
+       if (ret)
+               return ret;
        lock_sock(sk);
        ret = tls_sw_do_sendpage(sk, page, offset, size, flags);
        release_sock(sk);
@@ -2284,11 +2288,19 @@ static void tx_work_handler(struct work_
 
        if (!test_and_clear_bit(BIT_TX_SCHEDULED, &ctx->tx_bitmask))
                return;
-       mutex_lock(&tls_ctx->tx_lock);
-       lock_sock(sk);
-       tls_tx_records(sk, -1);
-       release_sock(sk);
-       mutex_unlock(&tls_ctx->tx_lock);
+
+       if (mutex_trylock(&tls_ctx->tx_lock)) {
+               lock_sock(sk);
+               tls_tx_records(sk, -1);
+               release_sock(sk);
+               mutex_unlock(&tls_ctx->tx_lock);
+       } else if (!test_and_set_bit(BIT_TX_SCHEDULED, &ctx->tx_bitmask)) {
+               /* Someone is holding the tx_lock, they will likely run Tx
+                * and cancel the work on their way out of the lock section.
+                * Schedule a long delay just in case.
+                */
+               schedule_delayed_work(&ctx->tx_work.work, msecs_to_jiffies(10));
+       }
 }
 
 void tls_sw_write_space(struct sock *sk, struct tls_context *ctx)
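The tx_work_handler() hunk above is the half of the fix that keeps the
background worker from blocking: it only trylocks, and otherwise re-arms
itself. A minimal userspace sketch of that pattern, using pthreads and an
atomic flag in place of the kernel's bit operations and delayed work;
tx_worker(), tx_records() and the printed messages are illustrative, not
taken from the patch:

#include <pthread.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

static pthread_mutex_t tx_lock = PTHREAD_MUTEX_INITIALIZER;
static atomic_bool tx_scheduled = true;

static void tx_records(void)
{
        puts("transmitting pending records");
}

static void tx_worker(void)
{
        /* Userspace stand-in for test_and_clear_bit(BIT_TX_SCHEDULED, ...). */
        if (!atomic_exchange(&tx_scheduled, false))
                return;

        if (pthread_mutex_trylock(&tx_lock) == 0) {
                tx_records();
                pthread_mutex_unlock(&tx_lock);
        } else {
                /* Someone else holds tx_lock and will usually flush the
                 * records on their way out; re-arm the work instead of
                 * sleeping on the mutex (the patch uses
                 * schedule_delayed_work() with a 10 ms delay here).
                 */
                atomic_store(&tx_scheduled, true);
                puts("tx_lock busy, work rescheduled");
        }
}

int main(void)
{
        tx_worker();                    /* lock free: records are flushed */

        pthread_mutex_lock(&tx_lock);   /* simulate a sender holding tx_lock */
        atomic_store(&tx_scheduled, true);
        tx_worker();                    /* lock busy: work is re-armed, no blocking */
        pthread_mutex_unlock(&tx_lock);
        return 0;
}

The point of the trylock-plus-reschedule shape is that the worker can no
longer be pinned for hours behind a sender that is itself stuck waiting on
a zero receive window; either path makes progress or backs off.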