From mboxrd@z Thu Jan 1 00:00:00 1970
Subject: FAILED: patch "[PATCH] io_uring: prune request from overflow list on flush" failed to apply to 5.5-stable tree
To: axboe@kernel.dk, carter.li@eoitek.com
Cc:
From:
Date: Mon, 17 Feb 2020 12:48:31 +0100
Message-ID: <1581940111189240@kroah.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
Sender: stable-owner@vger.kernel.org
Precedence: bulk
List-ID:
X-Mailing-List: stable@vger.kernel.org

The patch below does not apply to the 5.5-stable tree.
If someone wants it applied there, or to any other stable or longterm
tree, then please email the backport, including the original git commit
id to .

thanks,

greg k-h

------------------ original commit in Linus's tree ------------------

>From 2ca10259b4189a433c309054496dd6af1415f992 Mon Sep 17 00:00:00 2001
From: Jens Axboe
Date: Thu, 13 Feb 2020 17:17:35 -0700
Subject: [PATCH] io_uring: prune request from overflow list on flush
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

Carter reported an issue where he could produce a stall on ring exit,
when we're cleaning up requests that match the given file table. For
this particular test case, a combination of a few things caused the
issue:

- The cq ring was overflown
- The request being canceled was in the overflow list

The combination of the above means that the cq overflow list holds a
reference to the request. The request is canceled correctly, but since
the overflow list holds a reference to it, the final put won't happen.
Since the final put doesn't happen, the request remains in the inflight.
Hence we never finish the cancelation flush.

Fix this by removing requests from the overflow list if we're canceling
them.

Cc: stable@vger.kernel.org # 5.5
Reported-by: Carter Li 李通洲
Signed-off-by: Jens Axboe

diff --git a/fs/io_uring.c b/fs/io_uring.c
index 6d4e20d59729..5a826017ebb8 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -481,6 +481,7 @@ enum {
 	REQ_F_TIMEOUT_NOSEQ_BIT,
 	REQ_F_COMP_LOCKED_BIT,
 	REQ_F_NEED_CLEANUP_BIT,
+	REQ_F_OVERFLOW_BIT,
 };
 
 enum {
@@ -521,6 +522,8 @@ enum {
 	REQ_F_COMP_LOCKED	= BIT(REQ_F_COMP_LOCKED_BIT),
 	/* needs cleanup */
 	REQ_F_NEED_CLEANUP	= BIT(REQ_F_NEED_CLEANUP_BIT),
+	/* in overflow list */
+	REQ_F_OVERFLOW		= BIT(REQ_F_OVERFLOW_BIT),
 };
 
 /*
@@ -1103,6 +1106,7 @@ static bool io_cqring_overflow_flush(struct io_ring_ctx *ctx, bool force)
 		req = list_first_entry(&ctx->cq_overflow_list, struct io_kiocb,
 						list);
 		list_move(&req->list, &list);
+		req->flags &= ~REQ_F_OVERFLOW;
 		if (cqe) {
 			WRITE_ONCE(cqe->user_data, req->user_data);
 			WRITE_ONCE(cqe->res, req->result);
@@ -1155,6 +1159,7 @@ static void io_cqring_fill_event(struct io_kiocb *req, long res)
 			set_bit(0, &ctx->sq_check_overflow);
 			set_bit(0, &ctx->cq_check_overflow);
 		}
+		req->flags |= REQ_F_OVERFLOW;
 		refcount_inc(&req->refs);
 		req->result = res;
 		list_add_tail(&req->list, &ctx->cq_overflow_list);
@@ -6463,6 +6468,29 @@ static void io_uring_cancel_files(struct io_ring_ctx *ctx,
 		if (!cancel_req)
 			break;
 
+		if (cancel_req->flags & REQ_F_OVERFLOW) {
+			spin_lock_irq(&ctx->completion_lock);
+			list_del(&cancel_req->list);
+			cancel_req->flags &= ~REQ_F_OVERFLOW;
+			if (list_empty(&ctx->cq_overflow_list)) {
+				clear_bit(0, &ctx->sq_check_overflow);
+				clear_bit(0, &ctx->cq_check_overflow);
+			}
+			spin_unlock_irq(&ctx->completion_lock);
+
+			WRITE_ONCE(ctx->rings->cq_overflow,
+				atomic_inc_return(&ctx->cached_cq_overflow));
+
+			/*
+			 * Put inflight ref and overflow ref. If that's
+			 * all we had, then we're done with this request.
+			 */
+			if (refcount_sub_and_test(2, &cancel_req->refs)) {
+				io_put_req(cancel_req);
+				continue;
+			}
+		}
+
 		io_wq_cancel_work(ctx->io_wq, &cancel_req->work);
 		io_put_req(cancel_req);
 		schedule();
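
------------------ illustrative note (not part of the patch) ------------------

The commit message above comes down to a reference-counting problem: the CQ
overflow list pins an extra reference on the request, so the cancellation path
must drop both that reference and the inflight one before the final free can
happen. The sketch below is a tiny userspace model of that idea, assuming a
plain integer stands in for refcount_t; all names (toy_req, toy_put,
toy_cancel) are invented for illustration and none of this is kernel code or
part of the patch.

#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>

struct toy_req {
	int refs;		/* stands in for req->refs (refcount_t in the kernel) */
	bool on_overflow_list;	/* stands in for the REQ_F_OVERFLOW flag */
};

/* Drop one reference; free the request when the last one goes away. */
static void toy_put(struct toy_req *req)
{
	if (--req->refs == 0) {
		printf("last reference dropped, request freed\n");
		free(req);
	}
}

/*
 * Cancellation path. Without the overflow handling below, only the
 * inflight reference is dropped, the reference held by the overflow
 * list leaks, refs never reaches zero, and the flush loop waiting for
 * the request to go away stalls forever -- the bug the patch fixes.
 */
static void toy_cancel(struct toy_req *req)
{
	if (req->on_overflow_list) {
		/* mirrors list_del() + clearing REQ_F_OVERFLOW, then dropping
		 * the overflow ref and the inflight ref together, as
		 * refcount_sub_and_test(2, ...) does in the real patch */
		req->on_overflow_list = false;
		req->refs -= 2;
		if (req->refs == 0) {
			printf("last reference dropped, request freed\n");
			free(req);
		}
		return;
	}
	toy_put(req);	/* not on the overflow list: drop the inflight ref only */
}

int main(void)
{
	struct toy_req *req = calloc(1, sizeof(*req));

	if (!req)
		return 1;
	req->refs = 2;			/* one inflight ref + one overflow-list ref */
	req->on_overflow_list = true;
	toy_cancel(req);		/* with the fix: prints and frees */
	return 0;
}

The toy model deliberately omits what the real hunk also does: after unlinking
the request it re-accounts the dropped entry via WRITE_ONCE(ctx->rings->cq_overflow,
atomic_inc_return(&ctx->cached_cq_overflow)) so userspace still sees the overflow.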