From: Sasha Levin
To: linux-kernel@vger.kernel.org, stable@vger.kernel.org
Cc: Baokun Li, Hou Tao, Jeff Layton, Jia Zhu, Christian Brauner, Sasha Levin,
    dhowells@redhat.com, netfs@lists.linux.dev
Subject: [PATCH AUTOSEL 6.1 06/12] cachefiles: make on-demand read killable
Date: Sun, 23 Jun 2024 09:45:09 -0400
Message-ID: <20240623134518.809802-6-sashal@kernel.org>
In-Reply-To: <20240623134518.809802-1-sashal@kernel.org>
References: <20240623134518.809802-1-sashal@kernel.org>
X-stable: review
X-stable-base: Linux 6.1.95

From: Baokun Li

[ Upstream commit bc9dde6155464e906e630a0a5c17a4cab241ffbb ]

Replacing wait_for_completion() with wait_for_completion_killable() in
cachefiles_ondemand_send_req() allows us to kill processes that might
otherwise trigger a hung_task warning if the daemon is abnormal.

For now only CACHEFILES_OP_READ is killable, because OP_CLOSE and
OP_OPEN are initiated from kworker context and signals are prohibited
in those kworkers.

Note that when the req in xas changes, i.e. xas_load(&xas) != req, it
means that another process will complete the current request soon, so
wait again for the request to be completed.

In addition, add the cachefiles_ondemand_finish_req() helper function
to simplify the code.
Suggested-by: Hou Tao
Signed-off-by: Baokun Li
Link: https://lore.kernel.org/r/20240522114308.2402121-13-libaokun@huaweicloud.com
Acked-by: Jeff Layton
Reviewed-by: Jia Zhu
Signed-off-by: Christian Brauner
Signed-off-by: Sasha Levin
---
 fs/cachefiles/ondemand.c | 40 ++++++++++++++++++++++++++++------------
 1 file changed, 28 insertions(+), 12 deletions(-)

diff --git a/fs/cachefiles/ondemand.c b/fs/cachefiles/ondemand.c
index 0862d69d64759..9513efaeb7ab6 100644
--- a/fs/cachefiles/ondemand.c
+++ b/fs/cachefiles/ondemand.c
@@ -380,6 +380,20 @@ static struct cachefiles_req *cachefiles_ondemand_select_req(struct xa_state *xa
 	return NULL;
 }
 
+static inline bool cachefiles_ondemand_finish_req(struct cachefiles_req *req,
+						  struct xa_state *xas, int err)
+{
+	if (unlikely(!xas || !req))
+		return false;
+
+	if (xa_cmpxchg(xas->xa, xas->xa_index, req, NULL, 0) != req)
+		return false;
+
+	req->error = err;
+	complete(&req->done);
+	return true;
+}
+
 ssize_t cachefiles_ondemand_daemon_read(struct cachefiles_cache *cache,
 					char __user *_buffer, size_t buflen)
 {
@@ -443,16 +457,8 @@ ssize_t cachefiles_ondemand_daemon_read(struct cachefiles_cache *cache,
 out:
 	cachefiles_put_object(req->object, cachefiles_obj_put_read_req);
 	/* Remove error request and CLOSE request has no reply */
-	if (ret || msg->opcode == CACHEFILES_OP_CLOSE) {
-		xas_reset(&xas);
-		xas_lock(&xas);
-		if (xas_load(&xas) == req) {
-			req->error = ret;
-			complete(&req->done);
-			xas_store(&xas, NULL);
-		}
-		xas_unlock(&xas);
-	}
+	if (ret || msg->opcode == CACHEFILES_OP_CLOSE)
+		cachefiles_ondemand_finish_req(req, &xas, ret);
 	cachefiles_req_put(req);
 	return ret ? ret : n;
 }
@@ -544,8 +550,18 @@ static int cachefiles_ondemand_send_req(struct cachefiles_object *object,
 		goto out;
 
 	wake_up_all(&cache->daemon_pollwq);
-	wait_for_completion(&req->done);
-	ret = req->error;
+wait:
+	ret = wait_for_completion_killable(&req->done);
+	if (!ret) {
+		ret = req->error;
+	} else {
+		ret = -EINTR;
+		if (!cachefiles_ondemand_finish_req(req, &xas, ret)) {
+			/* Someone will complete it soon. */
+			cpu_relax();
+			goto wait;
+		}
+	}
 	cachefiles_req_put(req);
 	return ret;
 out:
-- 
2.43.0
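
As a rough illustration of the cancel-or-wait-again pattern that
cachefiles_ondemand_send_req() adopts in the hunk above, here is a minimal
userspace sketch. It is not the kernel code: all names (demo_req, demo_slot,
demo_finish_req, demo_send_req, demo_daemon) are hypothetical, an atomic
pointer slot stands in for the xarray entry, atomic_compare_exchange_strong()
for xa_cmpxchg(), and a timed condvar wait for wait_for_completion_killable().

/*
 * Minimal userspace sketch (assumed names, not kernel code) of the
 * cancel-or-wait-again pattern. Build with: cc -pthread demo.c
 */
#include <errno.h>
#include <pthread.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>
#include <time.h>
#include <unistd.h>

struct demo_req {
	pthread_mutex_t lock;
	pthread_cond_t  done_cond;
	bool            done;
	int             error;
};

/* Single shared slot standing in for the xarray entry the daemon polls. */
static _Atomic(struct demo_req *) demo_slot;

/* Complete the request only if we still own the slot (like xa_cmpxchg()). */
static bool demo_finish_req(struct demo_req *req, int err)
{
	struct demo_req *expected = req;

	if (!atomic_compare_exchange_strong(&demo_slot, &expected, NULL))
		return false;		/* the other side claimed it first */

	pthread_mutex_lock(&req->lock);
	req->error = err;
	req->done = true;
	pthread_cond_signal(&req->done_cond);
	pthread_mutex_unlock(&req->lock);
	return true;
}

/* "Daemon" side: pick the request out of the slot and complete it. */
static void *demo_daemon(void *arg)
{
	struct demo_req *req = atomic_load(&demo_slot);

	(void)arg;
	usleep(2 * 1000 * 1000);	/* pretend servicing the read takes 2s */
	if (req)
		demo_finish_req(req, 0);
	return NULL;
}

/* Caller side: a bounded wait stands in for wait_for_completion_killable(). */
static int demo_send_req(struct demo_req *req)
{
	for (;;) {
		struct timespec deadline;
		bool done;
		int err, rc = 0;

		clock_gettime(CLOCK_REALTIME, &deadline);
		deadline.tv_sec += 1;	/* model a fatal signal after ~1s */

		pthread_mutex_lock(&req->lock);
		while (!req->done && rc != ETIMEDOUT)
			rc = pthread_cond_timedwait(&req->done_cond,
						    &req->lock, &deadline);
		done = req->done;
		err = req->error;
		pthread_mutex_unlock(&req->lock);

		if (done)
			return err;	/* completed normally */

		/* "Interrupted": fail the request only if we can reclaim it. */
		if (demo_finish_req(req, -EINTR))
			return -EINTR;

		/* Lost the race: completion is imminent, so wait again. */
	}
}

int main(void)
{
	struct demo_req req = {
		.lock      = PTHREAD_MUTEX_INITIALIZER,
		.done_cond = PTHREAD_COND_INITIALIZER,
	};
	pthread_t daemon_thread;

	atomic_store(&demo_slot, &req);
	pthread_create(&daemon_thread, NULL, demo_daemon, NULL);
	printf("request finished: %d\n", demo_send_req(&req));
	pthread_join(daemon_thread, NULL);
	return 0;
}

The design point the sketch mirrors is the one the patch relies on: an
interrupted caller may only fail the request if it can atomically reclaim the
slot; if the daemon got there first, completion is imminent, so the caller
waits again instead of returning while the request is still in flight.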