From: Greg Kroah-Hartman
To: stable@vger.kernel.org
Cc:
 Greg Kroah-Hartman, patches@lists.linux.dev, Rafael Aquini,
 Davidlohr Bueso, Manfred Spraul, Waiman Long, Andrew Morton,
 Linus Torvalds
Subject: [PATCH 5.4 110/134] ipc: replace costly bailout check in sysvipc_find_ipc()
Date: Sun, 1 Sep 2024 18:17:36 +0200
Message-ID: <20240901160814.224580063@linuxfoundation.org>
In-Reply-To: <20240901160809.752718937@linuxfoundation.org>
References: <20240901160809.752718937@linuxfoundation.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

5.4-stable review patch.  If anyone has any objections, please let me know.

------------------

From: Rafael Aquini

commit 20401d1058f3f841f35a594ac2fc1293710e55b9 upstream.

sysvipc_find_ipc() was left with a costly way to check if the offset
position fed to it is bigger than the total number of IPC IDs in use.  So
much so that the time it takes to iterate over /proc/sysvipc/* files grows
exponentially for a custom benchmark that creates "N" SYSV shm segments
and then times the read of /proc/sysvipc/shm (milliseconds):

      12 msecs to read  1024 segs from /proc/sysvipc/shm
      18 msecs to read  2048 segs from /proc/sysvipc/shm
      65 msecs to read  4096 segs from /proc/sysvipc/shm
     325 msecs to read  8192 segs from /proc/sysvipc/shm
    1303 msecs to read 16384 segs from /proc/sysvipc/shm
    5182 msecs to read 32768 segs from /proc/sysvipc/shm

The root problem lies with the loop that computes the total amount of ids
in use to check if the "pos" fed to sysvipc_find_ipc() grew bigger than
"ids->in_use".  That is a quite inefficient way to get to the maximum
index in the id lookup table, especially when that value is already
provided by struct ipc_ids.max_idx.
This patch follows up on the optimization introduced via commit
15df03c879836 ("sysvipc: make get_maxid O(1) again") and gets rid of the
aforementioned costly loop, replacing it with a simpler checkpoint based
on the value returned by ipc_get_maxidx(), which allows for a smooth
linear increase in time complexity for the same custom benchmark:

       2 msecs to read  1024 segs from /proc/sysvipc/shm
       2 msecs to read  2048 segs from /proc/sysvipc/shm
       4 msecs to read  4096 segs from /proc/sysvipc/shm
       9 msecs to read  8192 segs from /proc/sysvipc/shm
      19 msecs to read 16384 segs from /proc/sysvipc/shm
      39 msecs to read 32768 segs from /proc/sysvipc/shm

Link: https://lkml.kernel.org/r/20210809203554.1562989-1-aquini@redhat.com
Signed-off-by: Rafael Aquini
Acked-by: Davidlohr Bueso
Acked-by: Manfred Spraul
Cc: Waiman Long
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
Signed-off-by: Greg Kroah-Hartman
---
 ipc/util.c |   16 ++++------------
 1 file changed, 4 insertions(+), 12 deletions(-)

--- a/ipc/util.c
+++ b/ipc/util.c
@@ -754,21 +754,13 @@ struct pid_namespace *ipc_seq_pid_ns(str
 static struct kern_ipc_perm *sysvipc_find_ipc(struct ipc_ids *ids, loff_t pos,
 					      loff_t *new_pos)
 {
-	struct kern_ipc_perm *ipc;
-	int total, id;
+	struct kern_ipc_perm *ipc = NULL;
+	int max_idx = ipc_get_maxidx(ids);
 
-	total = 0;
-	for (id = 0; id < pos && total < ids->in_use; id++) {
-		ipc = idr_find(&ids->ipcs_idr, id);
-		if (ipc != NULL)
-			total++;
-	}
-
-	ipc = NULL;
-	if (total >= ids->in_use)
+	if (max_idx == -1 || pos > max_idx)
 		goto out;
 
-	for (; pos < ipc_mni; pos++) {
+	for (; pos <= max_idx; pos++) {
 		ipc = idr_find(&ids->ipcs_idr, pos);
 		if (ipc != NULL) {
 			rcu_read_lock();