Date: Wed, 12 Jun 2024 08:38:29 -0700
From: Eric Biggers
To: Herbert Xu
Cc: linux-crypto@vger.kernel.org, fsverity@lists.linux.dev,
	dm-devel@lists.linux.dev, x86@kernel.org,
	linux-arm-kernel@lists.infradead.org, Ard Biesheuvel,
	Sami Tolvanen, Bart Van Assche
Subject: Re: [PATCH v5 15/15] dm-verity: improve performance by using multibuffer hashing
Message-ID: <20240612153829.GC1170@sol.localdomain>
References: <20240611034822.36603-1-ebiggers@kernel.org>
 <20240611034822.36603-16-ebiggers@kernel.org>

On Wed, Jun 12, 2024 at 05:31:17PM +0800, Herbert Xu wrote:
> On Mon, Jun 10, 2024 at 08:48:22PM -0700, Eric Biggers wrote:
> >
> > +		if (++io->num_pending == v->mb_max_msgs) {
> > +			r = verity_verify_pending_blocks(v, io, bio);
> > +			if (unlikely(r))
> > +				goto error;
> > +		}
>
> What is the overhead if you just let it accumulate as large a
> request as possible?
> We should let the underlying algorithm decide how to divide this up
> in the most optimal fashion.

The queue adds 144*num_messages bytes to each bio, and it's desirable
to keep that memory overhead down.  So it makes sense to cap the queue
length at the interleaving factor of the multibuffer hashing.  Yes, we
could build something that squeezes a marginal performance benefit out
of larger batches by saving indirect calls, but I don't think that
would be worth bloating the per-IO memory.

Another thing to keep in mind is that with how the dm-verity code is
currently structured, for each data block it gets the wanted hash from
the Merkle tree (which it prefetched earlier) before hashing the data
block.  So I also worry that if we wait too long before starting to
hash the data blocks, dm-verity will spend more time unnecessarily
blocked waiting on Merkle tree I/O.

- Eric