From mboxrd@z Thu Jan 1 00:00:00 1970
From: Kanchana P Sridhar <kanchana.p.sridhar@intel.com>
To: linux-kernel@vger.kernel.org, linux-mm@kvack.org, hannes@cmpxchg.org,
	yosry.ahmed@linux.dev, nphamcs@gmail.com, chengming.zhou@linux.dev,
	usamaarif642@gmail.com, ryan.roberts@arm.com, 21cnbao@gmail.com,
	ying.huang@linux.alibaba.com, akpm@linux-foundation.org,
	senozhatsky@chromium.org, linux-crypto@vger.kernel.org,
	herbert@gondor.apana.org.au, davem@davemloft.net, clabbe@baylibre.com,
	ardb@kernel.org, ebiggers@google.com, surenb@google.com,
	kristen.c.accardi@intel.com, vinicius.gomes@intel.com
Cc: wajdi.k.feghali@intel.com, vinodh.gopal@intel.com,
	kanchana.p.sridhar@intel.com
Subject: [PATCH v11 03/24] crypto: iaa - Simplify, consistency of function parameters, minor stats bug fix.
Date: Thu, 31 Jul 2025 21:36:21 -0700
Message-Id: <20250801043642.8103-4-kanchana.p.sridhar@intel.com>
X-Mailer: git-send-email 2.27.0
In-Reply-To: <20250801043642.8103-1-kanchana.p.sridhar@intel.com>
References: <20250801043642.8103-1-kanchana.p.sridhar@intel.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

This patch further simplifies the code in a few places and makes it more
consistent and readable:

1) Change the iaa_compress_verify() @dlen parameter from a pointer to a
   value, because @dlen is only read, never modified, by this procedure.

2) Simplify the success/error return paths in iaa_compress(),
   iaa_decompress() and iaa_compress_verify() (a sketch of the
   consolidated pattern follows this list).

3) Delete dev_dbg() statements to make the code more readable.

4) Return -ENODEV on descriptor allocation failures, for better
   maintainability.

5) Fix a minor statistics bug in iaa_decompress(): decomp_bytes was
   being updated even when the operation failed; it is now updated only
   on success.
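For reference, below is a minimal, self-contained sketch of the consolidated
error/return-path pattern that items 2) and 4) describe. It is illustrative
only and is not the driver code: the fake_* helpers and do_op() are
hypothetical stand-ins for the idxd descriptor API (idxd_alloc_desc(),
idxd_submit_desc(), idxd_free_desc()) used by the real functions.

/* sketch.c - illustrative only; not part of this patch or the IAA driver. */
#include <errno.h>
#include <stdbool.h>
#include <stdlib.h>

struct fake_desc { int dummy; };	/* hypothetical stand-in for a HW descriptor */

static struct fake_desc *fake_alloc_desc(void) { return malloc(sizeof(struct fake_desc)); }
static void fake_free_desc(struct fake_desc *desc) { free(desc); }
static int fake_submit(struct fake_desc *desc) { (void)desc; return 0; }
static int fake_wait(struct fake_desc *desc) { (void)desc; return 0; }

/* One allocation, one free, one return: the err/out pattern adopted above. */
int do_op(bool async)
{
	struct fake_desc *desc;
	int ret;

	desc = fake_alloc_desc();
	if (!desc)
		return -ENODEV;		/* allocation failure maps to -ENODEV */

	ret = fake_submit(desc);
	if (ret)
		goto err;		/* every later failure takes the same path */

	if (async) {
		ret = -EINPROGRESS;	/* the completion handler would own the descriptor */
		goto out;		/* so skip the free below */
	}

	ret = fake_wait(desc);
	/* Success and failure both fall through: free once, return once. */
err:
	fake_free_desc(desc);
out:
	return ret;
}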
Signed-off-by: Kanchana P Sridhar <kanchana.p.sridhar@intel.com>
---
 drivers/crypto/intel/iaa/iaa_crypto_main.c | 107 +++++----------------
 1 file changed, 22 insertions(+), 85 deletions(-)

diff --git a/drivers/crypto/intel/iaa/iaa_crypto_main.c b/drivers/crypto/intel/iaa/iaa_crypto_main.c
index c6db721eaa799..ed3325bb32918 100644
--- a/drivers/crypto/intel/iaa/iaa_crypto_main.c
+++ b/drivers/crypto/intel/iaa/iaa_crypto_main.c
@@ -1590,7 +1590,7 @@ static int iaa_remap_for_verify(struct device *dev, struct iaa_wq *iaa_wq,
 static int iaa_compress_verify(struct crypto_tfm *tfm, struct acomp_req *req,
 			       struct idxd_wq *wq,
 			       dma_addr_t src_addr, unsigned int slen,
-			       dma_addr_t dst_addr, unsigned int *dlen)
+			       dma_addr_t dst_addr, unsigned int dlen)
 {
 	struct iaa_device_compression_mode *active_compression_mode;
 	struct iaa_compression_ctx *ctx = crypto_tfm_ctx(tfm);
@@ -1614,10 +1614,8 @@ static int iaa_compress_verify(struct crypto_tfm *tfm, struct acomp_req *req,
 
 	idxd_desc = idxd_alloc_desc(wq, IDXD_OP_BLOCK);
 	if (IS_ERR(idxd_desc)) {
-		dev_dbg(dev, "idxd descriptor allocation failed\n");
-		dev_dbg(dev, "iaa compress failed: ret=%ld\n",
-			PTR_ERR(idxd_desc));
-		return PTR_ERR(idxd_desc);
+		dev_dbg(dev, "iaa compress_verify failed: idxd descriptor allocation failure: ret=%ld\n", PTR_ERR(idxd_desc));
+		return -ENODEV;
 	}
 
 	desc = idxd_desc->iax_hw;
@@ -1629,19 +1627,11 @@ static int iaa_compress_verify(struct crypto_tfm *tfm, struct acomp_req *req,
 	desc->priv = 0;
 
 	desc->src1_addr = (u64)dst_addr;
-	desc->src1_size = *dlen;
+	desc->src1_size = dlen;
 	desc->dst_addr = (u64)src_addr;
 	desc->max_dst_size = slen;
 	desc->completion_addr = idxd_desc->compl_dma;
 
-	dev_dbg(dev, "(verify) compression mode %s,"
-		" desc->src1_addr %llx, desc->src1_size %d,"
-		" desc->dst_addr %llx, desc->max_dst_size %d,"
-		" desc->src2_addr %llx, desc->src2_size %d\n",
-		active_compression_mode->name,
-		desc->src1_addr, desc->src1_size, desc->dst_addr,
-		desc->max_dst_size, desc->src2_addr, desc->src2_size);
-
 	ret = idxd_submit_desc(wq, idxd_desc);
 	if (ret) {
 		dev_dbg(dev, "submit_desc (verify) failed ret=%d\n", ret);
@@ -1664,14 +1654,10 @@ static int iaa_compress_verify(struct crypto_tfm *tfm, struct acomp_req *req,
 		goto err;
 	}
 
-	idxd_free_desc(wq, idxd_desc);
-out:
-	return ret;
 err:
 	idxd_free_desc(wq, idxd_desc);
-	dev_dbg(dev, "iaa compress failed: ret=%d\n", ret);
 
-	goto out;
+	return ret;
 }
 
 static void iaa_desc_complete(struct idxd_desc *idxd_desc,
@@ -1751,7 +1737,7 @@ static void iaa_desc_complete(struct idxd_desc *idxd_desc,
 	}
 
 	ret = iaa_compress_verify(ctx->tfm, ctx->req, iaa_wq->wq, src_addr,
-				  ctx->req->slen, dst_addr, &ctx->req->dlen);
+				  ctx->req->slen, dst_addr, ctx->req->dlen);
 	if (ret) {
 		dev_dbg(dev, "%s: compress verify failed ret=%d\n", __func__, ret);
 		err = -EIO;
@@ -1777,7 +1763,7 @@ static void iaa_desc_complete(struct idxd_desc *idxd_desc,
 	iaa_wq_put(idxd_desc->wq);
 }
 
-static int iaa_compress(struct crypto_tfm *tfm, struct acomp_req *req, 
+static int iaa_compress(struct crypto_tfm *tfm, struct acomp_req *req,
 			struct idxd_wq *wq,
 			dma_addr_t src_addr, unsigned int slen,
 			dma_addr_t dst_addr, unsigned int *dlen)
@@ -1804,9 +1790,9 @@ static int iaa_compress(struct crypto_tfm *tfm, struct acomp_req *req,
 
 	idxd_desc = idxd_alloc_desc(wq, IDXD_OP_BLOCK);
 	if (IS_ERR(idxd_desc)) {
-		dev_dbg(dev, "idxd descriptor allocation failed\n");
-		dev_dbg(dev, "iaa compress failed: ret=%ld\n", PTR_ERR(idxd_desc));
-		return PTR_ERR(idxd_desc);
+		dev_dbg(dev, "iaa compress failed: idxd descriptor allocation failure: ret=%ld\n",
+			PTR_ERR(idxd_desc));
+		return -ENODEV;
 	}
 
 	desc = idxd_desc->iax_hw;
@@ -1832,21 +1818,8 @@ static int iaa_compress(struct crypto_tfm *tfm, struct acomp_req *req,
 		idxd_desc->crypto.src_addr = src_addr;
 		idxd_desc->crypto.dst_addr = dst_addr;
 		idxd_desc->crypto.compress = true;
-
-		dev_dbg(dev, "%s use_async_irq: compression mode %s,"
-			" src_addr %llx, dst_addr %llx\n", __func__,
-			active_compression_mode->name,
-			src_addr, dst_addr);
 	}
 
-	dev_dbg(dev, "%s: compression mode %s,"
-		" desc->src1_addr %llx, desc->src1_size %d,"
-		" desc->dst_addr %llx, desc->max_dst_size %d,"
-		" desc->src2_addr %llx, desc->src2_size %d\n", __func__,
-		active_compression_mode->name,
-		desc->src1_addr, desc->src1_size, desc->dst_addr,
-		desc->max_dst_size, desc->src2_addr, desc->src2_size);
-
 	ret = idxd_submit_desc(wq, idxd_desc);
 	if (ret) {
 		dev_dbg(dev, "submit_desc failed ret=%d\n", ret);
@@ -1859,7 +1832,6 @@ static int iaa_compress(struct crypto_tfm *tfm, struct acomp_req *req,
 
 	if (ctx->async_mode) {
 		ret = -EINPROGRESS;
-		dev_dbg(dev, "%s: returning -EINPROGRESS\n", __func__);
 		goto out;
 	}
 
@@ -1877,15 +1849,10 @@ static int iaa_compress(struct crypto_tfm *tfm, struct acomp_req *req,
 
 	*compression_crc = idxd_desc->iax_completion->crc;
 
-	if (!ctx->async_mode)
-		idxd_free_desc(wq, idxd_desc);
-out:
-	return ret;
 err:
 	idxd_free_desc(wq, idxd_desc);
-	dev_dbg(dev, "iaa compress failed: ret=%d\n", ret);
-
-	goto out;
+out:
+	return ret;
 }
 
 static int iaa_decompress(struct crypto_tfm *tfm, struct acomp_req *req,
@@ -1914,10 +1881,10 @@ static int iaa_decompress(struct crypto_tfm *tfm, struct acomp_req *req,
 
 	idxd_desc = idxd_alloc_desc(wq, IDXD_OP_BLOCK);
 	if (IS_ERR(idxd_desc)) {
-		dev_dbg(dev, "idxd descriptor allocation failed\n");
-		dev_dbg(dev, "iaa decompress failed: ret=%ld\n",
+		ret = -ENODEV;
+		dev_dbg(dev, "%s: idxd descriptor allocation failed: ret=%ld\n", __func__,
 			PTR_ERR(idxd_desc));
-		return PTR_ERR(idxd_desc);
+		return ret;
 	}
 
 	desc = idxd_desc->iax_hw;
@@ -1941,21 +1908,8 @@ static int iaa_decompress(struct crypto_tfm *tfm, struct acomp_req *req,
 		idxd_desc->crypto.src_addr = src_addr;
 		idxd_desc->crypto.dst_addr = dst_addr;
 		idxd_desc->crypto.compress = false;
-
-		dev_dbg(dev, "%s: use_async_irq compression mode %s,"
-			" src_addr %llx, dst_addr %llx\n", __func__,
-			active_compression_mode->name,
-			src_addr, dst_addr);
 	}
 
-	dev_dbg(dev, "%s: decompression mode %s,"
-		" desc->src1_addr %llx, desc->src1_size %d,"
-		" desc->dst_addr %llx, desc->max_dst_size %d,"
-		" desc->src2_addr %llx, desc->src2_size %d\n", __func__,
-		active_compression_mode->name,
-		desc->src1_addr, desc->src1_size, desc->dst_addr,
-		desc->max_dst_size, desc->src2_addr, desc->src2_size);
-
 	ret = idxd_submit_desc(wq, idxd_desc);
 	if (ret) {
 		dev_dbg(dev, "submit_desc failed ret=%d\n", ret);
@@ -1968,7 +1922,6 @@ static int iaa_decompress(struct crypto_tfm *tfm, struct acomp_req *req,
 
 	if (ctx->async_mode) {
 		ret = -EINPROGRESS;
-		dev_dbg(dev, "%s: returning -EINPROGRESS\n", __func__);
 		goto out;
 	}
 
@@ -1990,23 +1943,19 @@ static int iaa_decompress(struct crypto_tfm *tfm, struct acomp_req *req,
 		}
 	} else {
 		req->dlen = idxd_desc->iax_completion->output_size;
+
+		/* Update stats */
+		update_total_decomp_bytes_in(slen);
+		update_wq_decomp_bytes(wq, slen);
 	}
 
 	*dlen = req->dlen;
 
-	if (!ctx->async_mode)
+err:
+	if (idxd_desc)
 		idxd_free_desc(wq, idxd_desc);
-
-	/* Update stats */
-	update_total_decomp_bytes_in(slen);
-	update_wq_decomp_bytes(wq, slen);
 out:
 	return ret;
-err:
-	idxd_free_desc(wq, idxd_desc);
-	dev_dbg(dev, "iaa decompress failed: ret=%d\n", ret);
-
-	goto out;
 }
 
 static int iaa_comp_acompress(struct acomp_req *req)
@@ -2053,9 +2002,6 @@ static int iaa_comp_acompress(struct acomp_req *req)
 		goto out;
 	}
 	src_addr = sg_dma_address(req->src);
-	dev_dbg(dev, "dma_map_sg, src_addr %llx, nr_sgs %d, req->src %p,"
-		" req->slen %d, sg_dma_len(sg) %d\n", src_addr, nr_sgs,
-		req->src, req->slen, sg_dma_len(req->src));
 
 	nr_sgs = dma_map_sg(dev, req->dst, sg_nents(req->dst), DMA_FROM_DEVICE);
 	if (nr_sgs <= 0 || nr_sgs > 1) {
@@ -2066,9 +2012,6 @@ static int iaa_comp_acompress(struct acomp_req *req)
 		goto err_map_dst;
 	}
 	dst_addr = sg_dma_address(req->dst);
-	dev_dbg(dev, "dma_map_sg, dst_addr %llx, nr_sgs %d, req->dst %p,"
-		" req->dlen %d, sg_dma_len(sg) %d\n", dst_addr, nr_sgs,
-		req->dst, req->dlen, sg_dma_len(req->dst));
 
 	ret = iaa_compress(tfm, req, wq, src_addr, req->slen, dst_addr,
 			   &req->dlen);
@@ -2083,7 +2026,7 @@ static int iaa_comp_acompress(struct acomp_req *req)
 		}
 
 		ret = iaa_compress_verify(tfm, req, wq, src_addr, req->slen,
-					  dst_addr, &req->dlen);
+					  dst_addr, req->dlen);
 		if (ret)
 			dev_dbg(dev, "asynchronous compress verification failed ret=%d\n",
 				ret);
@@ -2146,9 +2089,6 @@ static int iaa_comp_adecompress(struct acomp_req *req)
 		goto out;
 	}
 	src_addr = sg_dma_address(req->src);
-	dev_dbg(dev, "dma_map_sg, src_addr %llx, nr_sgs %d, req->src %p,"
-		" req->slen %d, sg_dma_len(sg) %d\n", src_addr, nr_sgs,
-		req->src, req->slen, sg_dma_len(req->src));
 
 	nr_sgs = dma_map_sg(dev, req->dst, sg_nents(req->dst), DMA_FROM_DEVICE);
 	if (nr_sgs <= 0 || nr_sgs > 1) {
@@ -2159,9 +2099,6 @@ static int iaa_comp_adecompress(struct acomp_req *req)
 		goto err_map_dst;
 	}
 	dst_addr = sg_dma_address(req->dst);
-	dev_dbg(dev, "dma_map_sg, dst_addr %llx, nr_sgs %d, req->dst %p,"
-		" req->dlen %d, sg_dma_len(sg) %d\n", dst_addr, nr_sgs,
-		req->dst, req->dlen, sg_dma_len(req->dst));
 
 	ret = iaa_decompress(tfm, req, wq, src_addr, req->slen, dst_addr,
 			     &req->dlen);
-- 
2.27.0