From: Jakub Kicinski <kuba@kernel.org>
To: davem@davemloft.net
Cc: netdev@vger.kernel.org, edumazet@google.com, pabeni@redhat.com,
	borisp@nvidia.com, john.fastabend@gmail.com, maximmi@nvidia.com,
	tariqt@nvidia.com, Jakub Kicinski <kuba@kernel.org>
Subject: [PATCH net-next 4/6] tls: rx: coalesce exit paths in tls_decrypt_sg()
Date: Wed,  6 Jul 2022 18:35:08 -0700	[thread overview]
Message-ID: <20220707013510.1372695-5-kuba@kernel.org> (raw)
In-Reply-To: <20220707013510.1372695-1-kuba@kernel.org>

Jump to the kfree() call instead of having to remember
to free the memory on each error path.

Signed-off-by: Jakub Kicinski <kuba@kernel.org>
---
 net/tls/tls_sw.c | 14 +++++---------
 1 file changed, 5 insertions(+), 9 deletions(-)
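
A note for readers unfamiliar with the pattern: below is a minimal
userspace sketch of the same structure (the function and variable
names are invented for illustration; this is not the kernel code
being patched). Every failure path after the allocation jumps to a
single exit_free label, and the success path falls through to the
same label, so the free appears exactly once.

	#include <stdlib.h>
	#include <string.h>

	/* Illustrative only: copy_and_sum and the -1 error values are
	 * made up; they mirror the shape of the patched function, not
	 * its contents. */
	static int copy_and_sum(const unsigned char *src, size_t len,
				int *sum_out)
	{
		unsigned char *mem;
		size_t i, sum = 0;
		int err = 0;

		mem = malloc(len);
		if (!mem)
			return -1;	/* nothing allocated yet: plain return */

		if (len == 0) {
			err = -1;
			goto exit_free;	/* error path: jump to the single free */
		}

		memcpy(mem, src, len);
		for (i = 0; i < len; i++)
			sum += mem[i];
		*sum_out = (int)sum;

		/* success falls through: the scratch buffer is freed
		 * on every path below the allocation */
	exit_free:
		free(mem);
		return err;
	}

The plain return before the label is deliberate: until the allocation
succeeds there is nothing to free, so only paths after it need the goto.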

diff --git a/net/tls/tls_sw.c b/net/tls/tls_sw.c
index 5534962963c2..2afcf99105fb 100644
--- a/net/tls/tls_sw.c
+++ b/net/tls/tls_sw.c
@@ -1494,10 +1494,8 @@ static int decrypt_internal(struct sock *sk, struct sk_buff *skb,
 		err = skb_copy_bits(skb, rxm->offset + TLS_HEADER_SIZE,
 				    &dctx->iv[iv_offset] + prot->salt_size,
 				    prot->iv_size);
-		if (err < 0) {
-			kfree(mem);
-			return err;
-		}
+		if (err < 0)
+			goto exit_free;
 		memcpy(&dctx->iv[iv_offset], tls_ctx->rx.iv, prot->salt_size);
 	}
 	xor_iv_with_seq(prot, &dctx->iv[iv_offset], tls_ctx->rx.rec_seq);
@@ -1513,10 +1511,8 @@ static int decrypt_internal(struct sock *sk, struct sk_buff *skb,
 	err = skb_to_sgvec(skb, &sgin[1],
 			   rxm->offset + prot->prepend_size,
 			   rxm->full_len - prot->prepend_size);
-	if (err < 0) {
-		kfree(mem);
-		return err;
-	}
+	if (err < 0)
+		goto exit_free;
 
 	if (n_sgout) {
 		if (out_iov) {
@@ -1559,7 +1555,7 @@ static int decrypt_internal(struct sock *sk, struct sk_buff *skb,
 	/* Release the pages in case iov was mapped to pages */
 	for (; pages > 0; pages--)
 		put_page(sg_page(&sgout[pages]));
-
+exit_free:
 	kfree(mem);
 	return err;
 }
-- 
2.36.1

