From: "Jernej Škrabec" <jernej.skrabec@gmail.com>
To: mripard@kernel.org, Ian Cowan <ian@linux.cowan.aero>
Cc: paul.kocialkowski@bootlin.com, mchehab@kernel.org,
	gregkh@linuxfoundation.org, wens@csie.org, samuel@sholland.org,
	linux-media@vger.kernel.org, linux-staging@lists.linux.dev,
	linux-arm-kernel@lists.infradead.org,
	linux-sunxi@lists.linux.dev, ian@linux.cowan.aero
Subject: Re: [PATCH] staging: sunxi: cedrus: centralize cedrus_open exit
Date: Mon, 25 Apr 2022 17:52:50 +0200
Message-ID: <22617338.6Emhk5qWAg@kista>
In-Reply-To: <20220423180111.91602-1-ian@linux.cowan.aero>

Hi Ian,

On Saturday, 23 April 2022 at 20:01:11 CEST, Ian Cowan wrote:
> Refactor the cedrus_open() function so that there is only one exit
> from the function instead of two. This keeps a future change from
> accidentally leaving the mutex locked after a successful exit.
> 
> Signed-off-by: Ian Cowan <ian@linux.cowan.aero>

If this patch were part of a series and "future" meant the next patch, I
would be OK with that. However, in its current form I don't see any benefit in
changing it. Doing another hop lessens readability, IMO. Let's worry about the
future when/if it comes.

Best regards,
Jernej

> ---
>  drivers/staging/media/sunxi/cedrus/cedrus.c | 6 +++---
>  1 file changed, 3 insertions(+), 3 deletions(-)
> 
> diff --git a/drivers/staging/media/sunxi/cedrus/cedrus.c b/drivers/staging/media/sunxi/cedrus/cedrus.c
> index 68b3dcdb5df3..5236d9e4f4e8 100644
> --- a/drivers/staging/media/sunxi/cedrus/cedrus.c
> +++ b/drivers/staging/media/sunxi/cedrus/cedrus.c
> @@ -348,14 +348,14 @@ static int cedrus_open(struct file *file)
>  
>  	v4l2_fh_add(&ctx->fh);
>  
> -	mutex_unlock(&dev->dev_mutex);
> -
> -	return 0;
> +	ret = 0;
> +	goto succ_unlock;
>  
>  err_ctrls:
>  	v4l2_ctrl_handler_free(&ctx->hdl);
>  err_free:
>  	kfree(ctx);
> +succ_unlock:
>  	mutex_unlock(&dev->dev_mutex);
>  
>  	return ret;
> -- 
> 2.35.1
> 
> 
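For illustration, here is a minimal sketch of the two exit styles under
discussion. The names (demo_dev, demo_setup) are placeholders and not the
actual cedrus structures; the real cedrus_open() additionally sets up the
V4L2 file handle and control handler, as shown in the diff above.

#include <linux/errno.h>
#include <linux/mutex.h>
#include <linux/slab.h>

struct demo_dev {
	struct mutex dev_mutex;	/* must be initialized by the caller */
};

/* Placeholder for the context setup done in the real driver. */
static int demo_setup(void *ctx)
{
	return 0;
}

/* Current style: the success path unlocks and returns directly. */
static int demo_open_early_return(struct demo_dev *dev)
{
	void *ctx;
	int ret;

	mutex_lock(&dev->dev_mutex);

	ctx = kzalloc(16, GFP_KERNEL);
	if (!ctx) {
		mutex_unlock(&dev->dev_mutex);
		return -ENOMEM;
	}

	ret = demo_setup(ctx);
	if (ret)
		goto err_free;

	mutex_unlock(&dev->dev_mutex);

	return 0;

err_free:
	kfree(ctx);
	mutex_unlock(&dev->dev_mutex);

	return ret;
}

/* Proposed style: the success path jumps to a single shared unlock. */
static int demo_open_single_exit(struct demo_dev *dev)
{
	void *ctx;
	int ret;

	mutex_lock(&dev->dev_mutex);

	ctx = kzalloc(16, GFP_KERNEL);
	if (!ctx) {
		ret = -ENOMEM;
		goto out_unlock;	/* nothing allocated yet, skip the free */
	}

	ret = demo_setup(ctx);
	if (ret)
		goto err_free;

	ret = 0;
	goto out_unlock;	/* the extra hop the review refers to */

err_free:
	kfree(ctx);
out_unlock:
	mutex_unlock(&dev->dev_mutex);

	return ret;
}

The second form guarantees every path goes through the shared unlock, at the
cost of one extra jump on the success path; that trade-off is what the review
comment above is weighing.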




Thread overview: 9+ messages
2022-04-23 18:01 [PATCH] staging: sunxi: cedrus: centralize cedrus_open exit Ian Cowan
2022-04-25  9:20 ` Dan Carpenter
2022-04-25  9:29   ` Paul Kocialkowski
2022-04-25 10:00     ` Dan Carpenter
2022-04-26  7:39       ` Paul Kocialkowski
2022-04-28 10:26         ` Dan Carpenter
2022-04-28 11:56           ` Paul Kocialkowski
2022-04-25  9:36 ` Dan Carpenter
2022-04-25 15:52 ` Jernej Škrabec [this message]
