public inbox for dmaengine@vger.kernel.org
From: Dave Jiang <dave.jiang@intel.com>
To: Miaoqian Lin <linmq006@gmail.com>,
	Peter Ujfalusi <peter.ujfalusi@gmail.com>,
	Vinod Koul <vkoul@kernel.org>,
	dmaengine@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH] dmaengine: ti: Fix refcount leak in ti_dra7_xbar_route_allocate
Date: Thu, 12 May 2022 10:03:48 -0700	[thread overview]
Message-ID: <d1017a7a-4f3d-4218-13da-71f89cf81c81@intel.com> (raw)
In-Reply-To: <20220512051815.11946-1-linmq006@gmail.com>


On 5/11/2022 10:18 PM, Miaoqian Lin wrote:
> 1. of_find_device_by_node() takes a reference; we should call put_device()
> to release it when it is no longer needed.
> 2. of_parse_phandle() returns a node pointer with its refcount
> incremented; we should call of_node_put() on it when it is no longer needed.
>
> Add put_device() and of_node_put() to the affected error paths to fix this.
Sounds like you need two patches for this? One just for the put_device() 
and the other for the of_node_put()?
>
> Fixes: ec9bfa1e1a79 ("dmaengine: ti-dma-crossbar: dra7: Use bitops instead of idr")
> Fixes: a074ae38f859 ("dmaengine: Add driver for TI DMA crossbar on DRA7x")
> Signed-off-by: Miaoqian Lin <linmq006@gmail.com>
> ---
>   drivers/dma/ti/dma-crossbar.c | 5 +++++
>   1 file changed, 5 insertions(+)
>
> diff --git a/drivers/dma/ti/dma-crossbar.c b/drivers/dma/ti/dma-crossbar.c
> index 71d24fc07c00..f744ddbbbad7 100644
> --- a/drivers/dma/ti/dma-crossbar.c
> +++ b/drivers/dma/ti/dma-crossbar.c
> @@ -245,6 +245,7 @@ static void *ti_dra7_xbar_route_allocate(struct of_phandle_args *dma_spec,
>   	if (dma_spec->args[0] >= xbar->xbar_requests) {
>   		dev_err(&pdev->dev, "Invalid XBAR request number: %d\n",
>   			dma_spec->args[0]);
> +		put_device(&pdev->dev);
>   		return ERR_PTR(-EINVAL);
>   	}
>   
> @@ -252,12 +253,14 @@ static void *ti_dra7_xbar_route_allocate(struct of_phandle_args *dma_spec,
>   	dma_spec->np = of_parse_phandle(ofdma->of_node, "dma-masters", 0);
>   	if (!dma_spec->np) {
>   		dev_err(&pdev->dev, "Can't get DMA master\n");
> +		put_device(&pdev->dev);
>   		return ERR_PTR(-EINVAL);
>   	}
>   
>   	map = kzalloc(sizeof(*map), GFP_KERNEL);
>   	if (!map) {
>   		of_node_put(dma_spec->np);
> +		put_device(&pdev->dev);
>   		return ERR_PTR(-ENOMEM);
>   	}
>   
> @@ -268,6 +271,8 @@ static void *ti_dra7_xbar_route_allocate(struct of_phandle_args *dma_spec,
>   		mutex_unlock(&xbar->mutex);
>   		dev_err(&pdev->dev, "Run out of free DMA requests\n");
>   		kfree(map);
> +		of_node_put(dma_spec->np);
> +		put_device(&pdev->dev);
>   		return ERR_PTR(-ENOMEM);
>   	}
>   	set_bit(map->xbar_out, xbar->dma_inuse);

Thread overview: 3+ messages
2022-05-12  5:18 [PATCH] dmaengine: ti: Fix refcount leak in ti_dra7_xbar_route_allocate Miaoqian Lin
2022-05-12 17:03 ` Dave Jiang [this message]
2022-05-16  5:56   ` Miaoqian Lin
