From: Vinod Koul <vinod.koul@intel.com>
To: Rameshwar Prasad Sahu <rsahu@apm.com>
Cc: dan.j.williams@intel.com, dmaengine@vger.kernel.org,
	arnd@arndb.de, linux-kernel@vger.kernel.org,
	devicetree@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
	jcm@redhat.com, patches@apm.com
Subject: Re: [PATCH] dmaengine: xgene-dma: Fix holding lock while calling tx callback in cleanup path
Date: Fri, 21 Aug 2015 14:09:46 +0530	[thread overview]
Message-ID: <20150821083946.GO13546@localhost> (raw)
In-Reply-To: <1440066656-15516-1-git-send-email-rsahu@apm.com>

On Thu, Aug 20, 2015 at 04:00:56PM +0530, Rameshwar Prasad Sahu wrote:
> This patch fixes the an locking issue where client callback performs
		^^^^^^^^^^^^
??

> further submission.
Do you mean you are preventing that, or fixing things so that it is allowed?

> 
> Signed-off-by: Rameshwar Prasad Sahu <rsahu@apm.com>
> ---
>  drivers/dma/xgene-dma.c | 33 ++++++++++++++++++++++-----------
>  1 file changed, 22 insertions(+), 11 deletions(-)
> 
> diff --git a/drivers/dma/xgene-dma.c b/drivers/dma/xgene-dma.c
> index d1c8809..0b82bc0 100644
> --- a/drivers/dma/xgene-dma.c
> +++ b/drivers/dma/xgene-dma.c
> @@ -763,12 +763,17 @@ static void xgene_dma_cleanup_descriptors(struct xgene_dma_chan *chan)
>  	struct xgene_dma_ring *ring = &chan->rx_ring;
>  	struct xgene_dma_desc_sw *desc_sw, *_desc_sw;
>  	struct xgene_dma_desc_hw *desc_hw;
> +	struct list_head ld_completed;
>  	u8 status;
> 
> +	INIT_LIST_HEAD(&ld_completed);
> +
> +	spin_lock_bh(&chan->lock);
> +
>  	/* Clean already completed and acked descriptors */
>  	xgene_dma_clean_completed_descriptor(chan);
> 
> -	/* Run the callback for each descriptor, in order */
> +	/* Move all completed descriptors to ld completed queue, in order */
>  	list_for_each_entry_safe(desc_sw, _desc_sw, &chan->ld_running, node) {
>  		/* Get subsequent hw descriptor from DMA rx ring */
>  		desc_hw = &ring->desc_hw[ring->head];
> @@ -811,15 +816,17 @@ static void xgene_dma_cleanup_descriptors(struct xgene_dma_chan *chan)
>  		/* Mark this hw descriptor as processed */
>  		desc_hw->m0 = cpu_to_le64(XGENE_DMA_DESC_EMPTY_SIGNATURE);
> 
> -		xgene_dma_run_tx_complete_actions(chan, desc_sw);
> -
> -		xgene_dma_clean_running_descriptor(chan, desc_sw);
> -
>  		/*
>  		 * Decrement the pending transaction count
>  		 * as we have processed one
>  		 */
>  		chan->pending--;
> +
> +		/*
> +		 * Delete this node from ld running queue and append it to
> +		 * ld completed queue for further processing
> +		 */
> +		list_move_tail(&desc_sw->node, &ld_completed);
>  	}
> 
>  	/*
> @@ -828,6 +835,14 @@ static void xgene_dma_cleanup_descriptors(struct xgene_dma_chan *chan)
>  	 * ahead and free the descriptors below.
>  	 */
>  	xgene_chan_xfer_ld_pending(chan);
> +
> +	spin_unlock_bh(&chan->lock);
> +
> +	/* Run the callback for each descriptor, in order */
> +	list_for_each_entry_safe(desc_sw, _desc_sw, &ld_completed, node) {
> +		xgene_dma_run_tx_complete_actions(chan, desc_sw);
> +		xgene_dma_clean_running_descriptor(chan, desc_sw);
> +	}
>  }
> 
>  static int xgene_dma_alloc_chan_resources(struct dma_chan *dchan)
> @@ -876,11 +891,11 @@ static void xgene_dma_free_chan_resources(struct dma_chan *dchan)
>  	if (!chan->desc_pool)
>  		return;
> 
> -	spin_lock_bh(&chan->lock);
> -
>  	/* Process all running descriptor */
>  	xgene_dma_cleanup_descriptors(chan);
> 
> +	spin_lock_bh(&chan->lock);
> +
>  	/* Clean all link descriptor queues */
>  	xgene_dma_free_desc_list(chan, &chan->ld_pending);
>  	xgene_dma_free_desc_list(chan, &chan->ld_running);
> @@ -1200,15 +1215,11 @@ static void xgene_dma_tasklet_cb(unsigned long data)
>  {
>  	struct xgene_dma_chan *chan = (struct xgene_dma_chan *)data;
> 
> -	spin_lock_bh(&chan->lock);
> -
>  	/* Run all cleanup for descriptors which have been completed */
>  	xgene_dma_cleanup_descriptors(chan);
> 
>  	/* Re-enable DMA channel IRQ */
>  	enable_irq(chan->rx_irq);
> -
> -	spin_unlock_bh(&chan->lock);
>  }
> 
>  static irqreturn_t xgene_dma_chan_ring_isr(int irq, void *id)
> --
> 1.8.2.1
> 

-- 
~Vinod


Thread overview: 5+ messages
2015-08-20 10:30 [PATCH] dmaengine: xgene-dma: Fix holding lock while calling tx callback in cleanup path Rameshwar Prasad Sahu
2015-08-21  8:39 ` Vinod Koul [this message]
2015-08-21  8:45   ` Rameshwar Sahu
     [not found]     ` <CAFd313wEAWjzJa82R1ujh3vyMasDvicQ4nZHq1qqZ5y7D4ixzQ-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
2015-08-21  8:51       ` Vinod Koul
2015-08-21  8:50         ` Rameshwar Sahu
