From: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
To: Mans Rullgard <mans@mansr.com>, Viresh Kumar <vireshk@kernel.org>,
Dan Williams <dan.j.williams@intel.com>,
Vinod Koul <vinod.koul@intel.com>,
dmaengine@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH v2 2/2] dmaengine: dw: fix cyclic transfer callbacks
Date: Mon, 11 Jan 2016 17:04:04 +0200 [thread overview]
Message-ID: <1452524644.26146.34.camel@linux.intel.com> (raw)
In-Reply-To: <1452517469-26922-3-git-send-email-mans@mansr.com>
On Mon, 2016-01-11 at 13:04 +0000, Mans Rullgard wrote:
> Cyclic transfer callbacks rely on block completion interrupts which were
> disabled in commit ff7b05f29fd4 ("dmaengine/dw_dmac: Don't handle block
> interrupts"). This re-enables block interrupts so the cyclic callbacks
> can work. Other transfer types are not affected as they set the INT_EN
> bit only on the last block.
>
> Fixes: ff7b05f29fd4 ("dmaengine/dw_dmac: Don't handle block interrupts")
> Signed-off-by: Mans Rullgard <mans@mansr.com>
How did you test that?

From my understanding, the custom code that does cyclic transfers prepares a set of descriptors per period, which at the end of the transfer will generate an XFER interrupt. The next period will go the same way.

Maybe I missed something.
> ---
> Changes:
> - new patch
> ---
> drivers/dma/dw/core.c | 21 +++++++++++++++------
> 1 file changed, 15 insertions(+), 6 deletions(-)
>
> diff --git a/drivers/dma/dw/core.c b/drivers/dma/dw/core.c
> index af2b92f8501e..b92662722404 100644
> --- a/drivers/dma/dw/core.c
> +++ b/drivers/dma/dw/core.c
> @@ -156,6 +156,7 @@ static void dwc_initialize(struct dw_dma_chan *dwc)
>
> /* Enable interrupts */
> channel_set_bit(dw, MASK.XFER, dwc->mask);
> + channel_set_bit(dw, MASK.BLOCK, dwc->mask);
> channel_set_bit(dw, MASK.ERROR, dwc->mask);
>
> dwc->initialized = true;
> @@ -536,16 +537,17 @@ EXPORT_SYMBOL(dw_dma_get_dst_addr);
>
> /* Called with dwc->lock held and all DMAC interrupts disabled */
> static void dwc_handle_cyclic(struct dw_dma *dw, struct dw_dma_chan *dwc,
> -		u32 status_err, u32 status_xfer)
> +		u32 status_block, u32 status_err, u32 status_xfer)
> {
> unsigned long flags;
>
> - if (dwc->mask) {
> + if (status_block & dwc->mask) {
> void (*callback)(void *param);
> void *callback_param;
>
> 		dev_vdbg(chan2dev(&dwc->chan), "new cyclic period llp 0x%08x\n",
> channel_readl(dwc, LLP));
> + dma_writel(dw, CLEAR.BLOCK, dwc->mask);
>
> callback = dwc->cdesc->period_callback;
> callback_param = dwc->cdesc->period_callback_param;
> @@ -577,6 +579,7 @@ static void dwc_handle_cyclic(struct dw_dma *dw, struct dw_dma_chan *dwc,
> channel_writel(dwc, CTL_LO, 0);
> channel_writel(dwc, CTL_HI, 0);
>
> + dma_writel(dw, CLEAR.BLOCK, dwc->mask);
> dma_writel(dw, CLEAR.ERROR, dwc->mask);
> dma_writel(dw, CLEAR.XFER, dwc->mask);
>
> @@ -593,10 +596,12 @@ static void dw_dma_tasklet(unsigned long data)
> {
> struct dw_dma *dw = (struct dw_dma *)data;
> struct dw_dma_chan *dwc;
> + u32 status_block;
> u32 status_xfer;
> u32 status_err;
> int i;
>
> + status_block = dma_readl(dw, RAW.BLOCK);
> status_xfer = dma_readl(dw, RAW.XFER);
> status_err = dma_readl(dw, RAW.ERROR);
>
> @@ -605,7 +610,8 @@ static void dw_dma_tasklet(unsigned long data)
> for (i = 0; i < dw->dma.chancnt; i++) {
> dwc = &dw->chan[i];
> if (test_bit(DW_DMA_IS_CYCLIC, &dwc->flags))
> -			dwc_handle_cyclic(dw, dwc, status_err, status_xfer);
> +			dwc_handle_cyclic(dw, dwc, status_block, status_err,
> +					  status_xfer);
> else if (status_err & (1 << i))
> dwc_handle_error(dw, dwc);
> else if (status_xfer & (1 << i))
> @@ -616,6 +622,7 @@ static void dw_dma_tasklet(unsigned long data)
> * Re-enable interrupts.
> */
> channel_set_bit(dw, MASK.XFER, dw->all_chan_mask);
> + channel_set_bit(dw, MASK.BLOCK, dw->all_chan_mask);
> channel_set_bit(dw, MASK.ERROR, dw->all_chan_mask);
> }
>
> @@ -635,6 +642,7 @@ static irqreturn_t dw_dma_interrupt(int irq, void *dev_id)
> * softirq handler.
> */
> channel_clear_bit(dw, MASK.XFER, dw->all_chan_mask);
> + channel_clear_bit(dw, MASK.BLOCK, dw->all_chan_mask);
> channel_clear_bit(dw, MASK.ERROR, dw->all_chan_mask);
>
> status = dma_readl(dw, STATUS_INT);
> @@ -645,6 +653,7 @@ static irqreturn_t dw_dma_interrupt(int irq, void *dev_id)
>
> /* Try to recover */
> channel_clear_bit(dw, MASK.XFER, (1 << 8) - 1);
> + channel_clear_bit(dw, MASK.BLOCK, (1 << 8) - 1);
> channel_clear_bit(dw, MASK.SRC_TRAN, (1 << 8) - 1);
> channel_clear_bit(dw, MASK.DST_TRAN, (1 << 8) - 1);
> channel_clear_bit(dw, MASK.ERROR, (1 << 8) - 1);
> @@ -1111,6 +1120,7 @@ static void dw_dma_off(struct dw_dma *dw)
> dma_writel(dw, CFG, 0);
>
> channel_clear_bit(dw, MASK.XFER, dw->all_chan_mask);
> + channel_clear_bit(dw, MASK.BLOCK, dw->all_chan_mask);
> channel_clear_bit(dw, MASK.SRC_TRAN, dw->all_chan_mask);
> channel_clear_bit(dw, MASK.DST_TRAN, dw->all_chan_mask);
> channel_clear_bit(dw, MASK.ERROR, dw->all_chan_mask);
> @@ -1216,6 +1226,7 @@ static void dwc_free_chan_resources(struct dma_chan *chan)
>
> /* Disable interrupts */
> channel_clear_bit(dw, MASK.XFER, dwc->mask);
> + channel_clear_bit(dw, MASK.BLOCK, dwc->mask);
> channel_clear_bit(dw, MASK.ERROR, dwc->mask);
>
> spin_unlock_irqrestore(&dwc->lock, flags);
> @@ -1458,6 +1469,7 @@ void dw_dma_cyclic_free(struct dma_chan *chan)
>
> dwc_chan_disable(dw, dwc);
>
> + dma_writel(dw, CLEAR.BLOCK, dwc->mask);
> dma_writel(dw, CLEAR.ERROR, dwc->mask);
> dma_writel(dw, CLEAR.XFER, dwc->mask);
>
> @@ -1546,9 +1558,6 @@ int dw_dma_probe(struct dw_dma_chip *chip, struct dw_dma_platform_data *pdata)
> /* Force dma off, just in case */
> dw_dma_off(dw);
>
> - /* Disable BLOCK interrupts as well */
> - channel_clear_bit(dw, MASK.BLOCK, dw->all_chan_mask);
> -
> 	/* Create a pool of consistent memory blocks for hardware descriptors */
> 	dw->desc_pool = dmam_pool_create("dw_dmac_desc_pool", chip->dev,
> 					 sizeof(struct dw_desc), 4, 0);
--
Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Intel Finland Oy
2016-01-11 13:04 [PATCH v2 0/2] dmaengine: dw: fix cyclic transfers Mans Rullgard
2016-01-11 13:04 ` [PATCH v2 1/2] dmaengine: dw: fix cyclic transfer setup Mans Rullgard
2016-01-11 14:08 ` Viresh Kumar
2016-01-11 13:04 ` [PATCH v2 2/2] dmaengine: dw: fix cyclic transfer callbacks Mans Rullgard
2016-01-11 14:10 ` Viresh Kumar
2016-01-11 14:14 ` Måns Rullgård
2016-01-11 16:08 ` Viresh Kumar
2016-01-14 5:44 ` Vinod Koul
2016-01-11 15:04 ` Andy Shevchenko [this message]
2016-01-11 15:09 ` Måns Rullgård
2016-01-11 15:17 ` Andy Shevchenko
2016-01-11 15:22 ` Måns Rullgård
2016-01-11 15:24 ` Andy Shevchenko
2016-01-14 5:52 ` [PATCH v2 0/2] dmaengine: dw: fix cyclic transfers Vinod Koul