dmaengine.vger.kernel.org archive mirror
* [v5,2/7] dmaengine: xilinx_dma: in axidma slave_sg and dma_cyclic mode align split descriptors
@ 2018-09-07  6:24 Andrea Merello
From: Andrea Merello @ 2018-09-07  6:24 UTC (permalink / raw)
  To: vkoul, dan.j.williams, michal.simek, appana.durga.rao, dmaengine
  Cc: linux-arm-kernel, linux-kernel, robh+dt, mark.rutland, devicetree,
	radhey.shyam.pandey, Andrea Merello

Whenever a single or cyclic transaction is prepared, the driver
may split it over several SG descriptors in order to deal with
the HW maximum transfer length.

This can result in DMA operations that start from a misaligned
address, which seems fatal for the HW if DRE (Data Realignment
Engine) is not enabled.

This patch adjusts the transfer size in order to make sure all
operations start from an aligned address.

Cc: Radhey Shyam Pandey <radhey.shyam.pandey@xilinx.com>
Signed-off-by: Andrea Merello <andrea.merello@gmail.com>
Reviewed-by: Radhey Shyam Pandey <radhey.shyam.pandey@xilinx.com>
---
Changes in v2:
        - don't introduce a copy_mask field, rather rely on the already-existing
          copy_align field. Suggested by Radhey Shyam Pandey
        - reword title
Changes in v3:
	- fix bug introduced in v2: wrong copy size when DRE is enabled
	- use implementation suggested by Radhey Shyam Pandey
Changes in v4:
	- rework on top of 1/6
Changes in v5:
	- fix typo in commit title
	- add hint about "DRE" meaning in commit message
---
 drivers/dma/xilinx/xilinx_dma.c | 22 ++++++++++++++++++----
 1 file changed, 18 insertions(+), 4 deletions(-)

diff --git a/drivers/dma/xilinx/xilinx_dma.c b/drivers/dma/xilinx/xilinx_dma.c
index a3aaa0e34cc7..aaa6de8a70e4 100644
--- a/drivers/dma/xilinx/xilinx_dma.c
+++ b/drivers/dma/xilinx/xilinx_dma.c
@@ -954,15 +954,28 @@ static int xilinx_dma_alloc_chan_resources(struct dma_chan *dchan)
 
 /**
  * xilinx_dma_calc_copysize - Calculate the amount of data to copy
+ * @chan: Driver specific DMA channel
  * @size: Total data that needs to be copied
  * @done: Amount of data that has been already copied
  *
  * Return: Amount of data that has to be copied
  */
-static int xilinx_dma_calc_copysize(int size, int done)
+static int xilinx_dma_calc_copysize(struct xilinx_dma_chan *chan,
+				    int size, int done)
 {
-	return min_t(size_t, size - done,
+	size_t copy = min_t(size_t, size - done,
 		     XILINX_DMA_MAX_TRANS_LEN);
+
+	if ((copy + done < size) &&
+	    chan->xdev->common.copy_align) {
+		/*
+		 * If this is not the last descriptor, make sure
+		 * the next one will be properly aligned
+		 */
+		copy = rounddown(copy,
+				 (1 << chan->xdev->common.copy_align));
+	}
+	return copy;
 }
 
 /**
@@ -1804,7 +1817,7 @@ static struct dma_async_tx_descriptor *xilinx_dma_prep_slave_sg(
 			 * Calculate the maximum number of bytes to transfer,
 			 * making sure it is less than the hw limit
 			 */
-			copy = xilinx_dma_calc_copysize(sg_dma_len(sg),
+			copy = xilinx_dma_calc_copysize(chan, sg_dma_len(sg),
 							sg_used);
 			hw = &segment->hw;
 
@@ -1909,7 +1922,8 @@ static struct dma_async_tx_descriptor *xilinx_dma_prep_dma_cyclic(
 			 * Calculate the maximum number of bytes to transfer,
 			 * making sure it is less than the hw limit
 			 */
-			copy = xilinx_dma_calc_copysize(period_len, sg_used);
+			copy = xilinx_dma_calc_copysize(chan,
+							period_len, sg_used);
 			hw = &segment->hw;
 			xilinx_axidma_buf(chan, hw, buf_addr, sg_used,
 					  period_len * i);
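
To see why the rounddown matters, here is a minimal user-space sketch of
the calculation above (illustrative only: the constants are made up, and
the kernel's rounddown() is replaced by an equivalent power-of-two mask).
With a HW limit of 1000 bytes and a 64-byte alignment requirement,
splitting at the raw limit would start chunks at offsets 0, 1000 and
2000, of which the last two are misaligned; rounding each non-final
chunk down keeps every chunk start aligned:

#include <stdio.h>
#include <stddef.h>

#define MAX_TRANS_LEN	1000	/* stand-in for XILINX_DMA_MAX_TRANS_LEN */
#define COPY_ALIGN	6	/* stand-in for copy_align: log2(64) */

/* User-space mirror of the patched xilinx_dma_calc_copysize() logic */
static size_t calc_copysize(size_t size, size_t done)
{
	size_t copy = size - done;

	if (copy > MAX_TRANS_LEN)
		copy = MAX_TRANS_LEN;

	/* Not the last chunk: round down so the next one starts aligned */
	if (done + copy < size)
		copy &= ~(((size_t)1 << COPY_ALIGN) - 1);

	return copy;
}

int main(void)
{
	size_t size = 2500, done = 0;

	while (done < size) {
		size_t copy = calc_copysize(size, done);

		/* Prints offsets 0, 960, 1920: all multiples of 64 */
		printf("chunk at offset %4zu, len %4zu\n", done, copy);
		done += copy;
	}
	return 0;
}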

* [v5,2/7] dmaengine: xilinx_dma: in axidma slave_sg and dma_cyclic mode align split descriptors
@ 2018-09-18 16:21 Vinod Koul
From: Vinod Koul @ 2018-09-18 16:21 UTC (permalink / raw)
  To: Andrea Merello
  Cc: dan.j.williams, michal.simek, appana.durga.rao, dmaengine,
	linux-arm-kernel, linux-kernel, robh+dt, mark.rutland, devicetree,
	radhey.shyam.pandey

On 07-09-18, 08:24, Andrea Merello wrote:
> Whenever a single or cyclic transaction is prepared, the driver
> may split it over several SG descriptors in order to deal with
> the HW maximum transfer length.
> 
> This can result in DMA operations that start from a misaligned
> address, which seems fatal for the HW if DRE (Data Realignment
> Engine) is not enabled.
> 
> This patch adjusts the transfer size in order to make sure all
> operations start from an aligned address.
> 
> Cc: Radhey Shyam Pandey <radhey.shyam.pandey@xilinx.com>
> Signed-off-by: Andrea Merello <andrea.merello@gmail.com>
> Reviewed-by: Radhey Shyam Pandey <radhey.shyam.pandey@xilinx.com>
> ---
> Changes in v2:
>         - don't introduce a copy_mask field, rather rely on the already-existing
>           copy_align field. Suggested by Radhey Shyam Pandey
>         - reword title
> Changes in v3:
> 	- fix bug introduced in v2: wrong copy size when DRE is enabled
> 	- use implementation suggested by Radhey Shyam Pandey
> Changes in v4:
> 	- rework on top of 1/6
> Changes in v5:
> 	- fix typo in commit title
> 	- add hint about "DRE" meaning in commit message
> ---
>  drivers/dma/xilinx/xilinx_dma.c | 22 ++++++++++++++++++----
>  1 file changed, 18 insertions(+), 4 deletions(-)
> 
> diff --git a/drivers/dma/xilinx/xilinx_dma.c b/drivers/dma/xilinx/xilinx_dma.c
> index a3aaa0e34cc7..aaa6de8a70e4 100644
> --- a/drivers/dma/xilinx/xilinx_dma.c
> +++ b/drivers/dma/xilinx/xilinx_dma.c
> @@ -954,15 +954,28 @@ static int xilinx_dma_alloc_chan_resources(struct dma_chan *dchan)
>  
>  /**
>   * xilinx_dma_calc_copysize - Calculate the amount of data to copy
> + * @chan: Driver specific DMA channel
>   * @size: Total data that needs to be copied
>   * @done: Amount of data that has been already copied
>   *
>   * Return: Amount of data that has to be copied
>   */
> -static int xilinx_dma_calc_copysize(int size, int done)
> +static int xilinx_dma_calc_copysize(struct xilinx_dma_chan *chan,
> +				    int size, int done)

align to preceding line opening brace please

>  {
> -	return min_t(size_t, size - done,
> +	size_t copy = min_t(size_t, size - done,
>  		     XILINX_DMA_MAX_TRANS_LEN);

so we can do it this way in patch 1:

        size_t copy;

        copy = min_t(size_t, size - done,
                     XILINX_DMA_MAX_TRANS_LEN);

        return copy;

and then add these here; as it stands, it feels like we are redoing the
change introduced in patch 1..


> +	if ((copy + done < size) &&
> +	    chan->xdev->common.copy_align) {
> +		/*
> +		 * If this is not the last descriptor, make sure
> +		 * the next one will be properly aligned
> +		 */
> +		copy = rounddown(copy,
> +				 (1 << chan->xdev->common.copy_align));
> +	}
> +	return copy;
>  }
>  
>  /**
> @@ -1804,7 +1817,7 @@ static struct dma_async_tx_descriptor *xilinx_dma_prep_slave_sg(
>  			 * Calculate the maximum number of bytes to transfer,
>  			 * making sure it is less than the hw limit
>  			 */
> -			copy = xilinx_dma_calc_copysize(sg_dma_len(sg),
> +			copy = xilinx_dma_calc_copysize(chan, sg_dma_len(sg),

why not add chan already in patch 1 and only add the handling in patch 2?
Seems like less churn to me..

* [v5,2/7] dmaengine: xilinx_dma: in axidma slave_sg and dma_cyclic mode align split descriptors
@ 2018-09-28  7:11 Andrea Merello
From: Andrea Merello @ 2018-09-28  7:11 UTC (permalink / raw)
  To: Vinod
  Cc: dan.j.williams, michal.simek, appana.durga.rao, dmaengine,
	linux-arm-kernel, linux-kernel, Rob Herring, Mark Rutland,
	devicetree, Radhey Shyam Pandey

On Tue, Sep 18, 2018 at 6:21 PM Vinod <vkoul@kernel.org> wrote:
>
> On 07-09-18, 08:24, Andrea Merello wrote:
> > Whenever a single or cyclic transaction is prepared, the driver
> > may split it over several SG descriptors in order to deal with
> > the HW maximum transfer length.
> >
> > This can result in DMA operations that start from a misaligned
> > address, which seems fatal for the HW if DRE (Data Realignment
> > Engine) is not enabled.
> >
> > This patch adjusts the transfer size in order to make sure all
> > operations start from an aligned address.
> >
> > Cc: Radhey Shyam Pandey <radhey.shyam.pandey@xilinx.com>
> > Signed-off-by: Andrea Merello <andrea.merello@gmail.com>
> > Reviewed-by: Radhey Shyam Pandey <radhey.shyam.pandey@xilinx.com>
> > ---
> > Changes in v2:
> >         - don't introduce a copy_mask field, rather rely on the already-existing
> >           copy_align field. Suggested by Radhey Shyam Pandey
> >         - reword title
> > Changes in v3:
> >       - fix bug introduced in v2: wrong copy size when DRE is enabled
> >       - use implementation suggested by Radhey Shyam Pandey
> > Changes in v4:
> >       - rework on top of 1/6
> > Changes in v5:
> >       - fix typo in commit title
> >       - add hint about "DRE" meaning in commit message
> > ---
> >  drivers/dma/xilinx/xilinx_dma.c | 22 ++++++++++++++++++----
> >  1 file changed, 18 insertions(+), 4 deletions(-)
> >
> > diff --git a/drivers/dma/xilinx/xilinx_dma.c b/drivers/dma/xilinx/xilinx_dma.c
> > index a3aaa0e34cc7..aaa6de8a70e4 100644
> > --- a/drivers/dma/xilinx/xilinx_dma.c
> > +++ b/drivers/dma/xilinx/xilinx_dma.c
> > @@ -954,15 +954,28 @@ static int xilinx_dma_alloc_chan_resources(struct dma_chan *dchan)
> >
> >  /**
> >   * xilinx_dma_calc_copysize - Calculate the amount of data to copy
> > + * @chan: Driver specific DMA channel
> >   * @size: Total data that needs to be copied
> >   * @done: Amount of data that has been already copied
> >   *
> >   * Return: Amount of data that has to be copied
> >   */
> > -static int xilinx_dma_calc_copysize(int size, int done)
> > +static int xilinx_dma_calc_copysize(struct xilinx_dma_chan *chan,
> > +                                 int size, int done)
>
> align to preceding line opening brace please

After applying, I'm seeing it already aligned as you requested; 4 tabs
+ 4 spaces, so the 2nd line starts right under the "s" near the opening
brace..
The patch was sent using git, so it should pass through without being
mangled; I don't know why you see it misaligned :(

> >  {
> > -     return min_t(size_t, size - done,
> > +     size_t copy = min_t(size_t, size - done,
> >                    XILINX_DMA_MAX_TRANS_LEN);
>
> so we can do it this way in patch 1:
>
>         size_t copy;
>
>         copy = min_t(size_t, size - done,
>                      XILINX_DMA_MAX_TRANS_LEN);
>
>         return copy;
>
> and then add these here; as it stands, it feels like we are redoing the
> change introduced in patch 1..

OK, this sounds good :)

>
> > +     if ((copy + done < size) &&
> > +         chan->xdev->common.copy_align) {
> > +             /*
> > +              * If this is not the last descriptor, make sure
> > +              * the next one will be properly aligned
> > +              */
> > +             copy = rounddown(copy,
> > +                              (1 << chan->xdev->common.copy_align));
> > +     }
> > +     return copy;
> >  }
> >
> >  /**
> > @@ -1804,7 +1817,7 @@ static struct dma_async_tx_descriptor *xilinx_dma_prep_slave_sg(
> >                        * Calculate the maximum number of bytes to transfer,
> >                        * making sure it is less than the hw limit
> >                        */
> > -                     copy = xilinx_dma_calc_copysize(sg_dma_len(sg),
> > +                     copy = xilinx_dma_calc_copysize(chan, sg_dma_len(sg),
>
> why not add chan already in patch 1 and only add the handling in patch 2?
> Seems like less churn to me..

Indeed this was something I was unsure about.. I ended up feeling it
was better not to introduce a function that takes an unused (yet)
argument, but I can change this of course :)

> --
> ~Vinod
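
To make the agreed rework concrete, a sketch of how the helper might
look after patch 1, assuming the next revision follows both suggestions
above (a hypothetical shape, not actual posted code): chan is already
passed in, though unused until this patch, and copy is a local variable,
so this patch then reduces to inserting just the rounddown block before
the return:

/* Hypothetical patch-1 version of the helper, per the review above;
 * chan is accepted but not used until the alignment patch adds the
 * rounddown block. Not the actual v6 code. */
static int xilinx_dma_calc_copysize(struct xilinx_dma_chan *chan,
				    int size, int done)
{
	size_t copy;

	copy = min_t(size_t, size - done,
		     XILINX_DMA_MAX_TRANS_LEN);

	return copy;
}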

* [v5,2/7] dmaengine: xilinx_dma: in axidma slave_sg and dma_cyclic mode align split descriptors
@ 2018-10-02 14:58 Vinod Koul
From: Vinod Koul @ 2018-10-02 14:58 UTC (permalink / raw)
  To: Andrea Merello
  Cc: dan.j.williams, michal.simek, appana.durga.rao, dmaengine,
	linux-arm-kernel, linux-kernel, Rob Herring, Mark Rutland,
	devicetree, Radhey Shyam Pandey

On 28-09-18, 09:11, Andrea Merello wrote:
> On Tue, Sep 18, 2018 at 6:21 PM Vinod <vkoul@kernel.org> wrote:

> > > @@ -1804,7 +1817,7 @@ static struct dma_async_tx_descriptor *xilinx_dma_prep_slave_sg(
> > >                        * Calculate the maximum number of bytes to transfer,
> > >                        * making sure it is less than the hw limit
> > >                        */
> > > -                     copy = xilinx_dma_calc_copysize(sg_dma_len(sg),
> > > +                     copy = xilinx_dma_calc_copysize(chan, sg_dma_len(sg),
> >
> > why not add chan already in patch 1 and only add the handling in patch 2?
> > Seems like less churn to me..
> 
> Indeed this was something I was unsure about.. I ended up feeling it
> was better not to introduce a function that takes an unused (yet)
> argument, but I can change this of course :)

IMO it is fine to add a user in a subsequent patch in a series. It is not
fine to add something and not use it in "that" series :)
