dmaengine.vger.kernel.org archive mirror
* dmaengine: avoid map_cnt overflow with CONFIG_DMA_ENGINE_RAID
@ 2018-01-08 15:50 Zi Yan
  0 siblings, 0 replies; 4+ messages in thread
From: Zi Yan @ 2018-01-08 15:50 UTC (permalink / raw)
  To: dmaengine; +Cc: linux-kernel, Zi Yan, Vinod Koul, Dan Williams

From: Zi Yan <zi.yan@cs.rutgers.edu>

When CONFIG_DMA_ENGINE_RAID is enabled, the unmap pool size can reach
256. But in struct dmaengine_unmap_data, map_cnt is only a u8, so it
wraps to 0 when the unmap pool is used to its maximum. This triggers
BUG() when the struct dmaengine_unmap_data is freed. Use u16 for
map_cnt to fix the problem.
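
A minimal userspace sketch of the wrap-around described above (my own
illustration, not part of the patch; the counter is shown in isolation
and the free-path comment only restates the changelog):

#include <stdio.h>
#include <stdint.h>

int main(void)
{
	unsigned int nr = 256;	/* largest unmap size with DMA_ENGINE_RAID */
	uint8_t map_cnt = nr;	/* same truncation as the u8 field */

	/* Prints "requested 256, stored 0": the count wraps to 0. When the
	 * dmaengine_unmap_data is later freed, that wrapped value is what
	 * leads to the BUG() mentioned above. A u16 keeps 256 intact. */
	printf("requested %u, stored %u\n", nr, (unsigned int)map_cnt);

	return 0;
}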

Signed-off-by: Zi Yan <zi.yan@cs.rutgers.edu>
---
 include/linux/dmaengine.h | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/include/linux/dmaengine.h b/include/linux/dmaengine.h
index f838764993eb..861be5cab1df 100644
--- a/include/linux/dmaengine.h
+++ b/include/linux/dmaengine.h
@@ -470,7 +470,11 @@ typedef void (*dma_async_tx_callback_result)(void *dma_async_param,
 				const struct dmaengine_result *result);
 
 struct dmaengine_unmap_data {
+#if IS_ENABLED(CONFIG_DMA_ENGINE_RAID)
+	u16 map_cnt;
+#else
 	u8 map_cnt;
+#endif
 	u8 to_cnt;
 	u8 from_cnt;
 	u8 bidi_cnt;


* dmaengine: avoid map_cnt overflow with CONFIG_DMA_ENGINE_RAID
@ 2018-01-12 16:56 Vinod Koul
  0 siblings, 0 replies; 4+ messages in thread
From: Vinod Koul @ 2018-01-12 16:56 UTC (permalink / raw)
  To: Zi Yan, Dan Williams; +Cc: dmaengine, linux-kernel, Zi Yan

On Mon, Jan 08, 2018 at 10:50:50AM -0500, Zi Yan wrote:
> From: Zi Yan <zi.yan@cs.rutgers.edu>
> 
> When CONFIG_DMA_ENGINE_RAID is enabled, the unmap pool size can reach
> 256. But in struct dmaengine_unmap_data, map_cnt is only a u8, so it
> wraps to 0 when the unmap pool is used to its maximum. This triggers
> BUG() when the struct dmaengine_unmap_data is freed. Use u16 for
> map_cnt to fix the problem.
> 
> Signed-off-by: Zi Yan <zi.yan@cs.rutgers.edu>
> ---
>  include/linux/dmaengine.h | 4 ++++
>  1 file changed, 4 insertions(+)
> 
> diff --git a/include/linux/dmaengine.h b/include/linux/dmaengine.h
> index f838764993eb..861be5cab1df 100644
> --- a/include/linux/dmaengine.h
> +++ b/include/linux/dmaengine.h
> @@ -470,7 +470,11 @@ typedef void (*dma_async_tx_callback_result)(void *dma_async_param,
>  				const struct dmaengine_result *result);
>  
>  struct dmaengine_unmap_data {
> +#if IS_ENABLED(CONFIG_DMA_ENGINE_RAID)
> +	u16 map_cnt;
> +#else
>  	u8 map_cnt;
> +#endif
>  	u8 to_cnt;
>  	u8 from_cnt;
>  	u8 bidi_cnt;

Would that cause an adverse performance impact, since the data structure is
no longer aligned? Dan, was that a consideration while adding this?


* dmaengine: avoid map_cnt overflow with CONFIG_DMA_ENGINE_RAID
@ 2018-01-16 20:00 Zi Yan
  0 siblings, 0 replies; 4+ messages in thread
From: Zi Yan @ 2018-01-16 20:00 UTC (permalink / raw)
  To: Vinod Koul; +Cc: Dan Williams, dmaengine, linux-kernel

On 12 Jan 2018, at 11:56, Vinod Koul wrote:

> On Mon, Jan 08, 2018 at 10:50:50AM -0500, Zi Yan wrote:
>> From: Zi Yan <zi.yan@cs.rutgers.edu>
>>
>> When CONFIG_DMA_ENGINE_RAID is enabled, the unmap pool size can reach
>> 256. But in struct dmaengine_unmap_data, map_cnt is only a u8, so it
>> wraps to 0 when the unmap pool is used to its maximum. This triggers
>> BUG() when the struct dmaengine_unmap_data is freed. Use u16 for
>> map_cnt to fix the problem.
>>
>> Signed-off-by: Zi Yan <zi.yan@cs.rutgers.edu>
>> ---
>>  include/linux/dmaengine.h | 4 ++++
>>  1 file changed, 4 insertions(+)
>>
>> diff --git a/include/linux/dmaengine.h b/include/linux/dmaengine.h
>> index f838764993eb..861be5cab1df 100644
>> --- a/include/linux/dmaengine.h
>> +++ b/include/linux/dmaengine.h
>> @@ -470,7 +470,11 @@ typedef void (*dma_async_tx_callback_result)(void *dma_async_param,
>>  				const struct dmaengine_result *result);
>>
>>  struct dmaengine_unmap_data {
>> +#if IS_ENABLED(CONFIG_DMA_ENGINE_RAID)
>> +	u16 map_cnt;
>> +#else
>>  	u8 map_cnt;
>> +#endif
>>  	u8 to_cnt;
>>  	u8 from_cnt;
>>  	u8 bidi_cnt;
>
> Would that cause an adverse performance impact, since the data structure is
> no longer aligned? Dan, was that a consideration while adding this?
>

It will only add two more cache misses (one when mapping the data, the
other when unmapping it) per DMA engine operation, regardless of the data
size. And there is no impact on the actual DMA transfers. So the impact
should be minimal.
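
For completeness, here is a quick stand-alone layout check (my own
sketch, with userspace stand-ins for the kernel types, mirroring the
struct as I read it in include/linux/dmaengine.h, so only an
approximation): on LP64 the extra byte is absorbed by the existing
padding in front of the dev pointer, so the struct size and the
offsets of the later members do not change.

#include <stdio.h>
#include <stdint.h>
#include <stddef.h>

/* Stand-ins so this builds outside the kernel. */
typedef uint64_t dma_addr_t;
struct kref { int refcount; };

struct unmap_u8 {			/* current layout */
	uint8_t map_cnt, to_cnt, from_cnt, bidi_cnt;
	void *dev;
	struct kref kref;
	size_t len;
	dma_addr_t addr[];
};

struct unmap_u16 {			/* layout with the patch applied */
	uint16_t map_cnt;
	uint8_t to_cnt, from_cnt, bidi_cnt;
	void *dev;
	struct kref kref;
	size_t len;
	dma_addr_t addr[];
};

int main(void)
{
	/* With these stand-ins both lines print the same numbers on
	 * x86_64: size 32, dev at offset 8. */
	printf("u8 : size %zu, dev at %zu\n",
	       sizeof(struct unmap_u8), offsetof(struct unmap_u8, dev));
	printf("u16: size %zu, dev at %zu\n",
	       sizeof(struct unmap_u16), offsetof(struct unmap_u16, dev));
	return 0;
}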

—
Best Regards,
Yan Zi


* dmaengine: avoid map_cnt overflow with CONFIG_DMA_ENGINE_RAID
@ 2018-02-05  5:39 Vinod Koul
  0 siblings, 0 replies; 4+ messages in thread
From: Vinod Koul @ 2018-02-05  5:39 UTC (permalink / raw)
  To: Zi Yan; +Cc: dmaengine, linux-kernel, Zi Yan, Dan Williams

On Mon, Jan 08, 2018 at 10:50:50AM -0500, Zi Yan wrote:
> From: Zi Yan <zi.yan@cs.rutgers.edu>
> 
> When CONFIG_DMA_ENGINE_RAID is enabled, the unmap pool size can reach
> 256. But in struct dmaengine_unmap_data, map_cnt is only a u8, so it
> wraps to 0 when the unmap pool is used to its maximum. This triggers
> BUG() when the struct dmaengine_unmap_data is freed. Use u16 for
> map_cnt to fix the problem.

Applied, thanks

