linux-mm.kvack.org archive mirror
* [PATCH] dmapool: push new blocks in ascending order
@ 2023-02-21 16:54 Keith Busch
  2023-02-21 17:20 ` Bryan O'Donoghue
                   ` (3 more replies)
  0 siblings, 4 replies; 11+ messages in thread
From: Keith Busch @ 2023-02-21 16:54 UTC (permalink / raw)
  To: Andrew Morton, linux-mm, linux-kernel; +Cc: Keith Busch, Bryan O'Donoghue

From: Keith Busch <kbusch@kernel.org>

Some users of the dmapool need their allocations to happen in ascending
order. The recent optimizations pushed the blocks in reverse order, so
restore the previous behavior by linking the next available block from
low-to-high.

Fixes: ced6d06a81fb69 ("dmapool: link blocks across pages")
Reported-by: Bryan O'Donoghue <bryan.odonoghue@linaro.org>
Signed-off-by: Keith Busch <kbusch@kernel.org>
---
 mm/dmapool.c | 15 +++++++++++++--
 1 file changed, 13 insertions(+), 2 deletions(-)

diff --git a/mm/dmapool.c b/mm/dmapool.c
index 1920890ff8d3d..a151a21e571b7 100644
--- a/mm/dmapool.c
+++ b/mm/dmapool.c
@@ -300,7 +300,7 @@ EXPORT_SYMBOL(dma_pool_create);
 static void pool_initialise_page(struct dma_pool *pool, struct dma_page *page)
 {
 	unsigned int next_boundary = pool->boundary, offset = 0;
-	struct dma_block *block;
+	struct dma_block *block, *first = NULL, *last = NULL;
 
 	pool_init_page(pool, page);
 	while (offset + pool->size <= pool->allocation) {
@@ -311,11 +311,22 @@ static void pool_initialise_page(struct dma_pool *pool, struct dma_page *page)
 		}
 
 		block = page->vaddr + offset;
-		pool_block_push(pool, block, page->dma + offset);
+		block->dma = page->dma + offset;
+		block->next_block = NULL;
+
+		if (last)
+			last->next_block = block;
+		else
+			first = block;
+		last = block;
+
 		offset += pool->size;
 		pool->nr_blocks++;
 	}
 
+	last->next_block = pool->next_block;
+	pool->next_block = first;
+
 	list_add(&page->page_list, &pool->page_list);
 	pool->nr_pages++;
 }
-- 
2.30.2



^ permalink raw reply related	[flat|nested] 11+ messages in thread

* Re: [PATCH] dmapool: push new blocks in ascending order
  2023-02-21 16:54 [PATCH] dmapool: push new blocks in ascending order Keith Busch
@ 2023-02-21 17:20 ` Bryan O'Donoghue
  2023-02-21 18:02 ` Christoph Hellwig
                   ` (2 subsequent siblings)
  3 siblings, 0 replies; 11+ messages in thread
From: Bryan O'Donoghue @ 2023-02-21 17:20 UTC (permalink / raw)
  To: Keith Busch, Andrew Morton, linux-mm, linux-kernel; +Cc: Keith Busch

On 21/02/2023 16:54, Keith Busch wrote:
> From: Keith Busch <kbusch@kernel.org>
> 
> Some users of the dmapool need their allocations to happen in ascending
> order. The recent optimizations pushed the blocks in reverse order, so
> restore the previous behavior by linking the next available block from
> low-to-high.
> 
> Fixes: ced6d06a81fb69 ("dmapool: link blocks across pages")
> Reported-by: Bryan O'Donoghue <bryan.odonoghue@linaro.org>
> Signed-off-by: Keith Busch <kbusch@kernel.org>

Tested-by: Bryan O'Donoghue <bryan.odonoghue@linaro.org>




* Re: [PATCH] dmapool: push new blocks in ascending order
  2023-02-21 16:54 [PATCH] dmapool: push new blocks in ascending order Keith Busch
  2023-02-21 17:20 ` Bryan O'Donoghue
@ 2023-02-21 18:02 ` Christoph Hellwig
  2023-02-21 18:07   ` Keith Busch
  2023-02-26  4:42 ` Andrew Morton
  2023-02-28  2:14 ` Guenter Roeck
  3 siblings, 1 reply; 11+ messages in thread
From: Christoph Hellwig @ 2023-02-21 18:02 UTC (permalink / raw)
  To: Keith Busch
  Cc: Andrew Morton, linux-mm, linux-kernel, Keith Busch,
	Bryan O'Donoghue

On Tue, Feb 21, 2023 at 08:54:00AM -0800, Keith Busch wrote:
> From: Keith Busch <kbusch@kernel.org>
> 
> Some users of the dmapool need their allocations to happen in ascending
> order. The recent optimizations pushed the blocks in reverse order, so
> restore the previous behavior by linking the next available block from
> low-to-high.

Who are those users?

Also should we document this behavior somewhere so that it isn't
accidentally changed again some time in the future?



* Re: [PATCH] dmapool: push new blocks in ascending order
  2023-02-21 18:02 ` Christoph Hellwig
@ 2023-02-21 18:07   ` Keith Busch
  2023-02-23 20:41     ` Andrew Morton
  0 siblings, 1 reply; 11+ messages in thread
From: Keith Busch @ 2023-02-21 18:07 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: Keith Busch, Andrew Morton, linux-mm, linux-kernel,
	Bryan O'Donoghue

On Tue, Feb 21, 2023 at 10:02:34AM -0800, Christoph Hellwig wrote:
> On Tue, Feb 21, 2023 at 08:54:00AM -0800, Keith Busch wrote:
> > From: Keith Busch <kbusch@kernel.org>
> > 
> > Some users of the dmapool need their allocations to happen in ascending
> > order. The recent optimizations pushed the blocks in reverse order, so
> > restore the previous behavior by linking the next available block from
> > low-to-high.
> 
> Who are those users?
> 
> Also should we document this behavior somewhere so that it isn't
> accidentally changed again some time in the future?

usb/chipidea/udc.c qh_pool called "ci_hw_qh". My initial thought was dmapool
isn't the right API if you need a specific order when allocating from it, but I
can't readily test any changes to that driver. Restoring the previous behavior
is easy enough.



* Re: [PATCH] dmapool: push new blocks in ascending order
  2023-02-21 18:07   ` Keith Busch
@ 2023-02-23 20:41     ` Andrew Morton
  2023-02-24 18:24       ` Keith Busch
  0 siblings, 1 reply; 11+ messages in thread
From: Andrew Morton @ 2023-02-23 20:41 UTC (permalink / raw)
  To: Keith Busch
  Cc: Christoph Hellwig, Keith Busch, linux-mm, linux-kernel,
	Bryan O'Donoghue

On Tue, 21 Feb 2023 11:07:32 -0700 Keith Busch <kbusch@kernel.org> wrote:

> On Tue, Feb 21, 2023 at 10:02:34AM -0800, Christoph Hellwig wrote:
> > On Tue, Feb 21, 2023 at 08:54:00AM -0800, Keith Busch wrote:
> > > From: Keith Busch <kbusch@kernel.org>
> > > 
> > > Some users of the dmapool need their allocations to happen in ascending
> > > order. The recent optimizations pushed the blocks in reverse order, so
> > > restore the previous behavior by linking the next available block from
> > > low-to-high.
> > 
> > Who are those users?
> > 
> > Also should we document this behavior somewhere so that it isn't
> > accidentally changed again some time in the future?
> 
> usb/chipidea/udc.c qh_pool called "ci_hw_qh".

It would be helpful to know why these users need this side-effect.  Did
the drivers break?   Or just get slower?

Are those drivers misbehaving by assuming this behavior?   Should we
require that they be altered instead of forever constraining the dmapool
implementation in this fashion?



* Re: [PATCH] dmapool: push new blocks in ascending order
  2023-02-23 20:41     ` Andrew Morton
@ 2023-02-24 18:24       ` Keith Busch
  2023-02-24 22:28         ` Bryan O'Donoghue
  0 siblings, 1 reply; 11+ messages in thread
From: Keith Busch @ 2023-02-24 18:24 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Christoph Hellwig, Keith Busch, linux-mm, linux-kernel,
	Bryan O'Donoghue

On Thu, Feb 23, 2023 at 12:41:37PM -0800, Andrew Morton wrote:
> On Tue, 21 Feb 2023 11:07:32 -0700 Keith Busch <kbusch@kernel.org> wrote:
> 
> > On Tue, Feb 21, 2023 at 10:02:34AM -0800, Christoph Hellwig wrote:
> > > On Tue, Feb 21, 2023 at 08:54:00AM -0800, Keith Busch wrote:
> > > > From: Keith Busch <kbusch@kernel.org>
> > > > 
> > > > Some users of the dmapool need their allocations to happen in ascending
> > > > order. The recent optimizations pushed the blocks in reverse order, so
> > > > restore the previous behavior by linking the next available block from
> > > > low-to-high.
> > > 
> > > Who are those users?
> > > 
> > > Also should we document this behavior somewhere so that it isn't
> > > accidentally changed again some time in the future?
> > 
> > usb/chipidea/udc.c qh_pool called "ci_hw_qh".
> 
> It would be helpful to know why these users need this side-effect.  Did
> the drivers break?   Or just get slower?

The affected driver was reported to be unusable without this behavior.
 
> Are those drivers misbehaving by assuming this behavior?   Should we

I do think they're using the wrong API. You shouldn't use the dmapool if
your blocks need to be arranged in contiguous address order. They should just
directly use dma_alloc_coherent() instead.

> require that they be altered instead of forever constraining the dmapool
> implementation in this fashion?

This change isn't really constraining dmapool where it matters. It's just an
unexpected one-time initialization thing.

As far as altering those drivers, I'll reach out to someone on that side for
comment (I'm currently not familiar with the affected subsystem).



* Re: [PATCH] dmapool: push new blocks in ascending order
  2023-02-24 18:24       ` Keith Busch
@ 2023-02-24 22:28         ` Bryan O'Donoghue
  0 siblings, 0 replies; 11+ messages in thread
From: Bryan O'Donoghue @ 2023-02-24 22:28 UTC (permalink / raw)
  To: Keith Busch, Andrew Morton
  Cc: Christoph Hellwig, Keith Busch, linux-mm, linux-kernel

On 24/02/2023 18:24, Keith Busch wrote:
> On Thu, Feb 23, 2023 at 12:41:37PM -0800, Andrew Morton wrote:
>> On Tue, 21 Feb 2023 11:07:32 -0700 Keith Busch <kbusch@kernel.org> wrote:
>>
>>> On Tue, Feb 21, 2023 at 10:02:34AM -0800, Christoph Hellwig wrote:
>>>> On Tue, Feb 21, 2023 at 08:54:00AM -0800, Keith Busch wrote:
>>>>> From: Keith Busch <kbusch@kernel.org>
>>>>>
>>>>> Some users of the dmapool need their allocations to happen in ascending
>>>>> order. The recent optimizations pushed the blocks in reverse order, so
>>>>> restore the previous behavior by linking the next available block from
>>>>> low-to-high.
>>>>
>>>> Who are those users?
>>>>
>>>> Also should we document this behavior somewhere so that it isn't
>>>> accidentally changed again some time in the future?
>>>
>>> usb/chipidea/udc.c qh_pool called "ci_hw_qh".
>>
>> It would be helpful to know why these users need this side-effect.  Did
>> the drivers break?   Or just get slower?
> 
> The affected driver was reported to be unusable without this behavior.
>   
>> Are those drivers misbehaving by assuming this behavior?   Should we
> 
> I do think they're using the wrong API. You shouldn't use the dmapool if
> your blocks need to be arranged in contiguous address order. They should just
> directly use dma_alloc_coherent() instead.
> 
>> require that they be altered instead of forever constraining the dmapool
>> implementation in this fashion?
> 
> This change isn't really constraining dmapool where it matters. It's just an
> unexpected one-time initialization thing.
> 
> As far as altering those drivers, I'll reach out to someone on that side for
> comment (I'm currently not familiar with the affected subsystem).

We can always change this driver, I'm fine to do that in-parallel/instead.

The symptom we have is a silent failure absent this change, so I just 
wonder: are we really the _only_ code path that would be affected absent 
the change in this patch?

---
bod





* Re: [PATCH] dmapool: push new blocks in ascending order
  2023-02-21 16:54 [PATCH] dmapool: push new blocks in ascending order Keith Busch
  2023-02-21 17:20 ` Bryan O'Donoghue
  2023-02-21 18:02 ` Christoph Hellwig
@ 2023-02-26  4:42 ` Andrew Morton
  2023-02-27 17:20   ` Keith Busch
  2023-02-28  1:25   ` Keith Busch
  2023-02-28  2:14 ` Guenter Roeck
  3 siblings, 2 replies; 11+ messages in thread
From: Andrew Morton @ 2023-02-26  4:42 UTC (permalink / raw)
  To: Keith Busch; +Cc: linux-mm, linux-kernel, Keith Busch, Bryan O'Donoghue

On Tue, 21 Feb 2023 08:54:00 -0800 Keith Busch <kbusch@meta.com> wrote:

> Some users of the dmapool need their allocations to happen in ascending
> order. The recent optimizations pushed the blocks in reverse order, so
> restore the previous behavior by linking the next available block from
> low-to-high.

As I understand it, this fixes the only known issues with patch series
"dmapool enhancements", v4.  So we're good for a merge before 6.3-rc1,
yes?



* Re: [PATCH] dmapool: push new blocks in ascending order
  2023-02-26  4:42 ` Andrew Morton
@ 2023-02-27 17:20   ` Keith Busch
  2023-02-28  1:25   ` Keith Busch
  1 sibling, 0 replies; 11+ messages in thread
From: Keith Busch @ 2023-02-27 17:20 UTC (permalink / raw)
  To: Andrew Morton; +Cc: Keith Busch, linux-mm, linux-kernel, Bryan O'Donoghue

On Sat, Feb 25, 2023 at 08:42:39PM -0800, Andrew Morton wrote:
> On Tue, 21 Feb 2023 08:54:00 -0800 Keith Busch <kbusch@meta.com> wrote:
> 
> > Some users of the dmapool need their allocations to happen in ascending
> > order. The recent optimizations pushed the blocks in reverse order, so
> > restore the previous behavior by linking the next available block from
> > low-to-high.
> 
> As I understand it, this fixes the only known issues with patch series
> "dmapool enhancements", v4.  So we're good for a merge before 6.3-rc1,
> yes?

I was going to say "yes", but Guenter is reporting a new error with the
original series. I'm working on that right now.



* Re: [PATCH] dmapool: push new blocks in ascending order
  2023-02-26  4:42 ` Andrew Morton
  2023-02-27 17:20   ` Keith Busch
@ 2023-02-28  1:25   ` Keith Busch
  1 sibling, 0 replies; 11+ messages in thread
From: Keith Busch @ 2023-02-28  1:25 UTC (permalink / raw)
  To: Andrew Morton; +Cc: Keith Busch, linux-mm, linux-kernel, Bryan O'Donoghue

On Sat, Feb 25, 2023 at 08:42:39PM -0800, Andrew Morton wrote:
> On Tue, 21 Feb 2023 08:54:00 -0800 Keith Busch <kbusch@meta.com> wrote:
> 
> > Some users of the dmapool need their allocations to happen in ascending
> > order. The recent optimizations pushed the blocks in reverse order, so
> > restore the previous behavior by linking the next available block from
> > low-to-high.
> 
> As I understand it, this fixes the only known issues with patch series
> "dmapool enhancements", v4.  So we're good for a merge before 6.3-rc1,
> yes?

Okay, I think this is good to go to merge now. My local testing also shows this
fixes the megaraid issue that Guenter reported on the other thread, so I
believe this does indeed fix the only reported issues with the dmapool
enhancements.



* Re: [PATCH] dmapool: push new blocks in ascending order
  2023-02-21 16:54 [PATCH] dmapool: push new blocks in ascending order Keith Busch
                   ` (2 preceding siblings ...)
  2023-02-26  4:42 ` Andrew Morton
@ 2023-02-28  2:14 ` Guenter Roeck
  3 siblings, 0 replies; 11+ messages in thread
From: Guenter Roeck @ 2023-02-28  2:14 UTC (permalink / raw)
  To: Keith Busch
  Cc: Andrew Morton, linux-mm, linux-kernel, Keith Busch,
	Bryan O'Donoghue

On Tue, Feb 21, 2023 at 08:54:00AM -0800, Keith Busch wrote:
> From: Keith Busch <kbusch@kernel.org>
> 
> Some users of the dmapool need their allocations to happen in ascending
> order. The recent optimizations pushed the blocks in reverse order, so
> restore the previous behavior by linking the next available block from
> low-to-high.
> 
> Fixes: ced6d06a81fb69 ("dmapool: link blocks across pages")
> Reported-by: Bryan O'Donoghue <bryan.odonoghue@linaro.org>
> Signed-off-by: Keith Busch <kbusch@kernel.org>

This patch fixes the problem I had observed when trying to boot from
the megasas SCSI controller on powernv.

Tested-by: Guenter Roeck <linux@roeck-us.net>

Thanks,
Guenter

> ---
>  mm/dmapool.c | 15 +++++++++++++--
>  1 file changed, 13 insertions(+), 2 deletions(-)
> 
> diff --git a/mm/dmapool.c b/mm/dmapool.c
> index 1920890ff8d3d..a151a21e571b7 100644
> --- a/mm/dmapool.c
> +++ b/mm/dmapool.c
> @@ -300,7 +300,7 @@ EXPORT_SYMBOL(dma_pool_create);
>  static void pool_initialise_page(struct dma_pool *pool, struct dma_page *page)
>  {
>  	unsigned int next_boundary = pool->boundary, offset = 0;
> -	struct dma_block *block;
> +	struct dma_block *block, *first = NULL, *last = NULL;
>  
>  	pool_init_page(pool, page);
>  	while (offset + pool->size <= pool->allocation) {
> @@ -311,11 +311,22 @@ static void pool_initialise_page(struct dma_pool *pool, struct dma_page *page)
>  		}
>  
>  		block = page->vaddr + offset;
> -		pool_block_push(pool, block, page->dma + offset);
> +		block->dma = page->dma + offset;
> +		block->next_block = NULL;
> +
> +		if (last)
> +			last->next_block = block;
> +		else
> +			first = block;
> +		last = block;
> +
>  		offset += pool->size;
>  		pool->nr_blocks++;
>  	}
>  
> +	last->next_block = pool->next_block;
> +	pool->next_block = first;
> +
>  	list_add(&page->page_list, &pool->page_list);
>  	pool->nr_pages++;
>  }
> -- 
> 2.30.2
> 
> 



end of thread, other threads:[~2023-02-28  2:14 UTC | newest]

Thread overview: 11+ messages
2023-02-21 16:54 [PATCH] dmapool: push new blocks in ascending order Keith Busch
2023-02-21 17:20 ` Bryan O'Donoghue
2023-02-21 18:02 ` Christoph Hellwig
2023-02-21 18:07   ` Keith Busch
2023-02-23 20:41     ` Andrew Morton
2023-02-24 18:24       ` Keith Busch
2023-02-24 22:28         ` Bryan O'Donoghue
2023-02-26  4:42 ` Andrew Morton
2023-02-27 17:20   ` Keith Busch
2023-02-28  1:25   ` Keith Busch
2023-02-28  2:14 ` Guenter Roeck
