linux-arm-kernel.lists.infradead.org archive mirror
* [PATCH 1/1] drivers: dma-contiguous: Ensure cma reserve region never cross the low/high mem boundary
@ 2016-11-16 14:19 Jason Liu
  2016-11-16 20:00 ` Laura Abbott
  0 siblings, 1 reply; 4+ messages in thread
From: Jason Liu @ 2016-11-16 14:19 UTC (permalink / raw)
  To: linux-arm-kernel

If the cma reserve region is set up via the device-tree method, we also
need to ensure that the reserved region does not cross the low/high
mem boundary. This patch applies the same kind of fix as commit 16195dd
("mm: cma: Ensure that reservations never cross the low/high mem boundary").

Signed-off-by: Jason Liu <jason.hui.liu@nxp.com>
Cc: Marek Szyprowski <m.szyprowski@samsung.com>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
 drivers/base/dma-contiguous.c | 27 +++++++++++++++++++++++++++
 1 file changed, 27 insertions(+)

diff --git a/drivers/base/dma-contiguous.c b/drivers/base/dma-contiguous.c
index e167a1e1..2bc093c 100644
--- a/drivers/base/dma-contiguous.c
+++ b/drivers/base/dma-contiguous.c
@@ -244,6 +244,7 @@ static int __init rmem_cma_setup(struct reserved_mem *rmem)
 {
 	phys_addr_t align = PAGE_SIZE << max(MAX_ORDER - 1, pageblock_order);
 	phys_addr_t mask = align - 1;
+	phys_addr_t highmem_start;
 	unsigned long node = rmem->fdt_node;
 	struct cma *cma;
 	int err;
@@ -256,6 +257,32 @@ static int __init rmem_cma_setup(struct reserved_mem *rmem)
 		pr_err("Reserved memory: incorrect alignment of CMA region\n");
 		return -EINVAL;
 	}
+#ifdef CONFIG_X86
+	/*
+	 * high_memory isn't direct mapped memory so retrieving its physical
+	 * address isn't appropriate.  But it would be useful to check the
+	 * physical address of the highmem boundary so it's justifiable to get
+	 * the physical address from it.  On x86 there is a validation check for
+	 * this case, so the following workaround is needed to avoid it.
+	 */
+	highmem_start = __pa_nodebug(high_memory);
+#else
+	highmem_start = __pa(high_memory);
+#endif
+
+	/*
+	 * All pages in the reserved area must come from the same zone.
+	 * If the reserved region crosses the low/high memory boundary,
+	 * try to fix it up and then fall back to allocate from the low mem
+	 */
+	if (rmem->base < highmem_start &&
+		(rmem->base + rmem->size) > highmem_start) {
+		memblock_free(rmem->base, rmem->size);
+		rmem->base = memblock_alloc_range(rmem->size, align, 0,
+						highmem_start, MEMBLOCK_NONE);
+		if (!rmem->base)
+			return -ENOMEM;
+	}
 
 	err = cma_init_reserved_mem(rmem->base, rmem->size, 0, &cma);
 	if (err) {
-- 
1.9.1
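
For readers following the thread, a standalone sketch of the condition the
hunk above tests. All numbers are hypothetical (a 32-bit layout with the
lowmem/highmem split at 768 MiB and a device-tree CMA region that straddles
it); they are not taken from the patch or from any real board.

	/*
	 * Standalone illustration of the check added above.  The addresses
	 * are made up: an assumed lowmem/highmem split at 768 MiB and an
	 * assumed DT "reg" region that crosses it.
	 */
	#include <stdio.h>
	#include <stdint.h>

	int main(void)
	{
		uint64_t highmem_start = 0x30000000ULL; /* assumed lowmem/highmem split */
		uint64_t base = 0x2e000000ULL;          /* assumed DT "reg" base        */
		uint64_t size = 0x04000000ULL;          /* 64 MiB region                */

		/* Same predicate as the patch: starts in lowmem, ends in highmem. */
		if (base < highmem_start && base + size > highmem_start)
			printf("region [%#llx-%#llx) crosses the boundary at %#llx;\n"
			       "it would be freed and re-allocated entirely below it\n",
			       (unsigned long long)base,
			       (unsigned long long)(base + size),
			       (unsigned long long)highmem_start);
		return 0;
	}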


* [PATCH 1/1] drivers: dma-contiguous: Ensure cma reserve region never cross the low/high mem boundary
  2016-11-16 14:19 [PATCH 1/1] drivers: dma-contiguous: Ensure cma reserve region never cross the low/high mem boundary Jason Liu
@ 2016-11-16 20:00 ` Laura Abbott
  2016-11-17  5:21   ` Jason Liu
  0 siblings, 1 reply; 4+ messages in thread
From: Laura Abbott @ 2016-11-16 20:00 UTC (permalink / raw)
  To: linux-arm-kernel

On 11/16/2016 06:19 AM, Jason Liu wrote:
> If the cma reserve region is set up via the device-tree method, we also
> need to ensure that the reserved region does not cross the low/high
> mem boundary. This patch applies the same kind of fix as commit 16195dd
> ("mm: cma: Ensure that reservations never cross the low/high mem boundary").
> 
> Signed-off-by: Jason Liu <jason.hui.liu@nxp.com>
> Cc: Marek Szyprowski <m.szyprowski@samsung.com>
> Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
> Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
> ---
>  drivers/base/dma-contiguous.c | 27 +++++++++++++++++++++++++++
>  1 file changed, 27 insertions(+)
> 
> diff --git a/drivers/base/dma-contiguous.c b/drivers/base/dma-contiguous.c
> index e167a1e1..2bc093c 100644
> --- a/drivers/base/dma-contiguous.c
> +++ b/drivers/base/dma-contiguous.c
> @@ -244,6 +244,7 @@ static int __init rmem_cma_setup(struct reserved_mem *rmem)
>  {
>  	phys_addr_t align = PAGE_SIZE << max(MAX_ORDER - 1, pageblock_order);
>  	phys_addr_t mask = align - 1;
> +	phys_addr_t highmem_start;
>  	unsigned long node = rmem->fdt_node;
>  	struct cma *cma;
>  	int err;
> @@ -256,6 +257,32 @@ static int __init rmem_cma_setup(struct reserved_mem *rmem)
>  		pr_err("Reserved memory: incorrect alignment of CMA region\n");
>  		return -EINVAL;
>  	}
> +#ifdef CONFIG_X86
> +	/*
> +	 * high_memory isn't direct mapped memory so retrieving its physical
> +	 * address isn't appropriate.  But it would be useful to check the
> +	 * physical address of the highmem boundary so it's justifiable to get
> +	 * the physical address from it.  On x86 there is a validation check for
> +	 * this case, so the following workaround is needed to avoid it.
> +	 */
> +	highmem_start = __pa_nodebug(high_memory);
> +#else
> +	highmem_start = __pa(high_memory);
> +#endif

The inline #ifdef is not great style, we shouldn't be spreading it around.

> +
> +	/*
> +	 * All pages in the reserved area must come from the same zone.
> +	 * If the reserved region crosses the low/high memory boundary,
> +	 * try to fix it up and then fall back to allocate from the low mem
> +	 */
> +	if (rmem->base < highmem_start &&
> +		(rmem->base + rmem->size) > highmem_start) {
> +		memblock_free(rmem->base, rmem->size);
> +		rmem->base = memblock_alloc_range(rmem->size, align, 0,
> +						highmem_start, MEMBLOCK_NONE);
> +		if (!rmem->base)
> +			return -ENOMEM;
> +	}

Given the alloc happened in the of code, it seems bad form to be
bringing the free and re-alloc here. Perhaps we should be doing the
limiting and checking in the reserved mem code?

If there is no other solution, at the least this deserves a pr_warn
so users know why a region they specified may not end up where requested.
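
For illustration, a sketch of what such a warning could look like in the
fallback path; the message wording here is hypothetical, not from any
posted revision of the patch:

	if (rmem->base < highmem_start &&
	    (rmem->base + rmem->size) > highmem_start) {
		/* Hypothetical wording; no revision with this message was posted. */
		pr_warn("Reserved memory: CMA region crosses the lowmem/highmem boundary at %pa, falling back to lowmem\n",
			&highmem_start);
		memblock_free(rmem->base, rmem->size);
		rmem->base = memblock_alloc_range(rmem->size, align, 0,
						  highmem_start, MEMBLOCK_NONE);
		if (!rmem->base)
			return -ENOMEM;
	}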

>  
>  	err = cma_init_reserved_mem(rmem->base, rmem->size, 0, &cma);
>  	if (err) {
> 


Thanks,
Laura


* [PATCH 1/1] drivers: dma-contiguous: Ensure cma reserve region never cross the low/high mem boundary
  2016-11-16 20:00 ` Laura Abbott
@ 2016-11-17  5:21   ` Jason Liu
  2016-11-17 21:43     ` Laura Abbott
  0 siblings, 1 reply; 4+ messages in thread
From: Jason Liu @ 2016-11-17  5:21 UTC (permalink / raw)
  To: linux-arm-kernel



> -----Original Message-----
> From: Laura Abbott [mailto:labbott at redhat.com]
> Sent: Thursday, November 17, 2016 4:00 AM
> To: Jason Liu <jason.hui.liu@nxp.com>; linux-arm-kernel at lists.infradead.org
> Cc: gregkh at linuxfoundation.org; iamjoonsoo.kim at lge.com; linux-
> kernel at vger.kernel.org; m.szyprowski at samsung.com
> Subject: Re: [PATCH 1/1] drivers: dma-contiguous: Ensure cma reserve region
> never cross the low/high mem boundary
> 
> On 11/16/2016 06:19 AM, Jason Liu wrote:
> > If the cma reserve region is set up via the device-tree method, we also
> > need to ensure that the reserved region does not cross the low/high
> > mem boundary. This patch applies the same kind of fix as commit 16195dd
> > ("mm: cma: Ensure that reservations never cross the low/high mem boundary").
> >
> > Signed-off-by: Jason Liu <jason.hui.liu@nxp.com>
> > Cc: Marek Szyprowski <m.szyprowski@samsung.com>
> > Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
> > Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
> > ---
> >  drivers/base/dma-contiguous.c | 27 +++++++++++++++++++++++++++
> >  1 file changed, 27 insertions(+)
> >
> > diff --git a/drivers/base/dma-contiguous.c b/drivers/base/dma-contiguous.c
> > index e167a1e1..2bc093c 100644
> > --- a/drivers/base/dma-contiguous.c
> > +++ b/drivers/base/dma-contiguous.c
> > @@ -244,6 +244,7 @@ static int __init rmem_cma_setup(struct reserved_mem *rmem)
> >  {
> >  	phys_addr_t align = PAGE_SIZE << max(MAX_ORDER - 1, pageblock_order);
> >  	phys_addr_t mask = align - 1;
> > +	phys_addr_t highmem_start;
> >  	unsigned long node = rmem->fdt_node;
> >  	struct cma *cma;
> >  	int err;
> > @@ -256,6 +257,32 @@ static int __init rmem_cma_setup(struct reserved_mem *rmem)
> >  		pr_err("Reserved memory: incorrect alignment of CMA region\n");
> >  		return -EINVAL;
> >  	}
> > +#ifdef CONFIG_X86
> > +	/*
> > +	 * high_memory isn't direct mapped memory so retrieving its physical
> > +	 * address isn't appropriate.  But it would be useful to check the
> > +	 * physical address of the highmem boundary so it's justifiable to get
> > +	 * the physical address from it.  On x86 there is a validation check for
> > +	 * this case, so the following workaround is needed to avoid it.
> > +	 */
> > +	highmem_start = __pa_nodebug(high_memory);
> > +#else
> > +	highmem_start = __pa(high_memory);
> > +#endif
> 
> The inline #ifdef is not great style, we shouldn't be spreading it around.

This is similar to the fix in commit 16195dd ("mm: cma: Ensure that reservations never
cross the low/high mem boundary"). Do you have a better idea for this?

> 
> > +
> > +	/*
> > +	 * All pages in the reserved area must come from the same zone.
> > +	 * If the reserved region crosses the low/high memory boundary,
> > +	 * try to fix it up and then fall back to allocate from the low mem
> > +	 */
> > +	if (rmem->base < highmem_start &&
> > +		(rmem->base + rmem->size) > highmem_start) {
> > +		memblock_free(rmem->base, rmem->size);
> > +		rmem->base = memblock_alloc_range(rmem->size, align, 0,
> > +						highmem_start, MEMBLOCK_NONE);
> > +		if (!rmem->base)
> > +			return -ENOMEM;
> > +	}
> 
> Given the alloc happened in the of code, it seems bad form to be bringing the
> free and re-alloc here. Perhaps we should be doing the limiting and checking in
> the reserved mem code?

I originally thought about fixing this in drivers/of/of_reserved_mem.c, but hesitated to
do it there because of_reserved_mem is common code for doing the reservation, and this
requirement is specific to CMA.

I would appreciate any comments that improve this solution. Without it, the Linux kernel
will not boot when the CMA reservation is done via the DTS method, since
dma_alloc_coherent() will fail when doing the DMA memory allocation.

> 
> If there is no other solution, at the least this deserves a pr_warn so users know
> why a region they specified may not end up where requested.

Yes, it deserves a pr_warn here. I will add it. 

Thanks Laura for the review. 

Jason Liu
> 
> >
> >  	err = cma_init_reserved_mem(rmem->base, rmem->size, 0, &cma);
> >  	if (err) {
> >
> 
> 
> Thanks,
> Laura


* [PATCH 1/1] drivers: dma-contiguous: Ensure cma reserve region never cross the low/high mem boundary
  2016-11-17  5:21   ` Jason Liu
@ 2016-11-17 21:43     ` Laura Abbott
  0 siblings, 0 replies; 4+ messages in thread
From: Laura Abbott @ 2016-11-17 21:43 UTC (permalink / raw)
  To: linux-arm-kernel

On 11/16/2016 09:21 PM, Jason Liu wrote:
> 
> 
>> -----Original Message-----
>> From: Laura Abbott [mailto:labbott at redhat.com]
>> Sent: Thursday, November 17, 2016 4:00 AM
>> To: Jason Liu <jason.hui.liu@nxp.com>; linux-arm-kernel at lists.infradead.org
>> Cc: gregkh at linuxfoundation.org; iamjoonsoo.kim at lge.com; linux-
>> kernel at vger.kernel.org; m.szyprowski at samsung.com
>> Subject: Re: [PATCH 1/1] drivers: dma-contiguous: Ensure cma reserve region
>> never cross the low/high mem boundary
>>
>> On 11/16/2016 06:19 AM, Jason Liu wrote:
>>> If the cma reserve region is set up via the device-tree method, we also
>>> need to ensure that the reserved region does not cross the low/high
>>> mem boundary. This patch applies the same kind of fix as commit 16195dd
>>> ("mm: cma: Ensure that reservations never cross the low/high mem boundary").
>>>
>>> Signed-off-by: Jason Liu <jason.hui.liu@nxp.com>
>>> Cc: Marek Szyprowski <m.szyprowski@samsung.com>
>>> Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
>>> Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
>>> ---
>>>  drivers/base/dma-contiguous.c | 27 +++++++++++++++++++++++++++
>>>  1 file changed, 27 insertions(+)
>>>
>>> diff --git a/drivers/base/dma-contiguous.c b/drivers/base/dma-contiguous.c
>>> index e167a1e1..2bc093c 100644
>>> --- a/drivers/base/dma-contiguous.c
>>> +++ b/drivers/base/dma-contiguous.c
>>> @@ -244,6 +244,7 @@ static int __init rmem_cma_setup(struct reserved_mem *rmem)
>>>  {
>>>  	phys_addr_t align = PAGE_SIZE << max(MAX_ORDER - 1, pageblock_order);
>>>  	phys_addr_t mask = align - 1;
>>> +	phys_addr_t highmem_start;
>>>  	unsigned long node = rmem->fdt_node;
>>>  	struct cma *cma;
>>>  	int err;
>>> @@ -256,6 +257,32 @@ static int __init rmem_cma_setup(struct reserved_mem *rmem)
>>>  		pr_err("Reserved memory: incorrect alignment of CMA region\n");
>>>  		return -EINVAL;
>>>  	}
>>> +#ifdef CONFIG_X86
>>> +	/*
>>> +	 * high_memory isn't direct mapped memory so retrieving its physical
>>> +	 * address isn't appropriate.  But it would be useful to check the
>>> +	 * physical address of the highmem boundary so it's justifiable to get
>>> +	 * the physical address from it.  On x86 there is a validation check for
>>> +	 * this case, so the following workaround is needed to avoid it.
>>> +	 */
>>> +	highmem_start = __pa_nodebug(high_memory);
>>> +#else
>>> +	highmem_start = __pa(high_memory);
>>> +#endif
>>
>> The inline #ifdef is not great style, we shouldn't be spreading it around.
> 
> This is similar to the fix in commit 16195dd ("mm: cma: Ensure that reservations never
> cross the low/high mem boundary"). Do you have a better idea for this?
> 

See the example at https://marc.info/?l=linux-kernel&m=147812049024611&w=2 for how
this is getting cleaned up as part of some work I'm doing for CONFIG_DEBUG_VIRTUAL.
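
As a rough sketch of the idea (not the actual series linked above), the #ifdef
can be hidden behind a small helper; the helper name below is invented for
this example:

	/*
	 * Sketch only: keep the architecture special case in one place.  The
	 * name cma_early_highmem_start() is made up and not part of any
	 * posted series.
	 */
	#ifdef CONFIG_X86
	static inline phys_addr_t cma_early_highmem_start(void)
	{
		/* high_memory is not in the direct map; skip the __pa() debug check. */
		return __pa_nodebug(high_memory);
	}
	#else
	static inline phys_addr_t cma_early_highmem_start(void)
	{
		return __pa(high_memory);
	}
	#endif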

>>
>>> +
>>> +	/*
>>> +	 * All pages in the reserved area must come from the same zone.
>>> +	 * If the reserved region crosses the low/high memory boundary,
>>> +	 * try to fix it up and then fall back to allocate from the low mem
>>> +	 */
>>> +	if (rmem->base < highmem_start &&
>>> +		(rmem->base + rmem->size) > highmem_start) {
>>> +		memblock_free(rmem->base, rmem->size);
>>> +		rmem->base = memblock_alloc_range(rmem->size, align, 0,
>>> +						highmem_start, MEMBLOCK_NONE);
>>> +		if (!rmem->base)
>>> +			return -ENOMEM;
>>> +	}
>>
>> Given the alloc happened in the of code, it seems bad form to be bringing the
>> free and re-alloc here. Perhaps we should be doing the limiting and checking in
>> the reserved mem code?
> 
> I originally thought about fixing this in drivers/of/of_reserved_mem.c, but hesitated to
> do it there because of_reserved_mem is common code for doing the reservation, and this
> requirement is specific to CMA.
> 

Agreed, this case is CMA specific. It might be worth discussing whether a
reservation that crosses zones is even something the generic code should
allow, though.
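
If the generic code did take on that policing, a minimal sketch might look
like the following; the function name and warning text are assumptions,
nothing like this was posted in the thread:

	/*
	 * Hypothetical sketch: flag static reservations that straddle the
	 * lowmem/highmem boundary before type-specific setup such as
	 * rmem_cma_setup() runs.  Neither this function nor the message
	 * exists in the kernel.
	 */
	static void __init warn_if_region_crosses_highmem(const struct reserved_mem *rmem)
	{
		/* Note: on x86 this would need the same __pa_nodebug() treatment. */
		phys_addr_t highmem_start = __pa(high_memory);

		if (rmem->base < highmem_start &&
		    rmem->base + rmem->size > highmem_start)
			pr_warn("Reserved memory: region %s crosses the lowmem/highmem boundary\n",
				rmem->name);
	}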

> I would appreciate any comments that improve this solution. Without it, the Linux kernel
> will not boot when the CMA reservation is done via the DTS method, since
> dma_alloc_coherent() will fail when doing the DMA memory allocation.
> 

I'd suggest bringing this up on the devicetree mailing list. If you get a negative
or no response there this approach can be reviewed some more.

Thanks,
Laura


end of thread

Thread overview: 4+ messages
2016-11-16 14:19 [PATCH 1/1] drivers: dma-contiguous: Ensure cma reserve region never cross the low/high mem boundary Jason Liu
2016-11-16 20:00 ` Laura Abbott
2016-11-17  5:21   ` Jason Liu
2016-11-17 21:43     ` Laura Abbott
