* Mismatched aliases with DMA mappings?
From: Dave Martin @ 2012-09-21 15:21 UTC
To: linux-arm-kernel
Hi Marek,
I've been trying to understand whether (and if so, how) the DMA buffer
allocation code in dma-mapping.c avoids mismatched aliases in the kernel
linear map.
I need a way of getting some uncached memory for communicating with
temporarily noncoherent CPUs during CPU bringup/teardown. Although
the DMA API does not seem quite the right solution for this, nothing
else currently feels like quite the right solution either. Approaches
based on memblock_steal() and on using cacheable memory with explicit
flushing both have problems, and reserving specific physical memory
via DT seems ugly, because we really don't care where the memory is.
What is needed is something like an ioremap of anonymous memory with
specific attributes, using largely the same infrastructure as the DMA
API; eliminating the mismatched alias of the allocated memory in the
kernel linear mapping is likely to be important.
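Roughly, the shape I have in mind is something like the following -- a
purely hypothetical sketch, since no such API exists today:

/*
 * Hypothetical API, for illustration only: allocate anonymous pages,
 * return an uncached mapping of them, and remove (or never create)
 * the cacheable alias in the kernel linear map.
 */
void *alloc_uncached(size_t size);
void free_uncached(void *cpu_addr, size_t size);

so that, for example, a mailbox shared with a temporarily noncoherent
CPU could be set up with a simple mbox = alloc_uncached(PAGE_SIZE).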
Can you explain how the DMA mapping code eliminates mismatched aliases?
I can see the attributes of new mappings being set, but currently I
don't see how the linear map gets modified.
Cheers
---Dave
* Mismatched aliases with DMA mappings?
From: Kyungmin Park @ 2012-09-22 5:22 UTC
To: linux-arm-kernel
Hi Dave,
Marek is on vacation and will be back on 24 Sep; he will explain it in
detail. I'll just show how CMA addresses mismatched aliases in the code.
In the reserve function, it declares the required memory size along with
the base address. At that time it calls dma_contiguous_early_fixup(),
which just registers the address and size.
void __init dma_contiguous_early_fixup(phys_addr_t base, unsigned long size)
{
	dma_mmu_remap[dma_mmu_remap_num].base = base;
	dma_mmu_remap[dma_mmu_remap_num].size = size;
	dma_mmu_remap_num++;
}
These registered bases and sizes are then remapped by the
dma_contiguous_remap() function at paging_init time.
void __init dma_contiguous_remap(void)
{
	int i;
	for (i = 0; i < dma_mmu_remap_num; i++) {
		phys_addr_t start = dma_mmu_remap[i].base;
		phys_addr_t end = start + dma_mmu_remap[i].size;
		struct map_desc map;
		unsigned long addr;

		if (end > arm_lowmem_limit)
			end = arm_lowmem_limit;
		if (start >= end)
			continue;

		map.pfn = __phys_to_pfn(start);
		map.virtual = __phys_to_virt(start);
		map.length = end - start;
		map.type = MT_MEMORY_DMA_READY;

		/*
		 * Clear previous low-memory mapping
		 */
		for (addr = __phys_to_virt(start); addr < __phys_to_virt(end);
		     addr += PMD_SIZE)
			pmd_clear(pmd_off_k(addr));

		iotable_init(&map, 1);
	}
}
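For context, this all happens during early boot: paging_init() calls
dma_contiguous_remap() right after the lowmem linear map is created,
roughly like this (an abbreviated paraphrase of arch/arm/mm/mmu.c, not
the verbatim source):

void __init paging_init(struct machine_desc *mdesc)
{
	prepare_page_table();
	map_lowmem();		/* lowmem mapped with 1MiB/2MiB sections */
	dma_contiguous_remap();	/* CMA regions redone with 4KiB pages */
	devicemaps_init(mdesc);
	/* ... remainder omitted ... */
}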
Thank you,
Kyungmin Park
On 9/22/12, Dave Martin <dave.martin@linaro.org> wrote:
> Hi Marek,
>
> I've been trying to understand whether (and if so, how) the DMA buffer
> allocation code in dma-mapping.c avoids mismatched aliases in the kernel
> linear map.
>
>
> I need a way of getting some uncached memory for communicating with
> temporarily noncoherent CPUs during CPU bringup/teardown. Although
> the DMA API does not seem quite the right solution for this, nothing
> else currently feels like quite the right solution either. Approaches
> based on memblock_steal() and on using cacheable memory with explicit
> flushing both have problems, and reserving specific physical memory
> via DT seems ugly, because we really don't care where the memory is.
>
> What is needed is something like an ioremap of anonymous memory with
> specific attributes, using largely the same infrastructure as the DMA
> API; eliminating the mismatched alias of the allocated memory in the
> kernel linear mapping is likely to be important.
>
> Can you explain how the DMA mapping code eliminates mismatched aliases?
> I can see the attributes of new mappings being set, but currently I
> don't see how the linear map gets modified.
>
> Cheers
> ---Dave
* Mismatched aliases with DMA mappings?
From: Dave Martin @ 2012-09-24 9:52 UTC
To: linux-arm-kernel
On Sat, Sep 22, 2012 at 02:22:07PM +0900, Kyungmin Park wrote:
> Hi Dave,
>
> Marek is on vacation and will be back on 24 Sep; he will explain it in
> detail.
Hi, thanks for your reply
> I'll just show how CMA addresses mismatched aliases in the code.
>
> In the reserve function, it declares the required memory size along with
> the base address. At that time it calls dma_contiguous_early_fixup(),
> which just registers the address and size.
>
> void __init dma_contiguous_early_fixup(phys_addr_t base, unsigned long size)
> {
> 	dma_mmu_remap[dma_mmu_remap_num].base = base;
> 	dma_mmu_remap[dma_mmu_remap_num].size = size;
> 	dma_mmu_remap_num++;
> }
>
> These registered bases and sizes are then remapped by the
> dma_contiguous_remap() function at paging_init time.
>
> void __init dma_contiguous_remap(void)
> {
> 	int i;
> 	for (i = 0; i < dma_mmu_remap_num; i++) {
> 		phys_addr_t start = dma_mmu_remap[i].base;
> 		phys_addr_t end = start + dma_mmu_remap[i].size;
> 		struct map_desc map;
> 		unsigned long addr;
>
> 		if (end > arm_lowmem_limit)
> 			end = arm_lowmem_limit;
> 		if (start >= end)
> 			continue;
>
> 		map.pfn = __phys_to_pfn(start);
> 		map.virtual = __phys_to_virt(start);
> 		map.length = end - start;
> 		map.type = MT_MEMORY_DMA_READY;
>
> 		/*
> 		 * Clear previous low-memory mapping
> 		 */
> 		for (addr = __phys_to_virt(start); addr < __phys_to_virt(end);
> 		     addr += PMD_SIZE)
> 			pmd_clear(pmd_off_k(addr));
>
> 		iotable_init(&map, 1);
> 	}
> }
OK, so it looks like this is done early and can't happen after the
kernel has booted (?)
Do you know whether the linear alias for DMA memory is removed when
not using CMA?
Cheers
---Dave
>
> Thank you,
> Kyungmin Park
>
> On 9/22/12, Dave Martin <dave.martin@linaro.org> wrote:
> > Hi Marek,
> >
> > I've been trying to understand whether (and if so, how) the DMA buffer
> > allocation code in dma-mapping.c avoids mismatched aliases in the kernel
> > linear map.
> >
> >
> > I need a way of getting some uncached memory for communicating with
> > temporarily noncoherent CPUs during CPU bringup/teardown. Although
> > the DMA API does not seem quite the right solution for this, nothing
> > else currently feels like quite the right solution either. Approaches
> > based on memblock_steal() and on using cacheable memory with explicit
> > flushing both have problems, and reserving specific physical memory
> > via DT seems ugly, because we really don't care where the memory is.
> >
> > What is needed is something like an ioremap of anonymous memory with
> > specific attributes, using largely the same infrastructure as the DMA
> > API; eliminating the mismatched alias of the allocated memory in the
> > kernel linear mapping is likely to be important.
> >
> > Can you explain how the DMA mapping code eliminates mismatched aliases?
> > I can see the attributes of new mappings being set, but currently I
> > don't see how the linear map gets modified.
> >
> > Cheers
> > ---Dave
* Mismatched aliases with DMA mappings?
From: Marek Szyprowski @ 2012-10-07 7:02 UTC
To: linux-arm-kernel
Hello,
I'm sorry for the very late response, but I was busy with other urgent
items after getting back from holidays.
On 9/24/2012 11:52 AM, Dave Martin wrote:
> On Sat, Sep 22, 2012 at 02:22:07PM +0900, Kyungmin Park wrote:
>> I'll just show how CMA addresses mismatched aliases in the code.
>>
>> In the reserve function, it declares the required memory size along with
>> the base address. At that time it calls dma_contiguous_early_fixup(),
>> which just registers the address and size.
>>
>> void __init dma_contiguous_early_fixup(phys_addr_t base, unsigned long size)
>> {
>> 	dma_mmu_remap[dma_mmu_remap_num].base = base;
>> 	dma_mmu_remap[dma_mmu_remap_num].size = size;
>> 	dma_mmu_remap_num++;
>> }
>>
>> These registered bases and sizes are then remapped by the
>> dma_contiguous_remap() function at paging_init time.
>>
>> void __init dma_contiguous_remap(void)
>> {
>> 	int i;
>> 	for (i = 0; i < dma_mmu_remap_num; i++) {
>> 		phys_addr_t start = dma_mmu_remap[i].base;
>> 		phys_addr_t end = start + dma_mmu_remap[i].size;
>> 		struct map_desc map;
>> 		unsigned long addr;
>>
>> 		if (end > arm_lowmem_limit)
>> 			end = arm_lowmem_limit;
>> 		if (start >= end)
>> 			continue;
>>
>> 		map.pfn = __phys_to_pfn(start);
>> 		map.virtual = __phys_to_virt(start);
>> 		map.length = end - start;
>> 		map.type = MT_MEMORY_DMA_READY;
>>
>> 		/*
>> 		 * Clear previous low-memory mapping
>> 		 */
>> 		for (addr = __phys_to_virt(start); addr < __phys_to_virt(end);
>> 		     addr += PMD_SIZE)
>> 			pmd_clear(pmd_off_k(addr));
>>
>> 		iotable_init(&map, 1);
>> 	}
>> }
>
> OK, so it looks like this is done early and can't happen after the
> kernel has booted (?)
Right, the changes in the linear mapping for CMA areas are done very
early to make sure that the proper mapping will be available to all
processes in the system (CMA changes the size of the pages used for
holding low-memory linear mappings from 1MiB/2MiB single-level sections
to 4KiB pages, which require two levels of page tables). Once the
kernel has fully started, it is not (easily) possible to alter the
linear mappings.
> Do you know whether the linear alias for DMA memory is removed when
> not using CMA?
Nope; when the standard page-allocator-based implementation of
dma-mapping is used, there exist two mappings for each allocated
buffer: one in the linear low-memory kernel mapping (cacheable) and a
second one created by the dma-mapping subsystem (non-cacheable or
write-combined). So far no one has observed any issues caused by this
situation, assuming that no process accesses the linear cacheable
lowmem mapping while the non-cacheable DMA mapping exists.
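Conceptually, the effect of that path is something like the sketch
below -- a simplified illustration with a made-up helper name, not the
actual dma-mapping.c internals (which also clean the cacheable alias
before handing out the buffer):

#include <linux/gfp.h>
#include <linux/mm.h>
#include <linux/slab.h>
#include <linux/vmalloc.h>

static void *uncached_alias_sketch(size_t size)
{
	unsigned int i, count = PAGE_ALIGN(size) >> PAGE_SHIFT;
	struct page *page = alloc_pages(GFP_KERNEL, get_order(size));
	struct page **pages;
	void *ptr;

	if (!page)
		return NULL;

	pages = kmalloc(count * sizeof(*pages), GFP_KERNEL);
	if (!pages) {
		__free_pages(page, get_order(size));
		return NULL;
	}
	for (i = 0; i < count; i++)
		pages[i] = page + i;

	/*
	 * The pages stay cacheable in the kernel linear map; vmap()
	 * layers a second, non-cacheable view of the same physical
	 * memory on top.  These two views are the mismatched alias.
	 */
	ptr = vmap(pages, count, VM_MAP, pgprot_noncached(PAGE_KERNEL));

	kfree(pages);
	if (!ptr)
		__free_pages(page, get_order(size));
	return ptr;
}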
Best regards
--
Marek Szyprowski
Samsung Poland R&D Center
* Mismatched aliases with DMA mappings?
From: Dave Martin @ 2012-10-08 10:06 UTC
To: linux-arm-kernel
On Sun, Oct 07, 2012 at 09:02:34AM +0200, Marek Szyprowski wrote:
> Hello,
>
> I'm sorry for the very late response, but I was busy with other urgent
> items after getting back from holidays.
>
> On 9/24/2012 11:52 AM, Dave Martin wrote:
> >On Sat, Sep 22, 2012 at 02:22:07PM +0900, Kyungmin Park wrote:
>
> >> I'll just show how CMA addresses mismatched aliases in the code.
> >>
> >> In the reserve function, it declares the required memory size along
> >> with the base address. At that time it calls
> >> dma_contiguous_early_fixup(), which just registers the address and size.
> >>
> >> void __init dma_contiguous_early_fixup(phys_addr_t base, unsigned long size)
> >> {
> >> 	dma_mmu_remap[dma_mmu_remap_num].base = base;
> >> 	dma_mmu_remap[dma_mmu_remap_num].size = size;
> >> 	dma_mmu_remap_num++;
> >> }
> >>
> >> These registered bases and sizes are then remapped by the
> >> dma_contiguous_remap() function at paging_init time.
> >>
> >> void __init dma_contiguous_remap(void)
> >> {
> >> 	int i;
> >> 	for (i = 0; i < dma_mmu_remap_num; i++) {
> >> 		phys_addr_t start = dma_mmu_remap[i].base;
> >> 		phys_addr_t end = start + dma_mmu_remap[i].size;
> >> 		struct map_desc map;
> >> 		unsigned long addr;
> >>
> >> 		if (end > arm_lowmem_limit)
> >> 			end = arm_lowmem_limit;
> >> 		if (start >= end)
> >> 			continue;
> >>
> >> 		map.pfn = __phys_to_pfn(start);
> >> 		map.virtual = __phys_to_virt(start);
> >> 		map.length = end - start;
> >> 		map.type = MT_MEMORY_DMA_READY;
> >>
> >> 		/*
> >> 		 * Clear previous low-memory mapping
> >> 		 */
> >> 		for (addr = __phys_to_virt(start); addr < __phys_to_virt(end);
> >> 		     addr += PMD_SIZE)
> >> 			pmd_clear(pmd_off_k(addr));
> >>
> >> 		iotable_init(&map, 1);
> >> 	}
> >> }
> >
> >OK, so it looks like this is done early and can't happen after the
> >kernel has booted (?)
>
> Right, the changes in the linear mapping for CMA areas are done very
> early to make sure that the proper mapping will be available to all
> processes in the system (CMA changes the size of the pages used for
> holding low-memory linear mappings from 1MiB/2MiB single-level
> sections to 4KiB pages, which require two levels of page tables).
> Once the kernel has fully started, it is not (easily) possible to
> alter the linear mappings.
>
> >Do you know whether the linear alias for DMA memory is removed when
> >not using CMA?
>
> Nope; when the standard page-allocator-based implementation of
> dma-mapping is used, there exist two mappings for each allocated
> buffer: one in the linear low-memory kernel mapping (cacheable) and a
> second one created by the dma-mapping subsystem (non-cacheable or
> write-combined). So far no one has observed any issues caused by this
> situation, assuming that no process accesses the linear cacheable
> lowmem mapping while the non-cacheable DMA mapping exists.
Thanks, that clarifies my understanding.
Eventually, I decided it was not worth attempting to allocate non-
cacheable memory for my case, and I use normal memory with explicit
maintenance instead. This is a little more cumbersome, but so far
it appears to work.
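For the record, the explicit maintenance amounts to something like this
simplified sketch (the function name is made up, and the buffer must
live in the kernel linear map so that __pa() is valid):

#include <asm/cacheflush.h>
#include <asm/memory.h>
#include <asm/outercache.h>

/*
 * Clean the writer's data out of the caches so that a temporarily
 * noncoherent CPU sees it in main memory.
 */
static void publish_for_noncoherent_cpu(void *buf, size_t size)
{
	/* Flush the inner (L1) cache lines covering buf ... */
	__cpuc_flush_dcache_area(buf, size);
	/* ... and push the data past any outer (e.g. PL310 L2) cache. */
	outer_clean_range(__pa(buf), __pa(buf) + size);
}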
Cheers
---Dave