* User buffers and cache flushing.
From: Greg Johnson @ 2000-03-10 1:02 UTC (permalink / raw)
To: Linux PPC Mailing List
Hi All,
Following on from the other thread on USB: in the driver I am writing
for an MPC855T embedded board, I create a character device that a user
can 'write (2)' to. I then need to DMA this data directly to our
device. I now realise that I will need to be calling
flush_dcache_range prior to kicking off the IDMA transfer. Since I
am not wanting to be double handling this data, and the data will
most likely be fragmented in memory, it looks like I will need to
resolve kernel addresses and sizes for each memory fragment of the
users data buffer (something like a scatter/gather list, I guess)
and call flush_dcache_range on each of these fragments.
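In code I am imagining something along these lines (untested sketch;
the frag[] list is hypothetical and just stands for whatever
scatter/gather structure I end up with, with kernel virtual addresses
already resolved):

    /* Push each fragment of the user buffer out of the data cache
     * before the IDMA engine reads it from memory. */
    for (i = 0; i < nfrags; i++) {
            unsigned long start = frag[i].kaddr;
            flush_dcache_range(start, start + frag[i].len);
    }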
Is this reasoning correct?
Have I overlooked anything? (other than the lack of support for IDMA
currently in the MPC8xx kernel)
Thanks.
Greg.
--
+----------------------------------------------------+
| Do you want to know more? |
| --== Greg Johnson ==-- |
| HW/SW Engineer gjohnson@research.canon.com.au |
| Canon Information Systems Research Australia |
| 1 Thomas Holt Dr, North Ryde, NSW, 2113, Australia |
| "I FLEXed my BISON and it went YACC!" - me. |
+----------------------------------------------------+
* Re: User buffers and cache flushing.
From: Dan Malek @ 2000-03-10 2:49 UTC (permalink / raw)
To: Greg Johnson; +Cc: Linux PPC Mailing List
Greg Johnson wrote:
> ....Since I
> am not wanting to be double handling this data, and the data will
> most likely be fragmented in memory,
That's no problem, CPM buffer descriptors are made for that.
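For a fragmented buffer the descriptor setup is roughly this (a sketch
from memory; cbd_t and the BD_SC_* bits are in the 8xx headers, the
frag[] list and tx_ring are yours to define):

    volatile cbd_t *bdp = tx_ring;  /* first BD of the transmit ring */

    /* One descriptor per fragment; the CPM chases the chain itself.
     * Mark the final BD LAST and WRAP so the ring closes. */
    for (i = 0; i < nfrags; i++) {
            bdp[i].cbd_bufaddr = frag[i].phys;      /* physical address */
            bdp[i].cbd_datlen  = frag[i].len;
            bdp[i].cbd_sc      = BD_SC_READY |
                    (i == nfrags - 1 ? (BD_SC_LAST | BD_SC_WRAP | BD_SC_INTRPT) : 0);
    }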
> ....and call flush_dcache_range on each of these fragments.
Depends upon how the data is handled. In some cases I find it is
faster to just map the pages uncached and skip the cache push. Do
some system design and analysis to determine what works best.
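On the 8xx, mapping uncached just means setting the no-cache bits in
the page protection before the pages are mapped, something like this
(a sketch; the flag names are from the PPC headers, and vma is the
vm_area_struct handed to your mmap method):

    /* OR the no-cache bits into the protection so loads and stores
     * bypass the data cache entirely. */
    pgprot_val(vma->vm_page_prot) |= _PAGE_NO_CACHE | _PAGE_GUARDED;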
> Have I overlooked anything? (other than the lack of support for IDMA
> currently in the MPC8xx kernel)
This is pretty trivial, considering all CPM devices use the same DMA
engine as IDMA. Take an existing driver, rip the guts of data comm
out of it, and you nearly have the IDMA driver.
-- Dan
* Re: User buffers and cache flushing.
From: Greg Johnson @ 2000-03-16 2:40 UTC (permalink / raw)
To: Linux PPC Mailing List
Hi y'all,
The buffers I need to IDMA out to a peripheral are allocated by
the user with conventional malloc, and are rather dynamic so
the driver does not have a lot of control over them that I can
see.
So I have decided that I will be flushing the cache, either by
calling 'flush_page_to_ram' or 'flush_dcache_range'. Now, these
require a Kernel virtual address to work on (correct?), and the
caller has given me a User virtual address (through 'write (2)').
Therefore, before calling the flush function, I need to:
* Convert the User virtual address for each page to a Physical
address by doing something along the lines of:
pgd = pgd_offset(current->mm, addr)
pmd = pmd_offset(pgd, addr)
pte = pte_offset(pmd, addr)
etc.
* Convert the Physical address to a Kernel virtual address by
calling 'phys_to_virt'.
Then I can call flush on each page I intend to IDMA out.
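Spelled out per page, I imagine something like this (untested; ubuf and
len are the user address and length from 'write (2)', and it assumes
the pages are present and that the kernel maps all RAM one-to-one):

    unsigned long uaddr, kaddr;
    pgd_t *pgd;
    pmd_t *pmd;
    pte_t *pte;

    /* Walk the current process's page tables to find the physical
     * page behind each user virtual page, then flush that page's
     * worth of data cache before starting the IDMA. */
    for (uaddr = ubuf & PAGE_MASK; uaddr < ubuf + len; uaddr += PAGE_SIZE) {
            pgd = pgd_offset(current->mm, uaddr);
            pmd = pmd_offset(pgd, uaddr);
            pte = pte_offset(pmd, uaddr);
            kaddr = (unsigned long) phys_to_virt(pte_val(*pte) & PAGE_MASK);
            flush_dcache_range(kaddr, kaddr + PAGE_SIZE);
    }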
Does this sound reasonable?
Thanks
Greg.
Quoth Dan Malek:
>
>
> Greg Johnson wrote:
>
> > ....Since I
> > am not wanting to be double handling this data, and the data will
> > most likely be fragmented in memory,
>
> That's no problem, CPM buffer descriptors are made for that.
>
>
> > ....and call flush_dcache_range on each of these fragments.
>
> Depends upon how the data is handled. In some cases I find it is
> faster to just map the pages uncached and skip the cache push. Do
> some system design and analysis to determine what works best.
>
> > Have I overlooked anything? (other than the lack of support for IDMA
> > currently in the MPC8xx kernel)
>
> This is pretty trivial, considering all CPM devices use the same DMA
> engine as IDMA. Take an existing driver, rip the guts of data comm
> out of it, and you nearly have the IDMA driver.
>
>
> -- Dan
>
>
>
--
+----------------------------------------------------+
| Do you want to know more? |
| --== Greg Johnson ==-- |
| HW/SW Engineer gjohnson@research.canon.com.au |
| Canon Information Systems Research Australia |
| 1 Thomas Holt Dr, North Ryde, NSW, 2113, Australia |
| "I FLEXed my BISON and it went YACC!" - me. |
+----------------------------------------------------+
* Re: User buffers and cache flushing.
From: Marcus Sundberg @ 2000-03-16 10:07 UTC (permalink / raw)
To: Greg Johnson; +Cc: Linux PPC Mailing List
gjohnson@research.canon.com.au (Greg Johnson) writes:
> Hi y'all,
>
> The buffers I need to IDMA out to a peripheral are allocated by
> the user with conventional malloc, and are rather dynamic so
> the driver does not have a lot of control over them that I can
> see.
>
> So I have decided that I will be flushing the cache, either by
> calling 'flush_page_to_ram' or 'flush_dcache_range'. Now, these
> require a Kernel virtual address to work on (correct?), and the
> caller has given me a User virtual address (through 'write (2)').
> Therefore, before calling the flush function, I need to:
>
> * Convert the User virtual address for each page to a Physical
> address by doing something along the lines of:
>
> pgd = pgd_offset(current->mm, addr)
> pmd = pmd_offset(pgd, addr)
> pte = pte_offset(pmd, addr)
> etc.
>
> * Convert the Physical address to a Kernel virtual address by
> calling 'phys_to_virt'.
>
> Then I can call flush on each page I intend to IDMA out.
>
> Does this sound reasonable?
No, you can't do DMA on user-space allocated memory, as you do not
know where the pages are, or even if they are in memory at all,
when the DMA starts. (Well, you could if you use mlock(), know
what you are doing, and are 100% sure that the process will not
die while DMA operations are still pending, but it's bad practice.)
The right thing to do is allocate memory in the driver and allow
the user process to mmap() it.
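A rough sketch of the driver side (2.4-style names, untested;
DMA_BUF_SIZE, dma_buf and the mydev_* functions are made up for the
example):

    static unsigned long dma_buf;   /* kernel virtual address */

    /* At init time: grab physically contiguous, page-aligned memory
     * and mark each page reserved so the VM leaves it alone. */
    static int mydev_init(void)
    {
            unsigned long i;

            dma_buf = __get_free_pages(GFP_KERNEL, get_order(DMA_BUF_SIZE));
            if (!dma_buf)
                    return -ENOMEM;
            for (i = 0; i < DMA_BUF_SIZE; i += PAGE_SIZE)
                    mem_map_reserve(virt_to_page(dma_buf + i));
            return 0;
    }

    /* The mmap method hands the whole buffer to the process. */
    static int mydev_mmap(struct file *file, struct vm_area_struct *vma)
    {
            unsigned long size = vma->vm_end - vma->vm_start;

            if (size > DMA_BUF_SIZE)
                    return -EINVAL;
            if (remap_page_range(vma->vm_start,
                                 virt_to_phys((void *) dma_buf),
                                 size, vma->vm_page_prot))
                    return -EAGAIN;
            return 0;
    }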
//Marcus
--
Signature under construction, please come back later.
* Re: User buffers and cache flushing.
From: Brad Parker @ 2000-03-16 14:08 UTC (permalink / raw)
To: Marcus Sundberg; +Cc: Greg Johnson, Linux PPC Mailing List
Marcus Sundberg wrote:
>
>The right thing to do is allocate memory in the driver and allow
>the user process to mmap() it.
>
Years ago the Linux BT848 PCI video driver did this. It allocated a
huge (about 1 MB) buffer and the userland app would mmap it and could
then do things to the video frames in (almost) real time... I'll bet
the current generation Video for Linux code has examples of this...
-brad
* Re: User buffers and cache flushing.
From: Greg Johnson @ 2000-03-16 21:48 UTC (permalink / raw)
To: Marcus Sundberg; +Cc: Greg Johnson, Linux PPC Mailing List
Quoth Marcus Sundberg:
> gjohnson@research.canon.com.au (Greg Johnson) writes:
> > Hi y'all,
> >
> > The buffers I need to IDMA out to a peripheral are allocated by
> > the user with conventional malloc, and are rather dynamic so
> > the driver does not have a lot of control over them that I can
> > see.
> >
> > So I have decided that I will be flushing the cache, either by
> > calling 'flush_page_to_ram' or 'flush_dcache_range'. Now, these
> > require a Kernel virtual address to work on (correct?), and the
> > caller has given me a User virtual address (through 'write (2)').
> > Therefore, before calling the flush function, I need to:
> >
> > * Convert the User virtual address for each page to a Physical
> > address by doing something along the lines of:
> >
> > pgd = pgd_offset(current->mm, addr)
> > pmd = pmd_offset(pgd, addr)
> > pte = pte_offset(pmd, addr)
> > etc.
> >
> > * Convert the Physical address to a Kernel virtual address by
> > calling 'phys_to_virt'.
> >
> > Then I can call flush on each page I intend to IDMA out.
> >
> > Does this sound reasonable?
>
> No, you can't do DMA on user-space allocated memory, as you do not
> know where the pages are, or even if they are in memory at all,
> when the DMA starts. (Well, you could if you use mlock(), know
> what you are doing, and are 100% sure that the process will not
> die while DMA operations are still pending, but it's bad practice.)
>
> The right thing to do is allocate memory in the driver and allow
> the user process to mmap() it.
I can find out where they are using the above method. I know that they
have not been paged out since our system has no disk. The only thing
that is uncertain is the cache coherency, hence the flush. The idea of
fiddling with page table flags so that the areas are not cached sounds
way too complicated.
I know that the user allocated buffer will likely be fragmented and
that is why I need to resolve each user virtual page address to its
physical to set up the DMA transfer.
I suppose that I could lock the physical pages once I have resolved
their addresses, but this is a rather fiddly process and I would
like to avoid that if possible.
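If I did pin them, I imagine it would look something like this
(2.4-style struct page refcounting, untested; kaddr is the resolved
kernel virtual address of the page):

    /* Bump the reference count on each resolved page so it cannot be
     * freed out from under a pending DMA; drop it again afterwards. */
    struct page *pg = virt_to_page(kaddr);

    get_page(pg);           /* before queueing the DMA */
    /* ... DMA runs ... */
    put_page(pg);           /* after the DMA completes */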
In addition, it is unlikely there would be any great catastrophe if the
app died while DMAs were pending. The driver could stop the DMAs as
soon as the app exits. Also, the way the system works, the app dying is
a really bad thing in general, so ensuring the DMA completes properly
in that case is not a priority.
--
+----------------------------------------------------+
| Do you want to know more? |
| --== Greg Johnson ==-- |
| HW/SW Engineer gjohnson@research.canon.com.au |
| Canon Information Systems Research Australia |
| 1 Thomas Holt Dr, North Ryde, NSW, 2113, Australia |
| "I FLEXed my BISON and it went YACC!" - me. |
+----------------------------------------------------+