From: Clemens Koller <clemens.koller@anagramm.de>
To: Stephen Williams <steve@icarus.com>
Cc: linuxppc-embedded@ozlabs.org
Subject: Re: Mapping huge user buffers for DMA
Date: Wed, 31 Aug 2005 16:03:12 +0200
Message-ID: <4315B8A0.4040600@anagramm.de>
In-Reply-To: <431496FC.3090208@icarus.com>

Hello, Stephen!

Please have a look at the whole thread around my posts at:
http://lkml.org/lkml/2005/8/4/201

I am working on an MPC8540 (e500) PowerPC, and we push data
from the local bus directly into mlock()ed userspace memory
with the DMA engine, using scatter/gather transfers (chaining
with page/chunk defragmentation/compression).

Currently, I pass the application's mlock()ed user virtual
address to the kernel driver through an ioctl and call iopa()
on the user virtual addresses (page by page) to get the
physical addresses. The system runs stably, and we can fill
the whole memory (192 MByte) at currently 144 MByte/s
(improvements are still possible).
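
A minimal user-space sketch of this scheme: the buffer is
mlock()ed and its virtual address is handed to the driver
through an ioctl. The device node (/dev/mydma), the ioctl
number and the request struct are hypothetical placeholders,
not our actual driver interface; only mmap()/mlock()/ioctl()
themselves are standard.

    /* Hypothetical user-space side: lock a big buffer and pass it to the driver. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/ioctl.h>
    #include <sys/mman.h>
    #include <unistd.h>

    struct dma_req {                    /* hypothetical ioctl argument */
            unsigned long uaddr;        /* mlock()ed user virtual address */
            unsigned long len;          /* length in bytes */
    };

    #define MYDEV_IOC_DMA_FILL _IOW('M', 1, struct dma_req)  /* hypothetical */

    int main(void)
    {
            size_t len = 192UL * 1024 * 1024;   /* 192 MByte destination buffer */
            void *buf;
            int fd;

            /* Anonymous mmap() gives page-aligned memory; mlock() keeps it resident
             * (needs a sufficient RLIMIT_MEMLOCK or root privileges). */
            buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
            if (buf == MAP_FAILED) {
                    perror("mmap");
                    return 1;
            }
            if (mlock(buf, len)) {
                    perror("mlock");
                    return 1;
            }

            fd = open("/dev/mydma", O_RDWR);    /* hypothetical device node */
            if (fd < 0) {
                    perror("open");
                    return 1;
            }

            struct dma_req req = { .uaddr = (unsigned long)buf, .len = len };
            if (ioctl(fd, MYDEV_IOC_DMA_FILL, &req))  /* driver resolves the pages */
                    perror("ioctl");

            close(fd);
            munlock(buf, len);
            munmap(buf, len);
            return 0;
    }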

If you are interested in our design, feel free to contact
me directly.

Greets,

Clemens Koller
_______________________________
R&D Imaging Devices
Anagramm GmbH
Rupert-Mayer-Str. 45/1
81379 Muenchen
Germany

http://www.anagramm.de
Phone: +49-89-741518-50
Fax: +49-89-741518-19


Stephen Williams wrote:
> 
> I have a PPC405GPr system with an image-processing device that
> creates potentially huge amounts of data. In one setup I have
> a 256 Meg system, and I'm trying to map a 192 Meg destination
> buffer using map_user_kiovec and an array of kiobufs.
> 
> I'm finding, however, that I'm getting an Oops in map_user_kiovec
> when it tries this, and I'm wondering where I need to look for
> any limits I might be overrunning.
> 
> Also, I've been considering skipping kiobufs altogether and
> instead using code like this (lifted from map_user_kiobuf):
> 
>     /* Try to fault in all of the necessary pages */
>     down_read(&mm->mmap_sem);
>     /* rw==READ means read from disk, write into memory area */
>     err = get_user_pages(current, mm, va, pgcount,
>             (rw==READ), 0, iobuf->maplist, NULL);
>     up_read(&mm->mmap_sem);
> 
> to get the user pages directly. This is really what I want, and
> I do not need the other functionality of kiobufs. Is the
> get_user_pages function kosher for use by drivers? Is there
> a limit to what get_user_pages may map?
> 
> 

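For reference, a minimal kernel-side sketch of the get_user_pages()
pinning pattern the quoted code is based on, using the 2.6-era
eight-argument signature. The helper names, the vmalloc()ed page
array and the calling convention are illustrative assumptions, not
taken from any existing driver.

    #include <linux/mm.h>
    #include <linux/pagemap.h>
    #include <linux/sched.h>
    #include <linux/vmalloc.h>

    /* Pin nr_pages of the caller's buffer starting at user virtual address va.
     * Returns the page array on success, NULL on failure. */
    static struct page **pin_user_buffer(unsigned long va, int nr_pages, int write)
    {
            struct page **pages;
            int got, i;

            pages = vmalloc(nr_pages * sizeof(*pages));  /* array may be too big for kmalloc() */
            if (!pages)
                    return NULL;

            down_read(&current->mm->mmap_sem);
            got = get_user_pages(current, current->mm, va & PAGE_MASK,
                                 nr_pages, write, 0, pages, NULL);
            up_read(&current->mm->mmap_sem);

            if (got != nr_pages) {          /* partial pin: drop what we got */
                    for (i = 0; i < got; i++)
                            page_cache_release(pages[i]);
                    vfree(pages);
                    return NULL;
            }
            return pages;                   /* page_to_phys(pages[i]) feeds the S/G list */
    }

    /* Release a buffer pinned by pin_user_buffer(). */
    static void unpin_user_buffer(struct page **pages, int nr_pages, int dirty)
    {
            int i;

            for (i = 0; i < nr_pages; i++) {
                    if (dirty)
                            set_page_dirty_lock(pages[i]);  /* data was DMAed into the page */
                    page_cache_release(pages[i]);
            }
            vfree(pages);
    }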