From: Matt Porter <mporter@kernel.crashing.org>
To: Jamey Hicks <jamey.hicks@hp.com>
Cc: Matt Porter <mporter@kernel.crashing.org>,
Ian Molton <spyro@f2s.com>,
linux-kernel@vger.kernel.org, greg@kroah.com, tony@atomide.com,
david-b@pacbell.net, joshua@joshuawise.com
Subject: Re: DMA API issues
Date: Fri, 18 Jun 2004 12:21:12 -0700
Message-ID: <20040618122112.D3851@home.com>
In-Reply-To: <40D3356E.8040800@hp.com>

On Fri, Jun 18, 2004 at 02:33:18PM -0400, Jamey Hicks wrote:
> Matt Porter wrote:
> >On Fri, Jun 18, 2004 at 05:59:02PM +0100, Ian Molton wrote:
> >>1) The DMA API provides no methods to set up a mapping between the host
> >>   memory map and the device's view of the space
> >>   example:
> >>   the OHCI controller above would see its 32K of SRAM as
> >>   mapped from 0x10000 - 0x1ffff and not 0xXXX10000 - 0xXXX1ffff,
> >>   which is the address range the CPU sees.
> >>2) The DMA API assumes the device can access SDRAM
> >>   example:
> >>   the OHCI controller base is mapped at 0x10000000 on my platform.
> >>   This is NOT in SDRAM; it's in IO space.
> >
> >Can't you just implement an arch-specific allocator for your 32KB
> >SRAM, then implement the DMA API streaming and dma_alloc/free APIs
> >on top of that? Since this architecture is obviously not designed
> >for performance, it doesn't seem to be a big deal to have the streaming
> >APIs copy to/from the kmalloced (or whatever) buffer to/from the SRAM
> >allocated memory and then have those APIs return the proper dma_addr_t
> >for the embedded OHCI's address space view of the SRAM. Same thing
> >goes for implementing dma_alloc/free (used by dma_pool*). I don't
> >have the knowledge of USB subsystem buffer usage to know how quickly
> >that little 32KB of SRAM is going to run out. But at a DMA API level,
> >this seems doable, albeit with the greater possibility of negative
> >retvals from these calls.
> >
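The arch-specific allocator suggested above could look roughly like the
following minimal userspace sketch: a bitmap allocator carving the 32KB
SRAM into chunks, handing back both a CPU pointer and the bus address the
embedded OHCI would see. All names (sram_alloc, SRAM_BUS_BASE, chunk size)
are invented for illustration; a real version would sit behind the
platform's dma_alloc_coherent and operate on an ioremapped SRAM window.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

#define SRAM_SIZE      (32 * 1024)      /* 32KB on-chip SRAM */
#define CHUNK_SIZE     256              /* allocation granularity (made up) */
#define NCHUNKS        (SRAM_SIZE / CHUNK_SIZE)
#define SRAM_BUS_BASE  0x10000u         /* device-side view: 0x10000 - 0x1ffff */

static uint8_t sram[SRAM_SIZE];         /* stand-in for the mapped SRAM */
static uint8_t bitmap[NCHUNKS];         /* 1 = chunk in use */

/* Allocate 'size' bytes; returns CPU pointer, fills *bus with the
 * address the OHCI controller would use.  First-fit over the bitmap. */
static void *sram_alloc(size_t size, uint32_t *bus)
{
    size_t need = (size + CHUNK_SIZE - 1) / CHUNK_SIZE;

    for (size_t i = 0; i + need <= NCHUNKS; i++) {
        size_t j;
        for (j = 0; j < need && !bitmap[i + j]; j++)
            ;
        if (j == need) {                /* found a free run */
            memset(&bitmap[i], 1, need);
            *bus = SRAM_BUS_BASE + (uint32_t)(i * CHUNK_SIZE);
            return &sram[i * CHUNK_SIZE];
        }
        i += j;                         /* skip past the busy chunk */
    }
    return NULL;    /* pool exhausted -> caller sees an allocation failure */
}

static void sram_free(void *cpu, size_t size)
{
    size_t i = (size_t)((uint8_t *)cpu - sram) / CHUNK_SIZE;
    size_t need = (size + CHUNK_SIZE - 1) / CHUNK_SIZE;

    memset(&bitmap[i], 0, need);
}
```

The streaming APIs would then copy to/from kmalloced buffers into memory
obtained this way, returning the bus address for the controller's view.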
> Your analysis of what is needed is correct. However, we would prefer to
> fit this into the DMA API in a generic way, rather than having a
> specialized API that is not acceptable upstream. I'm more concerned
> with finding the best way to address this problem than with whether this
> is a 2.6 or 2.7 issue.
>
> We do need a memory allocator for the pool of SRAM. If we're to avoid
> building customized versions of the OHCI driver, we need that pool
> hooked into the dma_pool and dma_alloc_coherent interfaces. We
> could do that with a platform specific implementation of
> dma_alloc_coherent, but a pointer to dma_{alloc,free} from struct device
> seems like a cleaner solution. We also will need bounce buffers, as you

Yes, that would be cleaner, but I wouldn't say the DMA API is
completely broken without it. It already provides a way to supply
your device-specific implementation, regardless of whether that's
managed ideally via a struct device pointer. Migrating to
dev->dma* sounds suspiciously like a 2.7ism.
> described, but there is already a good start towards that support in
> 2.6. The other thing that might be needed is passing device to
Where's that code?
> page_to_dma so that device specific dma addresses can be constructed.
A struct device argument to page_to_dma seems like a no-brainer to
include.
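Sketching what that might look like (a hypothetical stand-in struct device
carrying a per-device translation, with the page argument collapsed to a
physical address for simplicity; none of these names are from a real tree):

```c
#include <assert.h>
#include <stdint.h>

typedef uint32_t dma_addr_t;

/* Hypothetical minimal device: the bus (or platform) fills in the
 * offset between CPU physical addresses and the device's bus view. */
struct device {
    uint32_t dma_offset;    /* bus_addr = phys_addr - dma_offset */
};

/* Device-aware translation: the same physical page yields different
 * bus addresses depending on which device is doing the DMA. */
static dma_addr_t page_to_dma(struct device *dev, uint32_t phys)
{
    return phys - dev->dma_offset;
}
```

So an OHCI whose SRAM sits at bus 0x10000 but CPU physical 0xc0010000
would carry dma_offset = 0xc0000000, while a device with an identity
mapping would carry 0.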
> Deepak Saxena wrote a pretty good summary as part of the discussion
> about this issue on the linux-arm-kernel mailing list:
>
> http://lists.arm.linux.org.uk/pipermail/linux-arm-kernel/2004-June/022796.html
Ahh, ok. Deepak and I have discussed this idea face-to-face on several
occasions; as I recall, he needed it for the small floating PCI window
he has to manage on the IXP* ports. It may help in some embedded
PPC areas as well.
> I think I'm looking for something like the PARISC hppa_dma_ops but more
> generic:
>
> http://lists.arm.linux.org.uk/pipermail/linux-arm-kernel/2004-June/022813.html
I see, that's somewhat like what David Brownell suggested before: a single
pointer to a set of DMA ops from struct device. hppa_dma_ops translated
into a generic dma_ops entity, with fields corresponding to existing
DMA API calls, would be a good starting point. We could get rid of some
address translation hacks in a lot of custom embedded PPC drivers
with something like this.
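A minimal userspace sketch of that dispatch shape, under the assumption
that a NULL ops pointer falls back to the generic SDRAM path (all
structure and function names here are illustrative, not the kernel's):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <stdlib.h>

typedef uintptr_t dma_addr_t;

struct device;

/* Generic ops table: platforms with odd bus views (SRAM pools,
 * floating PCI windows) would install their own implementations. */
struct dma_ops {
    void *(*alloc_coherent)(struct device *dev, size_t size,
                            dma_addr_t *handle);
    void  (*free_coherent)(struct device *dev, size_t size,
                           void *cpu, dma_addr_t handle);
};

struct device {
    const struct dma_ops *dma_ops;  /* NULL => generic path */
};

/* Generic implementation: ordinary SDRAM, identity bus mapping
 * (pretending phys == virt, which only holds in this sketch). */
static void *generic_alloc(struct device *dev, size_t size,
                           dma_addr_t *handle)
{
    void *cpu = malloc(size);
    *handle = (dma_addr_t)cpu;
    return cpu;
}

static void generic_free(struct device *dev, size_t size,
                         void *cpu, dma_addr_t handle)
{
    free(cpu);
}

static const struct dma_ops generic_dma_ops = {
    .alloc_coherent = generic_alloc,
    .free_coherent  = generic_free,
};

/* The DMA API entry points just dispatch through the device's ops. */
static void *dma_alloc_coherent(struct device *dev, size_t size,
                                dma_addr_t *handle)
{
    const struct dma_ops *ops = dev->dma_ops ? dev->dma_ops
                                             : &generic_dma_ops;
    return ops->alloc_coherent(dev, size, handle);
}

static void dma_free_coherent(struct device *dev, size_t size,
                              void *cpu, dma_addr_t handle)
{
    const struct dma_ops *ops = dev->dma_ops ? dev->dma_ops
                                             : &generic_dma_ops;
    ops->free_coherent(dev, size, cpu, handle);
}
```

Drivers keep calling dma_alloc_coherent() unchanged; only the per-bus
setup code knows which ops table a given device carries.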
-Matt