public inbox for linux-media@vger.kernel.org
From: Daniel Vetter <daniel@ffwll.ch>
To: "Clark, Rob" <rob@ti.com>
Cc: Marek Szyprowski <m.szyprowski@samsung.com>,
	Daniel Vetter <daniel@ffwll.ch>,
	Tomasz Stanislawski <t.stanislaws@samsung.com>,
	Sumit Semwal <sumit.semwal@ti.com>,
	linux-kernel@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org, linux-mm@kvack.org,
	linaro-mm-sig@lists.linaro.org, dri-devel@lists.freedesktop.org,
	linux-media@vger.kernel.org, linux@arm.linux.org.uk,
	arnd@arndb.de, jesse.barker@linaro.org,
	Sumit Semwal <sumit.semwal@linaro.org>
Subject: Re: [RFC 1/2] dma-buf: Introduce dma buffer sharing mechanism
Date: Tue, 8 Nov 2011 18:42:27 +0100	[thread overview]
Message-ID: <20111108174122.GA4754@phenom.ffwll.local> (raw)
In-Reply-To: <CAO8GWqnNMGwADVnO4-RfJu0TPzHhANBdyctv2RyhCxbBJ0beXw@mail.gmail.com>

On Tue, Nov 08, 2011 at 10:59:56AM -0600, Clark, Rob wrote:
> On Thu, Nov 3, 2011 at 3:04 AM, Marek Szyprowski
> > 2. The dma-mapping API is very limited in the area of dynamic buffer management;
> > this API was clearly designed for static buffer allocation and mapping.
> >
> > It looks like fully dynamic buffer management requires a complete change of
> > the v4l2 API principles (V4L3?) and a completely new DMA API interface. That's
> > probably the reason why none of the GPU drivers rely on the DMA-mapping API,
> > and all implement custom solutions for managing their mappings.
> >
> > This reminds me of one more issue I've noticed in the current dma-buf proof-of-
> > concept. You assumed that the exporter will be responsible for mapping the
> > buffer into the I/O address space of all the client devices. What if a device
> > needs additional custom hooks/hacks during mapping? This will be a serious
> > problem for the current GPU drivers, for example. IMHO the API would be much
> > clearer if each client driver mapped the scatter list gathered from the
> > dma-buf by itself. Only the client driver has complete knowledge of how
> > to do this correctly for its particular device. This way it will also work
> > with devices that don't do real DMA (like, for example, USB devices that
> > copy all data from USB packets to the target buffer with the CPU).
> 
> The exporter doesn't map; it returns a scatterlist to the importer.
> But the exporter does allocate and pin the backing pages.  And it is
> preferable if the exporter has the opportunity to wait until as much
> as possible is known about the various importing devices, so it knows
> whether it must allocate contiguous pages, or pages in a certain range.

Actually I think the importer should get a _mapped_ scatterlist when it
calls get_scatterlist. The simple reason is that for strange setups, like
memory remapped through e.g. OMAP's TILER, there is no sensible notion of
an address in physical memory. For the USB example I think the right
approach is to attach the USB HCI to the dma_buf; after all, that is the
device that will read the data and move it over the USB bus to the udl
device. Similarly for any other device that sits behind a bus that can't
do DMA (or where it doesn't make sense to do DMA).

Imo if there's a use-case where the client needs to frob the sg_list
before calling dma_map_sg, we have an issue with the dma subsystem in
general.
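To make that concrete, the importer-side flow I have in mind would look
roughly like the pseudocode sketch below. This is kernel-C-style pseudocode
only: the function names (dma_buf_attach, dma_buf_map_attachment, and their
inverses) are illustrative stand-ins for whatever the RFC's get_scatterlist
step ends up being called, not a settled interface.

```c
/* sketch only: names are illustrative, not a settled interface */
struct dma_buf_attachment *att;
struct sg_table *sgt;

/* tell the exporter which device will access the buffer */
att = dma_buf_attach(dmabuf, dev);

/* the returned sg_table already carries dma addresses valid for dev;
 * the importer never touches struct pages or calls dma_map_sg itself,
 * which is what makes TILER-style remapping possible */
sgt = dma_buf_map_attachment(att, DMA_BIDIRECTIONAL);

/* ... device uses the buffer ... */

dma_buf_unmap_attachment(att, sgt, DMA_BIDIRECTIONAL);
dma_buf_detach(dmabuf, att);
```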

> That said, on a platform where everything has IOMMUs, or somehow
> doesn't have any particular memory requirements, or where the exporter
> has the strictest requirements (or at least knows of the strictest
> requirements), the exporter is free to allocate/pin the backing
> pages earlier, even before the buffer is exported.

Yeah, I think the important thing is that the dma_buf api should allow
decent buffer management. If certain subsystems ignore that and just
allocate up-front, that's no problem for me. But given how graphics
drivers on essentially all OSes have moved to dynamic buffer management,
I expect decoders, encoders, v4l devices and whatever else might sit in
a graphics pipeline to follow.

Yours, Daniel
-- 
Daniel Vetter
Mail: daniel@ffwll.ch
Mobile: +41 (0)79 365 57 48


Thread overview: 32+ messages
2011-10-11  9:23 [RFC 0/2] Introduce dma buffer sharing mechanism Sumit Semwal
2011-10-11  9:23 ` [RFC 1/2] dma-buf: " Sumit Semwal
2011-10-12 12:41   ` [Linaro-mm-sig] " Dave Airlie
2011-10-12 13:28     ` Rob Clark
2011-10-12 13:35       ` Dave Airlie
2011-10-12 13:50         ` Rob Clark
2011-10-12 14:01           ` Dave Airlie
2011-10-12 14:24             ` Rob Clark
2011-10-12 14:34               ` Dave Airlie
2011-10-12 14:49                 ` Daniel Vetter
2011-10-12 15:15                 ` Rob Clark
2011-10-14 10:00   ` Tomasz Stanislawski
2011-10-14 14:13     ` Sumit Semwal
2011-10-14 15:34     ` Rob Clark
2011-10-14 15:35     ` Daniel Vetter
2011-11-03  8:04       ` Marek Szyprowski
2011-11-08 16:59         ` Clark, Rob
2011-11-08 17:42           ` Daniel Vetter [this message]
2011-11-08 17:55             ` [RFC 1/2] dma-buf: Introduce dma buffer sharing mechanism Russell King - ARM Linux
2011-11-08 18:43               ` Daniel Vetter
2011-11-28  7:47                 ` Marek Szyprowski
2011-11-28 10:34                   ` Daniel Vetter
2011-11-25 14:13   ` [Linaro-mm-sig] [RFC 1/2] dma-buf: Introduce dma buffer sharing mechanism Dave Airlie
2011-11-25 16:02     ` Daniel Vetter
2011-11-25 16:15   ` Dave Airlie
2011-11-25 16:28     ` Dave Airlie
2011-11-26 14:00       ` Daniel Vetter
2011-11-27  6:59         ` Rob Clark
     [not found]           ` <CAB2ybb9Ti-2iz_qDfzMSgDhpUc6UOtGS8wi52nQaxhB-gH=azg@mail.gmail.com>
2011-12-01  5:55             ` Semwal, Sumit
2011-10-11  9:23 ` [RFC 2/2] dma-buf: Documentation for buffer sharing framework Sumit Semwal
2011-10-12 22:30   ` Randy Dunlap
2011-10-13  4:48     ` Semwal, Sumit
