From: Maarten Lankhorst <maarten.lankhorst@canonical.com>
To: Rob Clark <rob.clark@linaro.org>
Cc: Tom Cooksey <tom.cooksey@arm.com>,
dri-devel@lists.freedesktop.org, linux-media@vger.kernel.org,
linaro-mm-sig@lists.linaro.org, patches@linaro.org,
daniel.vetter@ffwll.ch, linux-kernel@vger.kernel.org,
sumit.semwal@linaro.org
Subject: Re: [RFC] dma-fence: dma-buf synchronization (v2)
Date: Fri, 13 Jul 2012 23:44:22 +0200 [thread overview]
Message-ID: <500096B6.2090208@canonical.com> (raw)
In-Reply-To: <CAF6AEGvP1+7BKo7+oCj4XBBw32NPjrH5EAZuodu2zb8oiyVP_Q@mail.gmail.com>
Hey,
On 13-07-12 20:52, Rob Clark wrote:
> On Fri, Jul 13, 2012 at 12:35 PM, Tom Cooksey <tom.cooksey@arm.com> wrote:
>> My other thought is around atomicity. Could this be extended to
>> (safely) allow for hardware devices which might want to access
>> multiple buffers simultaneously? I think it probably can with
>> some tweaks to the interface? An atomic function which does
>> something like "give me all the fences for all these buffers
>> and add this fence to each instead/as-well-as"?
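The "collect and replace" operation Tom describes could be sketched roughly as below. This is purely an illustrative stand-in, not the proposed kernel API: `struct fence`, `struct buffer`, and `swap_fences` are hypothetical names, and a real implementation would do this under the reservation lock so the collect-and-attach step is actually atomic.

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical stand-ins for illustration only. */
struct fence { int seqno; };

struct buffer {
	struct fence *cur_fence;	/* fence currently guarding this buffer */
};

/*
 * Conceptually atomic (i.e. done under one lock in a real driver):
 * collect the fence attached to each buffer and install a new fence
 * in its place -- the "instead" variant of Tom's question.
 */
static size_t swap_fences(struct buffer **bufs, size_t n,
			  struct fence *new_fence, struct fence **old_fences)
{
	size_t collected = 0;

	for (size_t i = 0; i < n; i++) {
		if (bufs[i]->cur_fence)
			old_fences[collected++] = bufs[i]->cur_fence;
		bufs[i]->cur_fence = new_fence;
	}
	return collected;	/* how many old fences the caller must wait on */
}
```

The caller would then wait on the returned fences before letting its hardware touch any of the buffers.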
> fwiw, what I'm leaning towards right now is combining dma-fence w/
> Maarten's idea of dma-buf-mgr (not sure if you saw his patches?). And
> let dmabufmgr handle the multi-buffer reservation stuff. And possibly
> the read vs write access, although this I'm not 100% sure on... the
> other option being the concept of read vs write (or
> exclusive/non-exclusive) fences.
Agreed, dmabufmgr is meant for reserving multiple buffers without deadlocks.
The underlying synchronization mechanism can be dma-fences; switching to
them wouldn't really change dmabufmgr much.
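The deadlock dmabufmgr has to avoid is the classic one: two jobs reserving the same set of buffers in different orders. The simplest textbook fix, sketched below, is to impose a global lock order (here, by object address) before taking any locks. Note this is only an illustration of the problem, not dmabufmgr's actual scheme, which uses reservation tickets with backoff rather than sorting.

```c
#include <assert.h>
#include <pthread.h>
#include <stddef.h>
#include <stdint.h>
#include <stdlib.h>

struct buffer {
	pthread_mutex_t lock;
};

/*
 * Order buffers by object address: any global total order is enough
 * to break the A-then-B vs B-then-A deadlock cycle.
 */
static int cmp_buf(const void *a, const void *b)
{
	const struct buffer *x = *(struct buffer *const *)a;
	const struct buffer *y = *(struct buffer *const *)b;

	if ((uintptr_t)x < (uintptr_t)y)
		return -1;
	return x != y;	/* 1 if x > y, 0 if equal */
}

/* Lock every buffer in the agreed global order. */
static void reserve_buffers(struct buffer **bufs, size_t n)
{
	qsort(bufs, n, sizeof(*bufs), cmp_buf);
	for (size_t i = 0; i < n; i++)
		pthread_mutex_lock(&bufs[i]->lock);
}

/* Unlock in reverse order. */
static void unreserve_buffers(struct buffer **bufs, size_t n)
{
	for (size_t i = n; i-- > 0;)
		pthread_mutex_unlock(&bufs[i]->lock);
}
```

Sorting works for a one-shot reservation pass; dmabufmgr's ticket approach additionally lets a loser back off and retry, which matters when the buffer list isn't known up front.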
> In the current state, the fence is quite simple, and doesn't care
> *what* it is fencing, which seems advantageous when you get into
> trying to deal with combinations of devices sharing buffers, some of
> whom can do hw sync, and some who can't. So having a bit of
> partitioning from the code dealing w/ sequencing who can access the
> buffers when and for what purpose seems like it might not be a bad
> idea. Although I'm still working through the different alternatives.
>
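The exclusive/non-exclusive split Rob mentions could look something like the sketch below: each buffer's reservation holds at most one exclusive (writer) fence plus a list of shared (reader) fences. All names here are hypothetical illustrations, not the actual dma-buf interfaces.

```c
#include <assert.h>
#include <stddef.h>

struct fence { int seqno; };

#define MAX_SHARED 8

/*
 * Per-buffer reservation state: writers take the exclusive slot,
 * readers add themselves to the shared list.
 */
struct reservation {
	struct fence *excl;			/* last writer, if any */
	struct fence *shared[MAX_SHARED];	/* concurrent readers */
	size_t shared_count;
};

/* A new reader only has to wait for the last writer (may be NULL). */
static struct fence *begin_read(struct reservation *r, struct fence *f)
{
	if (r->shared_count < MAX_SHARED)
		r->shared[r->shared_count++] = f;
	return r->excl;
}

/*
 * A new writer must wait for the old writer and all readers; it then
 * becomes the exclusive fence and the shared list is retired.
 * Returns the number of fences placed in wait_on.
 */
static size_t begin_write(struct reservation *r, struct fence *f,
			  struct fence **wait_on)
{
	size_t n = 0;

	if (r->excl)
		wait_on[n++] = r->excl;
	for (size_t i = 0; i < r->shared_count; i++)
		wait_on[n++] = r->shared[i];
	r->shared_count = 0;
	r->excl = f;
	return n;
}
```

The payoff is that multiple readers never serialize against each other, only against writers, which is exactly the read-vs-write distinction under discussion.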
Yeah, I managed to get nouveau hooked up with generating irqs on
completion today using an invalid command. It's also no longer a
performance regression, so software syncing is no longer a problem
for nouveau. i915 already generates irqs, and r600 presumably does too.
I'll take a better look at your patch on Monday; it's end of day now. :)
~Maarten
Thread overview: 5+ messages
2012-07-13 15:38 [RFC] dma-fence: dma-buf synchronization (v2) Rob Clark
[not found] ` <50005dfd.25f2440a.6e6b.ffffbcd9SMTPIN_ADDED@mx.google.com>
2012-07-13 18:52 ` Rob Clark
2012-07-13 21:44 ` Maarten Lankhorst [this message]
2012-07-13 22:38 ` Rob Clark
2012-07-16 10:11 ` Maarten Lankhorst