From: Inki Dae <daeinki@gmail.com>
To: Jerome Glisse <j.glisse@gmail.com>
Cc: linux-fbdev <linux-fbdev@vger.kernel.org>,
Russell King - ARM Linux <linux@arm.linux.org.uk>,
DRI mailing list <dri-devel@lists.freedesktop.org>,
Kyungmin Park <kyungmin.park@samsung.com>,
"myungjoo.ham" <myungjoo.ham@samsung.com>,
YoungJun Cho <yj44.cho@samsung.com>,
"linux-media@vger.kernel.org" <linux-media@vger.kernel.org>,
"linux-arm-kernel@lists.infradead.org"
<linux-arm-kernel@lists.infradead.org>
Subject: [RFC PATCH] dmabuf-sync: Introduce buffer synchronization framework
Date: Tue, 25 Jun 2013 18:09:32 +0900
Message-ID: <CAAQKjZNnJRddACHzD+VF=A8vJpt9SEy2ttnS3Kw0y3hexu8dnw@mail.gmail.com>
In-Reply-To: <CAH3drwZVhs=odjFdB_Mf+K0JLT5NSSbz5mP9aOS=5fx-PVdzSg@mail.gmail.com>
2013/6/22 Jerome Glisse <j.glisse@gmail.com>:
> On Fri, Jun 21, 2013 at 12:55 PM, Inki Dae <daeinki@gmail.com> wrote:
>> 2013/6/21 Lucas Stach <l.stach@pengutronix.de>:
>>> Hi Inki,
>>>
>>> please refrain from sending HTML Mails, it makes proper quoting without
>>> messing up the layout everywhere pretty hard.
>>>
>>
>> Sorry about that. I should have used text mode.
>>
>>> Am Freitag, den 21.06.2013, 20:01 +0900 schrieb Inki Dae:
>>> [...]
>>>
>>>> Yeah, you'll need some knowledge and understanding about the API
>>>> you are working with to get things right. But I think it's not an
>>>> unreasonable thing to expect the programmer working directly with
>>>> kernel interfaces to read up on how things work.
>>>>
>>>> Second thing: I'd rather have *one* consistent API for every
>>>> subsystem, even if they differ from each other, than have to
>>>> implement this syncpoint thing in every subsystem. Remember: a
>>>> single execbuf in DRM might reference both GEM objects backed by
>>>> dma-buf as well as native SHM or CMA backed objects. The
>>>> dma-buf-mgr proposal already allows you to handle dma-bufs much the
>>>> same way during validation as native GEM objects.
>>>>
>>>> Actually, at first I implemented a fence helper framework based on
>>>> reservation and dma fence to provide an easy-to-use interface for
>>>> device drivers. However, that implementation was wrong: I had not
>>>> only customized the dma fence but also failed to consider the
>>>> deadlock issue. After that, I reimplemented it as dmabuf sync to
>>>> solve the deadlock issue, and at that point I realized that we
>>>> first need to concentrate on the most basic facts: that CPU and
>>>> CPU, CPU and DMA, or DMA and DMA can access the same buffer; that
>>>> simple is best; and that we need not only kernel-side but also
>>>> user-side interfaces. After that, I collected what is common to all
>>>> subsystems and devised this dmabuf sync framework for it. I'm not
>>>> really a specialist in the desktop world, so a question: isn't
>>>> execbuf used only for the GPU? The GPU has dedicated video memory
>>>> (VRAM), so it needs a migration mechanism between system memory and
>>>> the dedicated video memory, and it also has to consider ordering
>>>> issues while buffers are migrated.
>>>>
>>>
>>> Yeah, execbuf is pretty GPU specific, but I don't see how this
>>> matters for this discussion. Also I don't see a big difference
>>> between embedded and desktop GPUs. Buffer migration is more of a
>>> detail here. Both take command streams that potentially reference
>>> other buffers, which might be native GEM or dma-buf backed objects.
>>> Both have to make sure the buffers are in the right domain (caches
>>> cleaned and address mappings set up) and are available for the
>>> desired operation, meaning you have to sync with other DMA engines
>>> and maybe also with the CPU.
>>
>> Yeah, right. Then, in the desktop GPU case, doesn't it need to do
>> something additional when buffers are migrated from the system
>> memory domain to the video memory domain, or from video memory back
>> to system memory? I guess the members below do something similar,
>> and all other DMA devices would not need them:
>>
>>         struct fence {
>>                 ...
>>                 unsigned int context, seqno;
>>                 ...
>>         };
>>
>> And:
>>
>>         struct seqno_fence {
>>                 ...
>>                 uint32_t seqno_ofs;
>>                 ...
>>         };
>>
>>>
>>> The only case where sync isn't clearly defined right now by the current
>>> API entrypoints is when you access memory through the dma-buf fallback
>>> mmap support, which might happen with some software processing element
>>> in a video pipeline or something. I agree that we will need a userspace
>>> interface here, but I think this shouldn't be yet another sync object,
>>> but rather more a prepare/fini_cpu_access ioctl on the dma-buf which
>>> hooks into the existing dma-fence and reservation stuff.
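>>>
>>> A rough sketch of the kind of interface I mean (purely
>>> illustrative; neither the ioctl numbers nor the struct exist
>>> today):
>>>
>>>     /* Hypothetical dma-buf CPU access bracket. */
>>>     struct dma_buf_cpu_access {
>>>             __u32 flags;    /* read and/or write access */
>>>     };
>>>
>>>     #define DMA_BUF_IOC_PREPARE_CPU_ACCESS \
>>>             _IOW('b', 0x21, struct dma_buf_cpu_access)
>>>     #define DMA_BUF_IOC_FINI_CPU_ACCESS _IO('b', 0x22)
>>>
>>>     /* Userspace brackets its CPU access: */
>>>     ioctl(dmabuf_fd, DMA_BUF_IOC_PREPARE_CPU_ACCESS, &acc);
>>>     /* ... CPU reads/writes through the dma-buf mmap ... */
>>>     ioctl(dmabuf_fd, DMA_BUF_IOC_FINI_CPU_ACCESS);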
>>
>> I think we don't need additional ioctl commands for that; I am
>> thinking of reusing existing resources as much as possible. My idea
>> is also similar to yours in using the reservation stuff, because my
>> approach should also use the dma-buf resource. However, my idea is
>> that a user process that wants buffer synchronization with another
>> sees a sync object as a file descriptor, like dma-buf does. The
>> example below shows the idea in its simplest form:
>>
>> ioctl(dmabuf_fd, DMA_BUF_IOC_OPEN_SYNC, &sync);
>>
>> flock(sync->fd, LOCK_SH); <- LOCK_SH means a shared lock.
>> CPU access for read
>> flock(sync->fd, LOCK_UN);
>>
>> Or
>>
>> flock(sync->fd, LOCK_EX); <- LOCK_EX means an exclusive lock
>> CPU access for write
>> flock(sync->fd, LOCK_UN);
>>
>> close(sync->fd);
>>
>> As you know, that's similar to the dma-buf export feature.
>>
>> In addition, an even simpler idea:
>>
>> flock(dmabuf_fd, LOCK_SH/EX);
>> CPU access for read/write
>> flock(dmabuf_fd, LOCK_UN);
>>
>> However, I'm not sure that the above examples would work well
>> without problems: actually, I don't fully understand the flock
>> mechanism, so I am still looking into it.
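>>
>> A minimal user-space sketch of this idea (everything below is
>> hypothetical: DMA_BUF_IOC_OPEN_SYNC and its argument struct are only
>> proposed, not implemented, and error handling is omitted):
>>
>>     #include <string.h>
>>     #include <sys/file.h>           /* flock() */
>>     #include <sys/ioctl.h>
>>     #include <linux/ioctl.h>        /* _IOWR() */
>>     #include <unistd.h>
>>
>>     /* Proposed ioctl and argument; nothing here exists yet. */
>>     struct dma_buf_sync_arg {
>>             int fd;     /* sync object exposed as a file descriptor */
>>     };
>>     #define DMA_BUF_IOC_OPEN_SYNC _IOWR('b', 0x20, struct dma_buf_sync_arg)
>>
>>     static void cpu_write_locked(int dmabuf_fd, char *vaddr, size_t size)
>>     {
>>             struct dma_buf_sync_arg sync;
>>
>>             ioctl(dmabuf_fd, DMA_BUF_IOC_OPEN_SYNC, &sync);
>>
>>             flock(sync.fd, LOCK_EX);        /* exclusive: CPU write */
>>             memset(vaddr, 0, size);         /* CPU access for write */
>>             flock(sync.fd, LOCK_UN);
>>
>>             close(sync.fd);
>>     }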
>>
>>>
>>>>
>>>> And to get back to my original point: if you have more than one
>>>> task operating together on a buffer, you absolutely need some kind
>>>> of real IPC to sync them up and do something useful. Both your
>>>> syncpoints and the proposed dma-fences only protect the buffer
>>>> accesses to make sure different tasks don't stomp on each other.
>>>> There is nothing in there to make sure that the output of your
>>>> pipeline is valid. You have to take care of that yourself in
>>>> userspace. I'll reuse your example to make clear what I mean:
>>>>
>>>> Task A                          Task B
>>>> ------                          ------
>>>> dma_buf_sync_lock(buf1)
>>>> CPU write buf1
>>>> dma_buf_sync_unlock(buf1)
>>>> ---------schedule Task A again-------
>>>> dma_buf_sync_lock(buf1)
>>>> CPU write buf1
>>>> dma_buf_sync_unlock(buf1)
>>>> ---------schedule Task B---------
>>>>                                 qbuf(buf1)
>>>>
>>>>                                 dma_buf_sync_lock(buf1)
>>>>                                 ....
>>>>
>>>> This is what can happen if you don't take care of proper syncing.
>>>> Task A writes something to the buffer in the expectation that Task
>>>> B will take care of it, but before Task B even gets scheduled,
>>>> Task A overwrites the buffer again. Not what you wanted, is it?
>>>>
>>>> Exactly the wrong example. I had already mentioned that: "In case
>>>> that data flow goes from A to B, it needs some kind of IPC between
>>>> the two tasks every time." So again, your example would have no
>>>> problem in the case that *two tasks share the same buffer, both
>>>> tasks access the buffer (buf1) for write, and the data of the
>>>> buffer (buf1) doesn't need to be shared*. They just need to use
>>>> the buffer as *storage*. So all they want is to avoid stomping on
>>>> the buffer in this case.
>>>>
>>> Sorry, but I don't see the point. If no one is interested in the data of
>>> the buffer, why are you sharing it in the first place?
>>>
>>
>> It is just used as storage. I.e., Task A fills the buffer with
>> "AAAAAA" using the CPU, and Task B fills the buffer with "BBBBBB"
>> using DMA. They don't share the data of the buffer, but they share
>> the *memory region* of the buffer. That would be very useful for
>> embedded systems with a very small amount of system memory.
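>>
>> To make this concrete, here is a rough sketch using the
>> dma_buf_sync_lock()/unlock() naming from your example above (the
>> interface is still an RFC, and start_dma_fill()/wait_dma_done() are
>> hypothetical driver helpers):
>>
>>     /* Task A: fill the buffer with "AAAAAA" using the CPU. */
>>     dma_buf_sync_lock(buf1);                /* exclusive access */
>>     memset(vaddr_of_buf1, 'A', size);
>>     dma_buf_sync_unlock(buf1);
>>
>>     /* Task B: fill the same buffer with "BBBBBB" using DMA. */
>>     dma_buf_sync_lock(buf1);
>>     start_dma_fill(buf1, 'B');              /* hypothetical helper */
>>     wait_dma_done();
>>     dma_buf_sync_unlock(buf1);
>>
>> Neither task ever reads what the other wrote; the lock only keeps
>> the CPU fill and the DMA fill from overlapping in time.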
>
> Just so I understand: you want to share backing memory, but you
> don't want to share content, i.e. you want to do memory management
> in userspace. This sounds wrong on so many levels (not even
> considering the security implications).
>
> If Task A needs memory and can then release it for Task B's usage,
Not true. Task A can never release memory, because all Task A can do
is unreference the dma-buf object of the sync object. And please note
that the user interfaces haven't been implemented yet; we just have a
plan for them, as I already mentioned.
> that
> should be the role of kernel memory management, which of course
> needs synchronization between A and B. But in no case should this be
> done using dma-buf. dma-buf is for sharing content between different
> devices, not for sharing resources.
>
Hmm, is that true? Are you sure? Then what do you think about
reservation? Reservation also uses dma-buf for the same reason, as far
as I know: actually, we use reservation in order to use dma-buf. As
you may know, a reservation object is allocated and initialized when a
buffer object is exported to a dma-buf.
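
For reference, this is roughly what I mean, assuming the reservation
API from the dma-buf-mgr proposal (the names are taken from that
proposal and may change; my_buffer_object, my_dmabuf_ops, and
my_bo_export are just illustrative):

    /* A driver embeds a reservation object in its buffer object and
     * initializes it when the buffer is exported as a dma-buf. */
    struct my_buffer_object {
            struct reservation_object resv;
            /* ... driver-private members ... */
    };

    static struct dma_buf *my_bo_export(struct my_buffer_object *bo,
                                        size_t size)
    {
            reservation_object_init(&bo->resv);
            return dma_buf_export(bo, &my_dmabuf_ops, size, O_RDWR);
    }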
Thanks,
Inki Dae
>
> Also don't over-complicate the VRAM case; just consider a desktop
> GPU as using system memory directly. They can do it, and they do it.
> Migration to VRAM is orthogonal to all this; it's an optimization,
> so to speak.
>
> Cheers,
> Jerome