From: inki.dae@samsung.com (Inki Dae)
To: linux-arm-kernel@lists.infradead.org
Subject: [RFC PATCH v2] dmabuf-sync: Introduce buffer synchronization framework
Date: Thu, 20 Jun 2013 20:15:57 +0900	[thread overview]
Message-ID: <010801ce6da7$896affe0$9c40ffa0$%dae@samsung.com> (raw)
In-Reply-To: <1371723063.4114.12.camel@weser.hi.pengutronix.de>



> -----Original Message-----
> From: Lucas Stach [mailto:l.stach@pengutronix.de]
> Sent: Thursday, June 20, 2013 7:11 PM
> To: Inki Dae
> Cc: 'Russell King - ARM Linux'; 'Inki Dae'; 'linux-fbdev'; 'YoungJun Cho';
> 'Kyungmin Park'; 'myungjoo.ham'; 'DRI mailing list';
> linux-arm-kernel@lists.infradead.org; linux-media@vger.kernel.org
> Subject: Re: [RFC PATCH v2] dmabuf-sync: Introduce buffer synchronization
> framework
> 
> Am Donnerstag, den 20.06.2013, 17:24 +0900 schrieb Inki Dae:
> [...]
> > > > In addition, please see the below more detail examples.
> > > >
> > > > The conventional way (without dmabuf-sync) is:
> > > > Task A
> > > > ----------------------------
> > > >  1. CPU accesses buf
> > > >  2. Send the buf to Task B
> > > >  3. Wait for the buf from Task B
> > > >  4. go to 1
> > > >
> > > > Task B
> > > > ---------------------------
> > > > 1. Wait for the buf from Task A
> > > > 2. qbuf the buf
> > > >     2.1 insert the buf to incoming queue
> > > > 3. stream on
> > > >     3.1 dma_map_sg if ready, and move the buf to ready queue
> > > >     3.2 get the buf from ready queue, and dma start.
> > > > 4. dqbuf
> > > >     4.1 dma_unmap_sg after dma operation completion
> > > >     4.2 move the buf to outgoing queue
> > > > 5. back the buf to Task A
> > > > 6. go to 1
> > > >
> > > > In case that two tasks share buffers, and data flow goes from Task A
> > > > to Task B, we would need IPC operation to send and receive buffers
> > > > properly between those two tasks every time CPU or DMA access to
> > > > buffers is started or completed.
> > > >
> > > >
> > > > With dmabuf-sync:
> > > >
> > > > Task A
> > > > ----------------------------
> > > >  1. dma_buf_sync_lock <- syncpoint (call by user side)
> > > >  2. CPU accesses buf
> > > >  3. dma_buf_sync_unlock <- syncpoint (call by user side)
> > > >  4. Send the buf to Task B (just one time)
> > > >  5. go to 1
> > > >
> > > >
> > > > Task B
> > > > ---------------------------
> > > > 1. Wait for the buf from Task A (just one time)
> > > > 2. qbuf the buf
> > > >     2.1 insert the buf to incoming queue
> > > > 3. stream on
> > > >     3.1 dma_buf_sync_lock <- syncpoint (call by kernel side)
> > > >     3.2 dma_map_sg if ready, and move the buf to ready queue
> > > >     3.3 get the buf from ready queue, and dma start.
> > > > 4. dqbuf
> > > >     4.1 dma_buf_sync_unlock <- syncpoint (call by kernel side)
> > > >     4.2 dma_unmap_sg after dma operation completion
> > > >     4.3 move the buf to outgoing queue
> > > > 5. go to 1
> > > >
> > > > On the other hand, in case of using dmabuf-sync, as you can see in the
> > > > above example, we would need the IPC operation just one time. That way,
> > > > I think we could not only reduce performance overhead but also simplify
> > > > the user application. Of course, this approach can be used for all DMA
> > > > device drivers such as DRM. I'm not a specialist in the v4l2 world, so
> > > > there may be a missing point.
> > > >
> > >
> > > You already need some kind of IPC between the two tasks, as I suspect
> > > even in your example it wouldn't make much sense to queue the buffer
> > > over and over again in task B without task A writing anything to it.
> > > So task A has to signal task B there is new data in the buffer to be
> > > processed.
> > >
> > > There is no need to share the buffer over and over again just to get
> > > the two processes to work together on the same thing. Just share the fd
> > > between both and then do out-of-band completion signaling, as you need
> > > this anyway. Without this you'll end up with unpredictable behavior.
> > > Just because sync allows you to access the buffer doesn't mean it's
> > > valid for your use-case. Without completion signaling you could easily
> > > end up overwriting your data from task A multiple times before task B
> > > even tries to lock the buffer for processing.
> > >
> > > So the valid flow is (and this already works with the current APIs):
> > > Task A                                    Task B
> > > ------                                    ------
> > > CPU access buffer
> > >          ----------completion signal--------->
> > >                                           qbuf (dragging buffer into
> > >                                           device domain, flush caches,
> > >                                           reserve buffer etc.)
> > >                                                     |
> > >                                           wait for device operation to
> > >                                           complete
> > >                                                     |
> > >                                           dqbuf (dragging buffer back
> > >                                           into CPU domain, invalidate
> > >                                           caches, unreserve)
> > >         <---------completion signal------------
> > > CPU access buffer
> > >
> >
> > Correct. In case that data flow goes from A to B, it needs some kind
> > of IPC between the two tasks every time, as you said. Then, without
> > dmabuf-sync, what do you think about the case where two tasks share the
> > same buffer, both tasks access the buffer (buf1) for write, and the data
> > of the buffer (buf1) doesn't need to be shared?
> >
> Sorry, I don't see the point you are trying to solve here. If you share
> a buffer and want its content to be clearly defined at every point in
> time, you have to synchronize the tasks working with the buffer, not just
> the buffer accesses themselves.
> 
> The easiest way to do so is to sync through userspace with out-of-band
> IPC, like in the example above.

In my opinion, that is definitely not the easiest way. What I am trying to do is avoid using *the out-of-band IPC*. As I mentioned in the document file, the conventional mechanism not only makes the user application complicated (the user process needs to understand how the device driver works) but may also incur performance overhead through the out-of-band IPC. My example above may not be convincing to you, but there would be other cases that could use my approach efficiently.
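
To make the comparison concrete, here is a rough user-side sketch of Task A from the dmabuf-sync example above. Only the lock/unlock bracketing mirrors the proposal; the ioctl numbers and the DMABUF_IOCTL_SYNC_* names are hypothetical placeholders, not the interface from the patch.

/* Hypothetical user-side sketch of Task A.  The ioctl names below are
 * placeholders for whatever the final user-side syncpoint API will be.
 */
#include <string.h>
#include <sys/ioctl.h>
#include <sys/mman.h>

#define DMABUF_IOCTL_SYNC_LOCK   _IO('Z', 0)  /* placeholder */
#define DMABUF_IOCTL_SYNC_UNLOCK _IO('Z', 1)  /* placeholder */

static int task_a_produce(int buf_fd, size_t size, const void *data)
{
        int ret = -1;
        void *map = mmap(NULL, size, PROT_READ | PROT_WRITE,
                         MAP_SHARED, buf_fd, 0);

        if (map == MAP_FAILED)
                return -1;

        /* syncpoint: take the buffer lock before any CPU access */
        if (ioctl(buf_fd, DMABUF_IOCTL_SYNC_LOCK) < 0)
                goto out;

        memcpy(map, data, size);                /* CPU accesses buf */

        /* syncpoint: release the lock so a DMA device may access the buffer */
        ioctl(buf_fd, DMABUF_IOCTL_SYNC_UNLOCK);
        ret = 0;
out:
        munmap(map, size);
        return ret;
}

The buffer fd itself would still be passed to Task B once, for example over a unix socket; the point is that no further IPC is needed per access.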

> A more advanced way to achieve this
> would be using cross-device fences to avoid going through userspace for
> every syncpoint.
> 

Ok, maybe there is something I missed, so a question: what are the cross-device fences? Do you mean dma-fence? And how can we achieve the synchronization mechanism without going through user space for every syncpoint when the CPU and a DMA device share the same buffer? Could you explain it in as much detail as possible, like I did?
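
Just for context, the per-syncpoint user-space round trip I would like to get rid of looks roughly like the sketch below; the struct and the pipe-style descriptors are only illustrative stand-ins for whatever IPC channel (socket, eventfd, and so on) a real application would use.

/* Purely illustrative sketch of the out-of-band, per-access completion
 * signalling that every buffer hand-over currently needs.
 */
#include <string.h>
#include <unistd.h>

struct frame_ipc {
        int to_b;    /* write end: "buffer is ready for Task B"     */
        int from_b;  /* read end:  "Task B is done with the buffer" */
};

static void task_a_one_frame(struct frame_ipc *ipc, void *buf, size_t size)
{
        char token = 1;

        memset(buf, 0xa5, size);        /* stands in for the real CPU work */
        write(ipc->to_b, &token, 1);    /* IPC: hand the buffer to Task B  */
        read(ipc->from_b, &token, 1);   /* IPC: wait until Task B is done  */
}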

> >
> > With dmabuf-sync is:
> >
> >  Task A
> >  ----------------------------
> >  1. dma_buf_sync_lock <- syncpoint (call by user side)
> >  2. CPU writes something to buf1
> >  3. dma_buf_sync_unlock <- syncpoint (call by user side)
> >  4. copy buf1 to buf2
> Random contents here? What's in the buffer, content from the CPU write,
> or from V4L2 device write?
> 

Please presume that buf1 is physically non-contiguous memory and buf2 is physically contiguous memory; device A has no IOMMU, so it sees buf2. We would need to copy buf1 to buf2 to send the contents of buf1 to device A, because device A's DMA cannot access buf1 directly. And the CPU and the V4L2 device don't share the contents of buf1; they only share buf1 as storage.
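
For illustration, the kernel-side syncpoints (the "call by kernel side" steps in the sequences above and below) would bracket the device access roughly as in the sketch below. The dma_buf_sync_lock()/dma_buf_sync_unlock() signatures are assumed here, and start_dma_and_wait() only stands in for the driver's own map/start/wait/unmap path; see the RFC patch for the real interface.

/* Rough driver-side sketch of the kernel-side syncpoints.  The lock helper
 * signatures and start_dma_and_wait() are assumed for illustration only.
 */
#include <linux/device.h>
#include <linux/dma-buf.h>

static int device_a_process(struct device *dev, struct dma_buf *src,
                            struct dma_buf *dst)
{
        int ret;

        /* syncpoint (kernel side): wait until the CPU side has unlocked */
        ret = dma_buf_sync_lock(src);
        if (ret)
                return ret;

        ret = dma_buf_sync_lock(dst);
        if (ret)
                goto unlock_src;

        /* dma_map_sg(), start the DMA, wait for completion, dma_unmap_sg() */
        ret = start_dma_and_wait(dev, src, dst);

        /* syncpoint (kernel side): let the CPU or another device back in */
        dma_buf_sync_unlock(dst);
unlock_src:
        dma_buf_sync_unlock(src);
        return ret;
}

With something like that in place, the V4L2 device's writes into buf1 are serialized against the CPU writes in Task A without any extra IPC.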

Thanks,
Inki Dae

> >  5. go to 1
> >
> >
> >  Task B
> >  ---------------------------
> >  1. dma_buf_sync_lock
> >  2. CPU writes something to buf3
> >  3. dma_buf_sync_unlock
> >  4. qbuf the buf3(src) and buf1(dst)
> >      4.1 insert buf3,1 to incoming queue
> >      4.2 dma_buf_sync_lock <- syncpoint (call by kernel side)
> >  5. stream on
> >      5.1 dma_map_sg if ready, and move the buf to ready queue
> >      5.2 get the buf from ready queue, and dma start.
> >  6. dqbuf
> >      6.1 dma_buf_sync_unlock <- syncpoint (call by kernel side)
> >      6.2 dma_unmap_sg after dma operation completion
> >      6.3 move the buf3,1 to outgoing queue
> >  7. go to 1
> >
> 
> Regards,
> Lucas
> --
> Pengutronix e.K.                           | Lucas Stach                 |
> Industrial Linux Solutions                 | http://www.pengutronix.de/  |
> Peiner Str. 6-8, 31137 Hildesheim, Germany | Phone: +49-5121-206917-5076 |
> Amtsgericht Hildesheim, HRA 2686           | Fax:   +49-5121-206917-5555 |
