* [PATCH 1/4] dma-buf: don't hold the mutex around map/unmap calls
@ 2012-03-18 23:34 Daniel Vetter
2012-03-18 23:34 ` [PATCH 2/4] dma-buf: add support for kernel cpu access Daniel Vetter
` (3 more replies)
0 siblings, 4 replies; 15+ messages in thread
From: Daniel Vetter @ 2012-03-18 23:34 UTC (permalink / raw)
To: linaro-mm-sig, LKML, DRI Development, linux-media; +Cc: Daniel Vetter
The mutex protects the attachment list and hence needs to be held
around the callbakc to the exporters (optional) attach/detach
functions.
Holding the mutex around the map/unmap calls doesn't protect any
dma_buf state. Exporters need to properly protect any of their own
state anyway (to protect against calls from their own interfaces).
So this only makes the locking messier (and lockdep easier to anger).
Therefore let's just drop this.
v2: Rebased on top of latest dma-buf-next git.
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Reviewed-by: Rob Clark <rob.clark@linaro.org>
---
drivers/base/dma-buf.c | 5 -----
include/linux/dma-buf.h | 2 +-
2 files changed, 1 insertions(+), 6 deletions(-)
diff --git a/drivers/base/dma-buf.c b/drivers/base/dma-buf.c
index 3c8c023..5641b9c 100644
--- a/drivers/base/dma-buf.c
+++ b/drivers/base/dma-buf.c
@@ -258,9 +258,7 @@ struct sg_table *dma_buf_map_attachment(struct dma_buf_attachment *attach,
if (WARN_ON(!attach || !attach->dmabuf))
return ERR_PTR(-EINVAL);
- mutex_lock(&attach->dmabuf->lock);
sg_table = attach->dmabuf->ops->map_dma_buf(attach, direction);
- mutex_unlock(&attach->dmabuf->lock);
return sg_table;
}
@@ -282,10 +280,7 @@ void dma_buf_unmap_attachment(struct dma_buf_attachment *attach,
if (WARN_ON(!attach || !attach->dmabuf || !sg_table))
return;
- mutex_lock(&attach->dmabuf->lock);
attach->dmabuf->ops->unmap_dma_buf(attach, sg_table,
direction);
- mutex_unlock(&attach->dmabuf->lock);
-
}
EXPORT_SYMBOL_GPL(dma_buf_unmap_attachment);
diff --git a/include/linux/dma-buf.h b/include/linux/dma-buf.h
index bc4203dc..24e0f48 100644
--- a/include/linux/dma-buf.h
+++ b/include/linux/dma-buf.h
@@ -88,7 +88,7 @@ struct dma_buf {
struct file *file;
struct list_head attachments;
const struct dma_buf_ops *ops;
- /* mutex to serialize list manipulation and other ops */
+ /* mutex to serialize list manipulation and attach/detach */
struct mutex lock;
void *priv;
};
--
1.7.7.5
^ permalink raw reply related [flat|nested] 15+ messages in thread
* [PATCH 2/4] dma-buf: add support for kernel cpu access
2012-03-18 23:34 [PATCH 1/4] dma-buf: don't hold the mutex around map/unmap calls Daniel Vetter
@ 2012-03-18 23:34 ` Daniel Vetter
2012-03-19 2:00 ` Rob Clark
2012-03-18 23:34 ` [PATCH 3/4] dma_buf: Add documentation for the new cpu access support Daniel Vetter
` (2 subsequent siblings)
3 siblings, 1 reply; 15+ messages in thread
From: Daniel Vetter @ 2012-03-18 23:34 UTC (permalink / raw)
To: linaro-mm-sig, LKML, DRI Development, linux-media; +Cc: Daniel Vetter
Big differences to other contenders in the field (like ion) is
that this also supports highmem, so we have to split up the cpu
access from the kernel side into a prepare and a kmap step.
Prepare is allowed to fail and should do everything required so that
the kmap calls can succeed (like swapin/backing storage allocation,
flushing, ...).
More in-depth explanations will follow in the follow-up documentation
patch.
Changes in v2:
- Clear up begin_cpu_access confusion noticed by Sumit Semwal.
- Don't automatically fallback from the _atomic variants to the
non-atomic variants. The _atomic callbacks are not allowed to
sleep, so we want exporters to make this decision explicit. The
function signatures are explicit, so simpler exporters can still
use the same function for both.
- Make the unmap functions optional. Simpler exporters with permanent
mappings don't need to do anything at unmap time.
Changes in v3:
- Adjust the WARN_ON checks for the new ->ops functions as suggested
by Rob Clark and Sumit Semwal.
- Rebased on top of latest dma-buf-next git.
Signed-Off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
---
drivers/base/dma-buf.c | 124 ++++++++++++++++++++++++++++++++++++++++++++++-
include/linux/dma-buf.h | 59 ++++++++++++++++++++++
2 files changed, 182 insertions(+), 1 deletions(-)
diff --git a/drivers/base/dma-buf.c b/drivers/base/dma-buf.c
index 5641b9c..2226511 100644
--- a/drivers/base/dma-buf.c
+++ b/drivers/base/dma-buf.c
@@ -80,7 +80,9 @@ struct dma_buf *dma_buf_export(void *priv, const struct dma_buf_ops *ops,
if (WARN_ON(!priv || !ops
|| !ops->map_dma_buf
|| !ops->unmap_dma_buf
- || !ops->release)) {
+ || !ops->release
+ || !ops->kmap_atomic
+ || !ops->kmap)) {
return ERR_PTR(-EINVAL);
}
@@ -284,3 +286,123 @@ void dma_buf_unmap_attachment(struct dma_buf_attachment *attach,
direction);
}
EXPORT_SYMBOL_GPL(dma_buf_unmap_attachment);
+
+
+/**
+ * dma_buf_begin_cpu_access - Must be called before accessing a dma_buf from the
+ * cpu in the kernel context. Calls begin_cpu_access to allow exporter-specific
+ * preparations. Coherency is only guaranteed in the specified range for the
+ * specified access direction.
+ * @dma_buf: [in] buffer to prepare cpu access for.
+ * @start: [in] start of range for cpu access.
+ * @len: [in] length of range for cpu access.
+ * @direction: [in] length of range for cpu access.
+ *
+ * Can return negative error values, returns 0 on success.
+ */
+int dma_buf_begin_cpu_access(struct dma_buf *dmabuf, size_t start, size_t len,
+ enum dma_data_direction direction)
+{
+ int ret = 0;
+
+ if (WARN_ON(!dmabuf))
+ return EINVAL;
+
+ if (dmabuf->ops->begin_cpu_access)
+ ret = dmabuf->ops->begin_cpu_access(dmabuf, start, len, direction);
+
+ return ret;
+}
+EXPORT_SYMBOL_GPL(dma_buf_begin_cpu_access);
+
+/**
+ * dma_buf_end_cpu_access - Must be called after accessing a dma_buf from the
+ * cpu in the kernel context. Calls end_cpu_access to allow exporter-specific
+ * actions. Coherency is only guaranteed in the specified range for the
+ * specified access direction.
+ * @dma_buf: [in] buffer to complete cpu access for.
+ * @start: [in] start of range for cpu access.
+ * @len: [in] length of range for cpu access.
+ * @direction: [in] length of range for cpu access.
+ *
+ * This call must always succeed.
+ */
+void dma_buf_end_cpu_access(struct dma_buf *dmabuf, size_t start, size_t len,
+ enum dma_data_direction direction)
+{
+ WARN_ON(!dmabuf);
+
+ if (dmabuf->ops->end_cpu_access)
+ dmabuf->ops->end_cpu_access(dmabuf, start, len, direction);
+}
+EXPORT_SYMBOL_GPL(dma_buf_end_cpu_access);
+
+/**
+ * dma_buf_kmap_atomic - Map a page of the buffer object into kernel address
+ * space. The same restrictions as for kmap_atomic and friends apply.
+ * @dma_buf: [in] buffer to map page from.
+ * @page_num: [in] page in PAGE_SIZE units to map.
+ *
+ * This call must always succeed, any necessary preparations that might fail
+ * need to be done in begin_cpu_access.
+ */
+void *dma_buf_kmap_atomic(struct dma_buf *dmabuf, unsigned long page_num)
+{
+ WARN_ON(!dmabuf);
+
+ return dmabuf->ops->kmap_atomic(dmabuf, page_num);
+}
+EXPORT_SYMBOL_GPL(dma_buf_kmap_atomic);
+
+/**
+ * dma_buf_kunmap_atomic - Unmap a page obtained by dma_buf_kmap_atomic.
+ * @dma_buf: [in] buffer to unmap page from.
+ * @page_num: [in] page in PAGE_SIZE units to unmap.
+ * @vaddr: [in] kernel space pointer obtained from dma_buf_kmap_atomic.
+ *
+ * This call must always succeed.
+ */
+void dma_buf_kunmap_atomic(struct dma_buf *dmabuf, unsigned long page_num,
+ void *vaddr)
+{
+ WARN_ON(!dmabuf);
+
+ if (dmabuf->ops->kunmap_atomic)
+ dmabuf->ops->kunmap_atomic(dmabuf, page_num, vaddr);
+}
+EXPORT_SYMBOL_GPL(dma_buf_kunmap_atomic);
+
+/**
+ * dma_buf_kmap - Map a page of the buffer object into kernel address space. The
+ * same restrictions as for kmap and friends apply.
+ * @dma_buf: [in] buffer to map page from.
+ * @page_num: [in] page in PAGE_SIZE units to map.
+ *
+ * This call must always succeed, any necessary preparations that might fail
+ * need to be done in begin_cpu_access.
+ */
+void *dma_buf_kmap(struct dma_buf *dmabuf, unsigned long page_num)
+{
+ WARN_ON(!dmabuf);
+
+ return dmabuf->ops->kmap(dmabuf, page_num);
+}
+EXPORT_SYMBOL_GPL(dma_buf_kmap);
+
+/**
+ * dma_buf_kunmap - Unmap a page obtained by dma_buf_kmap.
+ * @dma_buf: [in] buffer to unmap page from.
+ * @page_num: [in] page in PAGE_SIZE units to unmap.
+ * @vaddr: [in] kernel space pointer obtained from dma_buf_kmap.
+ *
+ * This call must always succeed.
+ */
+void dma_buf_kunmap(struct dma_buf *dmabuf, unsigned long page_num,
+ void *vaddr)
+{
+ WARN_ON(!dmabuf);
+
+ if (dmabuf->ops->kunmap)
+ dmabuf->ops->kunmap(dmabuf, page_num, vaddr);
+}
+EXPORT_SYMBOL_GPL(dma_buf_kunmap);
diff --git a/include/linux/dma-buf.h b/include/linux/dma-buf.h
index 24e0f48..ee7ef99 100644
--- a/include/linux/dma-buf.h
+++ b/include/linux/dma-buf.h
@@ -50,6 +50,17 @@ struct dma_buf_attachment;
* @unmap_dma_buf: decreases usecount of buffer, might deallocate scatter
* pages.
* @release: release this buffer; to be called after the last dma_buf_put.
+ * @begin_cpu_access: [optional] called before cpu access to invalidate cpu
+ * caches and allocate backing storage (if not yet done)
+ * respectively pin the objet into memory.
+ * @end_cpu_access: [optional] called after cpu access to flush cashes.
+ * @kmap_atomic: maps a page from the buffer into kernel address
+ * space, users may not block until the subsequent unmap call.
+ * This callback must not sleep.
+ * @kunmap_atomic: [optional] unmaps a atomically mapped page from the buffer.
+ * This Callback must not sleep.
+ * @kmap: maps a page from the buffer into kernel address space.
+ * @kunmap: [optional] unmaps a page from the buffer.
*/
struct dma_buf_ops {
int (*attach)(struct dma_buf *, struct device *,
@@ -73,6 +84,14 @@ struct dma_buf_ops {
/* after final dma_buf_put() */
void (*release)(struct dma_buf *);
+ int (*begin_cpu_access)(struct dma_buf *, size_t, size_t,
+ enum dma_data_direction);
+ void (*end_cpu_access)(struct dma_buf *, size_t, size_t,
+ enum dma_data_direction);
+ void *(*kmap_atomic)(struct dma_buf *, unsigned long);
+ void (*kunmap_atomic)(struct dma_buf *, unsigned long, void *);
+ void *(*kmap)(struct dma_buf *, unsigned long);
+ void (*kunmap)(struct dma_buf *, unsigned long, void *);
};
/**
@@ -140,6 +159,14 @@ struct sg_table *dma_buf_map_attachment(struct dma_buf_attachment *,
enum dma_data_direction);
void dma_buf_unmap_attachment(struct dma_buf_attachment *, struct sg_table *,
enum dma_data_direction);
+int dma_buf_begin_cpu_access(struct dma_buf *dma_buf, size_t start, size_t len,
+ enum dma_data_direction dir);
+void dma_buf_end_cpu_access(struct dma_buf *dma_buf, size_t start, size_t len,
+ enum dma_data_direction dir);
+void *dma_buf_kmap_atomic(struct dma_buf *, unsigned long);
+void dma_buf_kunmap_atomic(struct dma_buf *, unsigned long, void *);
+void *dma_buf_kmap(struct dma_buf *, unsigned long);
+void dma_buf_kunmap(struct dma_buf *, unsigned long, void *);
#else
static inline struct dma_buf_attachment *dma_buf_attach(struct dma_buf *dmabuf,
@@ -188,6 +215,38 @@ static inline void dma_buf_unmap_attachment(struct dma_buf_attachment *attach,
return;
}
+static inline int dma_buf_begin_cpu_access(struct dma_buf *,
+ size_t, size_t,
+ enum dma_data_direction)
+{
+ return -ENODEV;
+}
+
+static inline void dma_buf_end_cpu_access(struct dma_buf *,
+ size_t, size_t,
+ enum dma_data_direction)
+{
+}
+
+static inline void *dma_buf_kmap_atomic(struct dma_buf *, unsigned long)
+{
+ return NULL;
+}
+
+static inline void dma_buf_kunmap_atomic(struct dma_buf *, unsigned long,
+ void *)
+{
+}
+
+static inline void *dma_buf_kmap(struct dma_buf *, unsigned long)
+{
+ return NULL;
+}
+
+static inline void dma_buf_kunmap(struct dma_buf *, unsigned long,
+ void *)
+{
+}
#endif /* CONFIG_DMA_SHARED_BUFFER */
#endif /* __DMA_BUF_H__ */
--
1.7.7.5
^ permalink raw reply related [flat|nested] 15+ messages in thread
* [PATCH 3/4] dma_buf: Add documentation for the new cpu access support
2012-03-18 23:34 [PATCH 1/4] dma-buf: don't hold the mutex around map/unmap calls Daniel Vetter
2012-03-18 23:34 ` [PATCH 2/4] dma-buf: add support for kernel cpu access Daniel Vetter
@ 2012-03-18 23:34 ` Daniel Vetter
2012-03-19 1:54 ` Rob Clark
2012-03-18 23:34 ` [PATCH 4/4] dma-buf: document fd flags and O_CLOEXEC requirement Daniel Vetter
2012-03-22 6:03 ` [Linaro-mm-sig] [PATCH 1/4] dma-buf: don't hold the mutex around map/unmap calls Sumit Semwal
3 siblings, 1 reply; 15+ messages in thread
From: Daniel Vetter @ 2012-03-18 23:34 UTC (permalink / raw)
To: linaro-mm-sig, LKML, DRI Development, linux-media; +Cc: Daniel Vetter
v2: Fix spelling issues noticed by Rob Clark.
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
---
Documentation/dma-buf-sharing.txt | 102 +++++++++++++++++++++++++++++++++++-
1 files changed, 99 insertions(+), 3 deletions(-)
diff --git a/Documentation/dma-buf-sharing.txt b/Documentation/dma-buf-sharing.txt
index 225f96d..9f3aeef 100644
--- a/Documentation/dma-buf-sharing.txt
+++ b/Documentation/dma-buf-sharing.txt
@@ -32,8 +32,12 @@ The buffer-user
*IMPORTANT*: [see https://lkml.org/lkml/2011/12/20/211 for more details]
For this first version, A buffer shared using the dma_buf sharing API:
- *may* be exported to user space using "mmap" *ONLY* by exporter, outside of
- this framework.
-- may be used *ONLY* by importers that do not need CPU access to the buffer.
+ this framework.
+- with this new iteration of the dma-buf api cpu access from the kernel has been
+ enable, see below for the details.
+
+dma-buf operations for device dma only
+--------------------------------------
The dma_buf buffer sharing API usage contains the following steps:
@@ -219,7 +223,99 @@ NOTES:
If the exporter chooses not to allow an attach() operation once a
map_dma_buf() API has been called, it simply returns an error.
-Miscellaneous notes:
+Kernel cpu access to a dma-buf buffer object
+--------------------------------------------
+
+The motivation to allow cpu access from the kernel to a dma-buf object from the
+importers side are:
+- fallback operations, e.g. if the devices is connected to a usb bus and the
+ kernel needs to shuffle the data around first before sending it away.
+- full transparency for existing users on the importer side, i.e. userspace
+ should not notice the difference between a normal object from that subsystem
+ and an imported one backed by a dma-buf. This is really important for drm
+ opengl drivers that expect to still use all the existing upload/download
+ paths.
+
+Access to a dma_buf from the kernel context involves three steps:
+
+1. Prepare access, which invalidate any necessary caches and make the object
+ available for cpu access.
+2. Access the object page-by-page with the dma_buf map apis
+3. Finish access, which will flush any necessary cpu caches and free reserved
+ resources.
+
+1. Prepare access
+
+ Before an importer can access a dma_buf object with the cpu from the kernel
+ context, it needs to notify the exporter of the access that is about to
+ happen.
+
+ Interface:
+ int dma_buf_begin_cpu_access(struct dma_buf *dmabuf,
+ size_t start, size_t len,
+ enum dma_data_direction direction)
+
+ This allows the exporter to ensure that the memory is actually available for
+ cpu access - the exporter might need to allocate or swap-in and pin the
+ backing storage. The exporter also needs to ensure that cpu access is
+ coherent for the given range and access direction. The range and access
+ direction can be used by the exporter to optimize the cache flushing, i.e.
+ access outside of the range or with a different direction (read instead of
+ write) might return stale or even bogus data (e.g. when the exporter needs to
+ copy the data to temporary storage).
+
+ This step might fail, e.g. in oom conditions.
+
+2. Accessing the buffer
+
+ To support dma_buf objects residing in highmem cpu access is page-based using
+ an api similar to kmap. Accessing a dma_buf is done in aligned chunks of
+ PAGE_SIZE size. Before accessing a chunk it needs to be mapped, which returns
+ a pointer in kernel virtual address space. Afterwards the chunk needs to be
+ unmapped again. There is no limit on how often a given chunk can be mapped
+ and unmapped, i.e. the importer does not need to call begin_cpu_access again
+ before mapping the same chunk again.
+
+ Interfaces:
+ void *dma_buf_kmap(struct dma_buf *, unsigned long);
+ void dma_buf_kunmap(struct dma_buf *, unsigned long, void *);
+
+ There are also atomic variants of these interfaces. Like for kmap they
+ facilitate non-blocking fast-paths. Neither the importer nor the exporter (in
+ the callback) is allowed to block when using these.
+
+ Interfaces:
+ void *dma_buf_kmap_atomic(struct dma_buf *, unsigned long);
+ void dma_buf_kunmap_atomic(struct dma_buf *, unsigned long, void *);
+
+ For importers all the restrictions of using kmap apply, like the limited
+ supply of kmap_atomic slots. Hence an importer shall only hold onto at most 2
+ atomic dma_buf kmaps at the same time (in any given process context).
+
+ dma_buf kmap calls outside of the range specified in begin_cpu_access are
+ undefined. If the range is not PAGE_SIZE aligned, kmap needs to succeed on
+ the partial chunks at the beginning and end but may return stale or bogus
+ data outside of the range (in these partial chunks).
+
+ Note that these calls need to always succeed. The exporter needs to complete
+ any preparations that might fail in begin_cpu_access.
+
+3. Finish access
+
+ When the importer is done accessing the range specified in begin_cpu_access,
+ it needs to announce this to the exporter (to facilitate cache flushing and
+ unpinning of any pinned resources). The result of of any dma_buf kmap calls
+ after end_cpu_access is undefined.
+
+ Interface:
+ void dma_buf_end_cpu_access(struct dma_buf *dma_buf,
+ size_t start, size_t len,
+ enum dma_data_direction dir);
+
+
+Miscellaneous notes
+-------------------
+
- Any exporters or users of the dma-buf buffer sharing framework must have
a 'select DMA_SHARED_BUFFER' in their respective Kconfigs.
--
1.7.7.5
^ permalink raw reply related [flat|nested] 15+ messages in thread
* [PATCH 4/4] dma-buf: document fd flags and O_CLOEXEC requirement
2012-03-18 23:34 [PATCH 1/4] dma-buf: don't hold the mutex around map/unmap calls Daniel Vetter
2012-03-18 23:34 ` [PATCH 2/4] dma-buf: add support for kernel cpu access Daniel Vetter
2012-03-18 23:34 ` [PATCH 3/4] dma_buf: Add documentation for the new cpu access support Daniel Vetter
@ 2012-03-18 23:34 ` Daniel Vetter
2012-03-19 10:51 ` [Linaro-mm-sig] " Dave Airlie
2012-03-22 6:03 ` [Linaro-mm-sig] [PATCH 1/4] dma-buf: don't hold the mutex around map/unmap calls Sumit Semwal
3 siblings, 1 reply; 15+ messages in thread
From: Daniel Vetter @ 2012-03-18 23:34 UTC (permalink / raw)
To: linaro-mm-sig, LKML, DRI Development, linux-media; +Cc: Daniel Vetter
Otherwise subsystems will get this wrong and end up with and second
export ioctl with the flag and O_CLOEXEC support added.
Signed-Off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
---
Documentation/dma-buf-sharing.txt | 5 +++++
1 files changed, 5 insertions(+), 0 deletions(-)
diff --git a/Documentation/dma-buf-sharing.txt b/Documentation/dma-buf-sharing.txt
index 9f3aeef..087e261 100644
--- a/Documentation/dma-buf-sharing.txt
+++ b/Documentation/dma-buf-sharing.txt
@@ -319,6 +319,11 @@ Miscellaneous notes
- Any exporters or users of the dma-buf buffer sharing framework must have
a 'select DMA_SHARED_BUFFER' in their respective Kconfigs.
+- To avoid the wrath of userspace library writers exporting subsystems must have
+ a flag parameter in the ioctl that creates the dma-buf fd which needs to
+ support at least the O_CLOEXEC fd flag. This needs to be passed in the flag
+ parameter of dma_buf_export.
+
References:
[1] struct dma_buf_ops in include/linux/dma-buf.h
[2] All interfaces mentioned above defined in include/linux/dma-buf.h
--
1.7.7.5
^ permalink raw reply related [flat|nested] 15+ messages in thread
* Re: [PATCH 3/4] dma_buf: Add documentation for the new cpu access support
2012-03-18 23:34 ` [PATCH 3/4] dma_buf: Add documentation for the new cpu access support Daniel Vetter
@ 2012-03-19 1:54 ` Rob Clark
2012-03-22 6:04 ` [Linaro-mm-sig] " Sumit Semwal
0 siblings, 1 reply; 15+ messages in thread
From: Rob Clark @ 2012-03-19 1:54 UTC (permalink / raw)
To: Daniel Vetter; +Cc: linaro-mm-sig, LKML, DRI Development, linux-media
On Sun, Mar 18, 2012 at 6:34 PM, Daniel Vetter <daniel.vetter@ffwll.ch> wrote:
> v2: Fix spelling issues noticed by Rob Clark.
>
> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Rob Clark <rob@ti.com>
> ---
> Documentation/dma-buf-sharing.txt | 102 +++++++++++++++++++++++++++++++++++-
> 1 files changed, 99 insertions(+), 3 deletions(-)
>
> diff --git a/Documentation/dma-buf-sharing.txt b/Documentation/dma-buf-sharing.txt
> index 225f96d..9f3aeef 100644
> --- a/Documentation/dma-buf-sharing.txt
> +++ b/Documentation/dma-buf-sharing.txt
> @@ -32,8 +32,12 @@ The buffer-user
> *IMPORTANT*: [see https://lkml.org/lkml/2011/12/20/211 for more details]
> For this first version, A buffer shared using the dma_buf sharing API:
> - *may* be exported to user space using "mmap" *ONLY* by exporter, outside of
> - this framework.
> -- may be used *ONLY* by importers that do not need CPU access to the buffer.
> + this framework.
> +- with this new iteration of the dma-buf api cpu access from the kernel has been
> + enable, see below for the details.
> +
> +dma-buf operations for device dma only
> +--------------------------------------
>
> The dma_buf buffer sharing API usage contains the following steps:
>
> @@ -219,7 +223,99 @@ NOTES:
> If the exporter chooses not to allow an attach() operation once a
> map_dma_buf() API has been called, it simply returns an error.
>
> -Miscellaneous notes:
> +Kernel cpu access to a dma-buf buffer object
> +--------------------------------------------
> +
> +The motivation to allow cpu access from the kernel to a dma-buf object from the
> +importers side are:
> +- fallback operations, e.g. if the devices is connected to a usb bus and the
> + kernel needs to shuffle the data around first before sending it away.
> +- full transparency for existing users on the importer side, i.e. userspace
> + should not notice the difference between a normal object from that subsystem
> + and an imported one backed by a dma-buf. This is really important for drm
> + opengl drivers that expect to still use all the existing upload/download
> + paths.
> +
> +Access to a dma_buf from the kernel context involves three steps:
> +
> +1. Prepare access, which invalidate any necessary caches and make the object
> + available for cpu access.
> +2. Access the object page-by-page with the dma_buf map apis
> +3. Finish access, which will flush any necessary cpu caches and free reserved
> + resources.
> +
> +1. Prepare access
> +
> + Before an importer can access a dma_buf object with the cpu from the kernel
> + context, it needs to notify the exporter of the access that is about to
> + happen.
> +
> + Interface:
> + int dma_buf_begin_cpu_access(struct dma_buf *dmabuf,
> + size_t start, size_t len,
> + enum dma_data_direction direction)
> +
> + This allows the exporter to ensure that the memory is actually available for
> + cpu access - the exporter might need to allocate or swap-in and pin the
> + backing storage. The exporter also needs to ensure that cpu access is
> + coherent for the given range and access direction. The range and access
> + direction can be used by the exporter to optimize the cache flushing, i.e.
> + access outside of the range or with a different direction (read instead of
> + write) might return stale or even bogus data (e.g. when the exporter needs to
> + copy the data to temporary storage).
> +
> + This step might fail, e.g. in oom conditions.
> +
> +2. Accessing the buffer
> +
> + To support dma_buf objects residing in highmem cpu access is page-based using
> + an api similar to kmap. Accessing a dma_buf is done in aligned chunks of
> + PAGE_SIZE size. Before accessing a chunk it needs to be mapped, which returns
> + a pointer in kernel virtual address space. Afterwards the chunk needs to be
> + unmapped again. There is no limit on how often a given chunk can be mapped
> + and unmapped, i.e. the importer does not need to call begin_cpu_access again
> + before mapping the same chunk again.
> +
> + Interfaces:
> + void *dma_buf_kmap(struct dma_buf *, unsigned long);
> + void dma_buf_kunmap(struct dma_buf *, unsigned long, void *);
> +
> + There are also atomic variants of these interfaces. Like for kmap they
> + facilitate non-blocking fast-paths. Neither the importer nor the exporter (in
> + the callback) is allowed to block when using these.
> +
> + Interfaces:
> + void *dma_buf_kmap_atomic(struct dma_buf *, unsigned long);
> + void dma_buf_kunmap_atomic(struct dma_buf *, unsigned long, void *);
> +
> + For importers all the restrictions of using kmap apply, like the limited
> + supply of kmap_atomic slots. Hence an importer shall only hold onto at most 2
> + atomic dma_buf kmaps at the same time (in any given process context).
> +
> + dma_buf kmap calls outside of the range specified in begin_cpu_access are
> + undefined. If the range is not PAGE_SIZE aligned, kmap needs to succeed on
> + the partial chunks at the beginning and end but may return stale or bogus
> + data outside of the range (in these partial chunks).
> +
> + Note that these calls need to always succeed. The exporter needs to complete
> + any preparations that might fail in begin_cpu_access.
> +
> +3. Finish access
> +
> + When the importer is done accessing the range specified in begin_cpu_access,
> + it needs to announce this to the exporter (to facilitate cache flushing and
> + unpinning of any pinned resources). The result of of any dma_buf kmap calls
> + after end_cpu_access is undefined.
> +
> + Interface:
> + void dma_buf_end_cpu_access(struct dma_buf *dma_buf,
> + size_t start, size_t len,
> + enum dma_data_direction dir);
> +
> +
> +Miscellaneous notes
> +-------------------
> +
> - Any exporters or users of the dma-buf buffer sharing framework must have
> a 'select DMA_SHARED_BUFFER' in their respective Kconfigs.
>
> --
> 1.7.7.5
>
> --
> To unsubscribe from this list: send the line "unsubscribe linux-media" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at http://vger.kernel.org/majordomo-info.html
^ permalink raw reply [flat|nested] 15+ messages in thread
* Re: [PATCH 2/4] dma-buf: add support for kernel cpu access
2012-03-18 23:34 ` [PATCH 2/4] dma-buf: add support for kernel cpu access Daniel Vetter
@ 2012-03-19 2:00 ` Rob Clark
2012-03-19 23:02 ` [PATCH] " Daniel Vetter
0 siblings, 1 reply; 15+ messages in thread
From: Rob Clark @ 2012-03-19 2:00 UTC (permalink / raw)
To: Daniel Vetter; +Cc: linaro-mm-sig, LKML, DRI Development, linux-media
On Sun, Mar 18, 2012 at 6:34 PM, Daniel Vetter <daniel.vetter@ffwll.ch> wrote:
> Big differences to other contenders in the field (like ion) is
> that this also supports highmem, so we have to split up the cpu
> access from the kernel side into a prepare and a kmap step.
>
> Prepare is allowed to fail and should do everything required so that
> the kmap calls can succeed (like swapin/backing storage allocation,
> flushing, ...).
>
> More in-depth explanations will follow in the follow-up documentation
> patch.
>
> Changes in v2:
>
> - Clear up begin_cpu_access confusion noticed by Sumit Semwal.
> - Don't automatically fallback from the _atomic variants to the
> non-atomic variants. The _atomic callbacks are not allowed to
> sleep, so we want exporters to make this decision explicit. The
> function signatures are explicit, so simpler exporters can still
> use the same function for both.
> - Make the unmap functions optional. Simpler exporters with permanent
> mappings don't need to do anything at unmap time.
>
> Changes in v3:
>
> - Adjust the WARN_ON checks for the new ->ops functions as suggested
> by Rob Clark and Sumit Semwal.
> - Rebased on top of latest dma-buf-next git.
>
> Signed-Off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Rob Clark <rob@ti.com>
note: we should start updating the individual driver support for drm
drivers for this (since Dave has prime working now), although this
should not block the drm-core support for prime/dmabuf
BR,
-R
> ---
> drivers/base/dma-buf.c | 124 ++++++++++++++++++++++++++++++++++++++++++++++-
> include/linux/dma-buf.h | 59 ++++++++++++++++++++++
> 2 files changed, 182 insertions(+), 1 deletions(-)
>
> diff --git a/drivers/base/dma-buf.c b/drivers/base/dma-buf.c
> index 5641b9c..2226511 100644
> --- a/drivers/base/dma-buf.c
> +++ b/drivers/base/dma-buf.c
> @@ -80,7 +80,9 @@ struct dma_buf *dma_buf_export(void *priv, const struct dma_buf_ops *ops,
> if (WARN_ON(!priv || !ops
> || !ops->map_dma_buf
> || !ops->unmap_dma_buf
> - || !ops->release)) {
> + || !ops->release
> + || !ops->kmap_atomic
> + || !ops->kmap)) {
> return ERR_PTR(-EINVAL);
> }
>
> @@ -284,3 +286,123 @@ void dma_buf_unmap_attachment(struct dma_buf_attachment *attach,
> direction);
> }
> EXPORT_SYMBOL_GPL(dma_buf_unmap_attachment);
> +
> +
> +/**
> + * dma_buf_begin_cpu_access - Must be called before accessing a dma_buf from the
> + * cpu in the kernel context. Calls begin_cpu_access to allow exporter-specific
> + * preparations. Coherency is only guaranteed in the specified range for the
> + * specified access direction.
> + * @dma_buf: [in] buffer to prepare cpu access for.
> + * @start: [in] start of range for cpu access.
> + * @len: [in] length of range for cpu access.
> + * @direction: [in] length of range for cpu access.
> + *
> + * Can return negative error values, returns 0 on success.
> + */
> +int dma_buf_begin_cpu_access(struct dma_buf *dmabuf, size_t start, size_t len,
> + enum dma_data_direction direction)
> +{
> + int ret = 0;
> +
> + if (WARN_ON(!dmabuf))
> + return EINVAL;
> +
> + if (dmabuf->ops->begin_cpu_access)
> + ret = dmabuf->ops->begin_cpu_access(dmabuf, start, len, direction);
> +
> + return ret;
> +}
> +EXPORT_SYMBOL_GPL(dma_buf_begin_cpu_access);
> +
> +/**
> + * dma_buf_end_cpu_access - Must be called after accessing a dma_buf from the
> + * cpu in the kernel context. Calls end_cpu_access to allow exporter-specific
> + * actions. Coherency is only guaranteed in the specified range for the
> + * specified access direction.
> + * @dma_buf: [in] buffer to complete cpu access for.
> + * @start: [in] start of range for cpu access.
> + * @len: [in] length of range for cpu access.
> + * @direction: [in] length of range for cpu access.
> + *
> + * This call must always succeed.
> + */
> +void dma_buf_end_cpu_access(struct dma_buf *dmabuf, size_t start, size_t len,
> + enum dma_data_direction direction)
> +{
> + WARN_ON(!dmabuf);
> +
> + if (dmabuf->ops->end_cpu_access)
> + dmabuf->ops->end_cpu_access(dmabuf, start, len, direction);
> +}
> +EXPORT_SYMBOL_GPL(dma_buf_end_cpu_access);
> +
> +/**
> + * dma_buf_kmap_atomic - Map a page of the buffer object into kernel address
> + * space. The same restrictions as for kmap_atomic and friends apply.
> + * @dma_buf: [in] buffer to map page from.
> + * @page_num: [in] page in PAGE_SIZE units to map.
> + *
> + * This call must always succeed, any necessary preparations that might fail
> + * need to be done in begin_cpu_access.
> + */
> +void *dma_buf_kmap_atomic(struct dma_buf *dmabuf, unsigned long page_num)
> +{
> + WARN_ON(!dmabuf);
> +
> + return dmabuf->ops->kmap_atomic(dmabuf, page_num);
> +}
> +EXPORT_SYMBOL_GPL(dma_buf_kmap_atomic);
> +
> +/**
> + * dma_buf_kunmap_atomic - Unmap a page obtained by dma_buf_kmap_atomic.
> + * @dma_buf: [in] buffer to unmap page from.
> + * @page_num: [in] page in PAGE_SIZE units to unmap.
> + * @vaddr: [in] kernel space pointer obtained from dma_buf_kmap_atomic.
> + *
> + * This call must always succeed.
> + */
> +void dma_buf_kunmap_atomic(struct dma_buf *dmabuf, unsigned long page_num,
> + void *vaddr)
> +{
> + WARN_ON(!dmabuf);
> +
> + if (dmabuf->ops->kunmap_atomic)
> + dmabuf->ops->kunmap_atomic(dmabuf, page_num, vaddr);
> +}
> +EXPORT_SYMBOL_GPL(dma_buf_kunmap_atomic);
> +
> +/**
> + * dma_buf_kmap - Map a page of the buffer object into kernel address space. The
> + * same restrictions as for kmap and friends apply.
> + * @dma_buf: [in] buffer to map page from.
> + * @page_num: [in] page in PAGE_SIZE units to map.
> + *
> + * This call must always succeed, any necessary preparations that might fail
> + * need to be done in begin_cpu_access.
> + */
> +void *dma_buf_kmap(struct dma_buf *dmabuf, unsigned long page_num)
> +{
> + WARN_ON(!dmabuf);
> +
> + return dmabuf->ops->kmap(dmabuf, page_num);
> +}
> +EXPORT_SYMBOL_GPL(dma_buf_kmap);
> +
> +/**
> + * dma_buf_kunmap - Unmap a page obtained by dma_buf_kmap.
> + * @dma_buf: [in] buffer to unmap page from.
> + * @page_num: [in] page in PAGE_SIZE units to unmap.
> + * @vaddr: [in] kernel space pointer obtained from dma_buf_kmap.
> + *
> + * This call must always succeed.
> + */
> +void dma_buf_kunmap(struct dma_buf *dmabuf, unsigned long page_num,
> + void *vaddr)
> +{
> + WARN_ON(!dmabuf);
> +
> + if (dmabuf->ops->kunmap)
> + dmabuf->ops->kunmap(dmabuf, page_num, vaddr);
> +}
> +EXPORT_SYMBOL_GPL(dma_buf_kunmap);
> diff --git a/include/linux/dma-buf.h b/include/linux/dma-buf.h
> index 24e0f48..ee7ef99 100644
> --- a/include/linux/dma-buf.h
> +++ b/include/linux/dma-buf.h
> @@ -50,6 +50,17 @@ struct dma_buf_attachment;
> * @unmap_dma_buf: decreases usecount of buffer, might deallocate scatter
> * pages.
> * @release: release this buffer; to be called after the last dma_buf_put.
> + * @begin_cpu_access: [optional] called before cpu access to invalidate cpu
> + * caches and allocate backing storage (if not yet done)
> + * respectively pin the objet into memory.
> + * @end_cpu_access: [optional] called after cpu access to flush cashes.
> + * @kmap_atomic: maps a page from the buffer into kernel address
> + * space, users may not block until the subsequent unmap call.
> + * This callback must not sleep.
> + * @kunmap_atomic: [optional] unmaps a atomically mapped page from the buffer.
> + * This Callback must not sleep.
> + * @kmap: maps a page from the buffer into kernel address space.
> + * @kunmap: [optional] unmaps a page from the buffer.
> */
> struct dma_buf_ops {
> int (*attach)(struct dma_buf *, struct device *,
> @@ -73,6 +84,14 @@ struct dma_buf_ops {
> /* after final dma_buf_put() */
> void (*release)(struct dma_buf *);
>
> + int (*begin_cpu_access)(struct dma_buf *, size_t, size_t,
> + enum dma_data_direction);
> + void (*end_cpu_access)(struct dma_buf *, size_t, size_t,
> + enum dma_data_direction);
> + void *(*kmap_atomic)(struct dma_buf *, unsigned long);
> + void (*kunmap_atomic)(struct dma_buf *, unsigned long, void *);
> + void *(*kmap)(struct dma_buf *, unsigned long);
> + void (*kunmap)(struct dma_buf *, unsigned long, void *);
> };
>
> /**
> @@ -140,6 +159,14 @@ struct sg_table *dma_buf_map_attachment(struct dma_buf_attachment *,
> enum dma_data_direction);
> void dma_buf_unmap_attachment(struct dma_buf_attachment *, struct sg_table *,
> enum dma_data_direction);
> +int dma_buf_begin_cpu_access(struct dma_buf *dma_buf, size_t start, size_t len,
> + enum dma_data_direction dir);
> +void dma_buf_end_cpu_access(struct dma_buf *dma_buf, size_t start, size_t len,
> + enum dma_data_direction dir);
> +void *dma_buf_kmap_atomic(struct dma_buf *, unsigned long);
> +void dma_buf_kunmap_atomic(struct dma_buf *, unsigned long, void *);
> +void *dma_buf_kmap(struct dma_buf *, unsigned long);
> +void dma_buf_kunmap(struct dma_buf *, unsigned long, void *);
> #else
>
> static inline struct dma_buf_attachment *dma_buf_attach(struct dma_buf *dmabuf,
> @@ -188,6 +215,38 @@ static inline void dma_buf_unmap_attachment(struct dma_buf_attachment *attach,
> return;
> }
>
> +static inline int dma_buf_begin_cpu_access(struct dma_buf *,
> + size_t, size_t,
> + enum dma_data_direction)
> +{
> + return -ENODEV;
> +}
> +
> +static inline void dma_buf_end_cpu_access(struct dma_buf *,
> + size_t, size_t,
> + enum dma_data_direction)
> +{
> +}
> +
> +static inline void *dma_buf_kmap_atomic(struct dma_buf *, unsigned long)
> +{
> + return NULL;
> +}
> +
> +static inline void dma_buf_kunmap_atomic(struct dma_buf *, unsigned long,
> + void *)
> +{
> +}
> +
> +static inline void *dma_buf_kmap(struct dma_buf *, unsigned long)
> +{
> + return NULL;
> +}
> +
> +static inline void dma_buf_kunmap(struct dma_buf *, unsigned long,
> + void *)
> +{
> +}
> #endif /* CONFIG_DMA_SHARED_BUFFER */
>
> #endif /* __DMA_BUF_H__ */
> --
> 1.7.7.5
>
> _______________________________________________
> dri-devel mailing list
> dri-devel@lists.freedesktop.org
> http://lists.freedesktop.org/mailman/listinfo/dri-devel
^ permalink raw reply [flat|nested] 15+ messages in thread
* Re: [Linaro-mm-sig] [PATCH 4/4] dma-buf: document fd flags and O_CLOEXEC requirement
2012-03-18 23:34 ` [PATCH 4/4] dma-buf: document fd flags and O_CLOEXEC requirement Daniel Vetter
@ 2012-03-19 10:51 ` Dave Airlie
2012-03-19 15:41 ` [PATCH] " Daniel Vetter
0 siblings, 1 reply; 15+ messages in thread
From: Dave Airlie @ 2012-03-19 10:51 UTC (permalink / raw)
To: Daniel Vetter; +Cc: linaro-mm-sig, LKML, DRI Development, linux-media
On Sun, Mar 18, 2012 at 11:34 PM, Daniel Vetter <daniel.vetter@ffwll.ch> wrote:
> Otherwise subsystems will get this wrong and end up with and second
> export ioctl with the flag and O_CLOEXEC support added.
Its not actually dma_buf_export that takes the O_CLOEXEC flag its dma_buf_fd
I'm not sure how blindly we should be passing flags in from userspace
to these, like O_NONBLOCK or perms flags.
Dave.
^ permalink raw reply [flat|nested] 15+ messages in thread
* [PATCH] dma-buf: document fd flags and O_CLOEXEC requirement
2012-03-19 10:51 ` [Linaro-mm-sig] " Dave Airlie
@ 2012-03-19 15:41 ` Daniel Vetter
2012-03-19 15:44 ` Ville Syrjälä
0 siblings, 1 reply; 15+ messages in thread
From: Daniel Vetter @ 2012-03-19 15:41 UTC (permalink / raw)
To: linaro-mm-sig, LKML, DRI Development, linux-media
Cc: Daniel Vetter, Dave Airlie
Otherwise subsystems will get this wrong and end up with a second
export ioctl with the flag and O_CLOEXEC support added.
v2: Fixup the function name and caution exporters to limit the flags
to only O_CLOEXEC. Noted by Dave Airlie.
Cc: Dave Airlie <airlied@gmail.com>
Signed-Off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
---
Documentation/dma-buf-sharing.txt | 6 ++++++
1 files changed, 6 insertions(+), 0 deletions(-)
diff --git a/Documentation/dma-buf-sharing.txt b/Documentation/dma-buf-sharing.txt
index 9f3aeef..a6d4c37 100644
--- a/Documentation/dma-buf-sharing.txt
+++ b/Documentation/dma-buf-sharing.txt
@@ -319,6 +319,12 @@ Miscellaneous notes
- Any exporters or users of the dma-buf buffer sharing framework must have
a 'select DMA_SHARED_BUFFER' in their respective Kconfigs.
+- To avoid the wrath of userspace library writers exporting subsystems must have
+ a flag parameter in the ioctl that creates the dma-buf fd which needs to
+ support at least the O_CLOEXEC fd flag. This needs to be passed in the flag
+ parameter of dma_buf_fd. Without any other reasons applying it is recommended
+ that exporters limit the flags passed to dma_buf_fd to only O_CLOEXEC.
+
References:
[1] struct dma_buf_ops in include/linux/dma-buf.h
[2] All interfaces mentioned above defined in include/linux/dma-buf.h
--
1.7.7.5
^ permalink raw reply related [flat|nested] 15+ messages in thread
* Re: [PATCH] dma-buf: document fd flags and O_CLOEXEC requirement
2012-03-19 15:41 ` [PATCH] " Daniel Vetter
@ 2012-03-19 15:44 ` Ville Syrjälä
0 siblings, 0 replies; 15+ messages in thread
From: Ville Syrjälä @ 2012-03-19 15:44 UTC (permalink / raw)
To: Daniel Vetter; +Cc: linaro-mm-sig, LKML, DRI Development, linux-media
On Mon, Mar 19, 2012 at 04:41:55PM +0100, Daniel Vetter wrote:
> Otherwise subsystems will get this wrong and end up with a second
> export ioctl with the flag and O_CLOEXEC support added.
>
> v2: Fixup the function name and caution exporters to limit the flags
> to only O_CLOEXEC. Noted by Dave Airlie.
>
> Cc: Dave Airlie <airlied@gmail.com>
> Signed-Off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
> ---
> Documentation/dma-buf-sharing.txt | 6 ++++++
> 1 files changed, 6 insertions(+), 0 deletions(-)
>
> diff --git a/Documentation/dma-buf-sharing.txt b/Documentation/dma-buf-sharing.txt
> index 9f3aeef..a6d4c37 100644
> --- a/Documentation/dma-buf-sharing.txt
> +++ b/Documentation/dma-buf-sharing.txt
> @@ -319,6 +319,12 @@ Miscellaneous notes
> - Any exporters or users of the dma-buf buffer sharing framework must have
> a 'select DMA_SHARED_BUFFER' in their respective Kconfigs.
>
> +- To avoid the wrath of userspace library writers exporting subsystems must have
> + a flag parameter in the ioctl that creates the dma-buf fd which needs to
> + support at least the O_CLOEXEC fd flag. This needs to be passed in the flag
> + parameter of dma_buf_fd. Without any other reasons applying it is recommended
> + that exporters limit the flags passed to dma_buf_fd to only O_CLOEXEC.
Difficult to parse. Needs more punctuation.
--
Ville Syrjälä
Intel OTC
^ permalink raw reply [flat|nested] 15+ messages in thread
* [PATCH] dma-buf: document fd flags and O_CLOEXEC requirement
@ 2012-03-19 21:42 Rob Clark
2012-03-22 6:05 ` Sumit Semwal
0 siblings, 1 reply; 15+ messages in thread
From: Rob Clark @ 2012-03-19 21:42 UTC (permalink / raw)
To: linaro-mm-sig, linux-kernel, dri-devel, linux-media
Cc: patches, daniel.vetter, sumit.semwal, Rob Clark
From: Rob Clark <rob@ti.com>
Otherwise subsystems will get this wrong and end up with a second
export ioctl with the flag and O_CLOEXEC support added.
Signed-off-by: Rob Clark <rob@ti.com>
Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
---
Updated version of Daniel's original documentation patch with (hopefully)
improved wording, and a better description of the motivation.
Documentation/dma-buf-sharing.txt | 18 ++++++++++++++++++
1 files changed, 18 insertions(+), 0 deletions(-)
diff --git a/Documentation/dma-buf-sharing.txt b/Documentation/dma-buf-sharing.txt
index 225f96d..3b51134 100644
--- a/Documentation/dma-buf-sharing.txt
+++ b/Documentation/dma-buf-sharing.txt
@@ -223,6 +223,24 @@ Miscellaneous notes:
- Any exporters or users of the dma-buf buffer sharing framework must have
a 'select DMA_SHARED_BUFFER' in their respective Kconfigs.
+- In order to avoid fd leaks on exec, the FD_CLOEXEC flag must be set
+ on the file descriptor. This is not just a resource leak, but a
+ potential security hole. It could give the newly exec'd application
+ access to buffers, via the leaked fd, to which it should otherwise
+ not be permitted access.
+
+ The problem with doing this via a separate fcntl() call, versus doing it
+ atomically when the fd is created, is that this is inherently racy in a
+ multi-threaded app[3]. The issue is made worse when it is library code
+ opening/creating the file descriptor, as the application may not even be
+ aware of the fd's.
+
+ To avoid this problem, userspace must have a way to request O_CLOEXEC
+ flag be set when the dma-buf fd is created. So any API provided by
+ the exporting driver to create a dmabuf fd must provide a way to let
+ userspace control setting of O_CLOEXEC flag passed in to dma_buf_fd().
+
References:
[1] struct dma_buf_ops in include/linux/dma-buf.h
[2] All interfaces mentioned above defined in include/linux/dma-buf.h
+[3] https://lwn.net/Articles/236486/
--
1.7.5.4
^ permalink raw reply related [flat|nested] 15+ messages in thread
* [PATCH] dma-buf: add support for kernel cpu access
2012-03-19 2:00 ` Rob Clark
@ 2012-03-19 23:02 ` Daniel Vetter
2012-03-22 6:03 ` [Linaro-mm-sig] " Sumit Semwal
0 siblings, 1 reply; 15+ messages in thread
From: Daniel Vetter @ 2012-03-19 23:02 UTC (permalink / raw)
To: linaro-mm-sig, LKML, DRI Development, linux-media
Cc: Rob Clark, Daniel Vetter
Big differences to other contenders in the field (like ion) is
that this also supports highmem, so we have to split up the cpu
access from the kernel side into a prepare and a kmap step.
Prepare is allowed to fail and should do everything required so that
the kmap calls can succeed (like swapin/backing storage allocation,
flushing, ...).
More in-depth explanations will follow in the follow-up documentation
patch.
Changes in v2:
- Clear up begin_cpu_access confusion noticed by Sumit Semwal.
- Don't automatically fallback from the _atomic variants to the
non-atomic variants. The _atomic callbacks are not allowed to
sleep, so we want exporters to make this decision explicit. The
function signatures are explicit, so simpler exporters can still
use the same function for both.
- Make the unmap functions optional. Simpler exporters with permanent
mappings don't need to do anything at unmap time.
Changes in v3:
- Adjust the WARN_ON checks for the new ->ops functions as suggested
by Rob Clark and Sumit Semwal.
- Rebased on top of latest dma-buf-next git.
Changes in v4:
- Fixup a missing - in a return -EINVAL; statement.
Signed-Off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
---
drivers/base/dma-buf.c | 124 ++++++++++++++++++++++++++++++++++++++++++++++-
include/linux/dma-buf.h | 59 ++++++++++++++++++++++
2 files changed, 182 insertions(+), 1 deletions(-)
diff --git a/drivers/base/dma-buf.c b/drivers/base/dma-buf.c
index 5641b9c..07cbbc6 100644
--- a/drivers/base/dma-buf.c
+++ b/drivers/base/dma-buf.c
@@ -80,7 +80,9 @@ struct dma_buf *dma_buf_export(void *priv, const struct dma_buf_ops *ops,
if (WARN_ON(!priv || !ops
|| !ops->map_dma_buf
|| !ops->unmap_dma_buf
- || !ops->release)) {
+ || !ops->release
+ || !ops->kmap_atomic
+ || !ops->kmap)) {
return ERR_PTR(-EINVAL);
}
@@ -284,3 +286,123 @@ void dma_buf_unmap_attachment(struct dma_buf_attachment *attach,
direction);
}
EXPORT_SYMBOL_GPL(dma_buf_unmap_attachment);
+
+
+/**
+ * dma_buf_begin_cpu_access - Must be called before accessing a dma_buf from the
+ * cpu in the kernel context. Calls begin_cpu_access to allow exporter-specific
+ * preparations. Coherency is only guaranteed in the specified range for the
+ * specified access direction.
+ * @dma_buf: [in] buffer to prepare cpu access for.
+ * @start: [in] start of range for cpu access.
+ * @len: [in] length of range for cpu access.
+ * @direction: [in] length of range for cpu access.
+ *
+ * Can return negative error values, returns 0 on success.
+ */
+int dma_buf_begin_cpu_access(struct dma_buf *dmabuf, size_t start, size_t len,
+ enum dma_data_direction direction)
+{
+ int ret = 0;
+
+ if (WARN_ON(!dmabuf))
+ return -EINVAL;
+
+ if (dmabuf->ops->begin_cpu_access)
+ ret = dmabuf->ops->begin_cpu_access(dmabuf, start, len, direction);
+
+ return ret;
+}
+EXPORT_SYMBOL_GPL(dma_buf_begin_cpu_access);
+
+/**
+ * dma_buf_end_cpu_access - Must be called after accessing a dma_buf from the
+ * cpu in the kernel context. Calls end_cpu_access to allow exporter-specific
+ * actions. Coherency is only guaranteed in the specified range for the
+ * specified access direction.
+ * @dma_buf: [in] buffer to complete cpu access for.
+ * @start: [in] start of range for cpu access.
+ * @len: [in] length of range for cpu access.
+ * @direction: [in] length of range for cpu access.
+ *
+ * This call must always succeed.
+ */
+void dma_buf_end_cpu_access(struct dma_buf *dmabuf, size_t start, size_t len,
+ enum dma_data_direction direction)
+{
+ WARN_ON(!dmabuf);
+
+ if (dmabuf->ops->end_cpu_access)
+ dmabuf->ops->end_cpu_access(dmabuf, start, len, direction);
+}
+EXPORT_SYMBOL_GPL(dma_buf_end_cpu_access);
+
+/**
+ * dma_buf_kmap_atomic - Map a page of the buffer object into kernel address
+ * space. The same restrictions as for kmap_atomic and friends apply.
+ * @dma_buf: [in] buffer to map page from.
+ * @page_num: [in] page in PAGE_SIZE units to map.
+ *
+ * This call must always succeed, any necessary preparations that might fail
+ * need to be done in begin_cpu_access.
+ */
+void *dma_buf_kmap_atomic(struct dma_buf *dmabuf, unsigned long page_num)
+{
+ WARN_ON(!dmabuf);
+
+ return dmabuf->ops->kmap_atomic(dmabuf, page_num);
+}
+EXPORT_SYMBOL_GPL(dma_buf_kmap_atomic);
+
+/**
+ * dma_buf_kunmap_atomic - Unmap a page obtained by dma_buf_kmap_atomic.
+ * @dma_buf: [in] buffer to unmap page from.
+ * @page_num: [in] page in PAGE_SIZE units to unmap.
+ * @vaddr: [in] kernel space pointer obtained from dma_buf_kmap_atomic.
+ *
+ * This call must always succeed.
+ */
+void dma_buf_kunmap_atomic(struct dma_buf *dmabuf, unsigned long page_num,
+ void *vaddr)
+{
+ WARN_ON(!dmabuf);
+
+ if (dmabuf->ops->kunmap_atomic)
+ dmabuf->ops->kunmap_atomic(dmabuf, page_num, vaddr);
+}
+EXPORT_SYMBOL_GPL(dma_buf_kunmap_atomic);
+
+/**
+ * dma_buf_kmap - Map a page of the buffer object into kernel address space. The
+ * same restrictions as for kmap and friends apply.
+ * @dma_buf: [in] buffer to map page from.
+ * @page_num: [in] page in PAGE_SIZE units to map.
+ *
+ * This call must always succeed, any necessary preparations that might fail
+ * need to be done in begin_cpu_access.
+ */
+void *dma_buf_kmap(struct dma_buf *dmabuf, unsigned long page_num)
+{
+ WARN_ON(!dmabuf);
+
+ return dmabuf->ops->kmap(dmabuf, page_num);
+}
+EXPORT_SYMBOL_GPL(dma_buf_kmap);
+
+/**
+ * dma_buf_kunmap - Unmap a page obtained by dma_buf_kmap.
+ * @dma_buf: [in] buffer to unmap page from.
+ * @page_num: [in] page in PAGE_SIZE units to unmap.
+ * @vaddr: [in] kernel space pointer obtained from dma_buf_kmap.
+ *
+ * This call must always succeed.
+ */
+void dma_buf_kunmap(struct dma_buf *dmabuf, unsigned long page_num,
+ void *vaddr)
+{
+ WARN_ON(!dmabuf);
+
+ if (dmabuf->ops->kunmap)
+ dmabuf->ops->kunmap(dmabuf, page_num, vaddr);
+}
+EXPORT_SYMBOL_GPL(dma_buf_kunmap);
diff --git a/include/linux/dma-buf.h b/include/linux/dma-buf.h
index 24e0f48..ee7ef99 100644
--- a/include/linux/dma-buf.h
+++ b/include/linux/dma-buf.h
@@ -50,6 +50,17 @@ struct dma_buf_attachment;
* @unmap_dma_buf: decreases usecount of buffer, might deallocate scatter
* pages.
* @release: release this buffer; to be called after the last dma_buf_put.
+ * @begin_cpu_access: [optional] called before cpu access to invalidate cpu
+ * caches and allocate backing storage (if not yet done)
+ * respectively pin the objet into memory.
+ * @end_cpu_access: [optional] called after cpu access to flush cashes.
+ * @kmap_atomic: maps a page from the buffer into kernel address
+ * space, users may not block until the subsequent unmap call.
+ * This callback must not sleep.
+ * @kunmap_atomic: [optional] unmaps a atomically mapped page from the buffer.
+ * This Callback must not sleep.
+ * @kmap: maps a page from the buffer into kernel address space.
+ * @kunmap: [optional] unmaps a page from the buffer.
*/
struct dma_buf_ops {
int (*attach)(struct dma_buf *, struct device *,
@@ -73,6 +84,14 @@ struct dma_buf_ops {
/* after final dma_buf_put() */
void (*release)(struct dma_buf *);
+ int (*begin_cpu_access)(struct dma_buf *, size_t, size_t,
+ enum dma_data_direction);
+ void (*end_cpu_access)(struct dma_buf *, size_t, size_t,
+ enum dma_data_direction);
+ void *(*kmap_atomic)(struct dma_buf *, unsigned long);
+ void (*kunmap_atomic)(struct dma_buf *, unsigned long, void *);
+ void *(*kmap)(struct dma_buf *, unsigned long);
+ void (*kunmap)(struct dma_buf *, unsigned long, void *);
};
/**
@@ -140,6 +159,14 @@ struct sg_table *dma_buf_map_attachment(struct dma_buf_attachment *,
enum dma_data_direction);
void dma_buf_unmap_attachment(struct dma_buf_attachment *, struct sg_table *,
enum dma_data_direction);
+int dma_buf_begin_cpu_access(struct dma_buf *dma_buf, size_t start, size_t len,
+ enum dma_data_direction dir);
+void dma_buf_end_cpu_access(struct dma_buf *dma_buf, size_t start, size_t len,
+ enum dma_data_direction dir);
+void *dma_buf_kmap_atomic(struct dma_buf *, unsigned long);
+void dma_buf_kunmap_atomic(struct dma_buf *, unsigned long, void *);
+void *dma_buf_kmap(struct dma_buf *, unsigned long);
+void dma_buf_kunmap(struct dma_buf *, unsigned long, void *);
#else
static inline struct dma_buf_attachment *dma_buf_attach(struct dma_buf *dmabuf,
@@ -188,6 +215,38 @@ static inline void dma_buf_unmap_attachment(struct dma_buf_attachment *attach,
return;
}
+static inline int dma_buf_begin_cpu_access(struct dma_buf *,
+ size_t, size_t,
+ enum dma_data_direction)
+{
+ return -ENODEV;
+}
+
+static inline void dma_buf_end_cpu_access(struct dma_buf *,
+ size_t, size_t,
+ enum dma_data_direction)
+{
+}
+
+static inline void *dma_buf_kmap_atomic(struct dma_buf *, unsigned long)
+{
+ return NULL;
+}
+
+static inline void dma_buf_kunmap_atomic(struct dma_buf *, unsigned long,
+ void *)
+{
+}
+
+static inline void *dma_buf_kmap(struct dma_buf *, unsigned long)
+{
+ return NULL;
+}
+
+static inline void dma_buf_kunmap(struct dma_buf *, unsigned long,
+ void *)
+{
+}
#endif /* CONFIG_DMA_SHARED_BUFFER */
#endif /* __DMA_BUF_H__ */
--
1.7.7.5
^ permalink raw reply related [flat|nested] 15+ messages in thread
* Re: [Linaro-mm-sig] [PATCH 1/4] dma-buf: don't hold the mutex around map/unmap calls
2012-03-18 23:34 [PATCH 1/4] dma-buf: don't hold the mutex around map/unmap calls Daniel Vetter
` (2 preceding siblings ...)
2012-03-18 23:34 ` [PATCH 4/4] dma-buf: document fd flags and O_CLOEXEC requirement Daniel Vetter
@ 2012-03-22 6:03 ` Sumit Semwal
3 siblings, 0 replies; 15+ messages in thread
From: Sumit Semwal @ 2012-03-22 6:03 UTC (permalink / raw)
To: Daniel Vetter; +Cc: linaro-mm-sig, LKML, DRI Development, linux-media
On 19 March 2012 05:04, Daniel Vetter <daniel.vetter@ffwll.ch> wrote:
> The mutex protects the attachment list and hence needs to be held
> around the callbakc to the exporters (optional) attach/detach
> functions.
>
> Holding the mutex around the map/unmap calls doesn't protect any
> dma_buf state. Exporters need to properly protect any of their own
> state anyway (to protect against calls from their own interfaces).
> So this only makes the locking messier (and lockdep easier to anger).
>
> Therefore let's just drop this.
>
> v2: Rebased on top of latest dma-buf-next git.
>
> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
> Reviewed-by: Rob Clark <rob.clark@linaro.org>
Thanks; Applied to for-next.
> ---
<snip>
BR,
~Sumit.
^ permalink raw reply [flat|nested] 15+ messages in thread
* Re: [Linaro-mm-sig] [PATCH] dma-buf: add support for kernel cpu access
2012-03-19 23:02 ` [PATCH] " Daniel Vetter
@ 2012-03-22 6:03 ` Sumit Semwal
0 siblings, 0 replies; 15+ messages in thread
From: Sumit Semwal @ 2012-03-22 6:03 UTC (permalink / raw)
To: Daniel Vetter
Cc: linaro-mm-sig, LKML, DRI Development, linux-media, Rob Clark
On 20 March 2012 04:32, Daniel Vetter <daniel.vetter@ffwll.ch> wrote:
> Big differences to other contenders in the field (like ion) is
> that this also supports highmem, so we have to split up the cpu
> access from the kernel side into a prepare and a kmap step.
>
> Prepare is allowed to fail and should do everything required so that
> the kmap calls can succeed (like swapin/backing storage allocation,
> flushing, ...).
>
> More in-depth explanations will follow in the follow-up documentation
> patch.
>
> Changes in v2:
>
> - Clear up begin_cpu_access confusion noticed by Sumit Semwal.
> - Don't automatically fallback from the _atomic variants to the
> non-atomic variants. The _atomic callbacks are not allowed to
> sleep, so we want exporters to make this decision explicit. The
> function signatures are explicit, so simpler exporters can still
> use the same function for both.
> - Make the unmap functions optional. Simpler exporters with permanent
> mappings don't need to do anything at unmap time.
>
> Changes in v3:
>
> - Adjust the WARN_ON checks for the new ->ops functions as suggested
> by Rob Clark and Sumit Semwal.
> - Rebased on top of latest dma-buf-next git.
>
> Changes in v4:
>
> - Fixup a missing - in a return -EINVAL; statement.
>
> Signed-Off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Thanks; applied to for-next.
> ---
<snip>
BR,
~Sumit.
^ permalink raw reply [flat|nested] 15+ messages in thread
* Re: [Linaro-mm-sig] [PATCH 3/4] dma_buf: Add documentation for the new cpu access support
2012-03-19 1:54 ` Rob Clark
@ 2012-03-22 6:04 ` Sumit Semwal
0 siblings, 0 replies; 15+ messages in thread
From: Sumit Semwal @ 2012-03-22 6:04 UTC (permalink / raw)
To: Rob Clark
Cc: Daniel Vetter, linaro-mm-sig, LKML, DRI Development, linux-media
On 19 March 2012 07:24, Rob Clark <rob.clark@linaro.org> wrote:
> On Sun, Mar 18, 2012 at 6:34 PM, Daniel Vetter <daniel.vetter@ffwll.ch> wrote:
>> v2: Fix spelling issues noticed by Rob Clark.
>>
>> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
>
> Signed-off-by: Rob Clark <rob@ti.com>
Thanks; applied to for-next.
>
<snip>
BR,
~me.
^ permalink raw reply [flat|nested] 15+ messages in thread
* Re: [PATCH] dma-buf: document fd flags and O_CLOEXEC requirement
2012-03-19 21:42 [PATCH] dma-buf: document fd flags and O_CLOEXEC requirement Rob Clark
@ 2012-03-22 6:05 ` Sumit Semwal
0 siblings, 0 replies; 15+ messages in thread
From: Sumit Semwal @ 2012-03-22 6:05 UTC (permalink / raw)
To: Rob Clark
Cc: linaro-mm-sig, linux-kernel, dri-devel, linux-media, patches,
daniel.vetter, Rob Clark
On 20 March 2012 03:12, Rob Clark <rob.clark@linaro.org> wrote:
> From: Rob Clark <rob@ti.com>
>
> Otherwise subsystems will get this wrong and end up with a second
> export ioctl with the flag and O_CLOEXEC support added.
>
> Signed-off-by: Rob Clark <rob@ti.com>
> Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
> ---
> Updated version of Daniel's original documentation patch with (hopefully)
> improved wording, and a better description of the motivation.
Thanks; applied this in place of Daniel's to for-next.
>
BR,
~Sumit.
^ permalink raw reply [flat|nested] 15+ messages in thread
end of thread, other threads:[~2012-03-22 6:05 UTC | newest]
Thread overview: 15+ messages (download: mbox.gz follow: Atom feed
-- links below jump to the message on this page --
2012-03-18 23:34 [PATCH 1/4] dma-buf: don't hold the mutex around map/unmap calls Daniel Vetter
2012-03-18 23:34 ` [PATCH 2/4] dma-buf: add support for kernel cpu access Daniel Vetter
2012-03-19 2:00 ` Rob Clark
2012-03-19 23:02 ` [PATCH] " Daniel Vetter
2012-03-22 6:03 ` [Linaro-mm-sig] " Sumit Semwal
2012-03-18 23:34 ` [PATCH 3/4] dma_buf: Add documentation for the new cpu access support Daniel Vetter
2012-03-19 1:54 ` Rob Clark
2012-03-22 6:04 ` [Linaro-mm-sig] " Sumit Semwal
2012-03-18 23:34 ` [PATCH 4/4] dma-buf: document fd flags and O_CLOEXEC requirement Daniel Vetter
2012-03-19 10:51 ` [Linaro-mm-sig] " Dave Airlie
2012-03-19 15:41 ` [PATCH] " Daniel Vetter
2012-03-19 15:44 ` Ville Syrjälä
2012-03-22 6:03 ` [Linaro-mm-sig] [PATCH 1/4] dma-buf: don't hold the mutex around map/unmap calls Sumit Semwal
-- strict thread matches above, loose matches on Subject: below --
2012-03-19 21:42 [PATCH] dma-buf: document fd flags and O_CLOEXEC requirement Rob Clark
2012-03-22 6:05 ` Sumit Semwal
This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).