* [PATCH v3 0/2] dma-buf: heaps: Support carved-out heaps
@ 2025-04-07 16:29 Maxime Ripard
2025-04-07 16:29 ` [PATCH v3 1/2] dma-buf: heaps: system: Remove global variable Maxime Ripard
` (2 more replies)
0 siblings, 3 replies; 10+ messages in thread
From: Maxime Ripard @ 2025-04-07 16:29 UTC (permalink / raw)
To: Rob Herring, Saravana Kannan, Sumit Semwal, Benjamin Gaignard,
Brian Starkey, John Stultz, T.J. Mercier, Christian König
Cc: Mattijs Korpershoek, devicetree, linux-kernel, linux-media,
dri-devel, linaro-mm-sig, Maxime Ripard
Hi,
This series is the follow-up to the discussion that John and I had some
time ago here:
https://lore.kernel.org/all/CANDhNCquJn6bH3KxKf65BWiTYLVqSd9892-xtFDHHqqyrroCMQ@mail.gmail.com/
The initial problem we discussed was that I'm currently working on a
platform whose memory layout has ECC enabled. However, enabling ECC has
a number of drawbacks on that platform: lower performance, increased
memory usage, etc. So for things like framebuffers the trade-off isn't
great, and the platform thus provides a memory region with ECC disabled
to allocate from for such use cases.
Following a suggestion from John, my first approach used heap allocation
flags to let userspace ask for a particular ECC setup. This was backed
by a new heap type running from reserved memory chunks flagged as such,
with the existing DT properties specifying the ECC attributes.
After further discussion, the consensus was that flags were not the
right solution, and that relying on the heap names would be enough for
userspace to know what kind of buffer it is dealing with.
Thus, even though the uAPI part has been dropped since the second
version, we still need a driver to create heaps out of carved-out memory
regions. Beyond the original use case, a similar driver can be found in
most vendors' BSPs, so I believe it would be a useful addition to the
kernel.
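
For context, consuming one of these heaps from userspace is just the
regular dma-heap allocation flow. Here is a minimal sketch; the heap
name is hypothetical, derived from the reserved-memory node name:

	#include <fcntl.h>
	#include <sys/ioctl.h>
	#include <unistd.h>
	#include <linux/dma-heap.h>

	int main(void)
	{
		struct dma_heap_allocation_data alloc = {
			.len = 8 * 1024 * 1024,
			.fd_flags = O_RDWR | O_CLOEXEC,
		};
		int heap;

		/* Hypothetical name, from the reserved-memory node */
		heap = open("/dev/dma_heap/framebuffer@80000000",
			    O_RDWR | O_CLOEXEC);
		if (heap < 0)
			return 1;

		if (ioctl(heap, DMA_HEAP_IOCTL_ALLOC, &alloc) < 0) {
			close(heap);
			return 1;
		}

		/* alloc.fd is now a dma-buf backed by the carveout */
		close(alloc.fd);
		close(heap);

		return 0;
	}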
I submitted a draft PR to the DT schema repository for the bindings used
in this series:
https://github.com/devicetree-org/dt-schema/pull/138
Let me know what you think,
Maxime
Signed-off-by: Maxime Ripard <mripard@kernel.org>
---
Changes in v3:
- Reworked global variable patch
- Link to v2: https://lore.kernel.org/r/20250401-dma-buf-ecc-heap-v2-0-043fd006a1af@kernel.org
Changes in v2:
- Add vmap/vunmap operations
- Drop ECC flags uapi
- Rebase on top of 6.14
- Link to v1: https://lore.kernel.org/r/20240515-dma-buf-ecc-heap-v1-0-54cbbd049511@kernel.org
---
Maxime Ripard (2):
dma-buf: heaps: system: Remove global variable
dma-buf: heaps: Introduce a new heap for reserved memory
drivers/dma-buf/heaps/Kconfig | 8 +
drivers/dma-buf/heaps/Makefile | 1 +
drivers/dma-buf/heaps/carveout_heap.c | 360 ++++++++++++++++++++++++++++++++++
drivers/dma-buf/heaps/system_heap.c | 3 +-
4 files changed, 370 insertions(+), 2 deletions(-)
---
base-commit: fcbf30774e82a441890b722bf0c26542fb82150f
change-id: 20240515-dma-buf-ecc-heap-28a311d2c94e
Best regards,
--
Maxime Ripard <mripard@kernel.org>
* [PATCH v3 1/2] dma-buf: heaps: system: Remove global variable
2025-04-07 16:29 [PATCH v3 0/2] dma-buf: heaps: Support carved-out heaps Maxime Ripard
@ 2025-04-07 16:29 ` Maxime Ripard
2025-04-07 17:49 ` Christian König
2025-04-08 8:43 ` Mattijs Korpershoek
2025-04-07 16:29 ` [PATCH v3 2/2] dma-buf: heaps: Introduce a new heap for reserved memory Maxime Ripard
2025-04-25 7:55 ` [PATCH v3 0/2] dma-buf: heaps: Support carved-out heaps Maxime Ripard
2 siblings, 2 replies; 10+ messages in thread
From: Maxime Ripard @ 2025-04-07 16:29 UTC (permalink / raw)
To: Rob Herring, Saravana Kannan, Sumit Semwal, Benjamin Gaignard,
Brian Starkey, John Stultz, T.J. Mercier, Christian König
Cc: Mattijs Korpershoek, devicetree, linux-kernel, linux-media,
dri-devel, linaro-mm-sig, Maxime Ripard
The system heap is storing its struct dma_heap pointer in a global
variable but isn't using it anywhere.
Let's move the global variable into system_heap_create() to make it
local.
Signed-off-by: Maxime Ripard <mripard@kernel.org>
---
drivers/dma-buf/heaps/system_heap.c | 3 +--
1 file changed, 1 insertion(+), 2 deletions(-)
diff --git a/drivers/dma-buf/heaps/system_heap.c b/drivers/dma-buf/heaps/system_heap.c
index 26d5dc89ea1663a0d078e3a5723ca3d8d12b935f..82b1b714300d6ff5f3e543059dd8215ceaa00c69 100644
--- a/drivers/dma-buf/heaps/system_heap.c
+++ b/drivers/dma-buf/heaps/system_heap.c
@@ -19,12 +19,10 @@
#include <linux/module.h>
#include <linux/scatterlist.h>
#include <linux/slab.h>
#include <linux/vmalloc.h>
-static struct dma_heap *sys_heap;
-
struct system_heap_buffer {
struct dma_heap *heap;
struct list_head attachments;
struct mutex lock;
unsigned long len;
@@ -422,10 +420,11 @@ static const struct dma_heap_ops system_heap_ops = {
};
static int __init system_heap_create(void)
{
struct dma_heap_export_info exp_info;
+ struct dma_heap *sys_heap;
exp_info.name = "system";
exp_info.ops = &system_heap_ops;
exp_info.priv = NULL;
--
2.49.0
* [PATCH v3 2/2] dma-buf: heaps: Introduce a new heap for reserved memory
2025-04-07 16:29 [PATCH v3 0/2] dma-buf: heaps: Support carved-out heaps Maxime Ripard
2025-04-07 16:29 ` [PATCH v3 1/2] dma-buf: heaps: system: Remove global variable Maxime Ripard
@ 2025-04-07 16:29 ` Maxime Ripard
2025-04-10 7:42 ` Mattijs Korpershoek
2025-04-11 20:26 ` T.J. Mercier
2025-04-25 7:55 ` [PATCH v3 0/2] dma-buf: heaps: Support carved-out heaps Maxime Ripard
2 siblings, 2 replies; 10+ messages in thread
From: Maxime Ripard @ 2025-04-07 16:29 UTC (permalink / raw)
To: Rob Herring, Saravana Kannan, Sumit Semwal, Benjamin Gaignard,
Brian Starkey, John Stultz, T.J. Mercier, Christian König
Cc: Mattijs Korpershoek, devicetree, linux-kernel, linux-media,
dri-devel, linaro-mm-sig, Maxime Ripard
Some reserved memory regions might have particular memory setup or
attributes that make them good candidates for heaps.
Let's provide a heap type that will create a new heap for each reserved
memory region flagged as such.
Signed-off-by: Maxime Ripard <mripard@kernel.org>
---
drivers/dma-buf/heaps/Kconfig | 8 +
drivers/dma-buf/heaps/Makefile | 1 +
drivers/dma-buf/heaps/carveout_heap.c | 360 ++++++++++++++++++++++++++++++++++
3 files changed, 369 insertions(+)
diff --git a/drivers/dma-buf/heaps/Kconfig b/drivers/dma-buf/heaps/Kconfig
index a5eef06c422644e8aadaf5aff2bd9a33c49c1ba3..c6981d696733b4d8d0c3f6f5a37d967fd6a1a4a2 100644
--- a/drivers/dma-buf/heaps/Kconfig
+++ b/drivers/dma-buf/heaps/Kconfig
@@ -1,5 +1,13 @@
+config DMABUF_HEAPS_CARVEOUT
+ bool "Carveout Heaps"
+ depends on DMABUF_HEAPS
+ help
+ Choose this option to enable the carveout dmabuf heap. The carveout
+ heap is backed by pages from reserved memory regions flagged as
+ exportable. If in doubt, say Y.
+
config DMABUF_HEAPS_SYSTEM
bool "DMA-BUF System Heap"
depends on DMABUF_HEAPS
help
Choose this option to enable the system dmabuf heap. The system heap
diff --git a/drivers/dma-buf/heaps/Makefile b/drivers/dma-buf/heaps/Makefile
index 974467791032ffb8a7aba17b1407d9a19b3f3b44..b734647ad5c84f449106748160258e372f153df2 100644
--- a/drivers/dma-buf/heaps/Makefile
+++ b/drivers/dma-buf/heaps/Makefile
@@ -1,3 +1,4 @@
# SPDX-License-Identifier: GPL-2.0
+obj-$(CONFIG_DMABUF_HEAPS_CARVEOUT) += carveout_heap.o
obj-$(CONFIG_DMABUF_HEAPS_SYSTEM) += system_heap.o
obj-$(CONFIG_DMABUF_HEAPS_CMA) += cma_heap.o
diff --git a/drivers/dma-buf/heaps/carveout_heap.c b/drivers/dma-buf/heaps/carveout_heap.c
new file mode 100644
index 0000000000000000000000000000000000000000..f7198b781ea57f4f60e554d917c9277e9a716b16
--- /dev/null
+++ b/drivers/dma-buf/heaps/carveout_heap.c
@@ -0,0 +1,360 @@
+// SPDX-License-Identifier: GPL-2.0
+
+#include <linux/dma-buf.h>
+#include <linux/dma-heap.h>
+#include <linux/genalloc.h>
+#include <linux/highmem.h>
+#include <linux/of_reserved_mem.h>
+
+struct carveout_heap_priv {
+ struct dma_heap *heap;
+ struct gen_pool *pool;
+};
+
+struct carveout_heap_buffer_priv {
+ struct mutex lock;
+ struct list_head attachments;
+
+ unsigned long num_pages;
+ struct carveout_heap_priv *heap;
+ dma_addr_t daddr;
+ void *vaddr;
+ unsigned int vmap_cnt;
+};
+
+struct carveout_heap_attachment {
+ struct list_head head;
+ struct sg_table table;
+
+ struct device *dev;
+ bool mapped;
+};
+
+static int carveout_heap_attach(struct dma_buf *buf,
+ struct dma_buf_attachment *attachment)
+{
+ struct carveout_heap_buffer_priv *priv = buf->priv;
+ struct carveout_heap_attachment *a;
+ struct sg_table *sgt;
+ unsigned long len = priv->num_pages * PAGE_SIZE;
+ int ret;
+
+ a = kzalloc(sizeof(*a), GFP_KERNEL);
+ if (!a)
+ return -ENOMEM;
+ INIT_LIST_HEAD(&a->head);
+ a->dev = attachment->dev;
+ attachment->priv = a;
+
+ sgt = &a->table;
+ ret = sg_alloc_table(sgt, 1, GFP_KERNEL);
+ if (ret)
+ goto err_cleanup_attach;
+
+ sg_dma_address(sgt->sgl) = priv->daddr;
+ sg_dma_len(sgt->sgl) = len;
+
+ mutex_lock(&priv->lock);
+ list_add(&a->head, &priv->attachments);
+ mutex_unlock(&priv->lock);
+
+ return 0;
+
+err_cleanup_attach:
+ kfree(a);
+ return ret;
+}
+
+static void carveout_heap_detach(struct dma_buf *dmabuf,
+ struct dma_buf_attachment *attachment)
+{
+ struct carveout_heap_buffer_priv *priv = dmabuf->priv;
+ struct carveout_heap_attachment *a = attachment->priv;
+
+ mutex_lock(&priv->lock);
+ list_del(&a->head);
+ mutex_unlock(&priv->lock);
+
+ sg_free_table(&a->table);
+ kfree(a);
+}
+
+static struct sg_table *
+carveout_heap_map_dma_buf(struct dma_buf_attachment *attachment,
+ enum dma_data_direction direction)
+{
+ struct carveout_heap_attachment *a = attachment->priv;
+ struct sg_table *table = &a->table;
+ int ret;
+
+ ret = dma_map_sgtable(a->dev, table, direction, 0);
+ if (ret)
+ return ERR_PTR(-ENOMEM);
+
+ a->mapped = true;
+
+ return table;
+}
+
+static void carveout_heap_unmap_dma_buf(struct dma_buf_attachment *attachment,
+ struct sg_table *table,
+ enum dma_data_direction direction)
+{
+ struct carveout_heap_attachment *a = attachment->priv;
+
+ a->mapped = false;
+ dma_unmap_sgtable(a->dev, table, direction, 0);
+}
+
+static int
+carveout_heap_dma_buf_begin_cpu_access(struct dma_buf *dmabuf,
+ enum dma_data_direction direction)
+{
+ struct carveout_heap_buffer_priv *priv = dmabuf->priv;
+ struct carveout_heap_attachment *a;
+ unsigned long len = priv->num_pages * PAGE_SIZE;
+
+ mutex_lock(&priv->lock);
+
+ if (priv->vmap_cnt > 0)
+ invalidate_kernel_vmap_range(priv->vaddr, len);
+
+ list_for_each_entry(a, &priv->attachments, head) {
+ if (!a->mapped)
+ continue;
+
+ dma_sync_sgtable_for_cpu(a->dev, &a->table, direction);
+ }
+
+ mutex_unlock(&priv->lock);
+
+ return 0;
+}
+
+static int
+carveout_heap_dma_buf_end_cpu_access(struct dma_buf *dmabuf,
+ enum dma_data_direction direction)
+{
+ struct carveout_heap_buffer_priv *priv = dmabuf->priv;
+ struct carveout_heap_attachment *a;
+ unsigned long len = priv->num_pages * PAGE_SIZE;
+
+ mutex_lock(&priv->lock);
+
+ if (priv->vmap_cnt > 0)
+ flush_kernel_vmap_range(priv->vaddr, len);
+
+ list_for_each_entry(a, &priv->attachments, head) {
+ if (!a->mapped)
+ continue;
+
+ dma_sync_sgtable_for_device(a->dev, &a->table, direction);
+ }
+
+ mutex_unlock(&priv->lock);
+
+ return 0;
+}
+
+static int carveout_heap_mmap(struct dma_buf *dmabuf,
+ struct vm_area_struct *vma)
+{
+ struct carveout_heap_buffer_priv *priv = dmabuf->priv;
+ unsigned long len = priv->num_pages * PAGE_SIZE;
+ struct page *page = virt_to_page(priv->vaddr);
+
+ return remap_pfn_range(vma, vma->vm_start, page_to_pfn(page),
+ len, vma->vm_page_prot);
+}
+
+static int carveout_heap_vmap(struct dma_buf *dmabuf, struct iosys_map *map)
+{
+ struct carveout_heap_buffer_priv *priv = dmabuf->priv;
+
+ mutex_lock(&priv->lock);
+
+ iosys_map_set_vaddr(map, priv->vaddr);
+ priv->vmap_cnt++;
+
+ mutex_unlock(&priv->lock);
+
+ return 0;
+}
+
+static void carveout_heap_vunmap(struct dma_buf *dmabuf, struct iosys_map *map)
+{
+ struct carveout_heap_buffer_priv *priv = dmabuf->priv;
+
+ mutex_lock(&priv->lock);
+
+ priv->vmap_cnt--;
+ mutex_unlock(&priv->lock);
+
+ iosys_map_clear(map);
+}
+
+static void carveout_heap_dma_buf_release(struct dma_buf *buf)
+{
+ struct carveout_heap_buffer_priv *buffer_priv = buf->priv;
+ struct carveout_heap_priv *heap_priv = buffer_priv->heap;
+ unsigned long len = buffer_priv->num_pages * PAGE_SIZE;
+
+ gen_pool_free(heap_priv->pool, (unsigned long)buffer_priv->vaddr, len);
+ kfree(buffer_priv);
+}
+
+static const struct dma_buf_ops carveout_heap_buf_ops = {
+ .attach = carveout_heap_attach,
+ .detach = carveout_heap_detach,
+ .map_dma_buf = carveout_heap_map_dma_buf,
+ .unmap_dma_buf = carveout_heap_unmap_dma_buf,
+ .begin_cpu_access = carveout_heap_dma_buf_begin_cpu_access,
+ .end_cpu_access = carveout_heap_dma_buf_end_cpu_access,
+ .mmap = carveout_heap_mmap,
+ .vmap = carveout_heap_vmap,
+ .vunmap = carveout_heap_vunmap,
+ .release = carveout_heap_dma_buf_release,
+};
+
+static struct dma_buf *carveout_heap_allocate(struct dma_heap *heap,
+ unsigned long len,
+ u32 fd_flags,
+ u64 heap_flags)
+{
+ struct carveout_heap_priv *heap_priv = dma_heap_get_drvdata(heap);
+ struct carveout_heap_buffer_priv *buffer_priv;
+ DEFINE_DMA_BUF_EXPORT_INFO(exp_info);
+ struct dma_buf *buf;
+ dma_addr_t daddr;
+ size_t size = PAGE_ALIGN(len);
+ void *vaddr;
+ int ret;
+
+ buffer_priv = kzalloc(sizeof(*buffer_priv), GFP_KERNEL);
+ if (!buffer_priv)
+ return ERR_PTR(-ENOMEM);
+
+ INIT_LIST_HEAD(&buffer_priv->attachments);
+ mutex_init(&buffer_priv->lock);
+
+ vaddr = gen_pool_dma_zalloc(heap_priv->pool, size, &daddr);
+ if (!vaddr) {
+ ret = -ENOMEM;
+ goto err_free_buffer_priv;
+ }
+
+ buffer_priv->vaddr = vaddr;
+ buffer_priv->daddr = daddr;
+ buffer_priv->heap = heap_priv;
+ buffer_priv->num_pages = size >> PAGE_SHIFT;
+
+ /* create the dmabuf */
+ exp_info.exp_name = dma_heap_get_name(heap);
+ exp_info.ops = &carveout_heap_buf_ops;
+ exp_info.size = size;
+ exp_info.flags = fd_flags;
+ exp_info.priv = buffer_priv;
+
+ buf = dma_buf_export(&exp_info);
+ if (IS_ERR(buf)) {
+ ret = PTR_ERR(buf);
+ goto err_free_buffer;
+ }
+
+ return buf;
+
+err_free_buffer:
+ gen_pool_free(heap_priv->pool, (unsigned long)vaddr, len);
+err_free_buffer_priv:
+ kfree(buffer_priv);
+
+ return ERR_PTR(ret);
+}
+
+static const struct dma_heap_ops carveout_heap_ops = {
+ .allocate = carveout_heap_allocate,
+};
+
+static int __init carveout_heap_setup(struct device_node *node)
+{
+ struct dma_heap_export_info exp_info = {};
+ const struct reserved_mem *rmem;
+ struct carveout_heap_priv *priv;
+ struct dma_heap *heap;
+ struct gen_pool *pool;
+ void *base;
+ int ret;
+
+ rmem = of_reserved_mem_lookup(node);
+ if (!rmem)
+ return -EINVAL;
+
+ priv = kzalloc(sizeof(*priv), GFP_KERNEL);
+ if (!priv)
+ return -ENOMEM;
+
+ pool = gen_pool_create(PAGE_SHIFT, NUMA_NO_NODE);
+ if (!pool) {
+ ret = -ENOMEM;
+ goto err_cleanup_heap;
+ }
+ priv->pool = pool;
+
+ base = memremap(rmem->base, rmem->size, MEMREMAP_WB);
+ if (!base) {
+ ret = -ENOMEM;
+ goto err_release_mem_region;
+ }
+
+ ret = gen_pool_add_virt(pool, (unsigned long)base, rmem->base,
+ rmem->size, NUMA_NO_NODE);
+ if (ret)
+ goto err_unmap;
+
+ exp_info.name = node->full_name;
+ exp_info.ops = &carveout_heap_ops;
+ exp_info.priv = priv;
+
+ heap = dma_heap_add(&exp_info);
+ if (IS_ERR(heap)) {
+ ret = PTR_ERR(heap);
+ goto err_cleanup_pool_region;
+ }
+ priv->heap = heap;
+
+ return 0;
+
+err_cleanup_pool_region:
+ gen_pool_free(pool, (unsigned long)base, rmem->size);
+err_unmap:
+ memunmap(base);
+err_release_mem_region:
+ gen_pool_destroy(pool);
+err_cleanup_heap:
+ kfree(priv);
+ return ret;
+}
+
+static int __init carveout_heap_init(void)
+{
+ struct device_node *rmem_node;
+ struct device_node *node;
+ int ret;
+
+ rmem_node = of_find_node_by_path("/reserved-memory");
+ if (!rmem_node)
+ return 0;
+
+ for_each_child_of_node(rmem_node, node) {
+ if (!of_property_read_bool(node, "export"))
+ continue;
+
+ ret = carveout_heap_setup(node);
+ if (ret)
+ return ret;
+ }
+
+ return 0;
+}
+
+module_init(carveout_heap_init);
--
2.49.0
* Re: [PATCH v3 1/2] dma-buf: heaps: system: Remove global variable
2025-04-07 16:29 ` [PATCH v3 1/2] dma-buf: heaps: system: Remove global variable Maxime Ripard
@ 2025-04-07 17:49 ` Christian König
2025-04-08 8:43 ` Mattijs Korpershoek
1 sibling, 0 replies; 10+ messages in thread
From: Christian König @ 2025-04-07 17:49 UTC (permalink / raw)
To: Maxime Ripard, Rob Herring, Saravana Kannan, Sumit Semwal,
Benjamin Gaignard, Brian Starkey, John Stultz, T.J. Mercier
Cc: Mattijs Korpershoek, devicetree, linux-kernel, linux-media,
dri-devel, linaro-mm-sig
Am 07.04.25 um 18:29 schrieb Maxime Ripard:
> The system heap is storing its struct dma_heap pointer in a global
> variable but isn't using it anywhere.
>
> Let's move the global variable into system_heap_create() to make it
> local.
>
> Signed-off-by: Maxime Ripard <mripard@kernel.org>
Reviewed-by: Christian König <christian.koenig@amd.com>
Going to push this one to drm-misc-next, but I can't judge in any way whether patch #2 is correct or not.
Regards,
Christian.
> ---
> drivers/dma-buf/heaps/system_heap.c | 3 +--
> 1 file changed, 1 insertion(+), 2 deletions(-)
>
> diff --git a/drivers/dma-buf/heaps/system_heap.c b/drivers/dma-buf/heaps/system_heap.c
> index 26d5dc89ea1663a0d078e3a5723ca3d8d12b935f..82b1b714300d6ff5f3e543059dd8215ceaa00c69 100644
> --- a/drivers/dma-buf/heaps/system_heap.c
> +++ b/drivers/dma-buf/heaps/system_heap.c
> @@ -19,12 +19,10 @@
> #include <linux/module.h>
> #include <linux/scatterlist.h>
> #include <linux/slab.h>
> #include <linux/vmalloc.h>
>
> -static struct dma_heap *sys_heap;
> -
> struct system_heap_buffer {
> struct dma_heap *heap;
> struct list_head attachments;
> struct mutex lock;
> unsigned long len;
> @@ -422,10 +420,11 @@ static const struct dma_heap_ops system_heap_ops = {
> };
>
> static int __init system_heap_create(void)
> {
> struct dma_heap_export_info exp_info;
> + struct dma_heap *sys_heap;
>
> exp_info.name = "system";
> exp_info.ops = &system_heap_ops;
> exp_info.priv = NULL;
>
>
* Re: [PATCH v3 1/2] dma-buf: heaps: system: Remove global variable
2025-04-07 16:29 ` [PATCH v3 1/2] dma-buf: heaps: system: Remove global variable Maxime Ripard
2025-04-07 17:49 ` Christian König
@ 2025-04-08 8:43 ` Mattijs Korpershoek
1 sibling, 0 replies; 10+ messages in thread
From: Mattijs Korpershoek @ 2025-04-08 8:43 UTC (permalink / raw)
To: Maxime Ripard, Rob Herring, Saravana Kannan, Sumit Semwal,
Benjamin Gaignard, Brian Starkey, John Stultz, T.J. Mercier,
Christian König
Cc: Mattijs Korpershoek, devicetree, linux-kernel, linux-media,
dri-devel, linaro-mm-sig, Maxime Ripard
Hi Maxime,
Thank you for the patch.
On Mon., April 07, 2025 at 18:29, Maxime Ripard <mripard@kernel.org> wrote:
> The system heap is storing its struct dma_heap pointer in a global
> variable but isn't using it anywhere.
>
> Let's move the global variable into system_heap_create() to make it
> local.
>
> Signed-off-by: Maxime Ripard <mripard@kernel.org>
Reviewed-by: Mattijs Korpershoek <mkorpershoek@kernel.org>
> ---
> drivers/dma-buf/heaps/system_heap.c | 3 +--
> 1 file changed, 1 insertion(+), 2 deletions(-)
>
> diff --git a/drivers/dma-buf/heaps/system_heap.c b/drivers/dma-buf/heaps/system_heap.c
> index 26d5dc89ea1663a0d078e3a5723ca3d8d12b935f..82b1b714300d6ff5f3e543059dd8215ceaa00c69 100644
> --- a/drivers/dma-buf/heaps/system_heap.c
> +++ b/drivers/dma-buf/heaps/system_heap.c
> @@ -19,12 +19,10 @@
> #include <linux/module.h>
> #include <linux/scatterlist.h>
> #include <linux/slab.h>
> #include <linux/vmalloc.h>
>
> -static struct dma_heap *sys_heap;
> -
> struct system_heap_buffer {
> struct dma_heap *heap;
> struct list_head attachments;
> struct mutex lock;
> unsigned long len;
> @@ -422,10 +420,11 @@ static const struct dma_heap_ops system_heap_ops = {
> };
>
> static int __init system_heap_create(void)
> {
> struct dma_heap_export_info exp_info;
> + struct dma_heap *sys_heap;
>
> exp_info.name = "system";
> exp_info.ops = &system_heap_ops;
> exp_info.priv = NULL;
>
>
> --
> 2.49.0
* Re: [PATCH v3 2/2] dma-buf: heaps: Introduce a new heap for reserved memory
2025-04-07 16:29 ` [PATCH v3 2/2] dma-buf: heaps: Introduce a new heap for reserved memory Maxime Ripard
@ 2025-04-10 7:42 ` Mattijs Korpershoek
2025-04-11 20:26 ` T.J. Mercier
1 sibling, 0 replies; 10+ messages in thread
From: Mattijs Korpershoek @ 2025-04-10 7:42 UTC (permalink / raw)
To: Maxime Ripard, Rob Herring, Saravana Kannan, Sumit Semwal,
Benjamin Gaignard, Brian Starkey, John Stultz, T.J. Mercier,
Christian König
Cc: Mattijs Korpershoek, devicetree, linux-kernel, linux-media,
dri-devel, linaro-mm-sig, Maxime Ripard
Hi Maxime,
Thank you for the patch.
On Mon., April 07, 2025 at 18:29, Maxime Ripard <mripard@kernel.org> wrote:
> Some reserved memory regions might have particular memory setup or
> attributes that make them good candidates for heaps.
>
> Let's provide a heap type that will create a new heap for each reserved
> memory region flagged as such.
>
> Signed-off-by: Maxime Ripard <mripard@kernel.org>
> ---
> drivers/dma-buf/heaps/Kconfig | 8 +
> drivers/dma-buf/heaps/Makefile | 1 +
> drivers/dma-buf/heaps/carveout_heap.c | 360 ++++++++++++++++++++++++++++++++++
> 3 files changed, 369 insertions(+)
>
> diff --git a/drivers/dma-buf/heaps/Kconfig b/drivers/dma-buf/heaps/Kconfig
> index a5eef06c422644e8aadaf5aff2bd9a33c49c1ba3..c6981d696733b4d8d0c3f6f5a37d967fd6a1a4a2 100644
> --- a/drivers/dma-buf/heaps/Kconfig
> +++ b/drivers/dma-buf/heaps/Kconfig
> @@ -1,5 +1,13 @@
> +config DMABUF_HEAPS_CARVEOUT
> + bool "Carveout Heaps"
Nitpick: shouldn't this be "DMA-BUF Carveout Heaps"? That way we stay
consistent with the other entries in this Kconfig.
I don't know enough about dma-buf to do an in-depth review, but I've
tried my best by comparing this to a downstream heap I'm using from:
https://git.ti.com/cgit/ti-linux-kernel/ti-linux-kernel/tree/drivers/dma-buf/heaps/carveout-heap.c?h=ti-android-linux-6.6.y
Reviewed-by: Mattijs Korpershoek <mkorpershoek@kernel.org>
> + depends on DMABUF_HEAPS
> + help
> + Choose this option to enable the carveout dmabuf heap. The carveout
> + heap is backed by pages from reserved memory regions flagged as
> + exportable. If in doubt, say Y.
> +
> config DMABUF_HEAPS_SYSTEM
> bool "DMA-BUF System Heap"
> depends on DMABUF_HEAPS
> help
> Choose this option to enable the system dmabuf heap. The system heap
> diff --git a/drivers/dma-buf/heaps/Makefile b/drivers/dma-buf/heaps/Makefile
> index 974467791032ffb8a7aba17b1407d9a19b3f3b44..b734647ad5c84f449106748160258e372f153df2 100644
> --- a/drivers/dma-buf/heaps/Makefile
> +++ b/drivers/dma-buf/heaps/Makefile
> @@ -1,3 +1,4 @@
> # SPDX-License-Identifier: GPL-2.0
> +obj-$(CONFIG_DMABUF_HEAPS_CARVEOUT) += carveout_heap.o
> obj-$(CONFIG_DMABUF_HEAPS_SYSTEM) += system_heap.o
> obj-$(CONFIG_DMABUF_HEAPS_CMA) += cma_heap.o
> diff --git a/drivers/dma-buf/heaps/carveout_heap.c b/drivers/dma-buf/heaps/carveout_heap.c
> new file mode 100644
> index 0000000000000000000000000000000000000000..f7198b781ea57f4f60e554d917c9277e9a716b16
> --- /dev/null
> +++ b/drivers/dma-buf/heaps/carveout_heap.c
> @@ -0,0 +1,360 @@
> +// SPDX-License-Identifier: GPL-2.0
> +
> +#include <linux/dma-buf.h>
> +#include <linux/dma-heap.h>
> +#include <linux/genalloc.h>
> +#include <linux/highmem.h>
> +#include <linux/of_reserved_mem.h>
> +
> +struct carveout_heap_priv {
> + struct dma_heap *heap;
> + struct gen_pool *pool;
> +};
> +
> +struct carveout_heap_buffer_priv {
> + struct mutex lock;
> + struct list_head attachments;
> +
> + unsigned long num_pages;
> + struct carveout_heap_priv *heap;
> + dma_addr_t daddr;
> + void *vaddr;
> + unsigned int vmap_cnt;
> +};
> +
> +struct carveout_heap_attachment {
> + struct list_head head;
> + struct sg_table table;
> +
> + struct device *dev;
> + bool mapped;
> +};
> +
> +static int carveout_heap_attach(struct dma_buf *buf,
> + struct dma_buf_attachment *attachment)
> +{
> + struct carveout_heap_buffer_priv *priv = buf->priv;
> + struct carveout_heap_attachment *a;
> + struct sg_table *sgt;
> + unsigned long len = priv->num_pages * PAGE_SIZE;
> + int ret;
> +
> + a = kzalloc(sizeof(*a), GFP_KERNEL);
> + if (!a)
> + return -ENOMEM;
> + INIT_LIST_HEAD(&a->head);
> + a->dev = attachment->dev;
> + attachment->priv = a;
> +
> + sgt = &a->table;
> + ret = sg_alloc_table(sgt, 1, GFP_KERNEL);
> + if (ret)
> + goto err_cleanup_attach;
> +
> + sg_dma_address(sgt->sgl) = priv->daddr;
> + sg_dma_len(sgt->sgl) = len;
> +
> + mutex_lock(&priv->lock);
> + list_add(&a->head, &priv->attachments);
> + mutex_unlock(&priv->lock);
> +
> + return 0;
> +
> +err_cleanup_attach:
> + kfree(a);
> + return ret;
> +}
> +
> +static void carveout_heap_detach(struct dma_buf *dmabuf,
> + struct dma_buf_attachment *attachment)
> +{
> + struct carveout_heap_buffer_priv *priv = dmabuf->priv;
> + struct carveout_heap_attachment *a = attachment->priv;
> +
> + mutex_lock(&priv->lock);
> + list_del(&a->head);
> + mutex_unlock(&priv->lock);
> +
> + sg_free_table(&a->table);
> + kfree(a);
> +}
> +
> +static struct sg_table *
> +carveout_heap_map_dma_buf(struct dma_buf_attachment *attachment,
> + enum dma_data_direction direction)
> +{
> + struct carveout_heap_attachment *a = attachment->priv;
> + struct sg_table *table = &a->table;
> + int ret;
> +
> + ret = dma_map_sgtable(a->dev, table, direction, 0);
> + if (ret)
> + return ERR_PTR(-ENOMEM);
> +
> + a->mapped = true;
> +
> + return table;
> +}
> +
> +static void carveout_heap_unmap_dma_buf(struct dma_buf_attachment *attachment,
> + struct sg_table *table,
> + enum dma_data_direction direction)
> +{
> + struct carveout_heap_attachment *a = attachment->priv;
> +
> + a->mapped = false;
> + dma_unmap_sgtable(a->dev, table, direction, 0);
> +}
> +
> +static int
> +carveout_heap_dma_buf_begin_cpu_access(struct dma_buf *dmabuf,
> + enum dma_data_direction direction)
> +{
> + struct carveout_heap_buffer_priv *priv = dmabuf->priv;
> + struct carveout_heap_attachment *a;
> + unsigned long len = priv->num_pages * PAGE_SIZE;
> +
> + mutex_lock(&priv->lock);
> +
> + if (priv->vmap_cnt > 0)
> + invalidate_kernel_vmap_range(priv->vaddr, len);
> +
> + list_for_each_entry(a, &priv->attachments, head) {
> + if (!a->mapped)
> + continue;
> +
> + dma_sync_sgtable_for_cpu(a->dev, &a->table, direction);
> + }
> +
> + mutex_unlock(&priv->lock);
> +
> + return 0;
> +}
> +
> +static int
> +carveout_heap_dma_buf_end_cpu_access(struct dma_buf *dmabuf,
> + enum dma_data_direction direction)
> +{
> + struct carveout_heap_buffer_priv *priv = dmabuf->priv;
> + struct carveout_heap_attachment *a;
> + unsigned long len = priv->num_pages * PAGE_SIZE;
> +
> + mutex_lock(&priv->lock);
> +
> + if (priv->vmap_cnt > 0)
> + flush_kernel_vmap_range(priv->vaddr, len);
> +
> + list_for_each_entry(a, &priv->attachments, head) {
> + if (!a->mapped)
> + continue;
> +
> + dma_sync_sgtable_for_device(a->dev, &a->table, direction);
> + }
> +
> + mutex_unlock(&priv->lock);
> +
> + return 0;
> +}
> +
> +static int carveout_heap_mmap(struct dma_buf *dmabuf,
> + struct vm_area_struct *vma)
> +{
> + struct carveout_heap_buffer_priv *priv = dmabuf->priv;
> + unsigned long len = priv->num_pages * PAGE_SIZE;
> + struct page *page = virt_to_page(priv->vaddr);
> +
> + return remap_pfn_range(vma, vma->vm_start, page_to_pfn(page),
> + len, vma->vm_page_prot);
> +}
> +
> +static int carveout_heap_vmap(struct dma_buf *dmabuf, struct iosys_map *map)
> +{
> + struct carveout_heap_buffer_priv *priv = dmabuf->priv;
> +
> + mutex_lock(&priv->lock);
> +
> + iosys_map_set_vaddr(map, priv->vaddr);
> + priv->vmap_cnt++;
> +
> + mutex_unlock(&priv->lock);
> +
> + return 0;
> +}
> +
> +static void carveout_heap_vunmap(struct dma_buf *dmabuf, struct iosys_map *map)
> +{
> + struct carveout_heap_buffer_priv *priv = dmabuf->priv;
> +
> + mutex_lock(&priv->lock);
> +
> + priv->vmap_cnt--;
> + mutex_unlock(&priv->lock);
> +
> + iosys_map_clear(map);
> +}
> +
> +static void carveout_heap_dma_buf_release(struct dma_buf *buf)
> +{
> + struct carveout_heap_buffer_priv *buffer_priv = buf->priv;
> + struct carveout_heap_priv *heap_priv = buffer_priv->heap;
> + unsigned long len = buffer_priv->num_pages * PAGE_SIZE;
> +
> + gen_pool_free(heap_priv->pool, (unsigned long)buffer_priv->vaddr, len);
> + kfree(buffer_priv);
> +}
> +
> +static const struct dma_buf_ops carveout_heap_buf_ops = {
> + .attach = carveout_heap_attach,
> + .detach = carveout_heap_detach,
> + .map_dma_buf = carveout_heap_map_dma_buf,
> + .unmap_dma_buf = carveout_heap_unmap_dma_buf,
> + .begin_cpu_access = carveout_heap_dma_buf_begin_cpu_access,
> + .end_cpu_access = carveout_heap_dma_buf_end_cpu_access,
> + .mmap = carveout_heap_mmap,
> + .vmap = carveout_heap_vmap,
> + .vunmap = carveout_heap_vunmap,
> + .release = carveout_heap_dma_buf_release,
> +};
> +
> +static struct dma_buf *carveout_heap_allocate(struct dma_heap *heap,
> + unsigned long len,
> + u32 fd_flags,
> + u64 heap_flags)
> +{
> + struct carveout_heap_priv *heap_priv = dma_heap_get_drvdata(heap);
> + struct carveout_heap_buffer_priv *buffer_priv;
> + DEFINE_DMA_BUF_EXPORT_INFO(exp_info);
> + struct dma_buf *buf;
> + dma_addr_t daddr;
> + size_t size = PAGE_ALIGN(len);
> + void *vaddr;
> + int ret;
> +
> + buffer_priv = kzalloc(sizeof(*buffer_priv), GFP_KERNEL);
> + if (!buffer_priv)
> + return ERR_PTR(-ENOMEM);
> +
> + INIT_LIST_HEAD(&buffer_priv->attachments);
> + mutex_init(&buffer_priv->lock);
> +
> + vaddr = gen_pool_dma_zalloc(heap_priv->pool, size, &daddr);
> + if (!vaddr) {
> + ret = -ENOMEM;
> + goto err_free_buffer_priv;
> + }
> +
> + buffer_priv->vaddr = vaddr;
> + buffer_priv->daddr = daddr;
> + buffer_priv->heap = heap_priv;
> + buffer_priv->num_pages = size >> PAGE_SHIFT;
> +
> + /* create the dmabuf */
> + exp_info.exp_name = dma_heap_get_name(heap);
> + exp_info.ops = &carveout_heap_buf_ops;
> + exp_info.size = size;
> + exp_info.flags = fd_flags;
> + exp_info.priv = buffer_priv;
> +
> + buf = dma_buf_export(&exp_info);
> + if (IS_ERR(buf)) {
> + ret = PTR_ERR(buf);
> + goto err_free_buffer;
> + }
> +
> + return buf;
> +
> +err_free_buffer:
> + gen_pool_free(heap_priv->pool, (unsigned long)vaddr, len);
> +err_free_buffer_priv:
> + kfree(buffer_priv);
> +
> + return ERR_PTR(ret);
> +}
> +
> +static const struct dma_heap_ops carveout_heap_ops = {
> + .allocate = carveout_heap_allocate,
> +};
> +
> +static int __init carveout_heap_setup(struct device_node *node)
> +{
> + struct dma_heap_export_info exp_info = {};
> + const struct reserved_mem *rmem;
> + struct carveout_heap_priv *priv;
> + struct dma_heap *heap;
> + struct gen_pool *pool;
> + void *base;
> + int ret;
> +
> + rmem = of_reserved_mem_lookup(node);
> + if (!rmem)
> + return -EINVAL;
> +
> + priv = kzalloc(sizeof(*priv), GFP_KERNEL);
> + if (!priv)
> + return -ENOMEM;
> +
> + pool = gen_pool_create(PAGE_SHIFT, NUMA_NO_NODE);
> + if (!pool) {
> + ret = -ENOMEM;
> + goto err_cleanup_heap;
> + }
> + priv->pool = pool;
> +
> + base = memremap(rmem->base, rmem->size, MEMREMAP_WB);
> + if (!base) {
> + ret = -ENOMEM;
> + goto err_release_mem_region;
> + }
> +
> + ret = gen_pool_add_virt(pool, (unsigned long)base, rmem->base,
> + rmem->size, NUMA_NO_NODE);
> + if (ret)
> + goto err_unmap;
> +
> + exp_info.name = node->full_name;
> + exp_info.ops = &carveout_heap_ops;
> + exp_info.priv = priv;
> +
> + heap = dma_heap_add(&exp_info);
> + if (IS_ERR(heap)) {
> + ret = PTR_ERR(heap);
> + goto err_cleanup_pool_region;
> + }
> + priv->heap = heap;
> +
> + return 0;
> +
> +err_cleanup_pool_region:
> + gen_pool_free(pool, (unsigned long)base, rmem->size);
> +err_unmap:
> + memunmap(base);
> +err_release_mem_region:
> + gen_pool_destroy(pool);
> +err_cleanup_heap:
> + kfree(priv);
> + return ret;
> +}
> +
> +static int __init carveout_heap_init(void)
> +{
> + struct device_node *rmem_node;
> + struct device_node *node;
> + int ret;
> +
> + rmem_node = of_find_node_by_path("/reserved-memory");
> + if (!rmem_node)
> + return 0;
> +
> + for_each_child_of_node(rmem_node, node) {
> + if (!of_property_read_bool(node, "export"))
> + continue;
> +
> + ret = carveout_heap_setup(node);
> + if (ret)
> + return ret;
> + }
> +
> + return 0;
> +}
> +
> +module_init(carveout_heap_init);
>
> --
> 2.49.0
* Re: [PATCH v3 2/2] dma-buf: heaps: Introduce a new heap for reserved memory
2025-04-07 16:29 ` [PATCH v3 2/2] dma-buf: heaps: Introduce a new heap for reserved memory Maxime Ripard
2025-04-10 7:42 ` Mattijs Korpershoek
@ 2025-04-11 20:26 ` T.J. Mercier
2025-04-14 17:43 ` Andrew Davis
1 sibling, 1 reply; 10+ messages in thread
From: T.J. Mercier @ 2025-04-11 20:26 UTC (permalink / raw)
To: Maxime Ripard
Cc: Rob Herring, Saravana Kannan, Sumit Semwal, Benjamin Gaignard,
Brian Starkey, John Stultz, Christian König,
Mattijs Korpershoek, devicetree, linux-kernel, linux-media,
dri-devel, linaro-mm-sig, Andrew Davis
On Mon, Apr 7, 2025 at 9:29 AM Maxime Ripard <mripard@kernel.org> wrote:
>
> Some reserved memory regions might have particular memory setup or
> attributes that make them good candidates for heaps.
>
> Let's provide a heap type that will create a new heap for each reserved
> memory region flagged as such.
>
> Signed-off-by: Maxime Ripard <mripard@kernel.org>
This patch looks good to me, but I think it'd be good to add more
justification like you did at
https://lore.kernel.org/all/20240515-dma-buf-ecc-heap-v1-0-54cbbd049511@kernel.org
> ---
> drivers/dma-buf/heaps/Kconfig | 8 +
> drivers/dma-buf/heaps/Makefile | 1 +
> drivers/dma-buf/heaps/carveout_heap.c | 360 ++++++++++++++++++++++++++++++++++
> 3 files changed, 369 insertions(+)
>
> diff --git a/drivers/dma-buf/heaps/Kconfig b/drivers/dma-buf/heaps/Kconfig
> index a5eef06c422644e8aadaf5aff2bd9a33c49c1ba3..c6981d696733b4d8d0c3f6f5a37d967fd6a1a4a2 100644
> --- a/drivers/dma-buf/heaps/Kconfig
> +++ b/drivers/dma-buf/heaps/Kconfig
> @@ -1,5 +1,13 @@
> +config DMABUF_HEAPS_CARVEOUT
> + bool "Carveout Heaps"
> + depends on DMABUF_HEAPS
> + help
> + Choose this option to enable the carveout dmabuf heap. The carveout
> + heap is backed by pages from reserved memory regions flagged as
> + exportable. If in doubt, say Y.
> +
> config DMABUF_HEAPS_SYSTEM
> bool "DMA-BUF System Heap"
> depends on DMABUF_HEAPS
> help
> Choose this option to enable the system dmabuf heap. The system heap
> diff --git a/drivers/dma-buf/heaps/Makefile b/drivers/dma-buf/heaps/Makefile
> index 974467791032ffb8a7aba17b1407d9a19b3f3b44..b734647ad5c84f449106748160258e372f153df2 100644
> --- a/drivers/dma-buf/heaps/Makefile
> +++ b/drivers/dma-buf/heaps/Makefile
> @@ -1,3 +1,4 @@
> # SPDX-License-Identifier: GPL-2.0
> +obj-$(CONFIG_DMABUF_HEAPS_CARVEOUT) += carveout_heap.o
> obj-$(CONFIG_DMABUF_HEAPS_SYSTEM) += system_heap.o
> obj-$(CONFIG_DMABUF_HEAPS_CMA) += cma_heap.o
> diff --git a/drivers/dma-buf/heaps/carveout_heap.c b/drivers/dma-buf/heaps/carveout_heap.c
> new file mode 100644
> index 0000000000000000000000000000000000000000..f7198b781ea57f4f60e554d917c9277e9a716b16
> --- /dev/null
> +++ b/drivers/dma-buf/heaps/carveout_heap.c
> @@ -0,0 +1,360 @@
> +// SPDX-License-Identifier: GPL-2.0
> +
> +#include <linux/dma-buf.h>
> +#include <linux/dma-heap.h>
> +#include <linux/genalloc.h>
> +#include <linux/highmem.h>
> +#include <linux/of_reserved_mem.h>
> +
> +struct carveout_heap_priv {
> + struct dma_heap *heap;
> + struct gen_pool *pool;
> +};
> +
> +struct carveout_heap_buffer_priv {
> + struct mutex lock;
> + struct list_head attachments;
> +
> + unsigned long num_pages;
> + struct carveout_heap_priv *heap;
> + dma_addr_t daddr;
> + void *vaddr;
> + unsigned int vmap_cnt;
> +};
> +
> +struct carveout_heap_attachment {
> + struct list_head head;
> + struct sg_table table;
> +
> + struct device *dev;
> + bool mapped;
> +};
> +
> +static int carveout_heap_attach(struct dma_buf *buf,
> + struct dma_buf_attachment *attachment)
> +{
> + struct carveout_heap_buffer_priv *priv = buf->priv;
> + struct carveout_heap_attachment *a;
> + struct sg_table *sgt;
> + unsigned long len = priv->num_pages * PAGE_SIZE;
> + int ret;
> +
> + a = kzalloc(sizeof(*a), GFP_KERNEL);
> + if (!a)
> + return -ENOMEM;
> + INIT_LIST_HEAD(&a->head);
> + a->dev = attachment->dev;
> + attachment->priv = a;
> +
> + sgt = &a->table;
> + ret = sg_alloc_table(sgt, 1, GFP_KERNEL);
> + if (ret)
> + goto err_cleanup_attach;
> +
> + sg_dma_address(sgt->sgl) = priv->daddr;
> + sg_dma_len(sgt->sgl) = len;
> +
> + mutex_lock(&priv->lock);
> + list_add(&a->head, &priv->attachments);
> + mutex_unlock(&priv->lock);
> +
> + return 0;
> +
> +err_cleanup_attach:
> + kfree(a);
> + return ret;
> +}
> +
> +static void carveout_heap_detach(struct dma_buf *dmabuf,
> + struct dma_buf_attachment *attachment)
> +{
> + struct carveout_heap_buffer_priv *priv = dmabuf->priv;
> + struct carveout_heap_attachment *a = attachment->priv;
> +
> + mutex_lock(&priv->lock);
> + list_del(&a->head);
> + mutex_unlock(&priv->lock);
> +
> + sg_free_table(&a->table);
> + kfree(a);
> +}
> +
> +static struct sg_table *
> +carveout_heap_map_dma_buf(struct dma_buf_attachment *attachment,
> + enum dma_data_direction direction)
> +{
> + struct carveout_heap_attachment *a = attachment->priv;
> + struct sg_table *table = &a->table;
> + int ret;
> +
> + ret = dma_map_sgtable(a->dev, table, direction, 0);
> + if (ret)
> + return ERR_PTR(-ENOMEM);
Not ERR_PTR(ret)? This is already converted to ENOMEM by
dma_buf_map_attachment before leaving the dmabuf code, but it might be
nice to retain the error type internally. The two existing heaps
aren't consistent about this, and I have a slight preference to
propagate the error here.
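i.e. something like:

	ret = dma_map_sgtable(a->dev, table, direction, 0);
	if (ret)
		return ERR_PTR(ret);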
> +
> + a->mapped = true;
> +
> + return table;
> +}
> +
> +static void carveout_heap_unmap_dma_buf(struct dma_buf_attachment *attachment,
> + struct sg_table *table,
> + enum dma_data_direction direction)
> +{
> + struct carveout_heap_attachment *a = attachment->priv;
> +
> + a->mapped = false;
> + dma_unmap_sgtable(a->dev, table, direction, 0);
> +}
> +
> +static int
> +carveout_heap_dma_buf_begin_cpu_access(struct dma_buf *dmabuf,
> + enum dma_data_direction direction)
> +{
> + struct carveout_heap_buffer_priv *priv = dmabuf->priv;
> + struct carveout_heap_attachment *a;
> + unsigned long len = priv->num_pages * PAGE_SIZE;
> +
> + mutex_lock(&priv->lock);
> +
> + if (priv->vmap_cnt > 0)
> + invalidate_kernel_vmap_range(priv->vaddr, len);
> +
> + list_for_each_entry(a, &priv->attachments, head) {
> + if (!a->mapped)
> + continue;
> +
> + dma_sync_sgtable_for_cpu(a->dev, &a->table, direction);
> + }
> +
> + mutex_unlock(&priv->lock);
> +
> + return 0;
> +}
> +
> +static int
> +carveout_heap_dma_buf_end_cpu_access(struct dma_buf *dmabuf,
> + enum dma_data_direction direction)
> +{
> + struct carveout_heap_buffer_priv *priv = dmabuf->priv;
> + struct carveout_heap_attachment *a;
> + unsigned long len = priv->num_pages * PAGE_SIZE;
> +
> + mutex_lock(&priv->lock);
> +
> + if (priv->vmap_cnt > 0)
> + flush_kernel_vmap_range(priv->vaddr, len);
> +
> + list_for_each_entry(a, &priv->attachments, head) {
> + if (!a->mapped)
> + continue;
> +
> + dma_sync_sgtable_for_device(a->dev, &a->table, direction);
> + }
> +
> + mutex_unlock(&priv->lock);
> +
> + return 0;
> +}
> +
> +static int carveout_heap_mmap(struct dma_buf *dmabuf,
> + struct vm_area_struct *vma)
> +{
> + struct carveout_heap_buffer_priv *priv = dmabuf->priv;
> + unsigned long len = priv->num_pages * PAGE_SIZE;
> + struct page *page = virt_to_page(priv->vaddr);
> +
> + return remap_pfn_range(vma, vma->vm_start, page_to_pfn(page),
> + len, vma->vm_page_prot);
> +}
> +
> +static int carveout_heap_vmap(struct dma_buf *dmabuf, struct iosys_map *map)
> +{
> + struct carveout_heap_buffer_priv *priv = dmabuf->priv;
> +
> + mutex_lock(&priv->lock);
> +
> + iosys_map_set_vaddr(map, priv->vaddr);
> + priv->vmap_cnt++;
> +
> + mutex_unlock(&priv->lock);
> +
> + return 0;
> +}
> +
> +static void carveout_heap_vunmap(struct dma_buf *dmabuf, struct iosys_map *map)
> +{
> + struct carveout_heap_buffer_priv *priv = dmabuf->priv;
> +
> + mutex_lock(&priv->lock);
> +
> + priv->vmap_cnt--;
> + mutex_unlock(&priv->lock);
> +
> + iosys_map_clear(map);
> +}
> +
> +static void carveout_heap_dma_buf_release(struct dma_buf *buf)
> +{
> + struct carveout_heap_buffer_priv *buffer_priv = buf->priv;
> + struct carveout_heap_priv *heap_priv = buffer_priv->heap;
> + unsigned long len = buffer_priv->num_pages * PAGE_SIZE;
> +
> + gen_pool_free(heap_priv->pool, (unsigned long)buffer_priv->vaddr, len);
> + kfree(buffer_priv);
> +}
> +
> +static const struct dma_buf_ops carveout_heap_buf_ops = {
> + .attach = carveout_heap_attach,
> + .detach = carveout_heap_detach,
> + .map_dma_buf = carveout_heap_map_dma_buf,
> + .unmap_dma_buf = carveout_heap_unmap_dma_buf,
> + .begin_cpu_access = carveout_heap_dma_buf_begin_cpu_access,
> + .end_cpu_access = carveout_heap_dma_buf_end_cpu_access,
> + .mmap = carveout_heap_mmap,
> + .vmap = carveout_heap_vmap,
> + .vunmap = carveout_heap_vunmap,
> + .release = carveout_heap_dma_buf_release,
> +};
> +
> +static struct dma_buf *carveout_heap_allocate(struct dma_heap *heap,
> + unsigned long len,
> + u32 fd_flags,
> + u64 heap_flags)
> +{
> + struct carveout_heap_priv *heap_priv = dma_heap_get_drvdata(heap);
> + struct carveout_heap_buffer_priv *buffer_priv;
> + DEFINE_DMA_BUF_EXPORT_INFO(exp_info);
> + struct dma_buf *buf;
> + dma_addr_t daddr;
> + size_t size = PAGE_ALIGN(len);
This PAGE_ALIGN is not needed since dma_heap_buffer_alloc() already
page-aligns the length before this function is called.
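(The core does, roughly:

	len = PAGE_ALIGN(len);
	if (!len)
		return -EINVAL;

before calling into the heap's allocate op.)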
> + void *vaddr;
> + int ret;
> +
> + buffer_priv = kzalloc(sizeof(*buffer_priv), GFP_KERNEL);
> + if (!buffer_priv)
> + return ERR_PTR(-ENOMEM);
> +
> + INIT_LIST_HEAD(&buffer_priv->attachments);
> + mutex_init(&buffer_priv->lock);
> +
> + vaddr = gen_pool_dma_zalloc(heap_priv->pool, size, &daddr);
> + if (!vaddr) {
> + ret = -ENOMEM;
> + goto err_free_buffer_priv;
> + }
> +
> + buffer_priv->vaddr = vaddr;
> + buffer_priv->daddr = daddr;
> + buffer_priv->heap = heap_priv;
> + buffer_priv->num_pages = size >> PAGE_SHIFT;
> +
> + /* create the dmabuf */
> + exp_info.exp_name = dma_heap_get_name(heap);
> + exp_info.ops = &carveout_heap_buf_ops;
> + exp_info.size = size;
> + exp_info.flags = fd_flags;
> + exp_info.priv = buffer_priv;
> +
> + buf = dma_buf_export(&exp_info);
> + if (IS_ERR(buf)) {
> + ret = PTR_ERR(buf);
> + goto err_free_buffer;
> + }
> +
> + return buf;
> +
> +err_free_buffer:
> + gen_pool_free(heap_priv->pool, (unsigned long)vaddr, len);
> +err_free_buffer_priv:
> + kfree(buffer_priv);
> +
> + return ERR_PTR(ret);
> +}
> +
> +static const struct dma_heap_ops carveout_heap_ops = {
> + .allocate = carveout_heap_allocate,
> +};
> +
> +static int __init carveout_heap_setup(struct device_node *node)
> +{
> + struct dma_heap_export_info exp_info = {};
> + const struct reserved_mem *rmem;
> + struct carveout_heap_priv *priv;
> + struct dma_heap *heap;
> + struct gen_pool *pool;
> + void *base;
> + int ret;
> +
> + rmem = of_reserved_mem_lookup(node);
> + if (!rmem)
> + return -EINVAL;
> +
> + priv = kzalloc(sizeof(*priv), GFP_KERNEL);
> + if (!priv)
> + return -ENOMEM;
> +
> + pool = gen_pool_create(PAGE_SHIFT, NUMA_NO_NODE);
> + if (!pool) {
> + ret = -ENOMEM;
> + goto err_cleanup_heap;
> + }
> + priv->pool = pool;
> +
> + base = memremap(rmem->base, rmem->size, MEMREMAP_WB);
> + if (!base) {
> + ret = -ENOMEM;
> + goto err_release_mem_region;
> + }
> +
> + ret = gen_pool_add_virt(pool, (unsigned long)base, rmem->base,
> + rmem->size, NUMA_NO_NODE);
> + if (ret)
> + goto err_unmap;
> +
> + exp_info.name = node->full_name;
> + exp_info.ops = &carveout_heap_ops;
> + exp_info.priv = priv;
> +
> + heap = dma_heap_add(&exp_info);
> + if (IS_ERR(heap)) {
> + ret = PTR_ERR(heap);
> + goto err_cleanup_pool_region;
> + }
> + priv->heap = heap;
> +
> + return 0;
> +
> +err_cleanup_pool_region:
> + gen_pool_free(pool, (unsigned long)base, rmem->size);
> +err_unmap:
> + memunmap(base);
> +err_release_mem_region:
> + gen_pool_destroy(pool);
> +err_cleanup_heap:
> + kfree(priv);
> + return ret;
> +}
> +
> +static int __init carveout_heap_init(void)
> +{
> + struct device_node *rmem_node;
> + struct device_node *node;
> + int ret;
> +
> + rmem_node = of_find_node_by_path("/reserved-memory");
> + if (!rmem_node)
> + return 0;
> +
> + for_each_child_of_node(rmem_node, node) {
> + if (!of_property_read_bool(node, "export"))
> + continue;
> +
> + ret = carveout_heap_setup(node);
> + if (ret)
> + return ret;
> + }
> +
> + return 0;
> +}
> +
> +module_init(carveout_heap_init);
>
> --
> 2.49.0
>
* Re: [PATCH v3 2/2] dma-buf: heaps: Introduce a new heap for reserved memory
2025-04-11 20:26 ` T.J. Mercier
@ 2025-04-14 17:43 ` Andrew Davis
2025-04-25 7:33 ` Maxime Ripard
0 siblings, 1 reply; 10+ messages in thread
From: Andrew Davis @ 2025-04-14 17:43 UTC (permalink / raw)
To: T.J. Mercier, Maxime Ripard
Cc: Rob Herring, Saravana Kannan, Sumit Semwal, Benjamin Gaignard,
Brian Starkey, John Stultz, Christian König,
Mattijs Korpershoek, devicetree, linux-kernel, linux-media,
dri-devel, linaro-mm-sig
On 4/11/25 3:26 PM, T.J. Mercier wrote:
> On Mon, Apr 7, 2025 at 9:29 AM Maxime Ripard <mripard@kernel.org> wrote:
>>
>> Some reserved memory regions might have particular memory setup or
>> attributes that make them good candidates for heaps.
>>
>> Let's provide a heap type that will create a new heap for each reserved
>> memory region flagged as such.
>>
>> Signed-off-by: Maxime Ripard <mripard@kernel.org>
>
> This patch looks good to me, but I think it'd be good to add more
> justification like you did at
> https://lore.kernel.org/all/20240515-dma-buf-ecc-heap-v1-0-54cbbd049511@kernel.org
>
>> ---
>> drivers/dma-buf/heaps/Kconfig | 8 +
>> drivers/dma-buf/heaps/Makefile | 1 +
>> drivers/dma-buf/heaps/carveout_heap.c | 360 ++++++++++++++++++++++++++++++++++
>> 3 files changed, 369 insertions(+)
>>
>> diff --git a/drivers/dma-buf/heaps/Kconfig b/drivers/dma-buf/heaps/Kconfig
>> index a5eef06c422644e8aadaf5aff2bd9a33c49c1ba3..c6981d696733b4d8d0c3f6f5a37d967fd6a1a4a2 100644
>> --- a/drivers/dma-buf/heaps/Kconfig
>> +++ b/drivers/dma-buf/heaps/Kconfig
>> @@ -1,5 +1,13 @@
>> +config DMABUF_HEAPS_CARVEOUT
>> + bool "Carveout Heaps"
>> + depends on DMABUF_HEAPS
>> + help
>> + Choose this option to enable the carveout dmabuf heap. The carveout
>> + heap is backed by pages from reserved memory regions flagged as
>> + exportable. If in doubt, say Y.
>> +
>> config DMABUF_HEAPS_SYSTEM
>> bool "DMA-BUF System Heap"
>> depends on DMABUF_HEAPS
>> help
>> Choose this option to enable the system dmabuf heap. The system heap
>> diff --git a/drivers/dma-buf/heaps/Makefile b/drivers/dma-buf/heaps/Makefile
>> index 974467791032ffb8a7aba17b1407d9a19b3f3b44..b734647ad5c84f449106748160258e372f153df2 100644
>> --- a/drivers/dma-buf/heaps/Makefile
>> +++ b/drivers/dma-buf/heaps/Makefile
>> @@ -1,3 +1,4 @@
>> # SPDX-License-Identifier: GPL-2.0
>> +obj-$(CONFIG_DMABUF_HEAPS_CARVEOUT) += carveout_heap.o
>> obj-$(CONFIG_DMABUF_HEAPS_SYSTEM) += system_heap.o
>> obj-$(CONFIG_DMABUF_HEAPS_CMA) += cma_heap.o
>> diff --git a/drivers/dma-buf/heaps/carveout_heap.c b/drivers/dma-buf/heaps/carveout_heap.c
>> new file mode 100644
>> index 0000000000000000000000000000000000000000..f7198b781ea57f4f60e554d917c9277e9a716b16
>> --- /dev/null
>> +++ b/drivers/dma-buf/heaps/carveout_heap.c
>> @@ -0,0 +1,360 @@
>> +// SPDX-License-Identifier: GPL-2.0
>> +
>> +#include <linux/dma-buf.h>
>> +#include <linux/dma-heap.h>
>> +#include <linux/genalloc.h>
>> +#include <linux/highmem.h>
>> +#include <linux/of_reserved_mem.h>
>> +
>> +struct carveout_heap_priv {
>> + struct dma_heap *heap;
>> + struct gen_pool *pool;
>> +};
>> +
>> +struct carveout_heap_buffer_priv {
>> + struct mutex lock;
>> + struct list_head attachments;
>> +
>> + unsigned long num_pages;
>> + struct carveout_heap_priv *heap;
>> + dma_addr_t daddr;
>> + void *vaddr;
>> + unsigned int vmap_cnt;
>> +};
>> +
>> +struct carveout_heap_attachment {
>> + struct list_head head;
>> + struct sg_table table;
>> +
>> + struct device *dev;
>> + bool mapped;
>> +};
>> +
>> +static int carveout_heap_attach(struct dma_buf *buf,
>> + struct dma_buf_attachment *attachment)
>> +{
>> + struct carveout_heap_buffer_priv *priv = buf->priv;
>> + struct carveout_heap_attachment *a;
>> + struct sg_table *sgt;
>> + unsigned long len = priv->num_pages * PAGE_SIZE;
>> + int ret;
>> +
>> + a = kzalloc(sizeof(*a), GFP_KERNEL);
>> + if (!a)
>> + return -ENOMEM;
>> + INIT_LIST_HEAD(&a->head);
>> + a->dev = attachment->dev;
>> + attachment->priv = a;
>> +
>> + sgt = &a->table;
>> + ret = sg_alloc_table(sgt, 1, GFP_KERNEL);
>> + if (ret)
>> + goto err_cleanup_attach;
>> +
>> + sg_dma_address(sgt->sgl) = priv->daddr;
>> + sg_dma_len(sgt->sgl) = len;
>> +
>> + mutex_lock(&priv->lock);
>> + list_add(&a->head, &priv->attachments);
>> + mutex_unlock(&priv->lock);
>> +
>> + return 0;
>> +
>> +err_cleanup_attach:
>> + kfree(a);
>> + return ret;
>> +}
>> +
>> +static void carveout_heap_detach(struct dma_buf *dmabuf,
>> + struct dma_buf_attachment *attachment)
>> +{
>> + struct carveout_heap_buffer_priv *priv = dmabuf->priv;
>> + struct carveout_heap_attachment *a = attachment->priv;
>> +
>> + mutex_lock(&priv->lock);
>> + list_del(&a->head);
>> + mutex_unlock(&priv->lock);
>> +
>> + sg_free_table(&a->table);
>> + kfree(a);
>> +}
>> +
>> +static struct sg_table *
>> +carveout_heap_map_dma_buf(struct dma_buf_attachment *attachment,
>> + enum dma_data_direction direction)
>> +{
>> + struct carveout_heap_attachment *a = attachment->priv;
>> + struct sg_table *table = &a->table;
>> + int ret;
>> +
>> + ret = dma_map_sgtable(a->dev, table, direction, 0);
>> + if (ret)
>> + return ERR_PTR(-ENOMEM);
>
> Not ERR_PTR(ret)? This is already converted to ENOMEM by
> dma_buf_map_attachment before leaving the dmabuf code, but it might be
> nice to retain the error type internally. The two existing heaps
> aren't consistent about this, and I have a slight preference to
> propagate the error here.
>
>> +
>> + a->mapped = true;
>> +
>> + return table;
>> +}
>> +
>> +static void carveout_heap_unmap_dma_buf(struct dma_buf_attachment *attachment,
>> + struct sg_table *table,
>> + enum dma_data_direction direction)
>> +{
>> + struct carveout_heap_attachment *a = attachment->priv;
>> +
>> + a->mapped = false;
>> + dma_unmap_sgtable(a->dev, table, direction, 0);
>> +}
>> +
>> +static int
>> +carveout_heap_dma_buf_begin_cpu_access(struct dma_buf *dmabuf,
>> + enum dma_data_direction direction)
>> +{
>> + struct carveout_heap_buffer_priv *priv = dmabuf->priv;
>> + struct carveout_heap_attachment *a;
>> + unsigned long len = priv->num_pages * PAGE_SIZE;
>> +
>> + mutex_lock(&priv->lock);
>> +
>> + if (priv->vmap_cnt > 0)
>> + invalidate_kernel_vmap_range(priv->vaddr, len);
>> +
>> + list_for_each_entry(a, &priv->attachments, head) {
>> + if (!a->mapped)
>> + continue;
>> +
>> + dma_sync_sgtable_for_cpu(a->dev, &a->table, direction);
>> + }
>> +
>> + mutex_unlock(&priv->lock);
>> +
>> + return 0;
>> +}
>> +
>> +static int
>> +carveout_heap_dma_buf_end_cpu_access(struct dma_buf *dmabuf,
>> + enum dma_data_direction direction)
>> +{
>> + struct carveout_heap_buffer_priv *priv = dmabuf->priv;
>> + struct carveout_heap_attachment *a;
>> + unsigned long len = priv->num_pages * PAGE_SIZE;
>> +
>> + mutex_lock(&priv->lock);
>> +
>> + if (priv->vmap_cnt > 0)
>> + flush_kernel_vmap_range(priv->vaddr, len);
>> +
>> + list_for_each_entry(a, &priv->attachments, head) {
>> + if (!a->mapped)
>> + continue;
>> +
>> + dma_sync_sgtable_for_device(a->dev, &a->table, direction);
>> + }
>> +
>> + mutex_unlock(&priv->lock);
>> +
>> + return 0;
>> +}
>> +
>> +static int carveout_heap_mmap(struct dma_buf *dmabuf,
>> + struct vm_area_struct *vma)
>> +{
>> + struct carveout_heap_buffer_priv *priv = dmabuf->priv;
>> + unsigned long len = priv->num_pages * PAGE_SIZE;
>> + struct page *page = virt_to_page(priv->vaddr);
>> +
>> + return remap_pfn_range(vma, vma->vm_start, page_to_pfn(page),
>> + len, vma->vm_page_prot);
>> +}
>> +
>> +static int carveout_heap_vmap(struct dma_buf *dmabuf, struct iosys_map *map)
>> +{
>> + struct carveout_heap_buffer_priv *priv = dmabuf->priv;
>> +
>> + mutex_lock(&priv->lock);
>> +
>> + iosys_map_set_vaddr(map, priv->vaddr);
>> + priv->vmap_cnt++;
>> +
>> + mutex_unlock(&priv->lock);
>> +
>> + return 0;
>> +}
>> +
>> +static void carveout_heap_vunmap(struct dma_buf *dmabuf, struct iosys_map *map)
>> +{
>> + struct carveout_heap_buffer_priv *priv = dmabuf->priv;
>> +
>> + mutex_lock(&priv->lock);
>> +
>> + priv->vmap_cnt--;
>> + mutex_unlock(&priv->lock);
>> +
>> + iosys_map_clear(map);
>> +}
>> +
>> +static void carveout_heap_dma_buf_release(struct dma_buf *buf)
>> +{
>> + struct carveout_heap_buffer_priv *buffer_priv = buf->priv;
>> + struct carveout_heap_priv *heap_priv = buffer_priv->heap;
>> + unsigned long len = buffer_priv->num_pages * PAGE_SIZE;
>> +
>> + gen_pool_free(heap_priv->pool, (unsigned long)buffer_priv->vaddr, len);
>> + kfree(buffer_priv);
>> +}
>> +
>> +static const struct dma_buf_ops carveout_heap_buf_ops = {
>> + .attach = carveout_heap_attach,
>> + .detach = carveout_heap_detach,
>> + .map_dma_buf = carveout_heap_map_dma_buf,
>> + .unmap_dma_buf = carveout_heap_unmap_dma_buf,
>> + .begin_cpu_access = carveout_heap_dma_buf_begin_cpu_access,
>> + .end_cpu_access = carveout_heap_dma_buf_end_cpu_access,
>> + .mmap = carveout_heap_mmap,
>> + .vmap = carveout_heap_vmap,
>> + .vunmap = carveout_heap_vunmap,
>> + .release = carveout_heap_dma_buf_release,
>> +};
>> +
>> +static struct dma_buf *carveout_heap_allocate(struct dma_heap *heap,
>> + unsigned long len,
>> + u32 fd_flags,
>> + u64 heap_flags)
>> +{
>> + struct carveout_heap_priv *heap_priv = dma_heap_get_drvdata(heap);
>> + struct carveout_heap_buffer_priv *buffer_priv;
>> + DEFINE_DMA_BUF_EXPORT_INFO(exp_info);
>> + struct dma_buf *buf;
>> + dma_addr_t daddr;
>> + size_t size = PAGE_ALIGN(len);
>
> This PAGE_ALIGN is not needed since dma_heap_buffer_alloc requires all
> heap allocations to be page aligned before this function is called.
>
>
>
>
>
>> + void *vaddr;
>> + int ret;
>> +
>> + buffer_priv = kzalloc(sizeof(*buffer_priv), GFP_KERNEL);
>> + if (!buffer_priv)
>> + return ERR_PTR(-ENOMEM);
>> +
>> + INIT_LIST_HEAD(&buffer_priv->attachments);
>> + mutex_init(&buffer_priv->lock);
>> +
>> + vaddr = gen_pool_dma_zalloc(heap_priv->pool, size, &daddr);
>> + if (!vaddr) {
>> + ret = -ENOMEM;
>> + goto err_free_buffer_priv;
>> + }
>> +
>> + buffer_priv->vaddr = vaddr;
>> + buffer_priv->daddr = daddr;
>> + buffer_priv->heap = heap_priv;
>> + buffer_priv->num_pages = size >> PAGE_SHIFT;
>> +
>> + /* create the dmabuf */
>> + exp_info.exp_name = dma_heap_get_name(heap);
>> + exp_info.ops = &carveout_heap_buf_ops;
>> + exp_info.size = size;
>> + exp_info.flags = fd_flags;
>> + exp_info.priv = buffer_priv;
>> +
>> + buf = dma_buf_export(&exp_info);
>> + if (IS_ERR(buf)) {
>> + ret = PTR_ERR(buf);
>> + goto err_free_buffer;
>> + }
>> +
>> + return buf;
>> +
>> +err_free_buffer:
>> + gen_pool_free(heap_priv->pool, (unsigned long)vaddr, len);
>> +err_free_buffer_priv:
>> + kfree(buffer_priv);
>> +
>> + return ERR_PTR(ret);
>> +}
>> +
>> +static const struct dma_heap_ops carveout_heap_ops = {
>> + .allocate = carveout_heap_allocate,
>> +};
>> +
>> +static int __init carveout_heap_setup(struct device_node *node)
>> +{
>> + struct dma_heap_export_info exp_info = {};
>> + const struct reserved_mem *rmem;
>> + struct carveout_heap_priv *priv;
>> + struct dma_heap *heap;
>> + struct gen_pool *pool;
>> + void *base;
>> + int ret;
>> +
>> + rmem = of_reserved_mem_lookup(node);
>> + if (!rmem)
>> + return -EINVAL;
>> +
>> + priv = kzalloc(sizeof(*priv), GFP_KERNEL);
>> + if (!priv)
>> + return -ENOMEM;
>> +
>> + pool = gen_pool_create(PAGE_SHIFT, NUMA_NO_NODE);
>> + if (!pool) {
>> + ret = -ENOMEM;
>> + goto err_cleanup_heap;
>> + }
>> + priv->pool = pool;
>> +
>> + base = memremap(rmem->base, rmem->size, MEMREMAP_WB);
Why add a mapping here? What if the carveout is never mapped by the CPU
(or maybe shouldn't be mapped for some reason)? Instead you could create
the mapping at map time. I do it that way in our evil vendor tree
version of this driver, for reference[0].
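
An untested sketch of what the vmap side could look like, assuming the
pool is reworked to hand out physical ranges (so daddr is the buffer's
physical address) and glossing over the zeroing currently done through
the setup-time mapping:

	static int carveout_heap_vmap(struct dma_buf *dmabuf,
				      struct iosys_map *map)
	{
		struct carveout_heap_buffer_priv *priv = dmabuf->priv;
		int ret = 0;

		mutex_lock(&priv->lock);

		/* Only create the CPU mapping when somebody asks for it */
		if (!priv->vmap_cnt) {
			priv->vaddr = memremap(priv->daddr,
					       priv->num_pages * PAGE_SIZE,
					       MEMREMAP_WB);
			if (!priv->vaddr) {
				ret = -ENOMEM;
				goto out_unlock;
			}
		}

		iosys_map_set_vaddr(map, priv->vaddr);
		priv->vmap_cnt++;

	out_unlock:
		mutex_unlock(&priv->lock);
		return ret;
	}

with the matching memunmap() in vunmap once vmap_cnt drops back to zero,
and something similar on the mmap path.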
>> + if (!base) {
>> + ret = -ENOMEM;
>> + goto err_release_mem_region;
>> + }
>> +
>> + ret = gen_pool_add_virt(pool, (unsigned long)base, rmem->base,
>> + rmem->size, NUMA_NO_NODE);
>> + if (ret)
>> + goto err_unmap;
>> +
>> + exp_info.name = node->full_name;
So this is the only part that concerns me. We really got the user-exposed
naming wrong with the CMA heap IMHO (it probably should have always been
called "default_cma" or something; instead it changes based on how the
default CMA area was defined).
If the name of the heap is how users select the heap, it needs to be
consistent. And naming it after the node makes the DT name into ABI. It
also means it will change based on the device, or even based on how the
region is created. What if this same reserved region is defined by ACPI
instead of DT in some cases, or from the kernel command line, etc.?
Makes for bad ABI :(
Maybe in addition to the "export" property, the DT node could have a
"heap-name" property that defines what name is presented to userspace. At
the very least that allows us to kick the can down the road until we can
figure out what good, portable heap names should look like.
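Something like this maybe (completely untested, with a hypothetical
"heap-name" property):

	const char *heap_name;

	/* Prefer an explicit name, fall back to the full node name */
	if (of_property_read_string(node, "heap-name", &heap_name))
		heap_name = node->full_name;

	exp_info.name = heap_name;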
Andrew
[0] https://git.ti.com/cgit/ti-linux-kernel/ti-linux-kernel/tree/drivers/dma-buf/heaps/carveout-heap.c?h=ti-linux-6.12.y
>> + exp_info.ops = &carveout_heap_ops;
>> + exp_info.priv = priv;
>> +
>> + heap = dma_heap_add(&exp_info);
>> + if (IS_ERR(heap)) {
>> + ret = PTR_ERR(heap);
>> + goto err_cleanup_pool_region;
>> + }
>> + priv->heap = heap;
>> +
>> + return 0;
>> +
>> +err_cleanup_pool_region:
>> + gen_pool_free(pool, (unsigned long)base, rmem->size);
>> +err_unmap:
>> + memunmap(base);
>> +err_release_mem_region:
>> + gen_pool_destroy(pool);
>> +err_cleanup_heap:
>> + kfree(priv);
>> + return ret;
>> +}
>> +
>> +static int __init carveout_heap_init(void)
>> +{
>> + struct device_node *rmem_node;
>> + struct device_node *node;
>> + int ret;
>> +
>> + rmem_node = of_find_node_by_path("/reserved-memory");
>> + if (!rmem_node)
>> + return 0;
>> +
>> + for_each_child_of_node(rmem_node, node) {
>> + if (!of_property_read_bool(node, "export"))
>> + continue;
>> +
>> + ret = carveout_heap_setup(node);
>> + if (ret)
>> + return ret;
>> + }
>> +
>> + return 0;
>> +}
>> +
>> +module_init(carveout_heap_init);
>>
>> --
>> 2.49.0
>>
^ permalink raw reply [flat|nested] 10+ messages in thread
* Re: [PATCH v3 2/2] dma-buf: heaps: Introduce a new heap for reserved memory
2025-04-14 17:43 ` Andrew Davis
@ 2025-04-25 7:33 ` Maxime Ripard
0 siblings, 0 replies; 10+ messages in thread
From: Maxime Ripard @ 2025-04-25 7:33 UTC (permalink / raw)
To: Andrew Davis
Cc: T.J. Mercier, Rob Herring, Saravana Kannan, Sumit Semwal,
Benjamin Gaignard, Brian Starkey, John Stultz,
Christian König, Mattijs Korpershoek, devicetree,
linux-kernel, linux-media, dri-devel, linaro-mm-sig
On Mon, Apr 14, 2025 at 12:43:44PM -0500, Andrew Davis wrote:
> On 4/11/25 3:26 PM, T.J. Mercier wrote:
> > On Mon, Apr 7, 2025 at 9:29 AM Maxime Ripard <mripard@kernel.org> wrote:
> > >
> > > Some reserved memory regions might have particular memory setup or
> > > attributes that make them good candidates for heaps.
> > >
> > > Let's provide a heap type that will create a new heap for each reserved
> > > memory region flagged as such.
> > >
> > > Signed-off-by: Maxime Ripard <mripard@kernel.org>
> >
> > This patch looks good to me, but I think it'd be good to add more
> > justification like you did at
> > https://lore.kernel.org/all/20240515-dma-buf-ecc-heap-v1-0-54cbbd049511@kernel.org
> >
> > > ---
> > > drivers/dma-buf/heaps/Kconfig | 8 +
> > > drivers/dma-buf/heaps/Makefile | 1 +
> > > drivers/dma-buf/heaps/carveout_heap.c | 360 ++++++++++++++++++++++++++++++++++
> > > 3 files changed, 369 insertions(+)
> > >
> > > diff --git a/drivers/dma-buf/heaps/Kconfig b/drivers/dma-buf/heaps/Kconfig
> > > index a5eef06c422644e8aadaf5aff2bd9a33c49c1ba3..c6981d696733b4d8d0c3f6f5a37d967fd6a1a4a2 100644
> > > --- a/drivers/dma-buf/heaps/Kconfig
> > > +++ b/drivers/dma-buf/heaps/Kconfig
> > > @@ -1,5 +1,13 @@
> > > +config DMABUF_HEAPS_CARVEOUT
> > > + bool "Carveout Heaps"
> > > + depends on DMABUF_HEAPS
> > > + help
> > > + Choose this option to enable the carveout dmabuf heap. The carveout
> > > + heap is backed by pages from reserved memory regions flagged as
> > > + exportable. If in doubt, say Y.
> > > +
> > > config DMABUF_HEAPS_SYSTEM
> > > bool "DMA-BUF System Heap"
> > > depends on DMABUF_HEAPS
> > > help
> > > Choose this option to enable the system dmabuf heap. The system heap
> > > diff --git a/drivers/dma-buf/heaps/Makefile b/drivers/dma-buf/heaps/Makefile
> > > index 974467791032ffb8a7aba17b1407d9a19b3f3b44..b734647ad5c84f449106748160258e372f153df2 100644
> > > --- a/drivers/dma-buf/heaps/Makefile
> > > +++ b/drivers/dma-buf/heaps/Makefile
> > > @@ -1,3 +1,4 @@
> > > # SPDX-License-Identifier: GPL-2.0
> > > +obj-$(CONFIG_DMABUF_HEAPS_CARVEOUT) += carveout_heap.o
> > > obj-$(CONFIG_DMABUF_HEAPS_SYSTEM) += system_heap.o
> > > obj-$(CONFIG_DMABUF_HEAPS_CMA) += cma_heap.o
> > > diff --git a/drivers/dma-buf/heaps/carveout_heap.c b/drivers/dma-buf/heaps/carveout_heap.c
> > > new file mode 100644
> > > index 0000000000000000000000000000000000000000..f7198b781ea57f4f60e554d917c9277e9a716b16
> > > --- /dev/null
> > > +++ b/drivers/dma-buf/heaps/carveout_heap.c
> > > @@ -0,0 +1,360 @@
> > > +// SPDX-License-Identifier: GPL-2.0
> > > +
> > > +#include <linux/dma-buf.h>
> > > +#include <linux/dma-heap.h>
> > > +#include <linux/genalloc.h>
> > > +#include <linux/highmem.h>
> > > +#include <linux/of_reserved_mem.h>
> > > +
> > > +struct carveout_heap_priv {
> > > + struct dma_heap *heap;
> > > + struct gen_pool *pool;
> > > +};
> > > +
> > > +struct carveout_heap_buffer_priv {
> > > + struct mutex lock;
> > > + struct list_head attachments;
> > > +
> > > + unsigned long num_pages;
> > > + struct carveout_heap_priv *heap;
> > > + dma_addr_t daddr;
> > > + void *vaddr;
> > > + unsigned int vmap_cnt;
> > > +};
> > > +
> > > +struct carveout_heap_attachment {
> > > + struct list_head head;
> > > + struct sg_table table;
> > > +
> > > + struct device *dev;
> > > + bool mapped;
> > > +};
> > > +
> > > +static int carveout_heap_attach(struct dma_buf *buf,
> > > + struct dma_buf_attachment *attachment)
> > > +{
> > > + struct carveout_heap_buffer_priv *priv = buf->priv;
> > > + struct carveout_heap_attachment *a;
> > > + struct sg_table *sgt;
> > > + unsigned long len = priv->num_pages * PAGE_SIZE;
> > > + int ret;
> > > +
> > > + a = kzalloc(sizeof(*a), GFP_KERNEL);
> > > + if (!a)
> > > + return -ENOMEM;
> > > + INIT_LIST_HEAD(&a->head);
> > > + a->dev = attachment->dev;
> > > + attachment->priv = a;
> > > +
> > > + sgt = &a->table;
> > > + ret = sg_alloc_table(sgt, 1, GFP_KERNEL);
> > > + if (ret)
> > > + goto err_cleanup_attach;
> > > +
> > > + sg_dma_address(sgt->sgl) = priv->daddr;
> > > + sg_dma_len(sgt->sgl) = len;
> > > +
> > > + mutex_lock(&priv->lock);
> > > + list_add(&a->head, &priv->attachments);
> > > + mutex_unlock(&priv->lock);
> > > +
> > > + return 0;
> > > +
> > > +err_cleanup_attach:
> > > + kfree(a);
> > > + return ret;
> > > +}
> > > +
> > > +static void carveout_heap_detach(struct dma_buf *dmabuf,
> > > + struct dma_buf_attachment *attachment)
> > > +{
> > > + struct carveout_heap_buffer_priv *priv = dmabuf->priv;
> > > + struct carveout_heap_attachment *a = attachment->priv;
> > > +
> > > + mutex_lock(&priv->lock);
> > > + list_del(&a->head);
> > > + mutex_unlock(&priv->lock);
> > > +
> > > + sg_free_table(&a->table);
> > > + kfree(a);
> > > +}
> > > +
> > > +static struct sg_table *
> > > +carveout_heap_map_dma_buf(struct dma_buf_attachment *attachment,
> > > + enum dma_data_direction direction)
> > > +{
> > > + struct carveout_heap_attachment *a = attachment->priv;
> > > + struct sg_table *table = &a->table;
> > > + int ret;
> > > +
> > > + ret = dma_map_sgtable(a->dev, table, direction, 0);
> > > + if (ret)
> > > + return ERR_PTR(-ENOMEM);
> >
> > Not ERR_PTR(ret)? This is already converted to ENOMEM by
> > dma_buf_map_attachment before leaving the dmabuf code, but it might be
> > nice to retain the error type internally. The two existing heaps
> > aren't consistent about this, and I have a slight preference to
> > propagate the error here.
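Fair point, propagating the error reads better to me too. A sketch of
what that would look like:

	ret = dma_map_sgtable(a->dev, table, direction, 0);
	if (ret)
		return ERR_PTR(ret);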
> >
> > > +
> > > + a->mapped = true;
> > > +
> > > + return table;
> > > +}
> > > +
> > > +static void carveout_heap_unmap_dma_buf(struct dma_buf_attachment *attachment,
> > > + struct sg_table *table,
> > > + enum dma_data_direction direction)
> > > +{
> > > + struct carveout_heap_attachment *a = attachment->priv;
> > > +
> > > + a->mapped = false;
> > > + dma_unmap_sgtable(a->dev, table, direction, 0);
> > > +}
> > > +
> > > +static int
> > > +carveout_heap_dma_buf_begin_cpu_access(struct dma_buf *dmabuf,
> > > + enum dma_data_direction direction)
> > > +{
> > > + struct carveout_heap_buffer_priv *priv = dmabuf->priv;
> > > + struct carveout_heap_attachment *a;
> > > + unsigned long len = priv->num_pages * PAGE_SIZE;
> > > +
> > > + mutex_lock(&priv->lock);
> > > +
> > > + if (priv->vmap_cnt > 0)
> > > + invalidate_kernel_vmap_range(priv->vaddr, len);
> > > +
> > > + list_for_each_entry(a, &priv->attachments, head) {
> > > + if (!a->mapped)
> > > + continue;
> > > +
> > > + dma_sync_sgtable_for_cpu(a->dev, &a->table, direction);
> > > + }
> > > +
> > > + mutex_unlock(&priv->lock);
> > > +
> > > + return 0;
> > > +}
> > > +
> > > +static int
> > > +carveout_heap_dma_buf_end_cpu_access(struct dma_buf *dmabuf,
> > > + enum dma_data_direction direction)
> > > +{
> > > + struct carveout_heap_buffer_priv *priv = dmabuf->priv;
> > > + struct carveout_heap_attachment *a;
> > > + unsigned long len = priv->num_pages * PAGE_SIZE;
> > > +
> > > + mutex_lock(&priv->lock);
> > > +
> > > + if (priv->vmap_cnt > 0)
> > > + flush_kernel_vmap_range(priv->vaddr, len);
> > > +
> > > + list_for_each_entry(a, &priv->attachments, head) {
> > > + if (!a->mapped)
> > > + continue;
> > > +
> > > + dma_sync_sgtable_for_device(a->dev, &a->table, direction);
> > > + }
> > > +
> > > + mutex_unlock(&priv->lock);
> > > +
> > > + return 0;
> > > +}
> > > +
> > > +static int carveout_heap_mmap(struct dma_buf *dmabuf,
> > > + struct vm_area_struct *vma)
> > > +{
> > > + struct carveout_heap_buffer_priv *priv = dmabuf->priv;
> > > + unsigned long len = priv->num_pages * PAGE_SIZE;
> > > + struct page *page = virt_to_page(priv->vaddr);
> > > +
> > > + return remap_pfn_range(vma, vma->vm_start, page_to_pfn(page),
> > > + len, vma->vm_page_prot);
> > > +}
> > > +
> > > +static int carveout_heap_vmap(struct dma_buf *dmabuf, struct iosys_map *map)
> > > +{
> > > + struct carveout_heap_buffer_priv *priv = dmabuf->priv;
> > > +
> > > + mutex_lock(&priv->lock);
> > > +
> > > + iosys_map_set_vaddr(map, priv->vaddr);
> > > + priv->vmap_cnt++;
> > > +
> > > + mutex_unlock(&priv->lock);
> > > +
> > > + return 0;
> > > +}
> > > +
> > > +static void carveout_heap_vunmap(struct dma_buf *dmabuf, struct iosys_map *map)
> > > +{
> > > + struct carveout_heap_buffer_priv *priv = dmabuf->priv;
> > > +
> > > + mutex_lock(&priv->lock);
> > > +
> > > + priv->vmap_cnt--;
> > > + mutex_unlock(&priv->lock);
> > > +
> > > + iosys_map_clear(map);
> > > +}
> > > +
> > > +static void carveout_heap_dma_buf_release(struct dma_buf *buf)
> > > +{
> > > + struct carveout_heap_buffer_priv *buffer_priv = buf->priv;
> > > + struct carveout_heap_priv *heap_priv = buffer_priv->heap;
> > > + unsigned long len = buffer_priv->num_pages * PAGE_SIZE;
> > > +
> > > + gen_pool_free(heap_priv->pool, (unsigned long)buffer_priv->vaddr, len);
> > > + kfree(buffer_priv);
> > > +}
> > > +
> > > +static const struct dma_buf_ops carveout_heap_buf_ops = {
> > > + .attach = carveout_heap_attach,
> > > + .detach = carveout_heap_detach,
> > > + .map_dma_buf = carveout_heap_map_dma_buf,
> > > + .unmap_dma_buf = carveout_heap_unmap_dma_buf,
> > > + .begin_cpu_access = carveout_heap_dma_buf_begin_cpu_access,
> > > + .end_cpu_access = carveout_heap_dma_buf_end_cpu_access,
> > > + .mmap = carveout_heap_mmap,
> > > + .vmap = carveout_heap_vmap,
> > > + .vunmap = carveout_heap_vunmap,
> > > + .release = carveout_heap_dma_buf_release,
> > > +};
> > > +
> > > +static struct dma_buf *carveout_heap_allocate(struct dma_heap *heap,
> > > + unsigned long len,
> > > + u32 fd_flags,
> > > + u64 heap_flags)
> > > +{
> > > + struct carveout_heap_priv *heap_priv = dma_heap_get_drvdata(heap);
> > > + struct carveout_heap_buffer_priv *buffer_priv;
> > > + DEFINE_DMA_BUF_EXPORT_INFO(exp_info);
> > > + struct dma_buf *buf;
> > > + dma_addr_t daddr;
> > > + size_t size = PAGE_ALIGN(len);
> >
> > This PAGE_ALIGN is not needed since dma_heap_buffer_alloc requires all
> > heap allocations to be page aligned before this function is called.
> >
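Right, dma_heap_buffer_alloc() already aligns the length before calling
into the heap; if I remember correctly, drivers/dma-buf/dma-heap.c does
roughly:

	/*
	 * Allocations from all heaps have to begin
	 * and end on page boundaries.
	 */
	len = PAGE_ALIGN(len);
	if (!len)
		return -EINVAL;

so the PAGE_ALIGN() here is indeed redundant.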
> > > + void *vaddr;
> > > + int ret;
> > > +
> > > + buffer_priv = kzalloc(sizeof(*buffer_priv), GFP_KERNEL);
> > > + if (!buffer_priv)
> > > + return ERR_PTR(-ENOMEM);
> > > +
> > > + INIT_LIST_HEAD(&buffer_priv->attachments);
> > > + mutex_init(&buffer_priv->lock);
> > > +
> > > + vaddr = gen_pool_dma_zalloc(heap_priv->pool, size, &daddr);
> > > + if (!vaddr) {
> > > + ret = -ENOMEM;
> > > + goto err_free_buffer_priv;
> > > + }
> > > +
> > > + buffer_priv->vaddr = vaddr;
> > > + buffer_priv->daddr = daddr;
> > > + buffer_priv->heap = heap_priv;
> > > + buffer_priv->num_pages = size >> PAGE_SHIFT;
> > > +
> > > + /* create the dmabuf */
> > > + exp_info.exp_name = dma_heap_get_name(heap);
> > > + exp_info.ops = &carveout_heap_buf_ops;
> > > + exp_info.size = size;
> > > + exp_info.flags = fd_flags;
> > > + exp_info.priv = buffer_priv;
> > > +
> > > + buf = dma_buf_export(&exp_info);
> > > + if (IS_ERR(buf)) {
> > > + ret = PTR_ERR(buf);
> > > + goto err_free_buffer;
> > > + }
> > > +
> > > + return buf;
> > > +
> > > +err_free_buffer:
> > > + gen_pool_free(heap_priv->pool, (unsigned long)vaddr, len);
> > > +err_free_buffer_priv:
> > > + kfree(buffer_priv);
> > > +
> > > + return ERR_PTR(ret);
> > > +}
> > > +
> > > +static const struct dma_heap_ops carveout_heap_ops = {
> > > + .allocate = carveout_heap_allocate,
> > > +};
> > > +
> > > +static int __init carveout_heap_setup(struct device_node *node)
> > > +{
> > > + struct dma_heap_export_info exp_info = {};
> > > + const struct reserved_mem *rmem;
> > > + struct carveout_heap_priv *priv;
> > > + struct dma_heap *heap;
> > > + struct gen_pool *pool;
> > > + void *base;
> > > + int ret;
> > > +
> > > + rmem = of_reserved_mem_lookup(node);
> > > + if (!rmem)
> > > + return -EINVAL;
> > > +
> > > + priv = kzalloc(sizeof(*priv), GFP_KERNEL);
> > > + if (!priv)
> > > + return -ENOMEM;
> > > +
> > > + pool = gen_pool_create(PAGE_SHIFT, NUMA_NO_NODE);
> > > + if (!pool) {
> > > + ret = -ENOMEM;
> > > + goto err_cleanup_heap;
> > > + }
> > > + priv->pool = pool;
> > > +
> > > + base = memremap(rmem->base, rmem->size, MEMREMAP_WB);
>
> Why add a mapping here? What if the carveout is never mapped by the CPU
> (or shouldn't be mapped for some reason)? Instead you could create the
> mapping at map time. I do it that way in our evil vendor tree version of
> this driver, for reference[0].
Yeah, it's a good idea indeed.
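Something like this, I suppose (completely untested sketch: it assumes
the pool is switched to hand out physical addresses, that a paddr field
is added to the buffer private data, and that memunmap() happens on
release):

	static int carveout_heap_vmap(struct dma_buf *dmabuf,
				      struct iosys_map *map)
	{
		struct carveout_heap_buffer_priv *priv = dmabuf->priv;
		int ret = 0;

		mutex_lock(&priv->lock);

		/* Create the CPU mapping lazily, on first vmap only */
		if (!priv->vaddr) {
			priv->vaddr = memremap(priv->paddr,
					       priv->num_pages * PAGE_SIZE,
					       MEMREMAP_WB);
			if (!priv->vaddr) {
				ret = -ENOMEM;
				goto out_unlock;
			}
		}

		iosys_map_set_vaddr(map, priv->vaddr);
		priv->vmap_cnt++;

	out_unlock:
		mutex_unlock(&priv->lock);
		return ret;
	}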
> > > + if (!base) {
> > > + ret = -ENOMEM;
> > > + goto err_release_mem_region;
> > > + }
> > > +
> > > + ret = gen_pool_add_virt(pool, (unsigned long)base, rmem->base,
> > > + rmem->size, NUMA_NO_NODE);
> > > + if (ret)
> > > + goto err_unmap;
> > > +
> > > + exp_info.name = node->full_name;
>
> So this is the only part that concerns me. We really got the user-exposed
> naming wrong with the CMA heap IMHO (it probably should have always been
> called "default_cma" or something; instead it changes based on how the
> default CMA area was defined).
Hopefully that one will be fixed soon :)
> If the name of the heap is how users select the heap, it needs to be
> consistent. And naming it after the node makes the DT name into ABI. It
> also means it will change based on the device, or even based on how the
> region is created. What if this same reserved region is defined by ACPI
> instead of DT in some cases, or from the kernel command line, etc.?
> Makes for bad ABI :(
>
> Maybe in addition to the "export" property, the DT node could have a
> "heap-name" property that defines what name is presented to userspace. At
> the very least that allows us to kick the can down the road until we can
> figure out what good, portable heap names should look like.
I agree that CMA not having consistent naming was bad. However, it's
not really clear to me what would make a good name: do we want to
describe the region, the allocator, the attributes, or all of them? I
think we should clear that up and document it. Otherwise, even if we
have stable names, we'll never have good, consistent ones. Let alone
downstream.
My assumption so far was that we were describing the region. If that
assumption holds, then the full DT node name (so name@address, e.g.
framebuffer@80000000) might just be enough? It will be stable, it
describes the region in a way the platform would understand, and we
probably wouldn't have collisions.
What do you think?
Maxime
^ permalink raw reply [flat|nested] 10+ messages in thread
* Re: [PATCH v3 0/2] dma-buf: heaps: Support carved-out heaps
2025-04-07 16:29 [PATCH v3 0/2] dma-buf: heaps: Support carved-out heaps Maxime Ripard
2025-04-07 16:29 ` [PATCH v3 1/2] dma-buf: heaps: system: Remove global variable Maxime Ripard
2025-04-07 16:29 ` [PATCH v3 2/2] dma-buf: heaps: Introduce a new heap for reserved memory Maxime Ripard
@ 2025-04-25 7:55 ` Maxime Ripard
2 siblings, 0 replies; 10+ messages in thread
From: Maxime Ripard @ 2025-04-25 7:55 UTC (permalink / raw)
To: Rob Herring, Saravana Kannan, Sumit Semwal, Benjamin Gaignard,
Brian Starkey, John Stultz, T.J. Mercier, Christian König
Cc: Mattijs Korpershoek, devicetree, linux-kernel, linux-media,
dri-devel, linaro-mm-sig
Hi,
On Mon, Apr 07, 2025 at 06:29:06PM +0200, Maxime Ripard wrote:
> Hi,
>
> This series is the follow-up of the discussion that John and I had some
> time ago here:
>
> https://lore.kernel.org/all/CANDhNCquJn6bH3KxKf65BWiTYLVqSd9892-xtFDHHqqyrroCMQ@mail.gmail.com/
>
> The initial problem we were discussing was that I'm currently working on
> a platform which has a memory layout with ECC enabled. However, enabling
> the ECC has a number of drawbacks on that platform: lower performance,
> increased memory usage, etc. So for things like framebuffers, the
> trade-off isn't great and thus there's a memory region with ECC disabled
> to allocate from for such use cases.
>
> After a suggestion from John, I chose to first start using heap
> allocations flags to allow for userspace to ask for a particular ECC
> setup. This is then backed by a new heap type that runs from reserved
> memory chunks flagged as such, and the existing DT properties to specify
> the ECC properties.
>
> After further discussion, it was considered that flags were not the
> right solution, and relying on the names of the heaps would be enough to
> let userspace know the kind of buffer it deals with.
>
> Thus, even though the uAPI part of it has been dropped in this second
> version, we still need a driver to create heaps out of carved-out memory
> regions. In addition to the original usecase, a similar driver can be
> found in BSPs from most vendors, so I believe it would be a useful
> addition to the kernel.
>
> I submitted a draft PR to the DT schema for the bindings used in this
> PR:
> https://github.com/devicetree-org/dt-schema/pull/138
One thing the discussion about the CMA heap naming[1] with John made me
realize is that if we have a region that is both exported as a
carved-out heap and referenced by devices through reserved-memory, we
wouldn't use the same allocator in both cases.
It looks like we have four cases:
- We have a shared-dma-pool region with the reusable property: the
region is registered as a CMA area, and devices will allocate from
it.
- We have a shared-dma-pool region without the reusable property: the
region is registered as a coherent DMA area, and devices will
allocate from that pool.
- We have a restricted-dma-pool region: devices will allocate from
swiotlb.
- We have any other region: we can do whatever we want.
So the driver should only handle the fourth case, and will need to be
significantly more complicated to filter out the first three.
I'll work on that.
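Probably starting from a filter along these lines (untested sketch):

	static bool carveout_heap_can_export(struct device_node *node)
	{
		/* Registered as a CMA area or a coherent DMA pool */
		if (of_device_is_compatible(node, "shared-dma-pool"))
			return false;

		/* Backs swiotlb */
		if (of_device_is_compatible(node, "restricted-dma-pool"))
			return false;

		return of_property_read_bool(node, "export");
	}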
Maxime
1: https://lore.kernel.org/dri-devel/20250422191939.555963-1-jkangas@redhat.com/
^ permalink raw reply [flat|nested] 10+ messages in thread
end of thread, other threads:[~2025-04-25 7:56 UTC | newest]
Thread overview: 10+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2025-04-07 16:29 [PATCH v3 0/2] dma-buf: heaps: Support carved-out heaps Maxime Ripard
2025-04-07 16:29 ` [PATCH v3 1/2] dma-buf: heaps: system: Remove global variable Maxime Ripard
2025-04-07 17:49 ` Christian König
2025-04-08 8:43 ` Mattijs Korpershoek
2025-04-07 16:29 ` [PATCH v3 2/2] dma-buf: heaps: Introduce a new heap for reserved memory Maxime Ripard
2025-04-10 7:42 ` Mattijs Korpershoek
2025-04-11 20:26 ` T.J. Mercier
2025-04-14 17:43 ` Andrew Davis
2025-04-25 7:33 ` Maxime Ripard
2025-04-25 7:55 ` [PATCH v3 0/2] dma-buf: heaps: Support carved-out heaps Maxime Ripard
This is a public inbox; see mirroring instructions
for how to clone and mirror all data and code used for this inbox,
as well as URLs for NNTP newsgroup(s).