* Re: [PATCH v6 05/10] tee: implement restricted DMA-heap
       [not found] <CAHUa44FpsCVrbwj1=nsJVJFVJSF1kzKdWAkAMXRu6EdLrLvh8g@mail.gmail.com>
@ 2025-04-08  9:14 ` Sumit Garg
  2025-04-08 13:28 ` Jens Wiklander
  0 siblings, 1 reply; 9+ messages in thread
From: Sumit Garg @ 2025-04-08 9:14 UTC (permalink / raw)
To: op-tee

[-- Attachment #1: Type: text/plain, Size: 2634 bytes --]

On Tue, Apr 01, 2025 at 10:33:04AM +0200, Jens Wiklander wrote:
> On Tue, Apr 1, 2025 at 9:58 AM Sumit Garg <sumit.garg@kernel.org> wrote:
> >
> > On Tue, Mar 25, 2025 at 11:55:46AM +0100, Jens Wiklander wrote:
> > > Hi Sumit,
> > >
> >
> > <snip>
> >
> > > > > +
> > > > > +#include "tee_private.h"
> > > > > +
> > > > > +struct tee_dma_heap {
> > > > > +        struct dma_heap *heap;
> > > > > +        enum tee_dma_heap_id id;
> > > > > +        struct tee_rstmem_pool *pool;
> > > > > +        struct tee_device *teedev;
> > > > > +        /* Protects pool and teedev above */
> > > > > +        struct mutex mu;
> > > > > +};
> > > > > +
> > > > > +struct tee_heap_buffer {
> > > > > +        struct tee_rstmem_pool *pool;
> > > > > +        struct tee_device *teedev;
> > > > > +        size_t size;
> > > > > +        size_t offs;
> > > > > +        struct sg_table table;
> > > > > +};
> > > > > +
> > > > > +struct tee_heap_attachment {
> > > > > +        struct sg_table table;
> > > > > +        struct device *dev;
> > > > > +};
> > > > > +
> > > > > +struct tee_rstmem_static_pool {
> > > > > +        struct tee_rstmem_pool pool;
> > > > > +        struct gen_pool *gen_pool;
> > > > > +        phys_addr_t pa_base;
> > > > > +};
> > > > > +
> > > > > +#if !IS_MODULE(CONFIG_TEE) && IS_ENABLED(CONFIG_DMABUF_HEAPS)
> > > >
> > > > Can this dependency rather be better managed via Kconfig?
> > >
> > > This was the easiest yet somewhat flexible solution I could find. If
> > > you have something better, let's use that instead.
> > > >
> >
> > --- a/drivers/tee/optee/Kconfig
> > +++ b/drivers/tee/optee/Kconfig
> > @@ -5,6 +5,7 @@ config OPTEE
> >          depends on HAVE_ARM_SMCCC
> >          depends on MMU
> >          depends on RPMB || !RPMB
> > +        select DMABUF_HEAPS
> >          help
> >            This implements the OP-TEE Trusted Execution Environment (TEE)
> >            driver.
>
> I wanted to avoid that since there are plenty of use cases where
> DMABUF_HEAPS aren't needed.

Yeah, but how will users figure out the dependency needed to enable DMA
heaps with the TEE subsystem? So it's better that we provide a generic
kernel Kconfig option which enables all the default features.

> This seems to do the job:
> +config TEE_DMABUF_HEAP
> +        bool
> +        depends on TEE = y && DMABUF_HEAPS
>
> We can only use DMABUF_HEAPS if the TEE subsystem is compiled into the kernel.

Ah, I see. So we aren't exporting the DMA heap APIs for the TEE
subsystem to use. We should do that so that there isn't a hard
dependency to compile them into the kernel.

-Sumit

>
> Cheers,
> Jens

^ permalink raw reply	[flat|nested] 9+ messages in thread
* Re: [PATCH v6 05/10] tee: implement restricted DMA-heap
  2025-04-08  9:14 ` [PATCH v6 05/10] tee: implement restricted DMA-heap Sumit Garg
@ 2025-04-08 13:28 ` Jens Wiklander
  0 siblings, 0 replies; 9+ messages in thread
From: Jens Wiklander @ 2025-04-08 13:28 UTC (permalink / raw)
To: op-tee

[-- Attachment #1: Type: text/plain, Size: 3203 bytes --]

On Tue, Apr 8, 2025 at 11:14 AM Sumit Garg <sumit.garg@kernel.org> wrote:
>
> On Tue, Apr 01, 2025 at 10:33:04AM +0200, Jens Wiklander wrote:
> > On Tue, Apr 1, 2025 at 9:58 AM Sumit Garg <sumit.garg@kernel.org> wrote:
> > >
> > > On Tue, Mar 25, 2025 at 11:55:46AM +0100, Jens Wiklander wrote:
> > > > Hi Sumit,
> > > >
> > >
> > > <snip>
> > >
> > > > > > +
> > > > > > +#include "tee_private.h"
> > > > > > +
> > > > > > +struct tee_dma_heap {
> > > > > > +        struct dma_heap *heap;
> > > > > > +        enum tee_dma_heap_id id;
> > > > > > +        struct tee_rstmem_pool *pool;
> > > > > > +        struct tee_device *teedev;
> > > > > > +        /* Protects pool and teedev above */
> > > > > > +        struct mutex mu;
> > > > > > +};
> > > > > > +
> > > > > > +struct tee_heap_buffer {
> > > > > > +        struct tee_rstmem_pool *pool;
> > > > > > +        struct tee_device *teedev;
> > > > > > +        size_t size;
> > > > > > +        size_t offs;
> > > > > > +        struct sg_table table;
> > > > > > +};
> > > > > > +
> > > > > > +struct tee_heap_attachment {
> > > > > > +        struct sg_table table;
> > > > > > +        struct device *dev;
> > > > > > +};
> > > > > > +
> > > > > > +struct tee_rstmem_static_pool {
> > > > > > +        struct tee_rstmem_pool pool;
> > > > > > +        struct gen_pool *gen_pool;
> > > > > > +        phys_addr_t pa_base;
> > > > > > +};
> > > > > > +
> > > > > > +#if !IS_MODULE(CONFIG_TEE) && IS_ENABLED(CONFIG_DMABUF_HEAPS)
> > > > >
> > > > > Can this dependency rather be better managed via Kconfig?
> > > >
> > > > This was the easiest yet somewhat flexible solution I could find. If
> > > > you have something better, let's use that instead.
> > > > >
> > >
> > > --- a/drivers/tee/optee/Kconfig
> > > +++ b/drivers/tee/optee/Kconfig
> > > @@ -5,6 +5,7 @@ config OPTEE
> > >          depends on HAVE_ARM_SMCCC
> > >          depends on MMU
> > >          depends on RPMB || !RPMB
> > > +        select DMABUF_HEAPS
> > >          help
> > >            This implements the OP-TEE Trusted Execution Environment (TEE)
> > >            driver.
> >
> > I wanted to avoid that since there are plenty of use cases where
> > DMABUF_HEAPS aren't needed.
>
> Yeah, but how the users will figure out the dependency to enable DMA
> heaps with TEE subsystem.

I hope, without too much difficulty. They are after all looking for a
way to allocate memory from a DMA heap.

> So it's better we provide a generic kernel
> Kconfig which enables all the default features.

I disagree, it should be possible to configure without DMABUF_HEAPS if
desired.

> > This seems to do the job:
> > +config TEE_DMABUF_HEAP
> > +        bool
> > +        depends on TEE = y && DMABUF_HEAPS
> >
> > We can only use DMABUF_HEAPS if the TEE subsystem is compiled into the kernel.
>
> Ah, I see. So we aren't exporting the DMA heaps APIs for TEE subsystem
> to use. We should do that such that there isn't a hard dependency to
> compile them into the kernel.

I was saving that for a later patch set as a later problem. We may
save some time by not doing it now.

>
> -Sumit
>
> >
> > Cheers,
> > Jens

^ permalink raw reply	[flat|nested] 9+ messages in thread
[parent not found: <CAHUa44EGWuVPjoxpG-S66he=6dkvkwzxNewaGKVKXUxrO41ztg@mail.gmail.com>]
* Re: [PATCH v6 05/10] tee: implement restricted DMA-heap
       [not found] <CAHUa44EGWuVPjoxpG-S66he=6dkvkwzxNewaGKVKXUxrO41ztg@mail.gmail.com>
@ 2025-04-09 12:50 ` Sumit Garg
  2025-04-10  6:49 ` Jens Wiklander
  0 siblings, 1 reply; 9+ messages in thread
From: Sumit Garg @ 2025-04-09 12:50 UTC (permalink / raw)
To: op-tee

[-- Attachment #1: Type: text/plain, Size: 3963 bytes --]

On Tue, Apr 08, 2025 at 03:28:45PM +0200, Jens Wiklander wrote:
> On Tue, Apr 8, 2025 at 11:14 AM Sumit Garg <sumit.garg@kernel.org> wrote:
> >
> > On Tue, Apr 01, 2025 at 10:33:04AM +0200, Jens Wiklander wrote:
> > > On Tue, Apr 1, 2025 at 9:58 AM Sumit Garg <sumit.garg@kernel.org> wrote:
> > > >
> > > > On Tue, Mar 25, 2025 at 11:55:46AM +0100, Jens Wiklander wrote:
> > > > > Hi Sumit,
> > > > >
> > > >
> > > > <snip>
> > > >
> > > > > > > +
> > > > > > > +#include "tee_private.h"
> > > > > > > +
> > > > > > > +struct tee_dma_heap {
> > > > > > > +        struct dma_heap *heap;
> > > > > > > +        enum tee_dma_heap_id id;
> > > > > > > +        struct tee_rstmem_pool *pool;
> > > > > > > +        struct tee_device *teedev;
> > > > > > > +        /* Protects pool and teedev above */
> > > > > > > +        struct mutex mu;
> > > > > > > +};
> > > > > > > +
> > > > > > > +struct tee_heap_buffer {
> > > > > > > +        struct tee_rstmem_pool *pool;
> > > > > > > +        struct tee_device *teedev;
> > > > > > > +        size_t size;
> > > > > > > +        size_t offs;
> > > > > > > +        struct sg_table table;
> > > > > > > +};
> > > > > > > +
> > > > > > > +struct tee_heap_attachment {
> > > > > > > +        struct sg_table table;
> > > > > > > +        struct device *dev;
> > > > > > > +};
> > > > > > > +
> > > > > > > +struct tee_rstmem_static_pool {
> > > > > > > +        struct tee_rstmem_pool pool;
> > > > > > > +        struct gen_pool *gen_pool;
> > > > > > > +        phys_addr_t pa_base;
> > > > > > > +};
> > > > > > > +
> > > > > > > +#if !IS_MODULE(CONFIG_TEE) && IS_ENABLED(CONFIG_DMABUF_HEAPS)
> > > > > >
> > > > > > Can this dependency rather be better managed via Kconfig?
> > > > >
> > > > > This was the easiest yet somewhat flexible solution I could find. If
> > > > > you have something better, let's use that instead.
> > > > > >
> > > >
> > > > --- a/drivers/tee/optee/Kconfig
> > > > +++ b/drivers/tee/optee/Kconfig
> > > > @@ -5,6 +5,7 @@ config OPTEE
> > > >          depends on HAVE_ARM_SMCCC
> > > >          depends on MMU
> > > >          depends on RPMB || !RPMB
> > > > +        select DMABUF_HEAPS
> > > >          help
> > > >            This implements the OP-TEE Trusted Execution Environment (TEE)
> > > >            driver.
> > >
> > > I wanted to avoid that since there are plenty of use cases where
> > > DMABUF_HEAPS aren't needed.
> >
> > Yeah, but how the users will figure out the dependency to enable DMA
> > heaps with TEE subsystem.
>
> I hope, without too much difficulty. They are after all looking for a
> way to allocate memory from a DMA heap.
>
> > So it's better we provide a generic kernel
> > Kconfig which enables all the default features.
>
> I disagree, it should be possible to configure without DMABUF_HEAPS if desired.

It's hard to see a use-case for that additional compile-time option. If
you are worried about kernel size then those can be built as modules. On
the other hand the benefit is that we avoid ifdefery and provide sane
TEE defaults where features can be detected and enabled at runtime
instead.

> > > This seems to do the job:
> > > +config TEE_DMABUF_HEAP
> > > +        bool
> > > +        depends on TEE = y && DMABUF_HEAPS
> > >
> > > We can only use DMABUF_HEAPS if the TEE subsystem is compiled into the kernel.
> >
> > Ah, I see. So we aren't exporting the DMA heaps APIs for TEE subsystem
> > to use. We should do that such that there isn't a hard dependency to
> > compile them into the kernel.
>
> I was saving that for a later patch set as a later problem. We may
> save some time by not doing it now.

But I think it's not correct to just reuse internal APIs from the DMA
heaps subsystem without exporting them. It can be seen as an
inter-subsystem API contract breach. I hope it won't be an issue with
the DMA heap maintainers regarding export of those APIs.

-Sumit

^ permalink raw reply	[flat|nested] 9+ messages in thread
* Re: [PATCH v6 05/10] tee: implement restricted DMA-heap
  2025-04-09 12:50 ` Sumit Garg
@ 2025-04-10  6:49 ` Jens Wiklander
  0 siblings, 0 replies; 9+ messages in thread
From: Jens Wiklander @ 2025-04-10 6:49 UTC (permalink / raw)
To: op-tee

[-- Attachment #1: Type: text/plain, Size: 4685 bytes --]

On Wed, Apr 9, 2025 at 2:50 PM Sumit Garg <sumit.garg@kernel.org> wrote:
>
> On Tue, Apr 08, 2025 at 03:28:45PM +0200, Jens Wiklander wrote:
> > On Tue, Apr 8, 2025 at 11:14 AM Sumit Garg <sumit.garg@kernel.org> wrote:
> > >
> > > On Tue, Apr 01, 2025 at 10:33:04AM +0200, Jens Wiklander wrote:
> > > > On Tue, Apr 1, 2025 at 9:58 AM Sumit Garg <sumit.garg@kernel.org> wrote:
> > > > >
> > > > > On Tue, Mar 25, 2025 at 11:55:46AM +0100, Jens Wiklander wrote:
> > > > > > Hi Sumit,
> > > > > >
> > > > >
> > > > > <snip>
> > > > >
> > > > > > > > +
> > > > > > > > +#include "tee_private.h"
> > > > > > > > +
> > > > > > > > +struct tee_dma_heap {
> > > > > > > > +        struct dma_heap *heap;
> > > > > > > > +        enum tee_dma_heap_id id;
> > > > > > > > +        struct tee_rstmem_pool *pool;
> > > > > > > > +        struct tee_device *teedev;
> > > > > > > > +        /* Protects pool and teedev above */
> > > > > > > > +        struct mutex mu;
> > > > > > > > +};
> > > > > > > > +
> > > > > > > > +struct tee_heap_buffer {
> > > > > > > > +        struct tee_rstmem_pool *pool;
> > > > > > > > +        struct tee_device *teedev;
> > > > > > > > +        size_t size;
> > > > > > > > +        size_t offs;
> > > > > > > > +        struct sg_table table;
> > > > > > > > +};
> > > > > > > > +
> > > > > > > > +struct tee_heap_attachment {
> > > > > > > > +        struct sg_table table;
> > > > > > > > +        struct device *dev;
> > > > > > > > +};
> > > > > > > > +
> > > > > > > > +struct tee_rstmem_static_pool {
> > > > > > > > +        struct tee_rstmem_pool pool;
> > > > > > > > +        struct gen_pool *gen_pool;
> > > > > > > > +        phys_addr_t pa_base;
> > > > > > > > +};
> > > > > > > > +
> > > > > > > > +#if !IS_MODULE(CONFIG_TEE) && IS_ENABLED(CONFIG_DMABUF_HEAPS)
> > > > > > >
> > > > > > > Can this dependency rather be better managed via Kconfig?
> > > > > >
> > > > > > This was the easiest yet somewhat flexible solution I could find. If
> > > > > > you have something better, let's use that instead.
> > > > > > >
> > > > >
> > > > > --- a/drivers/tee/optee/Kconfig
> > > > > +++ b/drivers/tee/optee/Kconfig
> > > > > @@ -5,6 +5,7 @@ config OPTEE
> > > > >          depends on HAVE_ARM_SMCCC
> > > > >          depends on MMU
> > > > >          depends on RPMB || !RPMB
> > > > > +        select DMABUF_HEAPS
> > > > >          help
> > > > >            This implements the OP-TEE Trusted Execution Environment (TEE)
> > > > >            driver.
> > > >
> > > > I wanted to avoid that since there are plenty of use cases where
> > > > DMABUF_HEAPS aren't needed.
> > >
> > > Yeah, but how the users will figure out the dependency to enable DMA
> > > heaps with TEE subsystem.
> >
> > I hope, without too much difficulty. They are after all looking for a
> > way to allocate memory from a DMA heap.
> >
> > > So it's better we provide a generic kernel
> > > Kconfig which enables all the default features.
> >
> > I disagree, it should be possible to configure without DMABUF_HEAPS if desired.
>
> It's hard to see a use-case for that additional compile time option. If
> you are worried about kernel size then those can be built as modules. On
> the other hand the benifit is that we avoid ifdefery and providing sane
> TEE defaults where features can be detected and enabled at runtime
> instead.

My primary concern isn't kernel size, even if that isn't irrelevant. It
doesn't seem right to casually enable features that haven't been asked
for. In this case, it's not unreasonable or unexpected that DMABUF_HEAPS
must be explicitly enabled in the config if a heap interface is needed.
It's the same as before this patch set.

> > > > This seems to do the job:
> > > > +config TEE_DMABUF_HEAP
> > > > +        bool
> > > > +        depends on TEE = y && DMABUF_HEAPS
> > > >
> > > > We can only use DMABUF_HEAPS if the TEE subsystem is compiled into the kernel.
> > >
> > > Ah, I see. So we aren't exporting the DMA heaps APIs for TEE subsystem
> > > to use. We should do that such that there isn't a hard dependency to
> > > compile them into the kernel.
> >
> > I was saving that for a later patch set as a later problem. We may
> > save some time by not doing it now.
>
> But I think it's not a correct way to just reuse internal APIs from DMA
> heaps subsystem without exporting them. It can be seen as a inter
> subsystem API contract breach. I hope it won't be an issue with DMA heap
> maintainers regarding export of those APIs.

Fair enough. I'll add a patch in the next patch set for that. I guess
the same goes for CMA.

Cheers,
Jens

^ permalink raw reply	[flat|nested] 9+ messages in thread
[parent not found: <CAHUa44GjpHT5Nqo+Ar5jNYNPV-YJQYpLTCf=7oJ1o0VjP-t0nA@mail.gmail.com>]
* Re: [PATCH v6 05/10] tee: implement restricted DMA-heap
       [not found] <CAHUa44GjpHT5Nqo+Ar5jNYNPV-YJQYpLTCf=7oJ1o0VjP-t0nA@mail.gmail.com>
@ 2025-04-01  7:58 ` Sumit Garg
  2025-04-01  8:33 ` Jens Wiklander
  0 siblings, 1 reply; 9+ messages in thread
From: Sumit Garg @ 2025-04-01 7:58 UTC (permalink / raw)
To: op-tee

[-- Attachment #1: Type: text/plain, Size: 1516 bytes --]

On Tue, Mar 25, 2025 at 11:55:46AM +0100, Jens Wiklander wrote:
> Hi Sumit,
>

<snip>

> > > +
> > > +#include "tee_private.h"
> > > +
> > > +struct tee_dma_heap {
> > > +        struct dma_heap *heap;
> > > +        enum tee_dma_heap_id id;
> > > +        struct tee_rstmem_pool *pool;
> > > +        struct tee_device *teedev;
> > > +        /* Protects pool and teedev above */
> > > +        struct mutex mu;
> > > +};
> > > +
> > > +struct tee_heap_buffer {
> > > +        struct tee_rstmem_pool *pool;
> > > +        struct tee_device *teedev;
> > > +        size_t size;
> > > +        size_t offs;
> > > +        struct sg_table table;
> > > +};
> > > +
> > > +struct tee_heap_attachment {
> > > +        struct sg_table table;
> > > +        struct device *dev;
> > > +};
> > > +
> > > +struct tee_rstmem_static_pool {
> > > +        struct tee_rstmem_pool pool;
> > > +        struct gen_pool *gen_pool;
> > > +        phys_addr_t pa_base;
> > > +};
> > > +
> > > +#if !IS_MODULE(CONFIG_TEE) && IS_ENABLED(CONFIG_DMABUF_HEAPS)
> >
> > Can this dependency rather be better managed via Kconfig?
>
> This was the easiest yet somewhat flexible solution I could find. If
> you have something better, let's use that instead.
>

--- a/drivers/tee/optee/Kconfig
+++ b/drivers/tee/optee/Kconfig
@@ -5,6 +5,7 @@ config OPTEE
         depends on HAVE_ARM_SMCCC
         depends on MMU
         depends on RPMB || !RPMB
+        select DMABUF_HEAPS
         help
           This implements the OP-TEE Trusted Execution Environment (TEE)
           driver.

-Sumit

^ permalink raw reply	[flat|nested] 9+ messages in thread
* Re: [PATCH v6 05/10] tee: implement restricted DMA-heap
  2025-04-01  7:58 ` Sumit Garg
@ 2025-04-01  8:33 ` Jens Wiklander
  0 siblings, 0 replies; 9+ messages in thread
From: Jens Wiklander @ 2025-04-01 8:33 UTC (permalink / raw)
To: op-tee

[-- Attachment #1: Type: text/plain, Size: 1975 bytes --]

On Tue, Apr 1, 2025 at 9:58 AM Sumit Garg <sumit.garg@kernel.org> wrote:
>
> On Tue, Mar 25, 2025 at 11:55:46AM +0100, Jens Wiklander wrote:
> > Hi Sumit,
> >
>
> <snip>
>
> > > > +
> > > > +#include "tee_private.h"
> > > > +
> > > > +struct tee_dma_heap {
> > > > +        struct dma_heap *heap;
> > > > +        enum tee_dma_heap_id id;
> > > > +        struct tee_rstmem_pool *pool;
> > > > +        struct tee_device *teedev;
> > > > +        /* Protects pool and teedev above */
> > > > +        struct mutex mu;
> > > > +};
> > > > +
> > > > +struct tee_heap_buffer {
> > > > +        struct tee_rstmem_pool *pool;
> > > > +        struct tee_device *teedev;
> > > > +        size_t size;
> > > > +        size_t offs;
> > > > +        struct sg_table table;
> > > > +};
> > > > +
> > > > +struct tee_heap_attachment {
> > > > +        struct sg_table table;
> > > > +        struct device *dev;
> > > > +};
> > > > +
> > > > +struct tee_rstmem_static_pool {
> > > > +        struct tee_rstmem_pool pool;
> > > > +        struct gen_pool *gen_pool;
> > > > +        phys_addr_t pa_base;
> > > > +};
> > > > +
> > > > +#if !IS_MODULE(CONFIG_TEE) && IS_ENABLED(CONFIG_DMABUF_HEAPS)
> > >
> > > Can this dependency rather be better managed via Kconfig?
> >
> > This was the easiest yet somewhat flexible solution I could find. If
> > you have something better, let's use that instead.
> >
>
> --- a/drivers/tee/optee/Kconfig
> +++ b/drivers/tee/optee/Kconfig
> @@ -5,6 +5,7 @@ config OPTEE
>          depends on HAVE_ARM_SMCCC
>          depends on MMU
>          depends on RPMB || !RPMB
> +        select DMABUF_HEAPS
>          help
>            This implements the OP-TEE Trusted Execution Environment (TEE)
>            driver.

I wanted to avoid that since there are plenty of use cases where
DMABUF_HEAPS aren't needed.

This seems to do the job:
+config TEE_DMABUF_HEAP
+        bool
+        depends on TEE = y && DMABUF_HEAPS

We can only use DMABUF_HEAPS if the TEE subsystem is compiled into the kernel.

Cheers,
Jens

^ permalink raw reply	[flat|nested] 9+ messages in thread
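To make the dependency discoverable while keeping it optional — the middle ground the thread circles around — one possible shape is to keep TEE_DMABUF_HEAP as the hidden helper symbol from the snippet above and add a user-visible option that documents the DMABUF_HEAPS requirement. This is only an illustrative sketch; the option name OPTEE_RSTMEM_HEAP is hypothetical and not part of the patch set:

```kconfig
# Hypothetical sketch, not part of the series: TEE_DMABUF_HEAP stays a
# hidden helper symbol, while a visible option makes the DMABUF_HEAPS
# dependency discoverable in menuconfig.
config TEE_DMABUF_HEAP
	bool
	depends on TEE = y && DMABUF_HEAPS
	default y

config OPTEE_RSTMEM_HEAP
	bool "Restricted memory DMA-heaps for OP-TEE"
	depends on OPTEE && TEE_DMABUF_HEAP
	help
	  Expose TEE-managed restricted memory pools as DMA-heaps so that
	  user space can allocate restricted buffers for use-cases such as
	  secure video playback.
```

The C code would then test a single symbol (`#ifdef CONFIG_TEE_DMABUF_HEAP`) instead of the combined `!IS_MODULE(CONFIG_TEE) && IS_ENABLED(CONFIG_DMABUF_HEAPS)` guard.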
* [PATCH v6 00/10] TEE subsystem for restricted dma-buf allocations
@ 2025-03-05 13:04 Jens Wiklander
2025-03-05 13:04 ` [PATCH v6 05/10] tee: implement restricted DMA-heap Jens Wiklander
0 siblings, 1 reply; 9+ messages in thread
From: Jens Wiklander @ 2025-03-05 13:04 UTC (permalink / raw)
To: op-tee
[-- Attachment #1: Type: text/plain, Size: 7355 bytes --]
Hi,
This patch set allocates the restricted DMA-bufs from a DMA-heap
instantiated from the TEE subsystem.
The TEE subsystem handles the DMA-buf allocations since it is the TEE
(OP-TEE, AMD-TEE, TS-TEE, or perhaps a future QTEE) which sets up the
restrictions for the memory used for the DMA-bufs.
The DMA-heap uses a restricted memory pool provided by the backend TEE
driver, allowing it to choose how to allocate the restricted physical
memory.
The allocated DMA-bufs must be imported with a new TEE_IOC_SHM_REGISTER_FD
before they can be passed as arguments when requesting services from the
secure world.
Three use-cases (Secure Video Playback, Trusted UI, and Secure Video
Recording) have been identified so far to serve as examples of what can be
expected. The use-cases have predefined DMA-heap names,
"restricted,secure-video", "restricted,trusted-ui", and
"restricted,secure-video-record". The backend driver registers restricted
memory pools for the use-cases it supports.
Each use-case has its own restricted memory pool since different use-cases
require isolation from different parts of the system. A restricted memory
pool can be based on a static carveout instantiated while probing the TEE
backend driver, or dynamically allocated from CMA and made restricted as
needed by the TEE.
This can be tested on a RockPi 4B+ with the following steps:
repo init -u https://github.com/jenswi-linaro/manifest.git -m rockpi4.xml \
-b prototype/sdp-v6
repo sync -j8
cd build
make toolchains -j$(nproc)
make all -j$(nproc)
# Copy ../out/rockpi4.img to an SD card and boot the RockPi from that
# Connect a monitor to the RockPi
# login and at the prompt:
gst-launch-1.0 videotestsrc ! \
aesenc key=1f9423681beb9a79215820f6bda73d0f \
iv=e9aa8e834d8d70b7e0d254ff670dd718 serialize-iv=true ! \
aesdec key=1f9423681beb9a79215820f6bda73d0f ! \
kmssink
The aesdec module has been hacked to use an OP-TEE TA to decrypt the stream
into restricted DMA-bufs which are consumed by the kmssink.
The primitive QEMU tests from the previous patch set can be run on the RockPi
in the same way with:
xtest --sdp-basic
The primitive tests are run on QEMU with the following steps:
repo init -u https://github.com/jenswi-linaro/manifest.git -m qemu_v8.xml \
-b prototype/sdp-v6
repo sync -j8
cd build
make toolchains -j$(nproc)
make SPMC_AT_EL=1 all -j$(nproc)
make SPMC_AT_EL=1 run-only
# login and at the prompt:
xtest --sdp-basic
The SPMC_AT_EL=1 parameter configures the build with FF-A and an SPMC at
S-EL1 inside OP-TEE. The parameter can be changed into SPMC_AT_EL=n to test
without FF-A using the original SMC ABI instead. Please remember to do
%rm -rf ../trusted-firmware-a/build/qemu
for TF-A to be rebuilt properly using the new configuration.
https://optee.readthedocs.io/en/latest/building/prerequisites.html
lists the dependencies needed to build the above.
The tests are pretty basic, mostly checking that a Trusted Application in
the secure world can access and manipulate the memory. There are also some
negative tests for out of bounds buffers etc.
Thanks,
Jens
Changes since V5:
* Removing "tee: add restricted memory allocation" and
"tee: add TEE_IOC_RSTMEM_FD_INFO"
* Adding "tee: implement restricted DMA-heap",
"tee: new ioctl to a register tee_shm from a dmabuf file descriptor",
"tee: add tee_shm_alloc_cma_phys_mem()",
"optee: pass parent device to tee_device_alloc()", and
"tee: tee_device_alloc(): copy dma_mask from parent device"
* The two TEE driver OPs "rstmem_alloc()" and "rstmem_free()" are replaced
with a struct tee_rstmem_pool abstraction.
* Replaced the TEE_IOC_RSTMEM_ALLOC user space API with the DMA-heap API
Changes since V4:
* Adding the patch "tee: add TEE_IOC_RSTMEM_FD_INFO" needed by the
GStreamer demo
* Removing the dummy CPU access and mmap functions from the dma_buf_ops
* Fixing a compile error in "optee: FF-A: dynamic restricted memory allocation"
reported by kernel test robot <lkp@intel.com>
Changes since V3:
* Make the use_case and flags fields in struct tee_shm u32's instead of
u16's
* Add more description for TEE_IOC_RSTMEM_ALLOC in the header file
* Import namespace DMA_BUF in module tee, reported by lkp(a)intel.com
* Added a note in the commit message for "optee: account for direction
while converting parameters" why it's needed
* Factor out dynamic restricted memory allocation from
"optee: support restricted memory allocation" into two new commits
"optee: FF-A: dynamic restricted memory allocation" and
"optee: smc abi: dynamic restricted memory allocation"
* Guard CMA usage with #ifdef CONFIG_CMA, effectively disabling dynamic
restricted memory allocate if CMA isn't configured
Changes since the V2 RFC:
* Based on v6.12
* Replaced the flags for SVP and Trusted UI memory with a u32 field with
unique id for each use case
* Added dynamic allocation of restricted memory pools
* Added OP-TEE ABI both with and without FF-A for dynamic restricted memory
* Added support for FF-A with FFA_LEND
Changes since the V1 RFC:
* Based on v6.11
* Complete rewrite, replacing the restricted heap with TEE_IOC_RSTMEM_ALLOC
Changes since Olivier's post [2]:
* Based on Yong Wu's post [1] where much of dma-buf handling is done in
the generic restricted heap
* Simplifications and cleanup
* New commit message for "dma-buf: heaps: add Linaro restricted dmabuf heap
support"
* Replaced the word "secure" with "restricted" where applicable
Etienne Carriere (1):
tee: new ioctl to a register tee_shm from a dmabuf file descriptor
Jens Wiklander (9):
tee: tee_device_alloc(): copy dma_mask from parent device
optee: pass parent device to tee_device_alloc()
optee: account for direction while converting parameters
optee: sync secure world ABI headers
tee: implement restricted DMA-heap
tee: add tee_shm_alloc_cma_phys_mem()
optee: support restricted memory allocation
optee: FF-A: dynamic restricted memory allocation
optee: smc abi: dynamic restricted memory allocation
drivers/tee/Makefile | 1 +
drivers/tee/optee/Makefile | 1 +
drivers/tee/optee/call.c | 10 +-
drivers/tee/optee/core.c | 1 +
drivers/tee/optee/ffa_abi.c | 194 +++++++++++-
drivers/tee/optee/optee_ffa.h | 27 +-
drivers/tee/optee/optee_msg.h | 65 ++++-
drivers/tee/optee/optee_private.h | 55 +++-
drivers/tee/optee/optee_smc.h | 71 ++++-
drivers/tee/optee/rpc.c | 31 +-
drivers/tee/optee/rstmem.c | 329 +++++++++++++++++++++
drivers/tee/optee/smc_abi.c | 190 ++++++++++--
drivers/tee/tee_core.c | 147 +++++++---
drivers/tee/tee_heap.c | 470 ++++++++++++++++++++++++++++++
drivers/tee/tee_private.h | 7 +
drivers/tee/tee_shm.c | 199 ++++++++++++-
include/linux/tee_core.h | 67 +++++
include/linux/tee_drv.h | 10 +
include/uapi/linux/tee.h | 29 ++
19 files changed, 1781 insertions(+), 123 deletions(-)
create mode 100644 drivers/tee/optee/rstmem.c
create mode 100644 drivers/tee/tee_heap.c
base-commit: 7eb172143d5508b4da468ed59ee857c6e5e01da6
--
2.43.0
^ permalink raw reply	[flat|nested] 9+ messages in thread

* [PATCH v6 05/10] tee: implement restricted DMA-heap
  2025-03-05 13:04 [PATCH v6 00/10] TEE subsystem for restricted dma-buf allocations Jens Wiklander
@ 2025-03-05 13:04 ` Jens Wiklander
  2025-03-25  6:33 ` Sumit Garg
  0 siblings, 1 reply; 9+ messages in thread
From: Jens Wiklander @ 2025-03-05 13:04 UTC (permalink / raw)
To: op-tee

[-- Attachment #1: Type: text/plain, Size: 17123 bytes --]

Implement DMA heap for restricted DMA-buf allocation in the TEE
subsystem.

Restricted memory refers to memory buffers behind a hardware enforced
firewall. It is not accessible to the kernel during normal circumstances
but rather only accessible to certain hardware IPs or CPUs executing in
a higher or differently privileged mode than the kernel itself. This
interface allows allocating and managing such restricted memory buffers
via interaction with a TEE implementation.

The restricted memory is allocated for a specific use-case, like Secure
Video Playback, Trusted UI, or Secure Video Recording, where certain
hardware devices can access the memory.

The DMA-heaps are enabled explicitly by the TEE backend driver. The TEE
backend driver needs to implement a restricted memory pool to manage the
restricted memory.

Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
---
 drivers/tee/Makefile      |   1 +
 drivers/tee/tee_heap.c    | 470 ++++++++++++++++++++++++++++++++++++++
 drivers/tee/tee_private.h |   6 +
 include/linux/tee_core.h  |  62 +++++
 4 files changed, 539 insertions(+)
 create mode 100644 drivers/tee/tee_heap.c

diff --git a/drivers/tee/Makefile b/drivers/tee/Makefile
index 5488cba30bd2..949a6a79fb06 100644
--- a/drivers/tee/Makefile
+++ b/drivers/tee/Makefile
@@ -1,6 +1,7 @@
 # SPDX-License-Identifier: GPL-2.0
 obj-$(CONFIG_TEE) += tee.o
 tee-objs += tee_core.o
+tee-objs += tee_heap.o
 tee-objs += tee_shm.o
 tee-objs += tee_shm_pool.o
 obj-$(CONFIG_OPTEE) += optee/
diff --git a/drivers/tee/tee_heap.c b/drivers/tee/tee_heap.c
new file mode 100644
index 000000000000..476ab2e27260
--- /dev/null
+++ b/drivers/tee/tee_heap.c
@@ -0,0 +1,470 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright (c) 2025, Linaro Limited
+ */
+
+#include <linux/scatterlist.h>
+#include <linux/dma-buf.h>
+#include <linux/dma-heap.h>
+#include <linux/genalloc.h>
+#include <linux/module.h>
+#include <linux/scatterlist.h>
+#include <linux/slab.h>
+#include <linux/tee_core.h>
+#include <linux/xarray.h>
+
+#include "tee_private.h"
+
+struct tee_dma_heap {
+	struct dma_heap *heap;
+	enum tee_dma_heap_id id;
+	struct tee_rstmem_pool *pool;
+	struct tee_device *teedev;
+	/* Protects pool and teedev above */
+	struct mutex mu;
+};
+
+struct tee_heap_buffer {
+	struct tee_rstmem_pool *pool;
+	struct tee_device *teedev;
+	size_t size;
+	size_t offs;
+	struct sg_table table;
+};
+
+struct tee_heap_attachment {
+	struct sg_table table;
+	struct device *dev;
+};
+
+struct tee_rstmem_static_pool {
+	struct tee_rstmem_pool pool;
+	struct gen_pool *gen_pool;
+	phys_addr_t pa_base;
+};
+
+#if !IS_MODULE(CONFIG_TEE) && IS_ENABLED(CONFIG_DMABUF_HEAPS)
+static DEFINE_XARRAY_ALLOC(tee_dma_heap);
+
+static int copy_sg_table(struct sg_table *dst, struct sg_table *src)
+{
+	struct scatterlist *dst_sg;
+	struct scatterlist *src_sg;
+	int ret;
+	int i;
+
+	ret = sg_alloc_table(dst, src->orig_nents, GFP_KERNEL);
+	if (ret)
+		return ret;
+
+	dst_sg = dst->sgl;
+	for_each_sgtable_sg(src, src_sg, i) {
+		sg_set_page(dst_sg, sg_page(src_sg), src_sg->length,
+			    src_sg->offset);
+		dst_sg = sg_next(dst_sg);
+	}
+
+	return 0;
+}
+
+static int tee_heap_attach(struct dma_buf *dmabuf,
+			   struct dma_buf_attachment *attachment)
+{
+	struct tee_heap_buffer *buf = dmabuf->priv;
+	struct tee_heap_attachment *a;
+	int ret;
+
+	a = kzalloc(sizeof(*a), GFP_KERNEL);
+	if (!a)
+		return -ENOMEM;
+
+	ret = copy_sg_table(&a->table, &buf->table);
+	if (ret) {
+		kfree(a);
+		return ret;
+	}
+
+	a->dev = attachment->dev;
+	attachment->priv = a;
+
+	return 0;
+}
+
+static void tee_heap_detach(struct dma_buf *dmabuf,
+			    struct dma_buf_attachment *attachment)
+{
+	struct tee_heap_attachment *a = attachment->priv;
+
+	sg_free_table(&a->table);
+	kfree(a);
+}
+
+static struct sg_table *
+tee_heap_map_dma_buf(struct dma_buf_attachment *attachment,
+		     enum dma_data_direction direction)
+{
+	struct tee_heap_attachment *a = attachment->priv;
+	int ret;
+
+	ret = dma_map_sgtable(attachment->dev, &a->table, direction,
+			      DMA_ATTR_SKIP_CPU_SYNC);
+	if (ret)
+		return ERR_PTR(ret);
+
+	return &a->table;
+}
+
+static void tee_heap_unmap_dma_buf(struct dma_buf_attachment *attachment,
+				   struct sg_table *table,
+				   enum dma_data_direction direction)
+{
+	struct tee_heap_attachment *a = attachment->priv;
+
+	WARN_ON(&a->table != table);
+
+	dma_unmap_sgtable(attachment->dev, table, direction,
+			  DMA_ATTR_SKIP_CPU_SYNC);
+}
+
+static void tee_heap_buf_free(struct dma_buf *dmabuf)
+{
+	struct tee_heap_buffer *buf = dmabuf->priv;
+	struct tee_device *teedev = buf->teedev;
+
+	buf->pool->ops->free(buf->pool, &buf->table);
+	tee_device_put(teedev);
+}
+
+static const struct dma_buf_ops tee_heap_buf_ops = {
+	.attach = tee_heap_attach,
+	.detach = tee_heap_detach,
+	.map_dma_buf = tee_heap_map_dma_buf,
+	.unmap_dma_buf = tee_heap_unmap_dma_buf,
+	.release = tee_heap_buf_free,
+};
+
+static struct dma_buf *tee_dma_heap_alloc(struct dma_heap *heap,
+					  unsigned long len, u32 fd_flags,
+					  u64 heap_flags)
+{
+	struct tee_dma_heap *h = dma_heap_get_drvdata(heap);
+	DEFINE_DMA_BUF_EXPORT_INFO(exp_info);
+	struct tee_device *teedev = NULL;
+	struct tee_heap_buffer *buf;
+	struct tee_rstmem_pool *pool;
+	struct dma_buf *dmabuf;
+	int rc;
+
+	mutex_lock(&h->mu);
+	if (tee_device_get(h->teedev)) {
+		teedev = h->teedev;
+		pool = h->pool;
+	}
+	mutex_unlock(&h->mu);
+
+	if (!teedev)
+		return ERR_PTR(-EINVAL);
+
+	buf = kzalloc(sizeof(*buf), GFP_KERNEL);
+	if (!buf) {
+		dmabuf = ERR_PTR(-ENOMEM);
+		goto err;
+	}
+	buf->size = len;
+	buf->pool = pool;
+	buf->teedev = teedev;
+
+	rc = pool->ops->alloc(pool, &buf->table, len, &buf->offs);
+	if (rc) {
+		dmabuf = ERR_PTR(rc);
+		goto err_kfree;
+	}
+
+	exp_info.ops = &tee_heap_buf_ops;
+	exp_info.size = len;
+	exp_info.priv = buf;
+	exp_info.flags = fd_flags;
+	dmabuf = dma_buf_export(&exp_info);
+	if (IS_ERR(dmabuf))
+		goto err_rstmem_free;
+
+	return dmabuf;
+
+err_rstmem_free:
+	pool->ops->free(pool, &buf->table);
+err_kfree:
+	kfree(buf);
+err:
+	tee_device_put(h->teedev);
+	return dmabuf;
+}
+
+static const struct dma_heap_ops tee_dma_heap_ops = {
+	.allocate = tee_dma_heap_alloc,
+};
+
+static const char *heap_id_2_name(enum tee_dma_heap_id id)
+{
+	switch (id) {
+	case TEE_DMA_HEAP_SECURE_VIDEO_PLAY:
+		return "restricted,secure-video";
+	case TEE_DMA_HEAP_TRUSTED_UI:
+		return "restricted,trusted-ui";
+	case TEE_DMA_HEAP_SECURE_VIDEO_RECORD:
+		return "restricted,secure-video-record";
+	default:
+		return NULL;
+	}
+}
+
+static int alloc_dma_heap(struct tee_device *teedev, enum tee_dma_heap_id id,
+			  struct tee_rstmem_pool *pool)
+{
+	struct dma_heap_export_info exp_info = {
+		.ops = &tee_dma_heap_ops,
+		.name = heap_id_2_name(id),
+	};
+	struct tee_dma_heap *h;
+	int rc;
+
+	if (!exp_info.name)
+		return -EINVAL;
+
+	if (xa_reserve(&tee_dma_heap, id, GFP_KERNEL)) {
+		if (!xa_load(&tee_dma_heap, id))
+			return -EEXIST;
+		return -ENOMEM;
+	}
+
+	h = kzalloc(sizeof(*h), GFP_KERNEL);
+	if (!h)
+		return -ENOMEM;
+	h->id = id;
+	h->teedev = teedev;
+	h->pool = pool;
+	mutex_init(&h->mu);
+
+	exp_info.priv = h;
+	h->heap = dma_heap_add(&exp_info);
+	if (IS_ERR(h->heap)) {
+		rc = PTR_ERR(h->heap);
+		kfree(h);
+
+		return rc;
+	}
+
+	/* "can't fail" due to the call to xa_reserve() above */
+	return WARN(xa_store(&tee_dma_heap, id, h, GFP_KERNEL),
+		    "xa_store() failed");
+}
+
+int tee_device_register_dma_heap(struct tee_device *teedev,
+				 enum tee_dma_heap_id id,
+				 struct tee_rstmem_pool *pool)
+{
+	struct tee_dma_heap *h;
+	int rc;
+
+	h = xa_load(&tee_dma_heap, id);
+	if (h) {
+		mutex_lock(&h->mu);
+		if (h->teedev) {
+			rc = -EBUSY;
+		} else {
+			h->teedev = teedev;
+			h->pool = pool;
+			rc = 0;
+		}
+		mutex_unlock(&h->mu);
+	} else {
+		rc = alloc_dma_heap(teedev, id, pool);
+	}
+
+	if (rc)
+		dev_err(&teedev->dev, "can't register DMA heap id %d (%s)\n",
+			id, heap_id_2_name(id));
+
+	return rc;
+}
+
+void tee_device_unregister_all_dma_heaps(struct tee_device *teedev)
+{
+	struct tee_rstmem_pool *pool;
+	struct tee_dma_heap *h;
+	u_long i;
+
+	xa_for_each(&tee_dma_heap, i, h) {
+		if (h) {
+			pool = NULL;
+			mutex_lock(&h->mu);
+			if (h->teedev == teedev) {
+				pool = h->pool;
+				h->teedev = NULL;
+				h->pool = NULL;
+			}
+			mutex_unlock(&h->mu);
+			if (pool)
+				pool->ops->destroy_pool(pool);
+		}
+	}
+}
+EXPORT_SYMBOL_GPL(tee_device_unregister_all_dma_heaps);
+
+int tee_heap_update_from_dma_buf(struct tee_device *teedev,
+				 struct dma_buf *dmabuf, size_t *offset,
+				 struct tee_shm *shm,
+				 struct tee_shm **parent_shm)
+{
+	struct tee_heap_buffer *buf;
+	int rc;
+
+	/* The DMA-buf must be from our heap */
+	if (dmabuf->ops != &tee_heap_buf_ops)
+		return -EINVAL;
+
+	buf = dmabuf->priv;
+	/* The buffer must be from the same teedev */
+	if (buf->teedev != teedev)
+		return -EINVAL;
+
+	shm->size = buf->size;
+
+	rc = buf->pool->ops->update_shm(buf->pool, &buf->table,
buf->offs, shm, + parent_shm); + if (!rc && *parent_shm) + *offset = buf->offs; + + return rc; +} +#else +int tee_device_register_dma_heap(struct tee_device *teedev __always_unused, + enum tee_dma_heap_id id __always_unused, + struct tee_rstmem_pool *pool __always_unused) +{ + return -EINVAL; +} +EXPORT_SYMBOL_GPL(tee_device_register_dma_heap); + +void +tee_device_unregister_all_dma_heaps(struct tee_device *teedev __always_unused) +{ +} +EXPORT_SYMBOL_GPL(tee_device_unregister_all_dma_heaps); + +int tee_heap_update_from_dma_buf(struct tee_device *teedev __always_unused, + struct dma_buf *dmabuf __always_unused, + size_t *offset __always_unused, + struct tee_shm *shm __always_unused, + struct tee_shm **parent_shm __always_unused) +{ + return -EINVAL; +} +#endif + +static struct tee_rstmem_static_pool * +to_rstmem_static_pool(struct tee_rstmem_pool *pool) +{ + return container_of(pool, struct tee_rstmem_static_pool, pool); +} + +static int rstmem_pool_op_static_alloc(struct tee_rstmem_pool *pool, + struct sg_table *sgt, size_t size, + size_t *offs) +{ + struct tee_rstmem_static_pool *stp = to_rstmem_static_pool(pool); + phys_addr_t pa; + int ret; + + pa = gen_pool_alloc(stp->gen_pool, size); + if (!pa) + return -ENOMEM; + + ret = sg_alloc_table(sgt, 1, GFP_KERNEL); + if (ret) { + gen_pool_free(stp->gen_pool, pa, size); + return ret; + } + + sg_set_page(sgt->sgl, phys_to_page(pa), size, 0); + *offs = pa - stp->pa_base; + + return 0; +} + +static void rstmem_pool_op_static_free(struct tee_rstmem_pool *pool, + struct sg_table *sgt) +{ + struct tee_rstmem_static_pool *stp = to_rstmem_static_pool(pool); + struct scatterlist *sg; + int i; + + for_each_sgtable_sg(sgt, sg, i) + gen_pool_free(stp->gen_pool, sg_phys(sg), sg->length); + sg_free_table(sgt); +} + +static int rstmem_pool_op_static_update_shm(struct tee_rstmem_pool *pool, + struct sg_table *sgt, size_t offs, + struct tee_shm *shm, + struct tee_shm **parent_shm) +{ + struct tee_rstmem_static_pool *stp = 
to_rstmem_static_pool(pool); + + shm->paddr = stp->pa_base + offs; + *parent_shm = NULL; + + return 0; +} + +static void rstmem_pool_op_static_destroy_pool(struct tee_rstmem_pool *pool) +{ + struct tee_rstmem_static_pool *stp = to_rstmem_static_pool(pool); + + gen_pool_destroy(stp->gen_pool); + kfree(stp); +} + +static struct tee_rstmem_pool_ops rstmem_pool_ops_static = { + .alloc = rstmem_pool_op_static_alloc, + .free = rstmem_pool_op_static_free, + .update_shm = rstmem_pool_op_static_update_shm, + .destroy_pool = rstmem_pool_op_static_destroy_pool, +}; + +struct tee_rstmem_pool *tee_rstmem_static_pool_alloc(phys_addr_t paddr, + size_t size) +{ + const size_t page_mask = PAGE_SIZE - 1; + struct tee_rstmem_static_pool *stp; + int rc; + + /* Check it's page aligned */ + if ((paddr | size) & page_mask) + return ERR_PTR(-EINVAL); + + stp = kzalloc(sizeof(*stp), GFP_KERNEL); + if (!stp) + return ERR_PTR(-ENOMEM); + + stp->gen_pool = gen_pool_create(PAGE_SHIFT, -1); + if (!stp->gen_pool) { + rc = -ENOMEM; + goto err_free; + } + + rc = gen_pool_add(stp->gen_pool, paddr, size, -1); + if (rc) + goto err_free_pool; + + stp->pool.ops = &rstmem_pool_ops_static; + stp->pa_base = paddr; + return &stp->pool; + +err_free_pool: + gen_pool_destroy(stp->gen_pool); +err_free: + kfree(stp); + + return ERR_PTR(rc); +} +EXPORT_SYMBOL_GPL(tee_rstmem_static_pool_alloc); diff --git a/drivers/tee/tee_private.h b/drivers/tee/tee_private.h index 9bc50605227c..6c6ff5d5eed2 100644 --- a/drivers/tee/tee_private.h +++ b/drivers/tee/tee_private.h @@ -8,6 +8,7 @@ #include <linux/cdev.h> #include <linux/completion.h> #include <linux/device.h> +#include <linux/dma-buf.h> #include <linux/kref.h> #include <linux/mutex.h> #include <linux/types.h> @@ -24,4 +25,9 @@ struct tee_shm *tee_shm_alloc_user_buf(struct tee_context *ctx, size_t size); struct tee_shm *tee_shm_register_user_buf(struct tee_context *ctx, unsigned long addr, size_t length); +int tee_heap_update_from_dma_buf(struct tee_device *teedev, + 
struct dma_buf *dmabuf, size_t *offset, + struct tee_shm *shm, + struct tee_shm **parent_shm); + #endif /*TEE_PRIVATE_H*/ diff --git a/include/linux/tee_core.h b/include/linux/tee_core.h index a38494d6b5f4..16ef078247ae 100644 --- a/include/linux/tee_core.h +++ b/include/linux/tee_core.h @@ -8,9 +8,11 @@ #include <linux/cdev.h> #include <linux/device.h> +#include <linux/dma-buf.h> #include <linux/idr.h> #include <linux/kref.h> #include <linux/list.h> +#include <linux/scatterlist.h> #include <linux/tee.h> #include <linux/tee_drv.h> #include <linux/types.h> @@ -30,6 +32,12 @@ #define TEE_DEVICE_FLAG_REGISTERED 0x1 #define TEE_MAX_DEV_NAME_LEN 32 +enum tee_dma_heap_id { + TEE_DMA_HEAP_SECURE_VIDEO_PLAY = 1, + TEE_DMA_HEAP_TRUSTED_UI, + TEE_DMA_HEAP_SECURE_VIDEO_RECORD, +}; + /** * struct tee_device - TEE Device representation * @name: name of device @@ -116,6 +124,33 @@ struct tee_desc { u32 flags; }; +/** + * struct tee_rstmem_pool - restricted memory pool + * @ops: operations + * + * This is an abstract interface where this struct is expected to be + * embedded in another struct specific to the implementation. 
+ */ +struct tee_rstmem_pool { + const struct tee_rstmem_pool_ops *ops; +}; + +/** + * struct tee_rstmem_pool_ops - restricted memory pool operations + * @alloc: called when allocating restricted memory + * @free: called when freeing restricted memory + * @destroy_pool: called when destroying the pool + */ +struct tee_rstmem_pool_ops { + int (*alloc)(struct tee_rstmem_pool *pool, struct sg_table *sgt, + size_t size, size_t *offs); + void (*free)(struct tee_rstmem_pool *pool, struct sg_table *sgt); + int (*update_shm)(struct tee_rstmem_pool *pool, struct sg_table *sgt, + size_t offs, struct tee_shm *shm, + struct tee_shm **parent_shm); + void (*destroy_pool)(struct tee_rstmem_pool *pool); +}; + /** * tee_device_alloc() - Allocate a new struct tee_device instance * @teedesc: Descriptor for this driver @@ -154,6 +189,11 @@ int tee_device_register(struct tee_device *teedev); */ void tee_device_unregister(struct tee_device *teedev); +int tee_device_register_dma_heap(struct tee_device *teedev, + enum tee_dma_heap_id id, + struct tee_rstmem_pool *pool); +void tee_device_unregister_all_dma_heaps(struct tee_device *teedev); + /** * tee_device_set_dev_groups() - Set device attribute groups * @teedev: Device to register @@ -229,6 +269,28 @@ static inline void tee_shm_pool_free(struct tee_shm_pool *pool) pool->ops->destroy_pool(pool); } +/** + * tee_rstmem_static_pool_alloc() - Create a restricted memory manager + * @paddr: Physical address of start of pool + * @size: Size in bytes of the pool + * + * @returns pointer to a 'struct tee_shm_pool' or an ERR_PTR on failure. + */ +struct tee_rstmem_pool *tee_rstmem_static_pool_alloc(phys_addr_t paddr, + size_t size); + +/** + * tee_rstmem_pool_free() - Free a restricted memory pool + * @pool: The restricted memory pool to free + * + * There must be no remaining restricted memory allocated from this pool + * when this function is called. 
+ */ +static inline void tee_rstmem_pool_free(struct tee_rstmem_pool *pool) +{ + pool->ops->destroy_pool(pool); +} + /** * tee_get_drvdata() - Return driver_data pointer * @returns the driver_data pointer supplied to tee_register(). -- 2.43.0 ^ permalink raw reply related [flat|nested] 9+ messages in thread
* Re: [PATCH v6 05/10] tee: implement restricted DMA-heap 2025-03-05 13:04 ` [PATCH v6 05/10] tee: implement restricted DMA-heap Jens Wiklander @ 2025-03-25 6:33 ` Sumit Garg 2025-03-25 10:55 ` Jens Wiklander 0 siblings, 1 reply; 9+ messages in thread From: Sumit Garg @ 2025-03-25 6:33 UTC (permalink / raw) To: op-tee [-- Attachment #1: Type: text/plain, Size: 18722 bytes --] Hi Jens, On Wed, Mar 05, 2025 at 02:04:11PM +0100, Jens Wiklander wrote: > Implement DMA heap for restricted DMA-buf allocation in the TEE > subsystem. > > Restricted memory refers to memory buffers behind a hardware enforced > firewall. It is not accessible to the kernel during normal circumstances > but rather only accessible to certain hardware IPs or CPUs executing in > higher or differently privileged mode than the kernel itself. This > interface allows to allocate and manage such restricted memory buffers > via interaction with a TEE implementation. > > The restricted memory is allocated for a specific use-case, like Secure > Video Playback, Trusted UI, or Secure Video Recording where certain > hardware devices can access the memory. > > The DMA-heaps are enabled explicitly by the TEE backend driver. The TEE > backend drivers needs to implement restricted memory pool to manage the > restricted memory. 
> > Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org> > --- > drivers/tee/Makefile | 1 + > drivers/tee/tee_heap.c | 470 ++++++++++++++++++++++++++++++++++++++ > drivers/tee/tee_private.h | 6 + > include/linux/tee_core.h | 62 +++++ > 4 files changed, 539 insertions(+) > create mode 100644 drivers/tee/tee_heap.c > > diff --git a/drivers/tee/Makefile b/drivers/tee/Makefile > index 5488cba30bd2..949a6a79fb06 100644 > --- a/drivers/tee/Makefile > +++ b/drivers/tee/Makefile > @@ -1,6 +1,7 @@ > # SPDX-License-Identifier: GPL-2.0 > obj-$(CONFIG_TEE) += tee.o > tee-objs += tee_core.o > +tee-objs += tee_heap.o > tee-objs += tee_shm.o > tee-objs += tee_shm_pool.o > obj-$(CONFIG_OPTEE) += optee/ > diff --git a/drivers/tee/tee_heap.c b/drivers/tee/tee_heap.c > new file mode 100644 > index 000000000000..476ab2e27260 > --- /dev/null > +++ b/drivers/tee/tee_heap.c > @@ -0,0 +1,470 @@ > +// SPDX-License-Identifier: GPL-2.0-only > +/* > + * Copyright (c) 2025, Linaro Limited > + */ > + > +#include <linux/scatterlist.h> > +#include <linux/dma-buf.h> > +#include <linux/dma-heap.h> > +#include <linux/genalloc.h> > +#include <linux/module.h> > +#include <linux/scatterlist.h> > +#include <linux/slab.h> > +#include <linux/tee_core.h> > +#include <linux/xarray.h> Let's try to follow alphabetical order here. 
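For reference, sorted alphabetically the block would read as below; this also drops the `scatterlist.h` entry that appears twice in the posted patch (a fragment, not a compilable unit):

```c
#include <linux/dma-buf.h>
#include <linux/dma-heap.h>
#include <linux/genalloc.h>
#include <linux/module.h>
#include <linux/scatterlist.h>
#include <linux/slab.h>
#include <linux/tee_core.h>
#include <linux/xarray.h>
```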
> + > +#include "tee_private.h" > + > +struct tee_dma_heap { > + struct dma_heap *heap; > + enum tee_dma_heap_id id; > + struct tee_rstmem_pool *pool; > + struct tee_device *teedev; > + /* Protects pool and teedev above */ > + struct mutex mu; > +}; > + > +struct tee_heap_buffer { > + struct tee_rstmem_pool *pool; > + struct tee_device *teedev; > + size_t size; > + size_t offs; > + struct sg_table table; > +}; > + > +struct tee_heap_attachment { > + struct sg_table table; > + struct device *dev; > +}; > + > +struct tee_rstmem_static_pool { > + struct tee_rstmem_pool pool; > + struct gen_pool *gen_pool; > + phys_addr_t pa_base; > +}; > + > +#if !IS_MODULE(CONFIG_TEE) && IS_ENABLED(CONFIG_DMABUF_HEAPS) Can this dependency rather be better managed via Kconfig? > +static DEFINE_XARRAY_ALLOC(tee_dma_heap); > + > +static int copy_sg_table(struct sg_table *dst, struct sg_table *src) > +{ > + struct scatterlist *dst_sg; > + struct scatterlist *src_sg; > + int ret; > + int i; > + > + ret = sg_alloc_table(dst, src->orig_nents, GFP_KERNEL); > + if (ret) > + return ret; > + > + dst_sg = dst->sgl; > + for_each_sgtable_sg(src, src_sg, i) { > + sg_set_page(dst_sg, sg_page(src_sg), src_sg->length, > + src_sg->offset); > + dst_sg = sg_next(dst_sg); > + } > + > + return 0; > +} > + > +static int tee_heap_attach(struct dma_buf *dmabuf, > + struct dma_buf_attachment *attachment) > +{ > + struct tee_heap_buffer *buf = dmabuf->priv; > + struct tee_heap_attachment *a; > + int ret; > + > + a = kzalloc(sizeof(*a), GFP_KERNEL); > + if (!a) > + return -ENOMEM; > + > + ret = copy_sg_table(&a->table, &buf->table); > + if (ret) { > + kfree(a); > + return ret; > + } > + > + a->dev = attachment->dev; > + attachment->priv = a; > + > + return 0; > +} > + > +static void tee_heap_detach(struct dma_buf *dmabuf, > + struct dma_buf_attachment *attachment) > +{ > + struct tee_heap_attachment *a = attachment->priv; > + > + sg_free_table(&a->table); > + kfree(a); > +} > + > +static struct sg_table * > 
+tee_heap_map_dma_buf(struct dma_buf_attachment *attachment, > + enum dma_data_direction direction) > +{ > + struct tee_heap_attachment *a = attachment->priv; > + int ret; > + > + ret = dma_map_sgtable(attachment->dev, &a->table, direction, > + DMA_ATTR_SKIP_CPU_SYNC); > + if (ret) > + return ERR_PTR(ret); > + > + return &a->table; > +} > + > +static void tee_heap_unmap_dma_buf(struct dma_buf_attachment *attachment, > + struct sg_table *table, > + enum dma_data_direction direction) > +{ > + struct tee_heap_attachment *a = attachment->priv; > + > + WARN_ON(&a->table != table); > + > + dma_unmap_sgtable(attachment->dev, table, direction, > + DMA_ATTR_SKIP_CPU_SYNC); > +} > + > +static void tee_heap_buf_free(struct dma_buf *dmabuf) > +{ > + struct tee_heap_buffer *buf = dmabuf->priv; > + struct tee_device *teedev = buf->teedev; > + > + buf->pool->ops->free(buf->pool, &buf->table); > + tee_device_put(teedev); > +} > + > +static const struct dma_buf_ops tee_heap_buf_ops = { > + .attach = tee_heap_attach, > + .detach = tee_heap_detach, > + .map_dma_buf = tee_heap_map_dma_buf, > + .unmap_dma_buf = tee_heap_unmap_dma_buf, > + .release = tee_heap_buf_free, > +}; > + > +static struct dma_buf *tee_dma_heap_alloc(struct dma_heap *heap, > + unsigned long len, u32 fd_flags, > + u64 heap_flags) > +{ > + struct tee_dma_heap *h = dma_heap_get_drvdata(heap); > + DEFINE_DMA_BUF_EXPORT_INFO(exp_info); > + struct tee_device *teedev = NULL; > + struct tee_heap_buffer *buf; > + struct tee_rstmem_pool *pool; > + struct dma_buf *dmabuf; > + int rc; > + > + mutex_lock(&h->mu); > + if (tee_device_get(h->teedev)) { > + teedev = h->teedev; > + pool = h->pool; > + } > + mutex_unlock(&h->mu); > + > + if (!teedev) > + return ERR_PTR(-EINVAL); > + > + buf = kzalloc(sizeof(*buf), GFP_KERNEL); > + if (!buf) { > + dmabuf = ERR_PTR(-ENOMEM); > + goto err; > + } > + buf->size = len; > + buf->pool = pool; > + buf->teedev = teedev; > + > + rc = pool->ops->alloc(pool, &buf->table, len, &buf->offs); > + if 
(rc) { > + dmabuf = ERR_PTR(rc); > + goto err_kfree; > + } > + > + exp_info.ops = &tee_heap_buf_ops; > + exp_info.size = len; > + exp_info.priv = buf; > + exp_info.flags = fd_flags; > + dmabuf = dma_buf_export(&exp_info); > + if (IS_ERR(dmabuf)) > + goto err_rstmem_free; > + > + return dmabuf; > + > +err_rstmem_free: > + pool->ops->free(pool, &buf->table); > +err_kfree: > + kfree(buf); > +err: > + tee_device_put(h->teedev); > + return dmabuf; > +} > + > +static const struct dma_heap_ops tee_dma_heap_ops = { > + .allocate = tee_dma_heap_alloc, > +}; > + > +static const char *heap_id_2_name(enum tee_dma_heap_id id) > +{ > + switch (id) { > + case TEE_DMA_HEAP_SECURE_VIDEO_PLAY: > + return "restricted,secure-video"; > + case TEE_DMA_HEAP_TRUSTED_UI: > + return "restricted,trusted-ui"; > + case TEE_DMA_HEAP_SECURE_VIDEO_RECORD: > + return "restricted,secure-video-record"; > + default: > + return NULL; > + } > +} > + > +static int alloc_dma_heap(struct tee_device *teedev, enum tee_dma_heap_id id, > + struct tee_rstmem_pool *pool) > +{ > + struct dma_heap_export_info exp_info = { > + .ops = &tee_dma_heap_ops, > + .name = heap_id_2_name(id), > + }; > + struct tee_dma_heap *h; > + int rc; > + > + if (!exp_info.name) > + return -EINVAL; > + > + if (xa_reserve(&tee_dma_heap, id, GFP_KERNEL)) { > + if (!xa_load(&tee_dma_heap, id)) > + return -EEXIST; > + return -ENOMEM; > + } > + > + h = kzalloc(sizeof(*h), GFP_KERNEL); > + if (!h) > + return -ENOMEM; > + h->id = id; > + h->teedev = teedev; > + h->pool = pool; > + mutex_init(&h->mu); > + > + exp_info.priv = h; > + h->heap = dma_heap_add(&exp_info); > + if (IS_ERR(h->heap)) { > + rc = PTR_ERR(h->heap); > + kfree(h); > + > + return rc; > + } > + > + /* "can't fail" due to the call to xa_reserve() above */ > + return WARN(xa_store(&tee_dma_heap, id, h, GFP_KERNEL), > + "xa_store() failed"); > +} > + > +int tee_device_register_dma_heap(struct tee_device *teedev, > + enum tee_dma_heap_id id, > + struct tee_rstmem_pool *pool) > +{ 
> + struct tee_dma_heap *h; > + int rc; > + > + h = xa_load(&tee_dma_heap, id); > + if (h) { > + mutex_lock(&h->mu); > + if (h->teedev) { > + rc = -EBUSY; > + } else { > + h->teedev = teedev; > + h->pool = pool; > + rc = 0; > + } > + mutex_unlock(&h->mu); > + } else { > + rc = alloc_dma_heap(teedev, id, pool); > + } > + > + if (rc) > + dev_err(&teedev->dev, "can't register DMA heap id %d (%s)\n", > + id, heap_id_2_name(id)); > + > + return rc; > +} > + > +void tee_device_unregister_all_dma_heaps(struct tee_device *teedev) > +{ > + struct tee_rstmem_pool *pool; > + struct tee_dma_heap *h; > + u_long i; > + > + xa_for_each(&tee_dma_heap, i, h) { > + if (h) { > + pool = NULL; > + mutex_lock(&h->mu); > + if (h->teedev == teedev) { > + pool = h->pool; > + h->teedev = NULL; > + h->pool = NULL; > + } > + mutex_unlock(&h->mu); > + if (pool) > + pool->ops->destroy_pool(pool); > + } > + } > +} > +EXPORT_SYMBOL_GPL(tee_device_unregister_all_dma_heaps); > + > +int tee_heap_update_from_dma_buf(struct tee_device *teedev, > + struct dma_buf *dmabuf, size_t *offset, > + struct tee_shm *shm, > + struct tee_shm **parent_shm) > +{ > + struct tee_heap_buffer *buf; > + int rc; > + > + /* The DMA-buf must be from our heap */ > + if (dmabuf->ops != &tee_heap_buf_ops) > + return -EINVAL; > + > + buf = dmabuf->priv; > + /* The buffer must be from the same teedev */ > + if (buf->teedev != teedev) > + return -EINVAL; > + > + shm->size = buf->size; > + > + rc = buf->pool->ops->update_shm(buf->pool, &buf->table, buf->offs, shm, > + parent_shm); > + if (!rc && *parent_shm) > + *offset = buf->offs; > + > + return rc; > +} > +#else > +int tee_device_register_dma_heap(struct tee_device *teedev __always_unused, > + enum tee_dma_heap_id id __always_unused, > + struct tee_rstmem_pool *pool __always_unused) > +{ > + return -EINVAL; > +} > +EXPORT_SYMBOL_GPL(tee_device_register_dma_heap); > + > +void > +tee_device_unregister_all_dma_heaps(struct tee_device *teedev __always_unused) > +{ > +} > 
+EXPORT_SYMBOL_GPL(tee_device_unregister_all_dma_heaps); > + > +int tee_heap_update_from_dma_buf(struct tee_device *teedev __always_unused, > + struct dma_buf *dmabuf __always_unused, > + size_t *offset __always_unused, > + struct tee_shm *shm __always_unused, > + struct tee_shm **parent_shm __always_unused) > +{ > + return -EINVAL; > +} > +#endif > + > +static struct tee_rstmem_static_pool * > +to_rstmem_static_pool(struct tee_rstmem_pool *pool) > +{ > + return container_of(pool, struct tee_rstmem_static_pool, pool); > +} > + > +static int rstmem_pool_op_static_alloc(struct tee_rstmem_pool *pool, > + struct sg_table *sgt, size_t size, > + size_t *offs) > +{ > + struct tee_rstmem_static_pool *stp = to_rstmem_static_pool(pool); > + phys_addr_t pa; > + int ret; > + > + pa = gen_pool_alloc(stp->gen_pool, size); > + if (!pa) > + return -ENOMEM; > + > + ret = sg_alloc_table(sgt, 1, GFP_KERNEL); > + if (ret) { > + gen_pool_free(stp->gen_pool, pa, size); > + return ret; > + } > + > + sg_set_page(sgt->sgl, phys_to_page(pa), size, 0); > + *offs = pa - stp->pa_base; > + > + return 0; > +} > + > +static void rstmem_pool_op_static_free(struct tee_rstmem_pool *pool, > + struct sg_table *sgt) > +{ > + struct tee_rstmem_static_pool *stp = to_rstmem_static_pool(pool); > + struct scatterlist *sg; > + int i; > + > + for_each_sgtable_sg(sgt, sg, i) > + gen_pool_free(stp->gen_pool, sg_phys(sg), sg->length); > + sg_free_table(sgt); > +} > + > +static int rstmem_pool_op_static_update_shm(struct tee_rstmem_pool *pool, > + struct sg_table *sgt, size_t offs, > + struct tee_shm *shm, > + struct tee_shm **parent_shm) > +{ > + struct tee_rstmem_static_pool *stp = to_rstmem_static_pool(pool); > + > + shm->paddr = stp->pa_base + offs; > + *parent_shm = NULL; > + > + return 0; > +} > + > +static void rstmem_pool_op_static_destroy_pool(struct tee_rstmem_pool *pool) > +{ > + struct tee_rstmem_static_pool *stp = to_rstmem_static_pool(pool); > + > + gen_pool_destroy(stp->gen_pool); > + kfree(stp); > 
+} > + > +static struct tee_rstmem_pool_ops rstmem_pool_ops_static = { > + .alloc = rstmem_pool_op_static_alloc, > + .free = rstmem_pool_op_static_free, > + .update_shm = rstmem_pool_op_static_update_shm, > + .destroy_pool = rstmem_pool_op_static_destroy_pool, > +}; > + > +struct tee_rstmem_pool *tee_rstmem_static_pool_alloc(phys_addr_t paddr, > + size_t size) > +{ > + const size_t page_mask = PAGE_SIZE - 1; > + struct tee_rstmem_static_pool *stp; > + int rc; > + > + /* Check it's page aligned */ > + if ((paddr | size) & page_mask) > + return ERR_PTR(-EINVAL); > + > + stp = kzalloc(sizeof(*stp), GFP_KERNEL); > + if (!stp) > + return ERR_PTR(-ENOMEM); > + > + stp->gen_pool = gen_pool_create(PAGE_SHIFT, -1); > + if (!stp->gen_pool) { > + rc = -ENOMEM; > + goto err_free; > + } > + > + rc = gen_pool_add(stp->gen_pool, paddr, size, -1); > + if (rc) > + goto err_free_pool; > + > + stp->pool.ops = &rstmem_pool_ops_static; > + stp->pa_base = paddr; > + return &stp->pool; > + > +err_free_pool: > + gen_pool_destroy(stp->gen_pool); > +err_free: > + kfree(stp); > + > + return ERR_PTR(rc); > +} > +EXPORT_SYMBOL_GPL(tee_rstmem_static_pool_alloc); > diff --git a/drivers/tee/tee_private.h b/drivers/tee/tee_private.h > index 9bc50605227c..6c6ff5d5eed2 100644 > --- a/drivers/tee/tee_private.h > +++ b/drivers/tee/tee_private.h > @@ -8,6 +8,7 @@ > #include <linux/cdev.h> > #include <linux/completion.h> > #include <linux/device.h> > +#include <linux/dma-buf.h> > #include <linux/kref.h> > #include <linux/mutex.h> > #include <linux/types.h> > @@ -24,4 +25,9 @@ struct tee_shm *tee_shm_alloc_user_buf(struct tee_context *ctx, size_t size); > struct tee_shm *tee_shm_register_user_buf(struct tee_context *ctx, > unsigned long addr, size_t length); > > +int tee_heap_update_from_dma_buf(struct tee_device *teedev, > + struct dma_buf *dmabuf, size_t *offset, > + struct tee_shm *shm, > + struct tee_shm **parent_shm); > + > #endif /*TEE_PRIVATE_H*/ > diff --git a/include/linux/tee_core.h 
b/include/linux/tee_core.h > index a38494d6b5f4..16ef078247ae 100644 > --- a/include/linux/tee_core.h > +++ b/include/linux/tee_core.h > @@ -8,9 +8,11 @@ > > #include <linux/cdev.h> > #include <linux/device.h> > +#include <linux/dma-buf.h> > #include <linux/idr.h> > #include <linux/kref.h> > #include <linux/list.h> > +#include <linux/scatterlist.h> > #include <linux/tee.h> > #include <linux/tee_drv.h> > #include <linux/types.h> > @@ -30,6 +32,12 @@ > #define TEE_DEVICE_FLAG_REGISTERED 0x1 > #define TEE_MAX_DEV_NAME_LEN 32 > > +enum tee_dma_heap_id { > + TEE_DMA_HEAP_SECURE_VIDEO_PLAY = 1, > + TEE_DMA_HEAP_TRUSTED_UI, > + TEE_DMA_HEAP_SECURE_VIDEO_RECORD, > +}; > + > /** > * struct tee_device - TEE Device representation > * @name: name of device > @@ -116,6 +124,33 @@ struct tee_desc { > u32 flags; > }; > > +/** > + * struct tee_rstmem_pool - restricted memory pool > + * @ops: operations > + * > + * This is an abstract interface where this struct is expected to be > + * embedded in another struct specific to the implementation. > + */ > +struct tee_rstmem_pool { > + const struct tee_rstmem_pool_ops *ops; > +}; > + > +/** > + * struct tee_rstmem_pool_ops - restricted memory pool operations > + * @alloc: called when allocating restricted memory > + * @free: called when freeing restricted memory > + * @destroy_pool: called when destroying the pool > + */ > +struct tee_rstmem_pool_ops { > + int (*alloc)(struct tee_rstmem_pool *pool, struct sg_table *sgt, > + size_t size, size_t *offs); > + void (*free)(struct tee_rstmem_pool *pool, struct sg_table *sgt); > + int (*update_shm)(struct tee_rstmem_pool *pool, struct sg_table *sgt, > + size_t offs, struct tee_shm *shm, > + struct tee_shm **parent_shm); This API isn't described in the kdoc comment above. Can you describe the role of this API and when it's needed? 
-Sumit > + void (*destroy_pool)(struct tee_rstmem_pool *pool); > +}; > + > /** > * tee_device_alloc() - Allocate a new struct tee_device instance > * @teedesc: Descriptor for this driver > @@ -154,6 +189,11 @@ int tee_device_register(struct tee_device *teedev); > */ > void tee_device_unregister(struct tee_device *teedev); > > +int tee_device_register_dma_heap(struct tee_device *teedev, > + enum tee_dma_heap_id id, > + struct tee_rstmem_pool *pool); > +void tee_device_unregister_all_dma_heaps(struct tee_device *teedev); > + > /** > * tee_device_set_dev_groups() - Set device attribute groups > * @teedev: Device to register > @@ -229,6 +269,28 @@ static inline void tee_shm_pool_free(struct tee_shm_pool *pool) > pool->ops->destroy_pool(pool); > } > > +/** > + * tee_rstmem_static_pool_alloc() - Create a restricted memory manager > + * @paddr: Physical address of start of pool > + * @size: Size in bytes of the pool > + * > + * @returns pointer to a 'struct tee_shm_pool' or an ERR_PTR on failure. > + */ > +struct tee_rstmem_pool *tee_rstmem_static_pool_alloc(phys_addr_t paddr, > + size_t size); > + > +/** > + * tee_rstmem_pool_free() - Free a restricted memory pool > + * @pool: The restricted memory pool to free > + * > + * There must be no remaining restricted memory allocated from this pool > + * when this function is called. > + */ > +static inline void tee_rstmem_pool_free(struct tee_rstmem_pool *pool) > +{ > + pool->ops->destroy_pool(pool); > +} > + > /** > * tee_get_drvdata() - Return driver_data pointer > * @returns the driver_data pointer supplied to tee_register(). > -- > 2.43.0 > ^ permalink raw reply [flat|nested] 9+ messages in thread
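Judging from its only caller in the patch, `tee_heap_update_from_dma_buf()`, the undocumented `@update_shm` hook fills in a `tee_shm` with the location of a buffer previously returned by `@alloc`, so the buffer can be handed to the TEE as shared memory. A possible kdoc entry covering it — the wording is a suggestion, not part of the posted patch:

```c
/**
 * struct tee_rstmem_pool_ops - restricted memory pool operations
 * @alloc:        called when allocating restricted memory
 * @free:         called when freeing restricted memory
 * @update_shm:   called to fill in @shm (and, for pools carved out of a
 *                parent allocation, @parent_shm) for a buffer previously
 *                returned by @alloc at offset @offs, so that the buffer
 *                can be passed to the TEE as shared memory
 * @destroy_pool: called when destroying the pool
 */
```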
* Re: [PATCH v6 05/10] tee: implement restricted DMA-heap 2025-03-25 6:33 ` Sumit Garg @ 2025-03-25 10:55 ` Jens Wiklander 0 siblings, 0 replies; 9+ messages in thread From: Jens Wiklander @ 2025-03-25 10:55 UTC (permalink / raw) To: op-tee [-- Attachment #1: Type: text/plain, Size: 22994 bytes --] Hi Sumit, On Tue, Mar 25, 2025 at 7:33 AM Sumit Garg <sumit.garg@kernel.org> wrote: > > Hi Jens, > > On Wed, Mar 05, 2025 at 02:04:11PM +0100, Jens Wiklander wrote: > > Implement DMA heap for restricted DMA-buf allocation in the TEE > > subsystem. > > > > Restricted memory refers to memory buffers behind a hardware enforced > > firewall. It is not accessible to the kernel during normal circumstances > > but rather only accessible to certain hardware IPs or CPUs executing in > > higher or differently privileged mode than the kernel itself. This > > interface allows to allocate and manage such restricted memory buffers > > via interaction with a TEE implementation. > > > > The restricted memory is allocated for a specific use-case, like Secure > > Video Playback, Trusted UI, or Secure Video Recording where certain > > hardware devices can access the memory. > > > > The DMA-heaps are enabled explicitly by the TEE backend driver. The TEE > > backend drivers needs to implement restricted memory pool to manage the > > restricted memory. 
> > > > Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org> > > --- > > drivers/tee/Makefile | 1 + > > drivers/tee/tee_heap.c | 470 ++++++++++++++++++++++++++++++++++++++ > > drivers/tee/tee_private.h | 6 + > > include/linux/tee_core.h | 62 +++++ > > 4 files changed, 539 insertions(+) > > create mode 100644 drivers/tee/tee_heap.c > > > > diff --git a/drivers/tee/Makefile b/drivers/tee/Makefile > > index 5488cba30bd2..949a6a79fb06 100644 > > --- a/drivers/tee/Makefile > > +++ b/drivers/tee/Makefile > > @@ -1,6 +1,7 @@ > > # SPDX-License-Identifier: GPL-2.0 > > obj-$(CONFIG_TEE) += tee.o > > tee-objs += tee_core.o > > +tee-objs += tee_heap.o > > tee-objs += tee_shm.o > > tee-objs += tee_shm_pool.o > > obj-$(CONFIG_OPTEE) += optee/ > > diff --git a/drivers/tee/tee_heap.c b/drivers/tee/tee_heap.c > > new file mode 100644 > > index 000000000000..476ab2e27260 > > --- /dev/null > > +++ b/drivers/tee/tee_heap.c > > @@ -0,0 +1,470 @@ > > +// SPDX-License-Identifier: GPL-2.0-only > > +/* > > + * Copyright (c) 2025, Linaro Limited > > + */ > > + > > +#include <linux/scatterlist.h> > > +#include <linux/dma-buf.h> > > +#include <linux/dma-heap.h> > > +#include <linux/genalloc.h> > > +#include <linux/module.h> > > +#include <linux/scatterlist.h> > > +#include <linux/slab.h> > > +#include <linux/tee_core.h> > > +#include <linux/xarray.h> > > Lets try to follow alphabetical order here. 
Sure

> 
> > +
> > +#include "tee_private.h"
> > +
> > +struct tee_dma_heap {
> > +	struct dma_heap *heap;
> > +	enum tee_dma_heap_id id;
> > +	struct tee_rstmem_pool *pool;
> > +	struct tee_device *teedev;
> > +	/* Protects pool and teedev above */
> > +	struct mutex mu;
> > +};
> > +
> > +struct tee_heap_buffer {
> > +	struct tee_rstmem_pool *pool;
> > +	struct tee_device *teedev;
> > +	size_t size;
> > +	size_t offs;
> > +	struct sg_table table;
> > +};
> > +
> > +struct tee_heap_attachment {
> > +	struct sg_table table;
> > +	struct device *dev;
> > +};
> > +
> > +struct tee_rstmem_static_pool {
> > +	struct tee_rstmem_pool pool;
> > +	struct gen_pool *gen_pool;
> > +	phys_addr_t pa_base;
> > +};
> > +
> > +#if !IS_MODULE(CONFIG_TEE) && IS_ENABLED(CONFIG_DMABUF_HEAPS)
> 
> Can this dependency rather be better managed via Kconfig?

This was the easiest yet somewhat flexible solution I could find. If
you have something better, let's use that instead.

> > +static DEFINE_XARRAY_ALLOC(tee_dma_heap);
> > +
> > +static int copy_sg_table(struct sg_table *dst, struct sg_table *src)
> > +{
> > +	struct scatterlist *dst_sg;
> > +	struct scatterlist *src_sg;
> > +	int ret;
> > +	int i;
> > +
> > +	ret = sg_alloc_table(dst, src->orig_nents, GFP_KERNEL);
> > +	if (ret)
> > +		return ret;
> > +
> > +	dst_sg = dst->sgl;
> > +	for_each_sgtable_sg(src, src_sg, i) {
> > +		sg_set_page(dst_sg, sg_page(src_sg), src_sg->length,
> > +			    src_sg->offset);
> > +		dst_sg = sg_next(dst_sg);
> > +	}
> > +
> > +	return 0;
> > +}
> > +
> > +static int tee_heap_attach(struct dma_buf *dmabuf,
> > +			   struct dma_buf_attachment *attachment)
> > +{
> > +	struct tee_heap_buffer *buf = dmabuf->priv;
> > +	struct tee_heap_attachment *a;
> > +	int ret;
> > +
> > +	a = kzalloc(sizeof(*a), GFP_KERNEL);
> > +	if (!a)
> > +		return -ENOMEM;
> > +
> > +	ret = copy_sg_table(&a->table, &buf->table);
> > +	if (ret) {
> > +		kfree(a);
> > +		return ret;
> > +	}
> > +
> > +	a->dev = attachment->dev;
> > +	attachment->priv = a;
> > +
> > +	return 0;
> > +}
> > +
> > +static void tee_heap_detach(struct dma_buf *dmabuf,
> > +			    struct dma_buf_attachment *attachment)
> > +{
> > +	struct tee_heap_attachment *a = attachment->priv;
> > +
> > +	sg_free_table(&a->table);
> > +	kfree(a);
> > +}
> > +
> > +static struct sg_table *
> > +tee_heap_map_dma_buf(struct dma_buf_attachment *attachment,
> > +		     enum dma_data_direction direction)
> > +{
> > +	struct tee_heap_attachment *a = attachment->priv;
> > +	int ret;
> > +
> > +	ret = dma_map_sgtable(attachment->dev, &a->table, direction,
> > +			      DMA_ATTR_SKIP_CPU_SYNC);
> > +	if (ret)
> > +		return ERR_PTR(ret);
> > +
> > +	return &a->table;
> > +}
> > +
> > +static void tee_heap_unmap_dma_buf(struct dma_buf_attachment *attachment,
> > +				   struct sg_table *table,
> > +				   enum dma_data_direction direction)
> > +{
> > +	struct tee_heap_attachment *a = attachment->priv;
> > +
> > +	WARN_ON(&a->table != table);
> > +
> > +	dma_unmap_sgtable(attachment->dev, table, direction,
> > +			  DMA_ATTR_SKIP_CPU_SYNC);
> > +}
> > +
> > +static void tee_heap_buf_free(struct dma_buf *dmabuf)
> > +{
> > +	struct tee_heap_buffer *buf = dmabuf->priv;
> > +	struct tee_device *teedev = buf->teedev;
> > +
> > +	buf->pool->ops->free(buf->pool, &buf->table);
> > +	tee_device_put(teedev);
> > +}
> > +
> > +static const struct dma_buf_ops tee_heap_buf_ops = {
> > +	.attach = tee_heap_attach,
> > +	.detach = tee_heap_detach,
> > +	.map_dma_buf = tee_heap_map_dma_buf,
> > +	.unmap_dma_buf = tee_heap_unmap_dma_buf,
> > +	.release = tee_heap_buf_free,
> > +};
> > +
> > +static struct dma_buf *tee_dma_heap_alloc(struct dma_heap *heap,
> > +					  unsigned long len, u32 fd_flags,
> > +					  u64 heap_flags)
> > +{
> > +	struct tee_dma_heap *h = dma_heap_get_drvdata(heap);
> > +	DEFINE_DMA_BUF_EXPORT_INFO(exp_info);
> > +	struct tee_device *teedev = NULL;
> > +	struct tee_heap_buffer *buf;
> > +	struct tee_rstmem_pool *pool;
> > +	struct dma_buf *dmabuf;
> > +	int rc;
> > +
> > +	mutex_lock(&h->mu);
> > +	if (tee_device_get(h->teedev)) {
> > +		teedev = h->teedev;
> > +		pool = h->pool;
> > +	}
> > +	mutex_unlock(&h->mu);
> > +
> > +	if (!teedev)
> > +		return ERR_PTR(-EINVAL);
> > +
> > +	buf = kzalloc(sizeof(*buf), GFP_KERNEL);
> > +	if (!buf) {
> > +		dmabuf = ERR_PTR(-ENOMEM);
> > +		goto err;
> > +	}
> > +	buf->size = len;
> > +	buf->pool = pool;
> > +	buf->teedev = teedev;
> > +
> > +	rc = pool->ops->alloc(pool, &buf->table, len, &buf->offs);
> > +	if (rc) {
> > +		dmabuf = ERR_PTR(rc);
> > +		goto err_kfree;
> > +	}
> > +
> > +	exp_info.ops = &tee_heap_buf_ops;
> > +	exp_info.size = len;
> > +	exp_info.priv = buf;
> > +	exp_info.flags = fd_flags;
> > +	dmabuf = dma_buf_export(&exp_info);
> > +	if (IS_ERR(dmabuf))
> > +		goto err_rstmem_free;
> > +
> > +	return dmabuf;
> > +
> > +err_rstmem_free:
> > +	pool->ops->free(pool, &buf->table);
> > +err_kfree:
> > +	kfree(buf);
> > +err:
> > +	tee_device_put(h->teedev);
> > +	return dmabuf;
> > +}
> > +
> > +static const struct dma_heap_ops tee_dma_heap_ops = {
> > +	.allocate = tee_dma_heap_alloc,
> > +};
> > +
> > +static const char *heap_id_2_name(enum tee_dma_heap_id id)
> > +{
> > +	switch (id) {
> > +	case TEE_DMA_HEAP_SECURE_VIDEO_PLAY:
> > +		return "restricted,secure-video";
> > +	case TEE_DMA_HEAP_TRUSTED_UI:
> > +		return "restricted,trusted-ui";
> > +	case TEE_DMA_HEAP_SECURE_VIDEO_RECORD:
> > +		return "restricted,secure-video-record";
> > +	default:
> > +		return NULL;
> > +	}
> > +}
> > +
> > +static int alloc_dma_heap(struct tee_device *teedev, enum tee_dma_heap_id id,
> > +			  struct tee_rstmem_pool *pool)
> > +{
> > +	struct dma_heap_export_info exp_info = {
> > +		.ops = &tee_dma_heap_ops,
> > +		.name = heap_id_2_name(id),
> > +	};
> > +	struct tee_dma_heap *h;
> > +	int rc;
> > +
> > +	if (!exp_info.name)
> > +		return -EINVAL;
> > +
> > +	if (xa_reserve(&tee_dma_heap, id, GFP_KERNEL)) {
> > +		if (!xa_load(&tee_dma_heap, id))
> > +			return -EEXIST;
> > +		return -ENOMEM;
> > +	}
> > +
> > +	h = kzalloc(sizeof(*h), GFP_KERNEL);
> > +	if (!h)
> > +		return -ENOMEM;
> > +	h->id = id;
> > +	h->teedev = teedev;
> > +	h->pool = pool;
> > +	mutex_init(&h->mu);
> > +
> > +	exp_info.priv = h;
> > +	h->heap = dma_heap_add(&exp_info);
> > +	if (IS_ERR(h->heap)) {
> > +		rc = PTR_ERR(h->heap);
> > +		kfree(h);
> > +
> > +		return rc;
> > +	}
> > +
> > +	/* "can't fail" due to the call to xa_reserve() above */
> > +	return WARN(xa_store(&tee_dma_heap, id, h, GFP_KERNEL),
> > +		    "xa_store() failed");
> > +}
> > +
> > +int tee_device_register_dma_heap(struct tee_device *teedev,
> > +				 enum tee_dma_heap_id id,
> > +				 struct tee_rstmem_pool *pool)
> > +{
> > +	struct tee_dma_heap *h;
> > +	int rc;
> > +
> > +	h = xa_load(&tee_dma_heap, id);
> > +	if (h) {
> > +		mutex_lock(&h->mu);
> > +		if (h->teedev) {
> > +			rc = -EBUSY;
> > +		} else {
> > +			h->teedev = teedev;
> > +			h->pool = pool;
> > +			rc = 0;
> > +		}
> > +		mutex_unlock(&h->mu);
> > +	} else {
> > +		rc = alloc_dma_heap(teedev, id, pool);
> > +	}
> > +
> > +	if (rc)
> > +		dev_err(&teedev->dev, "can't register DMA heap id %d (%s)\n",
> > +			id, heap_id_2_name(id));
> > +
> > +	return rc;
> > +}
> > +
> > +void tee_device_unregister_all_dma_heaps(struct tee_device *teedev)
> > +{
> > +	struct tee_rstmem_pool *pool;
> > +	struct tee_dma_heap *h;
> > +	u_long i;
> > +
> > +	xa_for_each(&tee_dma_heap, i, h) {
> > +		if (h) {
> > +			pool = NULL;
> > +			mutex_lock(&h->mu);
> > +			if (h->teedev == teedev) {
> > +				pool = h->pool;
> > +				h->teedev = NULL;
> > +				h->pool = NULL;
> > +			}
> > +			mutex_unlock(&h->mu);
> > +			if (pool)
> > +				pool->ops->destroy_pool(pool);
> > +		}
> > +	}
> > +}
> > +EXPORT_SYMBOL_GPL(tee_device_unregister_all_dma_heaps);
> > +
> > +int tee_heap_update_from_dma_buf(struct tee_device *teedev,
> > +				 struct dma_buf *dmabuf, size_t *offset,
> > +				 struct tee_shm *shm,
> > +				 struct tee_shm **parent_shm)
> > +{
> > +	struct tee_heap_buffer *buf;
> > +	int rc;
> > +
> > +	/* The DMA-buf must be from our heap */
> > +	if (dmabuf->ops != &tee_heap_buf_ops)
> > +		return -EINVAL;
> > +
> > +	buf = dmabuf->priv;
> > +	/* The buffer must be from the same teedev */
> > +	if (buf->teedev != teedev)
> > +		return -EINVAL;
> > +
> > +	shm->size = buf->size;
> > +
> > +	rc = buf->pool->ops->update_shm(buf->pool, &buf->table, buf->offs, shm,
> > +					parent_shm);
> > +	if (!rc && *parent_shm)
> > +		*offset = buf->offs;
> > +
> > +	return rc;
> > +}
> > +#else
> > +int tee_device_register_dma_heap(struct tee_device *teedev __always_unused,
> > +				 enum tee_dma_heap_id id __always_unused,
> > +				 struct tee_rstmem_pool *pool __always_unused)
> > +{
> > +	return -EINVAL;
> > +}
> > +EXPORT_SYMBOL_GPL(tee_device_register_dma_heap);
> > +
> > +void
> > +tee_device_unregister_all_dma_heaps(struct tee_device *teedev __always_unused)
> > +{
> > +}
> > +EXPORT_SYMBOL_GPL(tee_device_unregister_all_dma_heaps);
> > +
> > +int tee_heap_update_from_dma_buf(struct tee_device *teedev __always_unused,
> > +				 struct dma_buf *dmabuf __always_unused,
> > +				 size_t *offset __always_unused,
> > +				 struct tee_shm *shm __always_unused,
> > +				 struct tee_shm **parent_shm __always_unused)
> > +{
> > +	return -EINVAL;
> > +}
> > +#endif
> > +
> > +static struct tee_rstmem_static_pool *
> > +to_rstmem_static_pool(struct tee_rstmem_pool *pool)
> > +{
> > +	return container_of(pool, struct tee_rstmem_static_pool, pool);
> > +}
> > +
> > +static int rstmem_pool_op_static_alloc(struct tee_rstmem_pool *pool,
> > +				       struct sg_table *sgt, size_t size,
> > +				       size_t *offs)
> > +{
> > +	struct tee_rstmem_static_pool *stp = to_rstmem_static_pool(pool);
> > +	phys_addr_t pa;
> > +	int ret;
> > +
> > +	pa = gen_pool_alloc(stp->gen_pool, size);
> > +	if (!pa)
> > +		return -ENOMEM;
> > +
> > +	ret = sg_alloc_table(sgt, 1, GFP_KERNEL);
> > +	if (ret) {
> > +		gen_pool_free(stp->gen_pool, pa, size);
> > +		return ret;
> > +	}
> > +
> > +	sg_set_page(sgt->sgl, phys_to_page(pa), size, 0);
> > +	*offs = pa - stp->pa_base;
> > +
> > +	return 0;
> > +}
> > +
> > +static void rstmem_pool_op_static_free(struct tee_rstmem_pool *pool,
> > +				       struct sg_table *sgt)
> > +{
> > +	struct tee_rstmem_static_pool *stp = to_rstmem_static_pool(pool);
> > +	struct scatterlist *sg;
> > +	int i;
> > +
> > +	for_each_sgtable_sg(sgt, sg, i)
> > +		gen_pool_free(stp->gen_pool, sg_phys(sg), sg->length);
> > +	sg_free_table(sgt);
> > +}
> > +
> > +static int rstmem_pool_op_static_update_shm(struct tee_rstmem_pool *pool,
> > +					    struct sg_table *sgt, size_t offs,
> > +					    struct tee_shm *shm,
> > +					    struct tee_shm **parent_shm)
> > +{
> > +	struct tee_rstmem_static_pool *stp = to_rstmem_static_pool(pool);
> > +
> > +	shm->paddr = stp->pa_base + offs;
> > +	*parent_shm = NULL;
> > +
> > +	return 0;
> > +}
> > +
> > +static void rstmem_pool_op_static_destroy_pool(struct tee_rstmem_pool *pool)
> > +{
> > +	struct tee_rstmem_static_pool *stp = to_rstmem_static_pool(pool);
> > +
> > +	gen_pool_destroy(stp->gen_pool);
> > +	kfree(stp);
> > +}
> > +
> > +static struct tee_rstmem_pool_ops rstmem_pool_ops_static = {
> > +	.alloc = rstmem_pool_op_static_alloc,
> > +	.free = rstmem_pool_op_static_free,
> > +	.update_shm = rstmem_pool_op_static_update_shm,
> > +	.destroy_pool = rstmem_pool_op_static_destroy_pool,
> > +};
> > +
> > +struct tee_rstmem_pool *tee_rstmem_static_pool_alloc(phys_addr_t paddr,
> > +						     size_t size)
> > +{
> > +	const size_t page_mask = PAGE_SIZE - 1;
> > +	struct tee_rstmem_static_pool *stp;
> > +	int rc;
> > +
> > +	/* Check it's page aligned */
> > +	if ((paddr | size) & page_mask)
> > +		return ERR_PTR(-EINVAL);
> > +
> > +	stp = kzalloc(sizeof(*stp), GFP_KERNEL);
> > +	if (!stp)
> > +		return ERR_PTR(-ENOMEM);
> > +
> > +	stp->gen_pool = gen_pool_create(PAGE_SHIFT, -1);
> > +	if (!stp->gen_pool) {
> > +		rc = -ENOMEM;
> > +		goto err_free;
> > +	}
> > +
> > +	rc = gen_pool_add(stp->gen_pool, paddr, size, -1);
> > +	if (rc)
> > +		goto err_free_pool;
> > +
> > +	stp->pool.ops = &rstmem_pool_ops_static;
> > +	stp->pa_base = paddr;
> > +	return &stp->pool;
> > +
> > +err_free_pool:
> > +	gen_pool_destroy(stp->gen_pool);
> > +err_free:
> > +	kfree(stp);
> > +
> > +	return ERR_PTR(rc);
> > +}
> > +EXPORT_SYMBOL_GPL(tee_rstmem_static_pool_alloc);
> > diff --git a/drivers/tee/tee_private.h b/drivers/tee/tee_private.h
> > index 9bc50605227c..6c6ff5d5eed2 100644
> > --- a/drivers/tee/tee_private.h
> > +++ b/drivers/tee/tee_private.h
> > @@ -8,6 +8,7 @@
> >  #include <linux/cdev.h>
> >  #include <linux/completion.h>
> >  #include <linux/device.h>
> > +#include <linux/dma-buf.h>
> >  #include <linux/kref.h>
> >  #include <linux/mutex.h>
> >  #include <linux/types.h>
> > @@ -24,4 +25,9 @@ struct tee_shm *tee_shm_alloc_user_buf(struct tee_context *ctx, size_t size);
> >  struct tee_shm *tee_shm_register_user_buf(struct tee_context *ctx,
> >  					  unsigned long addr, size_t length);
> >  
> > +int tee_heap_update_from_dma_buf(struct tee_device *teedev,
> > +				 struct dma_buf *dmabuf, size_t *offset,
> > +				 struct tee_shm *shm,
> > +				 struct tee_shm **parent_shm);
> > +
> >  #endif /*TEE_PRIVATE_H*/
> > diff --git a/include/linux/tee_core.h b/include/linux/tee_core.h
> > index a38494d6b5f4..16ef078247ae 100644
> > --- a/include/linux/tee_core.h
> > +++ b/include/linux/tee_core.h
> > @@ -8,9 +8,11 @@
> >  
> >  #include <linux/cdev.h>
> >  #include <linux/device.h>
> > +#include <linux/dma-buf.h>
> >  #include <linux/idr.h>
> >  #include <linux/kref.h>
> >  #include <linux/list.h>
> > +#include <linux/scatterlist.h>
> >  #include <linux/tee.h>
> >  #include <linux/tee_drv.h>
> >  #include <linux/types.h>
> > @@ -30,6 +32,12 @@
> >  #define TEE_DEVICE_FLAG_REGISTERED	0x1
> >  #define TEE_MAX_DEV_NAME_LEN		32
> >  
> > +enum tee_dma_heap_id {
> > +	TEE_DMA_HEAP_SECURE_VIDEO_PLAY = 1,
> > +	TEE_DMA_HEAP_TRUSTED_UI,
> > +	TEE_DMA_HEAP_SECURE_VIDEO_RECORD,
> > +};
> > +
> >  /**
> >   * struct tee_device - TEE Device representation
> >   * @name:	name of device
> > @@ -116,6 +124,33 @@ struct tee_desc {
> >  	u32 flags;
> >  };
> >  
> > +/**
> > + * struct tee_rstmem_pool - restricted memory pool
> > + * @ops:	operations
> > + *
> > + * This is an abstract interface where this struct is expected to be
> > + * embedded in another struct specific to the implementation.
> > + */
> > +struct tee_rstmem_pool {
> > +	const struct tee_rstmem_pool_ops *ops;
> > +};
> > +
> > +/**
> > + * struct tee_rstmem_pool_ops - restricted memory pool operations
> > + * @alloc:	called when allocating restricted memory
> > + * @free:	called when freeing restricted memory
> > + * @destroy_pool: called when destroying the pool
> > + */
> > +struct tee_rstmem_pool_ops {
> > +	int (*alloc)(struct tee_rstmem_pool *pool, struct sg_table *sgt,
> > +		     size_t size, size_t *offs);
> > +	void (*free)(struct tee_rstmem_pool *pool, struct sg_table *sgt);
> > +	int (*update_shm)(struct tee_rstmem_pool *pool, struct sg_table *sgt,
> > +			  size_t offs, struct tee_shm *shm,
> > +			  struct tee_shm **parent_shm);
> 
> This API isn't described in the kdoc comment above. Can you describe the
> role of this API and when it's needed?

Yes, I'll add something.

Cheers,
Jens

> 
> -Sumit
> 
> > +	void (*destroy_pool)(struct tee_rstmem_pool *pool);
> > +};
> > +
> >  /**
> >   * tee_device_alloc() - Allocate a new struct tee_device instance
> >   * @teedesc:	Descriptor for this driver
> > @@ -154,6 +189,11 @@ int tee_device_register(struct tee_device *teedev);
> >   */
> >  void tee_device_unregister(struct tee_device *teedev);
> >  
> > +int tee_device_register_dma_heap(struct tee_device *teedev,
> > +				 enum tee_dma_heap_id id,
> > +				 struct tee_rstmem_pool *pool);
> > +void tee_device_unregister_all_dma_heaps(struct tee_device *teedev);
> > +
> >  /**
> >   * tee_device_set_dev_groups() - Set device attribute groups
> >   * @teedev:	Device to register
> > @@ -229,6 +269,28 @@ static inline void tee_shm_pool_free(struct tee_shm_pool *pool)
> >  	pool->ops->destroy_pool(pool);
> >  }
> >  
> > +/**
> > + * tee_rstmem_static_pool_alloc() - Create a restricted memory manager
> > + * @paddr:	Physical address of start of pool
> > + * @size:	Size in bytes of the pool
> > + *
> > + * @returns pointer to a 'struct tee_shm_pool' or an ERR_PTR on failure.
> > + */
> > +struct tee_rstmem_pool *tee_rstmem_static_pool_alloc(phys_addr_t paddr,
> > +						     size_t size);
> > +
> > +/**
> > + * tee_rstmem_pool_free() - Free a restricted memory pool
> > + * @pool:	The restricted memory pool to free
> > + *
> > + * There must be no remaining restricted memory allocated from this pool
> > + * when this function is called.
> > + */
> > +static inline void tee_rstmem_pool_free(struct tee_rstmem_pool *pool)
> > +{
> > +	pool->ops->destroy_pool(pool);
> > +}
> > +
> >  /**
> >   * tee_get_drvdata() - Return driver_data pointer
> >   * @returns the driver_data pointer supplied to tee_register().
> > --
> > 2.43.0
> > 

^ permalink raw reply	[flat|nested] 9+ messages in thread
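[Archive note: the review above flags that the `update_shm` hook is missing from the kdoc comment on `struct tee_rstmem_pool_ops`, and Jens agrees to add one. A sketch of what such a comment might look like, inferred only from how the static pool implements the hook in this patch (it fills in `shm->paddr` and sets `*parent_shm` to NULL) — the wording is hypothetical, not taken from any later revision:]

```c
/**
 * struct tee_rstmem_pool_ops - restricted memory pool operations
 * @alloc:	called when allocating restricted memory
 * @free:	called when freeing restricted memory
 * @update_shm:	called when a dma-buf allocated from this pool is
 *		registered with the TEE subsystem; updates @shm with the
 *		physical address of the buffer and, for pools that carve
 *		buffers out of a larger parent allocation, supplies the
 *		parent tee_shm via @parent_shm (NULL for pools such as the
 *		static pool where the buffer is addressed directly)
 * @destroy_pool: called when destroying the pool
 */
```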
end of thread, other threads:[~2025-04-10 6:49 UTC | newest]
Thread overview: 9+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
[not found] < <CAHUa44FpsCVrbwj1=nsJVJFVJSF1kzKdWAkAMXRu6EdLrLvh8g@mail.gmail.com>
2025-04-08 9:14 ` [PATCH v6 05/10] tee: implement restricted DMA-heap Sumit Garg
2025-04-08 13:28 ` Jens Wiklander
[not found] < <CAHUa44EGWuVPjoxpG-S66he=6dkvkwzxNewaGKVKXUxrO41ztg@mail.gmail.com>
2025-04-09 12:50 ` Sumit Garg
2025-04-10 6:49 ` Jens Wiklander
[not found] < <CAHUa44GjpHT5Nqo+Ar5jNYNPV-YJQYpLTCf=7oJ1o0VjP-t0nA@mail.gmail.com>
2025-04-01 7:58 ` Sumit Garg
2025-04-01 8:33 ` Jens Wiklander
2025-03-05 13:04 [PATCH v6 00/10] TEE subsystem for restricted dma-buf allocations Jens Wiklander
2025-03-05 13:04 ` [PATCH v6 05/10] tee: implement restricted DMA-heap Jens Wiklander
2025-03-25 6:33 ` Sumit Garg
2025-03-25 10:55 ` Jens Wiklander