From: Nicolas Dufresne <nicolas@ndufresne.ca>
To: "Sumit Garg" <sumit.garg@linaro.org>,
"Christian König" <christian.koenig@amd.com>,
"Dmitry Baryshkov" <dmitry.baryshkov@linaro.org>,
"Christian König" <ckoenig.leichtzumerken@gmail.com>
Cc: Andrew Davis <afd@ti.com>,
Jens Wiklander <jens.wiklander@linaro.org>,
linux-kernel@vger.kernel.org, devicetree@vger.kernel.org,
linux-media@vger.kernel.org, dri-devel@lists.freedesktop.org,
linaro-mm-sig@lists.linaro.org,
op-tee@lists.trustedfirmware.org,
linux-arm-kernel@lists.infradead.org,
linux-mediatek@lists.infradead.org,
Olivier Masse <olivier.masse@nxp.com>,
Thierry Reding <thierry.reding@gmail.com>,
Yong Wu <yong.wu@mediatek.com>,
Sumit Semwal <sumit.semwal@linaro.org>,
Benjamin Gaignard <benjamin.gaignard@collabora.com>,
Brian Starkey <Brian.Starkey@arm.com>,
John Stultz <jstultz@google.com>,
"T . J . Mercier" <tjmercier@google.com>,
Matthias Brugger <matthias.bgg@gmail.com>,
AngeloGioacchino Del Regno
<angelogioacchino.delregno@collabora.com>,
Rob Herring <robh@kernel.org>,
Krzysztof Kozlowski <krzk+dt@kernel.org>,
Conor Dooley <conor+dt@kernel.org>
Subject: Re: [Linaro-mm-sig] Re: [RFC PATCH 0/4] Linaro restricted heap
Date: Fri, 27 Sep 2024 15:50:05 -0400 [thread overview]
Message-ID: <7c9e3a1a6092f6574c17d7206767ece0bcefc81f.camel@ndufresne.ca> (raw)
In-Reply-To: <CAFA6WYMd46quafJoGXjkCiPOKpYoDZdXwrNbG3QekyjB3_2FTA@mail.gmail.com>
On Thursday, 26 September 2024 at 19:22 +0530, Sumit Garg wrote:
> [Resend in plain text format as my earlier message was rejected by
> some mailing lists]
>
> On Thu, 26 Sept 2024 at 19:17, Sumit Garg <sumit.garg@linaro.org> wrote:
> >
> > On 9/25/24 19:31, Christian König wrote:
> >
> > Am 25.09.24 um 14:51 schrieb Dmitry Baryshkov:
> >
> > On Wed, Sep 25, 2024 at 10:51:15AM GMT, Christian König wrote:
> >
> > Am 25.09.24 um 01:05 schrieb Dmitry Baryshkov:
> >
> > On Tue, Sep 24, 2024 at 01:13:18PM GMT, Andrew Davis wrote:
> >
> > On 9/23/24 1:33 AM, Dmitry Baryshkov wrote:
> >
> > Hi,
> >
> > On Fri, Aug 30, 2024 at 09:03:47AM GMT, Jens Wiklander wrote:
> >
> > Hi,
> >
> > This patch set is based on top of Yong Wu's restricted heap patch set [1].
> > It's also a continuation of Olivier's Add dma-buf secure-heap patch set [2].
> >
> > The Linaro restricted heap uses genalloc in the kernel to manage the heap
> > carve-out. This is a difference from the Mediatek restricted heap, which
> > relies on the secure world to manage the carve-out.
> >
> > I've tried to address the comments on [2], but [1] introduces changes, so
> > I'm afraid I've had to skip some comments.
> >
> > I know I have raised the same question during LPC (in connection to
> > Qualcomm's dma-heap implementation). Is there any reason why we are
> > using generic heaps instead of allocating the dma-bufs on the device
> > side?
> >
> > In your case you already have TEE device, you can use it to allocate and
> > export dma-bufs, which then get imported by the V4L and DRM drivers.
> >
> > This goes to the heart of why we have dma-heaps in the first place.
> > We don't want to burden userspace with having to figure out the right
> > place to get a dma-buf for a given use-case on a given hardware.
> > That would be very non-portable, and fail at the core purpose of
> > a kernel: to abstract hardware specifics away.
> >
> > Unfortunately all proposals to use dma-buf heaps have been moving in the
> > described direction: let the app select (somehow) from a platform- and
> > vendor-specific list of dma-buf heaps. In the kernel we at least know
> > the platform on which the system is running. Userspace generally doesn't
> > (and shouldn't). As such, it seems better to me to keep the knowledge in
> > the kernel and allow userspace to do its job by calling into existing
> > device drivers.
> >
> > The idea of letting the kernel fully abstract away the complexity of
> > inter-device data exchange is a completely failed design. There has been
> > plenty of evidence for that over the years.
> >
> > Because of this in DMA-buf it's an intentional design decision that
> > userspace and *not* the kernel decides where and what to allocate from.
> >
> > Hmm, ok.
> >
> > What the kernel should provide is the necessary information about what
> > type of memory a device can work with and whether certain memory is
> > accessible or not. This is the part which is unfortunately still neither
> > well defined nor implemented at the moment.
> >
> > Apart from that there are a whole bunch of intentional design decisions
> > which should prevent developers from moving allocation decisions into the
> > kernel. For example DMA-buf doesn't know what the content of the buffer is
> > (except for its total size) and which use cases a buffer will be used with.
> >
> > So the question whether memory should be exposed through DMA-heaps or a
> > driver-specific allocator is not a question of abstraction, but rather one
> > of the physical location and accessibility of the memory.
> >
> > If the memory is attached to any physical device, e.g. local memory on a
> > dGPU, FPGA PCIe BAR, RDMA, camera internal memory, etc., then expose the
> > memory through a device-specific allocator.
> >
> > So, for embedded systems with unified memory all buffers (maybe except
> > PCIe BARs) should come from DMA-BUF heaps, correct?
> >
> >
> > From what I know that is correct, yes. The question is really whether it will stay this way.
> >
> > Neural accelerators look a lot like stripped-down FPGAs these days, and the benefit of local memory for GPUs has been known for decades.
> >
> > Could be that designs with local specialized memory see a revival any time, who knows.
> >
> > If the memory is not physically attached to any device, but rather is just
> > memory attached to the CPU or a system-wide memory controller, then expose
> > the memory as a DMA-heap with specific requirements (e.g. certain page
> > sizes, contiguous, restricted, encrypted, ...).
> >
> > Is encrypted / protected a part of the allocation contract or should it
> > be enforced separately via a call to TEE / SCM / anything else?
> >
> >
> > Well, that is a really good question that I can't fully answer either. From what I know now, I would say it depends on the design.
> >
>
> IMHO, Dmitry's proposal to rather allow the TEE device to be the
> allocator and exporter of DMA-bufs related to restricted memory makes
> sense, since it's really the TEE implementation (OP-TEE, AMD-TEE,
> TS-TEE or a future QTEE) which sets up the restrictions on a
> particular piece of allocated memory. AFAIK, that happens after the
> DMA-buf gets allocated, when user-space calls into the TEE to set up
> which media pipeline is going to access that particular DMA-buf. It
> can also be a static contract depending on a particular platform
> design.
When the memory gets its protection is hardware specific. Otherwise the design
would be really straightforward: allocate from a heap or any random driver API
and protect that memory through a call into the TEE. A clear separation would
be amazingly better, but this is not how hardware and firmware designers have
seen it.
In some implementations, there is a memory carve-out that is protected before
the kernel is booted. I believe (but I'm not affiliated with them) that MTK has
hardware restrictions making that design the only usable method.
In general, the handling of secure memory is bound to the TEE application for
the specific platform; it has to be separated from the generic part of the TEE
drivers anyway, and dma-buf heaps are in my opinion the right API for the task.
On MTK, if you have followed, when the SCP (their co-processor) is handling
restricted video, you can't even call into it directly anymore. So to drive the
codecs, everything has to be routed through the TEE. Would you say that,
because of that, this should no longer be a V4L2 driver?
>
> As Jens noted in the other thread, we already manage shared memory
> allocations (from a static carve-out or dynamically mapped) for
> communication between Linux and the TEE. Those were based on DMA-bufs
> earlier, but since we didn't require them to be shared with other
> devices, we switched to anonymous memory.
>
> From a user-space perspective, it's cleaner to use TEE device IOCTLs
> for DMA-buf allocations since user-space already knows which underlying
> TEE implementation it's communicating with, rather than first figuring
> out which DMA heap to use for allocation and then communicating with
> the TEE implementation.
As someone who spends the majority of my time as a user-space developer, I
find that adding common code to handle dma-buf heaps is a lot easier and more
straightforward than having to glue together all the different allocators
implemented in various subsystems. Communicating which heap to work with can
be generic and simple.
Nicolas