From: Jens Wiklander
To: linux-kernel@vger.kernel.org, linux-media@vger.kernel.org, dri-devel@lists.freedesktop.org, linaro-mm-sig@lists.linaro.org, op-tee@lists.trustedfirmware.org, linux-arm-kernel@lists.infradead.org
Cc: Olivier Masse, Thierry Reding, Yong Wu, Sumit Semwal, Benjamin Gaignard, Brian Starkey, John Stultz, T. J. Mercier, Christian König, Sumit Garg, Matthias Brugger, AngeloGioacchino Del Regno, azarrabi@qti.qualcomm.com, Simona Vetter, Daniel Stone, Rouven Czerwinski, robin.murphy@arm.com, Jens Wiklander
Subject: [PATCH v10 6/9] tee: add tee_shm_alloc_dma_mem()
Date: Tue, 10 Jun 2025 15:13:50 +0200
Message-ID: <20250610131600.2972232-7-jens.wiklander@linaro.org>
In-Reply-To: <20250610131600.2972232-1-jens.wiklander@linaro.org>
References: <20250610131600.2972232-1-jens.wiklander@linaro.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Add tee_shm_alloc_dma_mem() to allocate DMA memory.
The memory is represented by a tee_shm object using the new flag
TEE_SHM_DMA_MEM to identify it as DMA memory. The allocated memory will
later be lent to the TEE to be used as protected memory.

Signed-off-by: Jens Wiklander
---
 drivers/tee/tee_shm.c    | 85 +++++++++++++++++++++++++++++++++++++++-
 include/linux/tee_core.h |  5 +++
 2 files changed, 88 insertions(+), 2 deletions(-)

diff --git a/drivers/tee/tee_shm.c b/drivers/tee/tee_shm.c
index e63095e84644..60b0f3932cee 100644
--- a/drivers/tee/tee_shm.c
+++ b/drivers/tee/tee_shm.c
@@ -5,6 +5,8 @@
 #include
 #include
 #include
+#include
+#include
 #include
 #include
 #include
@@ -13,9 +15,14 @@
 #include
 #include
 #include
-#include
 #include "tee_private.h"
 
+struct tee_shm_dma_mem {
+	struct tee_shm shm;
+	dma_addr_t dma_addr;
+	struct page *page;
+};
+
 static void shm_put_kernel_pages(struct page **pages, size_t page_count)
 {
 	size_t n;
@@ -48,7 +55,16 @@ static void tee_shm_release(struct tee_device *teedev, struct tee_shm *shm)
 {
 	void *p = shm;
 
-	if (shm->flags & TEE_SHM_DMA_BUF) {
+	if (shm->flags & TEE_SHM_DMA_MEM) {
+#if IS_ENABLED(CONFIG_TEE_DMABUF_HEAPS)
+		struct tee_shm_dma_mem *dma_mem;
+
+		dma_mem = container_of(shm, struct tee_shm_dma_mem, shm);
+		p = dma_mem;
+		dma_free_pages(&teedev->dev, shm->size, dma_mem->page,
+			       dma_mem->dma_addr, DMA_BIDIRECTIONAL);
+#endif
+	} else if (shm->flags & TEE_SHM_DMA_BUF) {
 		struct tee_shm_dmabuf_ref *ref;
 
 		ref = container_of(shm, struct tee_shm_dmabuf_ref, shm);
@@ -303,6 +319,71 @@ struct tee_shm *tee_shm_alloc_priv_buf(struct tee_context *ctx, size_t size)
 }
 EXPORT_SYMBOL_GPL(tee_shm_alloc_priv_buf);
 
+#if IS_ENABLED(CONFIG_TEE_DMABUF_HEAPS)
+/**
+ * tee_shm_alloc_dma_mem() - Allocate DMA memory as shared memory object
+ * @ctx: Context that allocates the shared memory
+ * @page_count: Number of pages
+ *
+ * The allocated memory is expected to be lent (made inaccessible to the
+ * kernel) to the TEE while it's used and returned (accessible to the
+ * kernel again) before it's freed.
+ *
+ * This function should normally only be used internally in the TEE
+ * drivers.
+ *
+ * @returns a pointer to 'struct tee_shm'
+ */
+struct tee_shm *tee_shm_alloc_dma_mem(struct tee_context *ctx,
+				      size_t page_count)
+{
+	struct tee_device *teedev = ctx->teedev;
+	struct tee_shm_dma_mem *dma_mem;
+	dma_addr_t dma_addr;
+	struct page *page;
+
+	if (!tee_device_get(teedev))
+		return ERR_PTR(-EINVAL);
+
+	page = dma_alloc_pages(&teedev->dev, page_count * PAGE_SIZE,
+			       &dma_addr, DMA_BIDIRECTIONAL, GFP_KERNEL);
+	if (!page)
+		goto err_put_teedev;
+
+	dma_mem = kzalloc(sizeof(*dma_mem), GFP_KERNEL);
+	if (!dma_mem)
+		goto err_free_pages;
+
+	refcount_set(&dma_mem->shm.refcount, 1);
+	dma_mem->shm.ctx = ctx;
+	dma_mem->shm.paddr = page_to_phys(page);
+	dma_mem->dma_addr = dma_addr;
+	dma_mem->page = page;
+	dma_mem->shm.size = page_count * PAGE_SIZE;
+	dma_mem->shm.flags = TEE_SHM_DMA_MEM;
+
+	teedev_ctx_get(ctx);
+
+	return &dma_mem->shm;
+
+err_free_pages:
+	dma_free_pages(&teedev->dev, page_count * PAGE_SIZE, page, dma_addr,
+		       DMA_BIDIRECTIONAL);
+err_put_teedev:
+	tee_device_put(teedev);
+
+	return ERR_PTR(-ENOMEM);
+}
+EXPORT_SYMBOL_GPL(tee_shm_alloc_dma_mem);
+#else
+struct tee_shm *tee_shm_alloc_dma_mem(struct tee_context *ctx,
+				      size_t page_count)
+{
+	return ERR_PTR(-EINVAL);
+}
+EXPORT_SYMBOL_GPL(tee_shm_alloc_dma_mem);
+#endif
+
 int tee_dyn_shm_alloc_helper(struct tee_shm *shm, size_t size, size_t align,
 			     int (*shm_register)(struct tee_context *ctx,
 						 struct tee_shm *shm,
diff --git a/include/linux/tee_core.h b/include/linux/tee_core.h
index f17710196c4c..e46a53e753af 100644
--- a/include/linux/tee_core.h
+++ b/include/linux/tee_core.h
@@ -29,6 +29,8 @@
 #define TEE_SHM_POOL	BIT(2)	/* Memory allocated from pool */
 #define TEE_SHM_PRIV	BIT(3)	/* Memory private to TEE driver */
 #define TEE_SHM_DMA_BUF	BIT(4)	/* Memory with dma-buf handle */
+#define TEE_SHM_DMA_MEM	BIT(5)	/* Memory allocated with */
+				/* dma_alloc_pages() */
 
 #define TEE_DEVICE_FLAG_REGISTERED	0x1
 #define TEE_MAX_DEV_NAME_LEN		32
@@ -310,6 +312,9 @@ void *tee_get_drvdata(struct tee_device *teedev);
  */
 struct tee_shm *tee_shm_alloc_priv_buf(struct tee_context *ctx, size_t size);
 
+struct tee_shm *tee_shm_alloc_dma_mem(struct tee_context *ctx,
+				      size_t page_count);
+
 int tee_dyn_shm_alloc_helper(struct tee_shm *shm, size_t size, size_t align,
 			     int (*shm_register)(struct tee_context *ctx,
 						 struct tee_shm *shm,
-- 
2.43.0