From: Ashutosh Dixit
To: intel-xe@lists.freedesktop.org
Subject: [PATCH 13/17] drm/xe/oa/uapi: OA buffer mmap
Date: Fri, 7 Jun 2024 13:43:18 -0700
Message-ID: <20240607204322.1966831-14-ashutosh.dixit@intel.com>
In-Reply-To: <20240607204322.1966831-1-ashutosh.dixit@intel.com>
References: <20240607204322.1966831-1-ashutosh.dixit@intel.com>

Allow the OA buffer to be mmap'd to userspace. This is needed for the
MMIO trigger use case. Even otherwise, with whitelisted OA head/tail ptr
registers, userspace can receive/interpret OA data from the mmap'd
buffer without issuing read()s on the OA stream fd.
v2: Remove unmap_mapping_range from xe_oa_release (Thomas H)
    Use vm_flags_mod (Umesh)

Suggested-by: Umesh Nerlige Ramappa
Reviewed-by: Umesh Nerlige Ramappa
Signed-off-by: Ashutosh Dixit
---
 drivers/gpu/drm/xe/xe_oa.c | 46 ++++++++++++++++++++++++++++++++++++++
 1 file changed, 46 insertions(+)

diff --git a/drivers/gpu/drm/xe/xe_oa.c b/drivers/gpu/drm/xe/xe_oa.c
index 690abe4f934f..41d59e9959e5 100644
--- a/drivers/gpu/drm/xe/xe_oa.c
+++ b/drivers/gpu/drm/xe/xe_oa.c
@@ -822,6 +822,8 @@ static int xe_oa_alloc_oa_buffer(struct xe_oa_stream *stream)
 		return PTR_ERR(bo);
 
 	stream->oa_buffer.bo = bo;
+	/* mmap implementation requires OA buffer to be in system memory */
+	xe_assert(stream->oa->xe, bo->vmap.is_iomem == 0);
 	stream->oa_buffer.vaddr = bo->vmap.vaddr;
 	return 0;
 }
@@ -1123,6 +1125,49 @@ static int xe_oa_release(struct inode *inode, struct file *file)
 	return 0;
 }
 
+static int xe_oa_mmap(struct file *file, struct vm_area_struct *vma)
+{
+	struct xe_oa_stream *stream = file->private_data;
+	struct xe_bo *bo = stream->oa_buffer.bo;
+	unsigned long start = vma->vm_start;
+	int i, ret;
+
+	if (xe_perf_stream_paranoid && !perfmon_capable()) {
+		drm_dbg(&stream->oa->xe->drm, "Insufficient privilege to map OA buffer\n");
+		return -EACCES;
+	}
+
+	/* Can mmap the entire OA buffer or nothing (no partial OA buffer mmaps) */
+	if (vma->vm_end - vma->vm_start != XE_OA_BUFFER_SIZE) {
+		drm_dbg(&stream->oa->xe->drm, "Wrong mmap size, must be OA buffer size\n");
+		return -EINVAL;
+	}
+
+	/*
+	 * Only support VM_READ, enforce MAP_PRIVATE by checking for
+	 * VM_MAYSHARE, don't copy the vma on fork
+	 */
+	if (vma->vm_flags & (VM_WRITE | VM_EXEC | VM_SHARED | VM_MAYSHARE)) {
+		drm_dbg(&stream->oa->xe->drm, "mmap must be read only\n");
+		return -EINVAL;
+	}
+	vm_flags_mod(vma, VM_PFNMAP | VM_DONTEXPAND | VM_DONTDUMP | VM_DONTCOPY,
+		     VM_MAYWRITE | VM_MAYEXEC);
+
+	xe_assert(stream->oa->xe, bo->ttm.ttm->num_pages ==
+		  (vma->vm_end - vma->vm_start) >> PAGE_SHIFT);
+	for (i = 0; i < bo->ttm.ttm->num_pages; i++) {
+		ret = remap_pfn_range(vma, start, page_to_pfn(bo->ttm.ttm->pages[i]),
+				      PAGE_SIZE, vma->vm_page_prot);
+		if (ret)
+			break;
+
+		start += PAGE_SIZE;
+	}
+
+	return ret;
+}
+
 static const struct file_operations xe_oa_fops = {
 	.owner		= THIS_MODULE,
 	.llseek		= no_llseek,
@@ -1130,6 +1175,7 @@ static const struct file_operations xe_oa_fops = {
 	.poll		= xe_oa_poll,
 	.read		= xe_oa_read,
 	.unlocked_ioctl	= xe_oa_ioctl,
+	.mmap		= xe_oa_mmap,
 };
 
 static bool engine_supports_mi_query(struct xe_hw_engine *hwe)
-- 
2.41.0