From mboxrd@z Thu Jan 1 00:00:00 1970
From: Bommu Krishnaiah 
To: 
intel-xe@lists.freedesktop.org
Subject: [PATCH v5 1/2] drm/xe/uapi: add exec_queue_id member to drm_xe_wait_user_fence structure
Date: Fri, 8 Dec 2023 09:47:15 +0530
Message-Id: <20231208041716.32369-2-krishnaiah.bommu@intel.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20231208041716.32369-1-krishnaiah.bommu@intel.com>
References: <20231208041716.32369-1-krishnaiah.bommu@intel.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
List-Id: Intel Xe graphics driver
Cc: Francois Dugast , Rodrigo Vivi 

Remove the num_engines/instances members from the drm_xe_wait_user_fence
structure and add an exec_queue_id member.

Right now this IOCTL only checks that the engine list is sane and nothing
else. In the end, every operation with this IOCTL is a soft check. So
let's formalize that and only use this IOCTL to wait on the fence.

The exec_queue_id member will help user space get a proper error code
from the kernel when an exec_queue reset occurs.

Signed-off-by: Bommu Krishnaiah 
Signed-off-by: Rodrigo Vivi 
Signed-off-by: Francois Dugast 
Acked-by: Matthew Brost 
Reviewed-by: Francois Dugast 
---
 drivers/gpu/drm/xe/xe_wait_user_fence.c | 65 +------------------------
 include/uapi/drm/xe_drm.h               | 17 ++-----
 2 files changed, 7 insertions(+), 75 deletions(-)

diff --git a/drivers/gpu/drm/xe/xe_wait_user_fence.c b/drivers/gpu/drm/xe/xe_wait_user_fence.c
index 4d5c2555ce41..59af65b6ed89 100644
--- a/drivers/gpu/drm/xe/xe_wait_user_fence.c
+++ b/drivers/gpu/drm/xe/xe_wait_user_fence.c
@@ -50,37 +50,7 @@ static int do_compare(u64 addr, u64 value, u64 mask, u16 op)
 	return passed ? 0 : 1;
 }
 
-static const enum xe_engine_class user_to_xe_engine_class[] = {
-	[DRM_XE_ENGINE_CLASS_RENDER] = XE_ENGINE_CLASS_RENDER,
-	[DRM_XE_ENGINE_CLASS_COPY] = XE_ENGINE_CLASS_COPY,
-	[DRM_XE_ENGINE_CLASS_VIDEO_DECODE] = XE_ENGINE_CLASS_VIDEO_DECODE,
-	[DRM_XE_ENGINE_CLASS_VIDEO_ENHANCE] = XE_ENGINE_CLASS_VIDEO_ENHANCE,
-	[DRM_XE_ENGINE_CLASS_COMPUTE] = XE_ENGINE_CLASS_COMPUTE,
-};
-
-static int check_hw_engines(struct xe_device *xe,
-			    struct drm_xe_engine_class_instance *eci,
-			    int num_engines)
-{
-	int i;
-
-	for (i = 0; i < num_engines; ++i) {
-		enum xe_engine_class user_class =
-			user_to_xe_engine_class[eci[i].engine_class];
-
-		if (eci[i].gt_id >= xe->info.tile_count)
-			return -EINVAL;
-
-		if (!xe_gt_hw_engine(xe_device_get_gt(xe, eci[i].gt_id),
-				     user_class, eci[i].engine_instance, true))
-			return -EINVAL;
-	}
-
-	return 0;
-}
-
-#define VALID_FLAGS	(DRM_XE_UFENCE_WAIT_FLAG_SOFT_OP | \
-			 DRM_XE_UFENCE_WAIT_FLAG_ABSTIME)
+#define VALID_FLAGS	DRM_XE_UFENCE_WAIT_FLAG_ABSTIME
 #define MAX_OP		DRM_XE_UFENCE_WAIT_OP_LTE
 
 static long to_jiffies_timeout(struct xe_device *xe,
@@ -132,16 +102,13 @@ int xe_wait_user_fence_ioctl(struct drm_device *dev, void *data,
 	struct xe_device *xe = to_xe_device(dev);
 	DEFINE_WAIT_FUNC(w_wait, woken_wake_function);
 	struct drm_xe_wait_user_fence *args = data;
-	struct drm_xe_engine_class_instance eci[XE_HW_ENGINE_MAX_INSTANCE];
-	struct drm_xe_engine_class_instance __user *user_eci =
-		u64_to_user_ptr(args->instances);
 	u64 addr = args->addr;
 	int err;
-	bool no_engines = args->flags & DRM_XE_UFENCE_WAIT_FLAG_SOFT_OP;
 	long timeout;
 	ktime_t start;
 
 	if (XE_IOCTL_DBG(xe, args->extensions) || XE_IOCTL_DBG(xe, args->pad) ||
+	    XE_IOCTL_DBG(xe, args->pad2) ||
 	    XE_IOCTL_DBG(xe, args->reserved[0] || args->reserved[1]))
 		return -EINVAL;
 
@@ -151,41 +118,13 @@ int xe_wait_user_fence_ioctl(struct drm_device *dev, void *data,
 	if (XE_IOCTL_DBG(xe, args->op > MAX_OP))
 		return -EINVAL;
 
-	if (XE_IOCTL_DBG(xe, no_engines &&
-			 (args->num_engines || args->instances)))
-		return -EINVAL;
-
-	if (XE_IOCTL_DBG(xe, !no_engines && !args->num_engines))
-		return -EINVAL;
-
 	if (XE_IOCTL_DBG(xe, addr & 0x7))
 		return -EINVAL;
 
-	if (XE_IOCTL_DBG(xe, args->num_engines > XE_HW_ENGINE_MAX_INSTANCE))
-		return -EINVAL;
-
-	if (!no_engines) {
-		err = copy_from_user(eci, user_eci,
-				     sizeof(struct drm_xe_engine_class_instance) *
-				     args->num_engines);
-		if (XE_IOCTL_DBG(xe, err))
-			return -EFAULT;
-
-		if (XE_IOCTL_DBG(xe, check_hw_engines(xe, eci,
-						      args->num_engines)))
-			return -EINVAL;
-	}
-
 	timeout = to_jiffies_timeout(xe, args);
 
 	start = ktime_get();
 
-	/*
-	 * FIXME: Very simple implementation at the moment, single wait queue
-	 * for everything. Could be optimized to have a wait queue for every
-	 * hardware engine. Open coding as 'do_compare' can sleep which doesn't
-	 * work with the wait_event_* macros.
-	 */
 	add_wait_queue(&xe->ufence_wq, &w_wait);
 	for (;;) {
 		err = do_compare(addr, args->value, args->mask, args->op);
diff --git a/include/uapi/drm/xe_drm.h b/include/uapi/drm/xe_drm.h
index 0895e4d2a981..5a8e3b326347 100644
--- a/include/uapi/drm/xe_drm.h
+++ b/include/uapi/drm/xe_drm.h
@@ -1031,8 +1031,7 @@ struct drm_xe_wait_user_fence {
 	/** @op: wait operation (type of comparison) */
 	__u16 op;
 
-#define DRM_XE_UFENCE_WAIT_FLAG_SOFT_OP	(1 << 0)	/* e.g. Wait on VM bind */
-#define DRM_XE_UFENCE_WAIT_FLAG_ABSTIME	(1 << 1)
+#define DRM_XE_UFENCE_WAIT_FLAG_ABSTIME	(1 << 0)
 	/** @flags: wait flags */
 	__u16 flags;
 
@@ -1065,17 +1064,11 @@ struct drm_xe_wait_user_fence {
 	 */
 	__s64 timeout;
 
-	/**
-	 * @num_engines: number of engine instances to wait on, must be zero
-	 * when DRM_XE_UFENCE_WAIT_FLAG_SOFT_OP set
-	 */
-	__u64 num_engines;
+	/** @exec_queue_id: exec_queue_id returned from xe_exec_queue_create_ioctl */
+	__u32 exec_queue_id;
 
-	/**
-	 * @instances: user pointer to array of drm_xe_engine_class_instance to
-	 * wait on, must be NULL when DRM_XE_UFENCE_WAIT_FLAG_SOFT_OP set
-	 */
-	__u64 instances;
+	/** @pad2: MBZ */
+	__u32 pad2;
 
 	/** @reserved: Reserved */
 	__u64 reserved[2];
-- 
2.25.1