From: Matthew Brost
Date: Wed, 6 Dec 2023 14:21:36 -0800
Message-Id: <20231206222141.398040-2-matthew.brost@intel.com>
In-Reply-To: <20231206222141.398040-1-matthew.brost@intel.com>
References: <20231206222141.398040-1-matthew.brost@intel.com>
Subject: [Intel-xe] [PATCH v4 1/6] drm/xe: Use a flags field instead of bools for VMA create
List-Id: Intel Xe graphics driver

Use a flags field instead of several bools for VMA create, as it is
easier to read and less bug prone.

Suggested-by: Thomas Hellström
Signed-off-by: Matthew Brost
Reviewed-by: Thomas Hellström
---
 drivers/gpu/drm/xe/xe_vm.c | 64 ++++++++++++++++++++------------------
 1 file changed, 34 insertions(+), 30 deletions(-)

diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
index e09050f16f07..44b2972d5d5f 100644
--- a/drivers/gpu/drm/xe/xe_vm.c
+++ b/drivers/gpu/drm/xe/xe_vm.c
@@ -860,17 +860,20 @@ struct dma_fence *xe_vm_rebind(struct xe_vm *vm, bool rebind_worker)
 	return fence;
 }
 
+#define VMA_CREATE_FLAG_READ_ONLY	BIT(0)
+#define VMA_CREATE_FLAG_IS_NULL		BIT(1)
+
 static struct xe_vma *xe_vma_create(struct xe_vm *vm,
 				    struct xe_bo *bo,
 				    u64 bo_offset_or_userptr,
 				    u64 start, u64 end,
-				    bool read_only,
-				    bool is_null,
-				    u16 pat_index)
+				    u16 pat_index, unsigned int flags)
 {
 	struct xe_vma *vma;
 	struct xe_tile *tile;
 	u8 id;
+	bool read_only = (flags & VMA_CREATE_FLAG_READ_ONLY);
+	bool is_null = (flags & VMA_CREATE_FLAG_IS_NULL);
 
 	xe_assert(vm->xe, start < end);
 	xe_assert(vm->xe, end < vm->size);
@@ -2242,7 +2245,7 @@ vm_bind_ioctl_ops_create(struct xe_vm *vm, struct xe_bo *bo,
 }
 
 static struct xe_vma *new_vma(struct xe_vm *vm, struct drm_gpuva_op_map *op,
-			      bool read_only, bool is_null, u16 pat_index)
+			      u16 pat_index, unsigned int flags)
 {
 	struct xe_bo *bo = op->gem.obj ? gem_to_xe_bo(op->gem.obj) : NULL;
 	struct xe_vma *vma;
@@ -2257,8 +2260,7 @@ static struct xe_vma *new_vma(struct xe_vm *vm, struct drm_gpuva_op_map *op,
 	}
 
 	vma = xe_vma_create(vm, bo, op->gem.offset, op->va.addr, op->va.addr +
-			    op->va.range - 1, read_only, is_null,
-			    pat_index);
+			    op->va.range - 1, pat_index, flags);
 	if (bo)
 		xe_bo_unlock(bo);
 
@@ -2384,7 +2386,9 @@ static int vm_bind_ioctl_ops_parse(struct xe_vm *vm, struct xe_exec_queue *q,
 
 	drm_gpuva_for_each_op(__op, ops) {
 		struct xe_vma_op *op = gpuva_op_to_vma_op(__op);
+		struct xe_vma *vma;
 		bool first = list_empty(ops_list);
+		unsigned int flags = 0;
 
 		INIT_LIST_HEAD(&op->link);
 		list_add_tail(&op->link, ops_list);
@@ -2400,10 +2404,13 @@ static int vm_bind_ioctl_ops_parse(struct xe_vm *vm, struct xe_exec_queue *q,
 		switch (op->base.op) {
 		case DRM_GPUVA_OP_MAP:
 		{
-			struct xe_vma *vma;
+			flags |= op->map.read_only ?
+				VMA_CREATE_FLAG_READ_ONLY : 0;
+			flags |= op->map.is_null ?
+				VMA_CREATE_FLAG_IS_NULL : 0;
 
-			vma = new_vma(vm, &op->base.map, op->map.read_only,
-				      op->map.is_null, op->map.pat_index);
+			vma = new_vma(vm, &op->base.map, op->map.pat_index,
+				      flags);
 			if (IS_ERR(vma))
 				return PTR_ERR(vma);
 
@@ -2419,16 +2426,15 @@ static int vm_bind_ioctl_ops_parse(struct xe_vm *vm, struct xe_exec_queue *q,
 			op->remap.range = xe_vma_size(old);
 
 			if (op->base.remap.prev) {
-				struct xe_vma *vma;
-				bool read_only =
-					op->base.remap.unmap->va->flags &
-					XE_VMA_READ_ONLY;
-				bool is_null =
-					op->base.remap.unmap->va->flags &
-					DRM_GPUVA_SPARSE;
-
-				vma = new_vma(vm, op->base.remap.prev, read_only,
-					      is_null, old->pat_index);
+				flags |= op->base.remap.unmap->va->flags &
+					XE_VMA_READ_ONLY ?
+					VMA_CREATE_FLAG_READ_ONLY : 0;
+				flags |= op->base.remap.unmap->va->flags &
+					DRM_GPUVA_SPARSE ?
+					VMA_CREATE_FLAG_IS_NULL : 0;
+
+				vma = new_vma(vm, op->base.remap.prev,
+					      old->pat_index, flags);
 				if (IS_ERR(vma))
 					return PTR_ERR(vma);
 
@@ -2451,17 +2457,15 @@ static int vm_bind_ioctl_ops_parse(struct xe_vm *vm, struct xe_exec_queue *q,
 			}
 
 			if (op->base.remap.next) {
-				struct xe_vma *vma;
-				bool read_only =
-					op->base.remap.unmap->va->flags &
-					XE_VMA_READ_ONLY;
-
-				bool is_null =
-					op->base.remap.unmap->va->flags &
-					DRM_GPUVA_SPARSE;
-
-				vma = new_vma(vm, op->base.remap.next, read_only,
-					      is_null, old->pat_index);
+				flags |= op->base.remap.unmap->va->flags &
+					XE_VMA_READ_ONLY ?
+					VMA_CREATE_FLAG_READ_ONLY : 0;
+				flags |= op->base.remap.unmap->va->flags &
+					DRM_GPUVA_SPARSE ?
+					VMA_CREATE_FLAG_IS_NULL : 0;
+
+				vma = new_vma(vm, op->base.remap.next,
+					      old->pat_index, flags);
 				if (IS_ERR(vma))
 					return PTR_ERR(vma);
-- 
2.34.1