Intel-XE Archive on lore.kernel.org
From: Matthew Brost <matthew.brost@intel.com>
To: Sanjay Yadav <sanjay.kumar.yadav@intel.com>
Cc: <intel-xe@lists.freedesktop.org>, <matthew.auld@intel.com>
Subject: Re: [PATCH] drm/xe: Fix spelling and typos across XE driver files
Date: Tue, 21 Oct 2025 14:48:03 -0700	[thread overview]
Message-ID: <aPf/k227kIoDDgFM@lstrano-desk.jf.intel.com> (raw)
In-Reply-To: <20251021132054.840023-2-sanjay.kumar.yadav@intel.com>

On Tue, Oct 21, 2025 at 06:50:55PM +0530, Sanjay Yadav wrote:
> Corrected various spelling mistakes and typos in multiple
> files under the XE directory. These fixes improve clarity
> and maintain consistency in documentation.

s/XE/Xe

This is how we refer to our driver. We likely have multiple instances of
'XE' in the documentation too; perhaps clean those up as well.
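A word-boundary grep should catch the standalone 'XE' uses without flagging
identifiers like XE_BO_FLAG_*. The snippet below is just a sketch of the
match on two sample comment lines; in the tree you would run the same
pattern as e.g. `git grep -nw 'XE' -- drivers/gpu/drm/xe/`:

```shell
# -w matches whole words only; '_' is a word-constituent character, so
# 'XE_BO_FLAG_PINNED' does not match while a bare 'XE' does.
printf '%s\n' ' * typos across XE driver files' ' * XE_BO_FLAG_PINNED stays as-is' \
  | grep -cw 'XE'
# -> 1
```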

Matt

> 
> Signed-off-by: Sanjay Yadav <sanjay.kumar.yadav@intel.com>
> ---
>  drivers/gpu/drm/xe/xe_bo.c            | 4 ++--
>  drivers/gpu/drm/xe/xe_device.c        | 2 +-
>  drivers/gpu/drm/xe/xe_gt_freq.c       | 2 +-
>  drivers/gpu/drm/xe/xe_gt_sriov_vf.c   | 2 +-
>  drivers/gpu/drm/xe/xe_guc_tlb_inval.c | 2 +-
>  drivers/gpu/drm/xe/xe_migrate.c       | 4 ++--
>  drivers/gpu/drm/xe/xe_pm.c            | 2 +-
>  drivers/gpu/drm/xe/xe_svm.c           | 2 +-
>  drivers/gpu/drm/xe/xe_tlb_inval.h     | 2 +-
>  drivers/gpu/drm/xe/xe_validation.h    | 6 +++---
>  drivers/gpu/drm/xe/xe_vm.c            | 8 ++++----
>  drivers/gpu/drm/xe/xe_vm_types.h      | 2 +-
>  12 files changed, 19 insertions(+), 19 deletions(-)
> 
> diff --git a/drivers/gpu/drm/xe/xe_bo.c b/drivers/gpu/drm/xe/xe_bo.c
> index 7b6502081873..c899a895492e 100644
> --- a/drivers/gpu/drm/xe/xe_bo.c
> +++ b/drivers/gpu/drm/xe/xe_bo.c
> @@ -2105,7 +2105,7 @@ void xe_bo_free(struct xe_bo *bo)
>   * if the function should allocate a new one.
>   * @tile: The tile to select for migration of this bo, and the tile used for
>   * GGTT binding if any. Only to be non-NULL for ttm_bo_type_kernel bos.
> - * @resv: Pointer to a locked shared reservation object to use fo this bo,
> + * @resv: Pointer to a locked shared reservation object to use for this bo,
>   * or NULL for the xe_bo to use its own.
>   * @bulk: The bulk move to use for LRU bumping, or NULL for external bos.
>   * @size: The storage size to use for the bo.
> @@ -2629,7 +2629,7 @@ struct xe_bo *xe_bo_create_pin_map(struct xe_device *xe, struct xe_tile *tile,
>   * @size: The storage size to use for the bo.
>   * @type: The TTM buffer object type.
>   * @flags: XE_BO_FLAG_ flags.
> - * @intr: Whether to execut any waits for backing store interruptible.
> + * @intr: Whether to execute any waits for backing store interruptible.
>   *
>   * Create a pinned and mapped bo. The bo will be external and not associated
>   * with a VM.
> diff --git a/drivers/gpu/drm/xe/xe_device.c b/drivers/gpu/drm/xe/xe_device.c
> index 5f6a412b571c..47f5391ad8e9 100644
> --- a/drivers/gpu/drm/xe/xe_device.c
> +++ b/drivers/gpu/drm/xe/xe_device.c
> @@ -1217,7 +1217,7 @@ static void xe_device_wedged_fini(struct drm_device *drm, void *arg)
>   *
>   *   /sys/bus/pci/devices/<device>/survivability_mode
>   *
> - * - Admin/userpsace consumer can use firmware flashing tools like fwupd to flash
> + * - Admin/userspace consumer can use firmware flashing tools like fwupd to flash
>   *   firmware and restore device to normal operation.
>   */
>  
> diff --git a/drivers/gpu/drm/xe/xe_gt_freq.c b/drivers/gpu/drm/xe/xe_gt_freq.c
> index 701349251bbc..e88f113226bc 100644
> --- a/drivers/gpu/drm/xe/xe_gt_freq.c
> +++ b/drivers/gpu/drm/xe/xe_gt_freq.c
> @@ -36,7 +36,7 @@
>   * - act_freq: The actual resolved frequency decided by PCODE.
>   * - cur_freq: The current one requested by GuC PC to the PCODE.
>   * - rpn_freq: The Render Performance (RP) N level, which is the minimal one.
> - * - rpa_freq: The Render Performance (RP) A level, which is the achiveable one.
> + * - rpa_freq: The Render Performance (RP) A level, which is the achievable one.
>   *   Calculated by PCODE at runtime based on multiple running conditions
>   * - rpe_freq: The Render Performance (RP) E level, which is the efficient one.
>   *   Calculated by PCODE at runtime based on multiple running conditions
> diff --git a/drivers/gpu/drm/xe/xe_gt_sriov_vf.c b/drivers/gpu/drm/xe/xe_gt_sriov_vf.c
> index 46518e629ba3..382083675021 100644
> --- a/drivers/gpu/drm/xe/xe_gt_sriov_vf.c
> +++ b/drivers/gpu/drm/xe/xe_gt_sriov_vf.c
> @@ -739,7 +739,7 @@ static void vf_start_migration_recovery(struct xe_gt *gt)
>  		gt->sriov.vf.migration.recovery_queued = true;
>  		WRITE_ONCE(gt->sriov.vf.migration.recovery_inprogress, true);
>  		WRITE_ONCE(gt->sriov.vf.migration.ggtt_need_fixes, true);
> -		smp_wmb();	/* Ensure above writes visable before wake */
> +		smp_wmb();	/* Ensure above writes visible before wake */
>  
>  		xe_guc_ct_wake_waiters(&gt->uc.guc.ct);
>  
> diff --git a/drivers/gpu/drm/xe/xe_guc_tlb_inval.c b/drivers/gpu/drm/xe/xe_guc_tlb_inval.c
> index 6bf2103602f8..a80175c7c478 100644
> --- a/drivers/gpu/drm/xe/xe_guc_tlb_inval.c
> +++ b/drivers/gpu/drm/xe/xe_guc_tlb_inval.c
> @@ -207,7 +207,7 @@ static const struct xe_tlb_inval_ops guc_tlb_inval_ops = {
>   * @guc: GuC object
>   * @tlb_inval: TLB invalidation client
>   *
> - * Inititialize GuC TLB invalidation by setting back pointer in TLB invalidation
> + * Initialize GuC TLB invalidation by setting back pointer in TLB invalidation
>   * client to the GuC and setting GuC backend ops.
>   */
>  void xe_guc_tlb_inval_init_early(struct xe_guc *guc,
> diff --git a/drivers/gpu/drm/xe/xe_migrate.c b/drivers/gpu/drm/xe/xe_migrate.c
> index 3112c966c67d..7d60c7c09f33 100644
> --- a/drivers/gpu/drm/xe/xe_migrate.c
> +++ b/drivers/gpu/drm/xe/xe_migrate.c
> @@ -1981,7 +1981,7 @@ static struct dma_fence *xe_migrate_vram(struct xe_migrate *m,
>   *
>   * Copy from an array dma addresses to a VRAM device physical address
>   *
> - * Return: dma fence for migrate to signal completion on succees, ERR_PTR on
> + * Return: dma fence for migrate to signal completion on success, ERR_PTR on
>   * failure
>   */
>  struct dma_fence *xe_migrate_to_vram(struct xe_migrate *m,
> @@ -2002,7 +2002,7 @@ struct dma_fence *xe_migrate_to_vram(struct xe_migrate *m,
>   *
>   * Copy from a VRAM device physical address to an array dma addresses
>   *
> - * Return: dma fence for migrate to signal completion on succees, ERR_PTR on
> + * Return: dma fence for migrate to signal completion on success, ERR_PTR on
>   * failure
>   */
>  struct dma_fence *xe_migrate_from_vram(struct xe_migrate *m,
> diff --git a/drivers/gpu/drm/xe/xe_pm.c b/drivers/gpu/drm/xe/xe_pm.c
> index 210298c4bcb1..4f8688fd3f00 100644
> --- a/drivers/gpu/drm/xe/xe_pm.c
> +++ b/drivers/gpu/drm/xe/xe_pm.c
> @@ -102,7 +102,7 @@ static void xe_pm_block_end_signalling(void)
>  /**
>   * xe_pm_might_block_on_suspend() - Annotate that the code might block on suspend
>   *
> - * Annotation to use where the code might block or sieze to make
> + * Annotation to use where the code might block or cease to make
>   * progress pending resume completion.
>   */
>  void xe_pm_might_block_on_suspend(void)
> diff --git a/drivers/gpu/drm/xe/xe_svm.c b/drivers/gpu/drm/xe/xe_svm.c
> index 129e7818565c..13af589715a7 100644
> --- a/drivers/gpu/drm/xe/xe_svm.c
> +++ b/drivers/gpu/drm/xe/xe_svm.c
> @@ -633,7 +633,7 @@ static int xe_svm_copy(struct page **pages,
>  
>  	/*
>  	 * XXX: We can't derive the GT here (or anywhere in this functions, but
> -	 * compute always uses the primary GT so accumlate stats on the likely
> +	 * compute always uses the primary GT so accumulate stats on the likely
>  	 * GT of the fault.
>  	 */
>  	if (gt)
> diff --git a/drivers/gpu/drm/xe/xe_tlb_inval.h b/drivers/gpu/drm/xe/xe_tlb_inval.h
> index 554634dfd4e2..05614915463a 100644
> --- a/drivers/gpu/drm/xe/xe_tlb_inval.h
> +++ b/drivers/gpu/drm/xe/xe_tlb_inval.h
> @@ -33,7 +33,7 @@ void xe_tlb_inval_fence_init(struct xe_tlb_inval *tlb_inval,
>   * xe_tlb_inval_fence_wait() - TLB invalidiation fence wait
>   * @fence: TLB invalidation fence to wait on
>   *
> - * Wait on a TLB invalidiation fence until it signals, non interruptable
> + * Wait on a TLB invalidation fence until it signals, non interruptible
>   */
>  static inline void
>  xe_tlb_inval_fence_wait(struct xe_tlb_inval_fence *fence)
> diff --git a/drivers/gpu/drm/xe/xe_validation.h b/drivers/gpu/drm/xe/xe_validation.h
> index fec331d791e7..1699b9ea16a9 100644
> --- a/drivers/gpu/drm/xe/xe_validation.h
> +++ b/drivers/gpu/drm/xe/xe_validation.h
> @@ -108,7 +108,7 @@ struct xe_val_flags {
>   * @request_exclusive: Whether to lock exclusively (write mode) the next time
>   * the domain lock is locked.
>   * @exec_flags: The drm_exec flags used for drm_exec (re-)initialization.
> - * @nr: The drm_exec nr parameter used for drm_exec (re-)initializaiton.
> + * @nr: The drm_exec nr parameter used for drm_exec (re-)initialization.
>   */
>  struct xe_validation_ctx {
>  	struct drm_exec *exec;
> @@ -137,7 +137,7 @@ bool xe_validation_should_retry(struct xe_validation_ctx *ctx, int *ret);
>   * @_ret: The current error value possibly holding -ENOMEM
>   *
>   * Use this in way similar to drm_exec_retry_on_contention().
> - * If @_ret contains -ENOMEM the tranaction is restarted once in a way that
> + * If @_ret contains -ENOMEM the transaction is restarted once in a way that
>   * blocks other transactions and allows exhastive eviction. If the transaction
>   * was already restarted once, Just return the -ENOMEM. May also set
>   * _ret to -EINTR if not retrying and waits are interruptible.
> @@ -180,7 +180,7 @@ static inline void *class_xe_validation_lock_ptr(class_xe_validation_t *_T)
>   * @_val: The xe_validation_device.
>   * @_exec: The struct drm_exec object
>   * @_flags: Flags for the xe_validation_ctx initialization.
> - * @_ret: Return in / out parameter. May be set by this macro. Typicall 0 when called.
> + * @_ret: Return in / out parameter. May be set by this macro. Typically 0 when called.
>   *
>   * This macro is will initiate a drm_exec transaction with additional support for
>   * exhaustive eviction.
> diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
> index 10d77666a425..4eefe902f8a4 100644
> --- a/drivers/gpu/drm/xe/xe_vm.c
> +++ b/drivers/gpu/drm/xe/xe_vm.c
> @@ -824,7 +824,7 @@ xe_vm_ops_add_range_rebind(struct xe_vma_ops *vops,
>   *
>   * (re)bind SVM range setting up GPU page tables for the range.
>   *
> - * Return: dma fence for rebind to signal completion on succees, ERR_PTR on
> + * Return: dma fence for rebind to signal completion on success, ERR_PTR on
>   * failure
>   */
>  struct dma_fence *xe_vm_range_rebind(struct xe_vm *vm,
> @@ -907,7 +907,7 @@ xe_vm_ops_add_range_unbind(struct xe_vma_ops *vops,
>   *
>   * Unbind SVM range removing the GPU page tables for the range.
>   *
> - * Return: dma fence for unbind to signal completion on succees, ERR_PTR on
> + * Return: dma fence for unbind to signal completion on success, ERR_PTR on
>   * failure
>   */
>  struct dma_fence *xe_vm_range_unbind(struct xe_vm *vm,
> @@ -1291,7 +1291,7 @@ static u16 pde_pat_index(struct xe_bo *bo)
>  	 * selection of options. The user PAT index is only for encoding leaf
>  	 * nodes, where we have use of more bits to do the encoding. The
>  	 * non-leaf nodes are instead under driver control so the chosen index
> -	 * here should be distict from the user PAT index. Also the
> +	 * here should be distinct from the user PAT index. Also the
>  	 * corresponding coherency of the PAT index should be tied to the
>  	 * allocation type of the page table (or at least we should pick
>  	 * something which is always safe).
> @@ -4319,7 +4319,7 @@ static int xe_vm_alloc_vma(struct xe_vm *vm,
>  			xe_vma_destroy(gpuva_to_vma(op->base.remap.unmap->va), NULL);
>  		} else if (__op->op == DRM_GPUVA_OP_MAP) {
>  			vma = op->map.vma;
> -			/* In case of madvise call, MAP will always be follwed by REMAP.
> +			/* In case of madvise call, MAP will always be followed by REMAP.
>  			 * Therefore temp_attr will always have sane values, making it safe to
>  			 * copy them to new vma.
>  			 */
> diff --git a/drivers/gpu/drm/xe/xe_vm_types.h b/drivers/gpu/drm/xe/xe_vm_types.h
> index d6e2a0fdd4b3..afde0e34eab2 100644
> --- a/drivers/gpu/drm/xe/xe_vm_types.h
> +++ b/drivers/gpu/drm/xe/xe_vm_types.h
> @@ -52,7 +52,7 @@ struct xe_vm_pgtable_update_op;
>   * struct xe_vma_mem_attr - memory attributes associated with vma
>   */
>  struct xe_vma_mem_attr {
> -	/** @preferred_loc: perferred memory_location */
> +	/** @preferred_loc: preferred memory_location */
>  	struct {
>  		/** @preferred_loc.migration_policy: Pages migration policy */
>  		u32 migration_policy;
> -- 
> 2.43.0
> 

Thread overview: 7+ messages
2025-10-21 13:20 [PATCH] drm/xe: Fix spelling and typos across XE driver files Sanjay Yadav
2025-10-21 14:11 ` Summers, Stuart
2025-10-21 14:14   ` Ruhl, Michael J
2025-10-21 15:14 ` ✓ CI.KUnit: success for " Patchwork
2025-10-21 15:57 ` ✓ Xe.CI.BAT: " Patchwork
2025-10-21 18:04 ` ✗ Xe.CI.Full: failure " Patchwork
2025-10-21 21:48 ` Matthew Brost [this message]
