Intel-XE Archive on lore.kernel.org
From: Nirmoy Das <nirmoy.das@linux.intel.com>
To: "Cavitt, Jonathan" <jonathan.cavitt@intel.com>,
	"intel-xe@lists.freedesktop.org" <intel-xe@lists.freedesktop.org>
Cc: "Gupta, saurabhg" <saurabhg.gupta@intel.com>,
	"Zuo, Alex" <alex.zuo@intel.com>,
	"Vivi, Rodrigo" <rodrigo.vivi@intel.com>
Subject: Re: [PATCH v2] drm/xe/xe_sync: Add debug printing to check_ufence
Date: Thu, 12 Dec 2024 18:30:51 +0100	[thread overview]
Message-ID: <c1d4a307-bebf-401d-ba88-e27675b388d7@linux.intel.com> (raw)
In-Reply-To: <CH0PR11MB5444CF26B806C6784B54AEFBE53E2@CH0PR11MB5444.namprd11.prod.outlook.com>



On 12/11/2024 4:24 PM, Cavitt, Jonathan wrote:
>
> *From:* Nirmoy Das <nirmoy.das@linux.intel.com>
> *Sent:* Friday, December 6, 2024 11:28 PM
> *To:* Cavitt, Jonathan <jonathan.cavitt@intel.com>; intel-xe@lists.freedesktop.org
> *Cc:* Gupta, saurabhg <saurabhg.gupta@intel.com>; Zuo, Alex <alex.zuo@intel.com>; Vivi, Rodrigo <rodrigo.vivi@intel.com>
> *Subject:* Re: [PATCH v2] drm/xe/xe_sync: Add debug printing to check_ufence
>
> On 12/6/2024 7:11 PM, Jonathan Cavitt wrote:
>
>     The xe_sync helper function check_ufence can occasionally report EBUSY
>     if the ufence has not been signalled yet.  EBUSY is a non-fatal error
>     value for the function, so it is not desirable to warn in cases where
>     EBUSY is reported because it is up to the user to decide if EBUSY is a
>     fatal error in their use cases.  However, we can and should report EBUSY
>     to the debug logs for diagnostic purposes.
>
>     v2: Use vm_dbg instead of XE_IOCTL_DBG (Rodrigo)
>
>     Signed-off-by: Jonathan Cavitt <jonathan.cavitt@intel.com>
>     CC: Rodrigo Vivi <rodrigo.vivi@intel.com>
>     ---
>      drivers/gpu/drm/xe/xe_vm.c | 10 ++++++++--
>      1 file changed, 8 insertions(+), 2 deletions(-)
>
>     diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
>     index 74d684708b00..8c770d1b916c 100644
>     --- a/drivers/gpu/drm/xe/xe_vm.c
>     +++ b/drivers/gpu/drm/xe/xe_vm.c
>     @@ -2402,8 +2402,11 @@ static int op_lock_and_prep(struct drm_exec *exec, struct xe_vm *vm,
>     		break;
>     	case DRM_GPUVA_OP_REMAP:
>     		err = check_ufence(gpuva_to_vma(op->base.remap.unmap->va));
>     -		if (err)
>     +		if (err) {
>     +			vm_dbg(&vm->xe->drm,
>     +			       "REMAP: vma check ufence status = %i\n", err);
>
> Move that to check_ufence() instead so there is only one copy of logging?
>
> IMO I think there's value in knowing which operation is failing, and that information would
> be lost if we moved the logging into check_ufence.
>
The igt stack trace[1] dumps the operation info, so any UMD should be able to figure out which operation failed, but I am not against keeping it as it is now:

Reviewed-by: Nirmoy Das <nirmoy.das@intel.com>


[1] https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_12052/shard-bmg-8/igt@xe_exec_compute_mode@many-rebind.html

>
>     			break;
>     +		}
>
>     		err = vma_lock_and_validate(exec,
>     					    gpuva_to_vma(op->base.remap.unmap->va),
>     @@ -2415,8 +2418,11 @@ static int op_lock_and_prep(struct drm_exec *exec, struct xe_vm *vm,
>     		break;
>     	case DRM_GPUVA_OP_UNMAP:
>     		err = check_ufence(gpuva_to_vma(op->base.unmap.va));
>     -		if (err)
>     +		if (err) {
>     +			vm_dbg(&vm->xe->drm,
>     +			       "UNMAP: vma check ufence status = %i\n", err);
>     			break;
>     +		}
>
>     		err = vma_lock_and_validate(exec,
>     					    gpuva_to_vma(op->base.unmap.va),
>



Thread overview: 12+ messages
2024-12-06 18:11 [PATCH v2] drm/xe/xe_sync: Add debug printing to check_ufence Jonathan Cavitt
2024-12-06 18:46 ` ✓ CI.Patch_applied: success for drm/xe/xe_sync: Add debug printing to check_ufence (rev2) Patchwork
2024-12-06 18:46 ` ✓ CI.checkpatch: " Patchwork
2024-12-06 18:47 ` ✓ CI.KUnit: " Patchwork
2024-12-06 19:05 ` ✓ CI.Build: " Patchwork
2024-12-06 19:07 ` ✓ CI.Hooks: " Patchwork
2024-12-06 19:09 ` ✓ CI.checksparse: " Patchwork
2024-12-06 19:30 ` ✓ Xe.CI.BAT: " Patchwork
2024-12-06 22:34 ` ✗ Xe.CI.Full: failure " Patchwork
2024-12-07  7:27 ` [PATCH v2] drm/xe/xe_sync: Add debug printing to check_ufence Nirmoy Das
2024-12-11 15:24   ` Cavitt, Jonathan
2024-12-12 17:30     ` Nirmoy Das [this message]
