AMD-GFX Archive on lore.kernel.org
From: "Christian König" <christian.koenig@amd.com>
To: Yunxiang Li <Yunxiang.Li@amd.com>, amd-gfx@lists.freedesktop.org
Cc: Alexander.Deucher@amd.com, Likun.Gao@amd.com, Hawking.Zhang@amd.com
Subject: Re: [PATCH v2 08/10] drm/amdgpu: fix locking scope when flushing tlb
Date: Wed, 29 May 2024 08:49:41 +0200	[thread overview]
Message-ID: <329e7ed7-c039-407b-916c-7a15e8a51f46@amd.com> (raw)
In-Reply-To: <20240528172340.34517-9-Yunxiang.Li@amd.com>

On 28.05.24 at 19:23, Yunxiang Li wrote:
> The method used to flush the TLB does not depend on whether a reset is
> in progress. We should skip the flush altogether if the GPU is going to
> be reset anyway, so put both paths under the reset_domain read lock.
>
> Signed-off-by: Yunxiang Li <Yunxiang.Li@amd.com>

Reviewed-by: Christian König <christian.koenig@amd.com>

Maybe add CC: stable?

Regards,
Christian.

> ---
>   drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c | 66 +++++++++++++------------
>   1 file changed, 34 insertions(+), 32 deletions(-)
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c
> index 603c0738fd03..4edd10b10a92 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c
> @@ -684,12 +684,17 @@ int amdgpu_gmc_flush_gpu_tlb_pasid(struct amdgpu_device *adev, uint16_t pasid,
>   	struct amdgpu_ring *ring = &adev->gfx.kiq[inst].ring;
>   	struct amdgpu_kiq *kiq = &adev->gfx.kiq[inst];
>   	unsigned int ndw;
> -	signed long r;
> +	int r;
>   	uint32_t seq;
>   
> -	if (!adev->gmc.flush_pasid_uses_kiq || !ring->sched.ready ||
> -	    !down_read_trylock(&adev->reset_domain->sem)) {
> +	/*
> +	 * A GPU reset should flush all TLBs anyway, so no need to do
> +	 * this while one is ongoing.
> +	 */
> +	if (!down_read_trylock(&adev->reset_domain->sem))
> +		return 0;
>   
> +	if (!adev->gmc.flush_pasid_uses_kiq || !ring->sched.ready) {
>   		if (adev->gmc.flush_tlb_needs_extra_type_2)
>   			adev->gmc.gmc_funcs->flush_gpu_tlb_pasid(adev, pasid,
>   								 2, all_hub,
> @@ -703,43 +708,40 @@ int amdgpu_gmc_flush_gpu_tlb_pasid(struct amdgpu_device *adev, uint16_t pasid,
>   		adev->gmc.gmc_funcs->flush_gpu_tlb_pasid(adev, pasid,
>   							 flush_type, all_hub,
>   							 inst);
> -		return 0;
> -	}
> +		r = 0;
> +	} else {
> +		/* 2 dwords flush + 8 dwords fence */
> +		ndw = kiq->pmf->invalidate_tlbs_size + 8;
>   
> -	/* 2 dwords flush + 8 dwords fence */
> -	ndw = kiq->pmf->invalidate_tlbs_size + 8;
> +		if (adev->gmc.flush_tlb_needs_extra_type_2)
> +			ndw += kiq->pmf->invalidate_tlbs_size;
>   
> -	if (adev->gmc.flush_tlb_needs_extra_type_2)
> -		ndw += kiq->pmf->invalidate_tlbs_size;
> +		if (adev->gmc.flush_tlb_needs_extra_type_0)
> +			ndw += kiq->pmf->invalidate_tlbs_size;
>   
> -	if (adev->gmc.flush_tlb_needs_extra_type_0)
> -		ndw += kiq->pmf->invalidate_tlbs_size;
> +		spin_lock(&adev->gfx.kiq[inst].ring_lock);
> +		amdgpu_ring_alloc(ring, ndw);
> +		if (adev->gmc.flush_tlb_needs_extra_type_2)
> +			kiq->pmf->kiq_invalidate_tlbs(ring, pasid, 2, all_hub);
>   
> -	spin_lock(&adev->gfx.kiq[inst].ring_lock);
> -	amdgpu_ring_alloc(ring, ndw);
> -	if (adev->gmc.flush_tlb_needs_extra_type_2)
> -		kiq->pmf->kiq_invalidate_tlbs(ring, pasid, 2, all_hub);
> +		if (flush_type == 2 && adev->gmc.flush_tlb_needs_extra_type_0)
> +			kiq->pmf->kiq_invalidate_tlbs(ring, pasid, 0, all_hub);
>   
> -	if (flush_type == 2 && adev->gmc.flush_tlb_needs_extra_type_0)
> -		kiq->pmf->kiq_invalidate_tlbs(ring, pasid, 0, all_hub);
> +		kiq->pmf->kiq_invalidate_tlbs(ring, pasid, flush_type, all_hub);
> +		r = amdgpu_fence_emit_polling(ring, &seq, MAX_KIQ_REG_WAIT);
> +		if (r) {
> +			amdgpu_ring_undo(ring);
> +			spin_unlock(&adev->gfx.kiq[inst].ring_lock);
> +			goto error_unlock_reset;
> +		}
>   
> -	kiq->pmf->kiq_invalidate_tlbs(ring, pasid, flush_type, all_hub);
> -	r = amdgpu_fence_emit_polling(ring, &seq, MAX_KIQ_REG_WAIT);
> -	if (r) {
> -		amdgpu_ring_undo(ring);
> +		amdgpu_ring_commit(ring);
>   		spin_unlock(&adev->gfx.kiq[inst].ring_lock);
> -		goto error_unlock_reset;
> -	}
> -
> -	amdgpu_ring_commit(ring);
> -	spin_unlock(&adev->gfx.kiq[inst].ring_lock);
> -	r = amdgpu_fence_wait_polling(ring, seq, usec_timeout);
> -	if (r < 1) {
> -		dev_err(adev->dev, "wait for kiq fence error: %ld.\n", r);
> -		r = -ETIME;
> -		goto error_unlock_reset;
> +		if (amdgpu_fence_wait_polling(ring, seq, usec_timeout) < 1) {
> +			dev_err(adev->dev, "timeout waiting for kiq fence\n");
> +			r = -ETIME;
> +		}
>   	}
> -	r = 0;
>   
>   error_unlock_reset:
>   	up_read(&adev->reset_domain->sem);


Thread overview: 52+ messages
2024-05-28 17:23 [PATCH v2 00/10] drm/amdgpu: prevent concurrent GPU access during reset Yunxiang Li
2024-05-28 17:23 ` [PATCH v2 01/10] drm/amdgpu: add skip_hw_access checks for sriov Yunxiang Li
2024-05-29  6:36   ` Christian König
2024-05-28 17:23 ` [PATCH v2 02/10] drm/amdgpu: fix sriov host flr handler Yunxiang Li
2024-05-29  6:41   ` Christian König
2024-05-28 17:23 ` [PATCH v2 03/10] drm/amdgpu: abort fence poll if reset is started Yunxiang Li
2024-05-29  6:38   ` Christian König
2024-05-29 13:22     ` Li, Yunxiang (Teddy)
2024-05-29 13:31       ` Christian König
2024-05-29 13:44         ` Li, Yunxiang (Teddy)
2024-05-29 13:55           ` Christian König
2024-05-29 14:31             ` Li, Yunxiang (Teddy)
2024-05-29 14:35               ` Christian König
2024-05-29 14:48                 ` Li, Yunxiang (Teddy)
2024-05-29 15:19                   ` Christian König
2024-05-31 14:44                     ` Liu, Shaoyun
2024-06-03 10:58                       ` Christian König
2024-06-03 18:28                         ` Liu, Shaoyun
2024-06-04  8:07                           ` Christian König
2024-06-05 12:32                             ` Liu, Shaoyun
2024-05-28 17:23 ` [PATCH v2 04/10] drm/amdgpu/kfd: remove is_hws_hang and is_resetting Yunxiang Li
2024-05-29  6:41   ` Christian König
2024-05-29 23:04   ` Felix Kuehling
2024-05-30  0:06     ` Li, Yunxiang (Teddy)
2024-05-28 17:23 ` [PATCH v2 05/10] drm/amd/amdgpu: remove unnecessary flush when enable gart Yunxiang Li
2024-05-29  6:43   ` Christian König
2024-05-28 17:23 ` [PATCH v2 06/10] drm/amdgpu: remove tlb flush in amdgpu_gtt_mgr_recover Yunxiang Li
2024-05-29  6:45   ` Christian König
2024-05-28 17:23 ` [PATCH v2 07/10] drm/amdgpu: use helper in amdgpu_gart_unbind Yunxiang Li
2024-05-29  6:46   ` Christian König
2024-05-28 17:23 ` [PATCH v2 08/10] drm/amdgpu: fix locking scope when flushing tlb Yunxiang Li
2024-05-29  6:49   ` Christian König [this message]
2024-05-28 17:23 ` [PATCH v2 09/10] drm/amdgpu: fix missing reset domain locks Yunxiang Li
2024-05-29  6:55   ` Christian König
2024-05-30 22:02   ` Felix Kuehling
2024-05-30 22:35     ` Li, Yunxiang (Teddy)
2024-05-31  6:52     ` Christian König
2024-05-31 15:47       ` Felix Kuehling
2024-06-04 12:52         ` Li, Yunxiang (Teddy)
2024-05-28 17:23 ` [PATCH v2 10/10] Revert "drm/amdgpu: Queue KFD reset workitem in VF FED" Yunxiang Li
2024-05-28 19:04   ` Skvortsov, Victor
2024-05-30 21:47 ` [PATCH v3 0/8] drm/amdgpu: prevent concurrent GPU access during reset Yunxiang Li
2024-05-30 21:47   ` [PATCH v3 1/8] drm/amdgpu: add skip_hw_access checks for sriov Yunxiang Li
2024-05-30 21:47   ` [PATCH v3 2/8] drm/amdgpu: fix sriov host flr handler Yunxiang Li
2024-06-05  1:12     ` Deng, Emily
2024-05-30 21:48   ` [PATCH v3 3/8] drm/amdgpu/kfd: remove is_hws_hang and is_resetting Yunxiang Li
2024-05-30 21:48   ` [PATCH v3 4/8] drm/amd/amdgpu: remove unnecessary flush when enable gart Yunxiang Li
2024-05-30 21:48   ` [PATCH v3 5/8] drm/amdgpu: remove tlb flush in amdgpu_gtt_mgr_recover Yunxiang Li
2024-05-30 21:48   ` [PATCH v3 6/8] drm/amdgpu: use helper in amdgpu_gart_unbind Yunxiang Li
2024-05-30 21:48   ` [PATCH v3 7/8] drm/amdgpu: fix locking scope when flushing tlb Yunxiang Li
2024-05-30 21:48   ` [PATCH v3 8/8] drm/amdgpu: fix missing reset domain locks Yunxiang Li
2024-05-31  6:50     ` Christian König
