From: Christian König <ckoenig.leichtzumerken@gmail.com>
Date: Thu, 14 Sep 2023 08:23:51 +0200
Subject: Re: Re: [PATCH] drm/amdgpu: Ignore first evction failure during suspend
To: "Pan, Xinhui" <Xinhui.Pan@amd.com>, "Koenig, Christian" <Christian.Koenig@amd.com>, "Kuehling, Felix" <Felix.Kuehling@amd.com>, amd-gfx@lists.freedesktop.org
Cc: "Deucher, Alexander" <Alexander.Deucher@amd.com>, "Fan, Shikang" <Shikang.Fan@amd.com>
Message-ID: <2e2c730d-f8f2-cda7-74cb-91b493da8902@gmail.com>
List-Id: Discussion list for AMD gfx <amd-gfx.lists.freedesktop.org>

[putting Harry on BCC, sorry for the noise]

Yeah, that is clearly a bug in the KFD.

During the second eviction the hw should already be disabled, so we don't have any SDMA or similar to evict BOs any more and can only copy them with the CPU.

@Felix what workqueue do you guys use for the restore work? I've just double checked and on the system workqueues you explicitly need to specify that stuff is freezable. E.g. use system_freezable_wq instead of system_wq.
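As a sketch of that difference (the work item and function names below are illustrative only, not the actual KFD symbols):

```c
/* Work queued on system_wq keeps running across the suspend freeze;
 * work queued on system_freezable_wq is frozen together with user
 * space.  restore_work/restore_work_func are illustrative names.
 */
static void restore_work_func(struct work_struct *work);
static DECLARE_DELAYED_WORK(restore_work, restore_work_func);

static void queue_restore_example(void)
{
	/* queue_delayed_work(system_wq, ...) would NOT be frozen
	 * during suspend; the freezable system queue is:
	 */
	queue_delayed_work(system_freezable_wq, &restore_work,
			   msecs_to_jiffies(100));

	/* or create a driver-private freezable queue instead: */
	struct workqueue_struct *wq =
		alloc_workqueue("restore_wq", WQ_FREEZABLE | WQ_UNBOUND, 0);
}
```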

Alternatively, as Xinhui mentioned, it might be necessary to flush all restore work before the first eviction phase, or we risk BOs being moved back into VRAM again.
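A minimal sketch of that flush, to be called before the first amdgpu_device_evict_resources() pass (the work-item field name here is illustrative, not necessarily the real KFD member):

```c
/* Drain any pending restore work so nothing validates BOs back into
 * VRAM while the first eviction pass runs.  restore_userptr_work is
 * an assumed/illustrative field name.
 */
flush_delayed_work(&process_info->restore_userptr_work);

/* or, if the work must not run again at all until resume: */
cancel_delayed_work_sync(&process_info->restore_userptr_work);
```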

Regards,
Christian.

Am 14.09.23 um 03:54 schrieb Pan, Xinhui:

[AMD Official Use Only - General]


I just made a debug patch to show the busy BOs' alloc-trace when the eviction fails in suspend.

And dmesg log attached.

Looks like they are just KFD user BOs, locked by the evict/restore work.

So in the kfd suspend callback, it really needs to flush the evict/restore work before HW fini, as it does now.

That is why the first, very early eviction fails and the second eviction succeeds.

 

Thanks

xinhui

From: Pan, Xinhui
Sent: Thursday, September 14, 2023 8:02 AM
To: Koenig, Christian <Christian.Koenig@amd.com>; Kuehling, Felix <Felix.Kuehling@amd.com>; Christian König <ckoenig.leichtzumerken@gmail.com>; amd-gfx@lists.freedesktop.org; Wentland, Harry <Harry.Wentland@amd.com>
Cc: Deucher, Alexander <Alexander.Deucher@amd.com>; Fan, Shikang <Shikang.Fan@amd.com>
Subject: RE: Re: [PATCH] drm/amdgpu: Ignore first evction failure during suspend

 

Chris,

I can dump these busy BOs with their alloc/free stack later today.

 

BTW, the two evictions and the kfd suspend are all called before hw_fini. IOW, between phase 1 and phase 2. SDMA is turned off only in phase 2. So the current code maybe works fine.

 

From: Koenig, Christian <Christian.Koenig@amd.com>
Sent: Wednesday, September 13, 2023 10:29 PM
To: Kuehling, Felix <Felix.Kuehling@amd.com>; Christian König <ckoenig.leichtzumerken@gmail.com>; Pan, Xinhui <Xinhui.Pan@amd.com>; amd-gfx@lists.freedesktop.org; Wentland, Harry <Harry.Wentland@amd.com>
Cc: Deucher, Alexander <Alexander.Deucher@amd.com>; Fan, Shikang <Shikang.Fan@amd.com>
Subject: Re: Re: [PATCH] drm/amdgpu: Ignore first evction failure during suspend

 

[+Harry]

Am 13.09.23 um 15:54 schrieb Felix Kuehling:

On 2023-09-13 4:07, Christian König wrote:

[+Felix]

Well that looks like quite a serious bug.

If I'm not completely mistaken the KFD work item tries to restore the process by moving BOs into memory even after the suspend freeze. Normally work items are frozen together with the user space processes unless explicitly marked as not freezable.

That this causes problems during the first eviction phase is just the tip of the iceberg here. If a BO is moved into invisible memory during this we wouldn't be able to get it out of there in the second phase, because SDMA and the hw are already turned off.

@Felix any idea how that can happen? Have you guys marked a work item / work queue as not freezable?

We don't set anything to non-freezable in KFD.

 

Regards,
  Felix

 

Or maybe the display guys?


Do you display guys do any delayed update in a work item which is marked as not freezable?

Otherwise I have absolutely no idea what's going on here.

Thanks,
Christian.


@Xinhui please investigate what work item that is and where that is coming from. Something like "if (adev->in_suspend) dump_stack();" in the right place should probably do it.
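A minimal sketch of such a check, dropped for example into the BO move path (the exact placement and the TTM_PL_VRAM condition are assumptions for illustration):

```c
/* Dump the call chain of anyone still migrating BOs into VRAM after
 * amdgpu_device_suspend() has set adev->in_suspend.
 */
if (adev->in_suspend && new_mem->mem_type == TTM_PL_VRAM) {
	dev_warn(adev->dev, "BO move into VRAM during suspend\n");
	dump_stack();
}
```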

Thanks,
Christian.

Am 13.09.23 um 07:13 schrieb Pan, Xinhui:

[AMD Official Use Only - General]

 

I notice that only user space processes are frozen on my side. kthreads and workqueues keep running. Maybe some kernel configs are not enabled.

I made a module which just prints an incrementing counter under a mutex lock, from both a workqueue and a kthread. I paste some logs below.

[438619.696196] XH: 14 from workqueue

[438619.700193] XH: 15 from kthread

[438620.394335] PM: suspend entry (deep)

[438620.399619] Filesystems sync: 0.001 seconds

[438620.403887] PM: Preparing system for sleep (deep)

[438620.409299] Freezing user space processes

[438620.414862] Freezing user space processes completed (elapsed 0.001 seconds)

[438620.421881] OOM killer disabled.

[438620.425197] Freezing remaining freezable tasks

[438620.430890] Freezing remaining freezable tasks completed (elapsed 0.001 seconds)

[438620.438348] PM: Suspending system (deep)

.....

[438623.746038] PM: suspend of devices complete after 3303.137 msecs

[438623.752125] PM: start suspend of devices complete after 3309.713 msecs

[438623.758722] PM: suspend debug: Waiting for 5 second(s).

[438623.792166] XH: 22 from kthread

[438623.824140] XH: 23 from workqueue

 

 

So BOs definitely can be in use during suspend.

Even if kthreads or workqueues can be stopped with some special kernel config, I think suspend can only stop a workqueue after its callback finishes.

Otherwise something like the sequence below makes things crazy:

LOCK BO

do something

    -> schedule or wait; any code might sleep. Stopped by suspend now? No, I think not.

UNLOCK BO
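That observation matches how the freezer works for kernel threads: a kthread is not freezable by default, and even when it opts in, it is only ever parked at an explicit try_to_freeze() call, never in the middle of a critical section. A sketch (illustrative names):

```c
/* A kthread only participates in the suspend freeze if it calls
 * set_freezable(), and it is only frozen at the try_to_freeze()
 * call site -- never while it holds the lock below.
 */
static int my_kthread(void *data)
{
	set_freezable();
	while (!kthread_should_stop()) {
		try_to_freeze();	/* blocks here during suspend */
		mutex_lock(&my_lock);
		/* do something; may sleep, but won't be frozen here */
		mutex_unlock(&my_lock);
	}
	return 0;
}
```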

 

I run the tests with the commands below.

echo devices > /sys/power/pm_test

echo 0 > /sys/power/pm_async

echo 1 > /sys/power/pm_print_times

echo 1 > /sys/power/pm_debug_messages

echo 1 > /sys/module/amdgpu/parameters/debug_evictions

./kfd.sh --gtest_filter=KFDEvictTest.BasicTest

pm-suspend

 

thanks

xinhui

 

 


From: Christian König <ckoenig.leichtzumerken@gmail.com>
Sent: September 12, 2023 17:01
To: Pan, Xinhui <Xinhui.Pan@amd.com>; amd-gfx@lists.freedesktop.org <amd-gfx@lists.freedesktop.org>
Cc: Deucher, Alexander <Alexander.Deucher@amd.com>; Koenig, Christian <Christian.Koenig@amd.com>; Fan, Shikang <Shikang.Fan@amd.com>
Subject: Re: [PATCH] drm/amdgpu: Ignore first evction failure during suspend

 

When amdgpu_device_suspend() is called processes should be frozen
already. In other words KFD queues etc... should already be idle.

So when the eviction fails here we missed something previously and that
in turn can cause tons of problems.

So ignoring those errors is most likely not a good idea at all.

Regards,
Christian.

Am 12.09.23 um 02:21 schrieb Pan, Xinhui:
> [AMD Official Use Only - General]
>
> Oh yep, pinned BOs are moved to another LRU list, so the eviction fails for some other reason.
> I will change the comments in the patch.
> The problem is that eviction can fail for many reasons, say, a BO is locked.
> AFAIK, kfd will stop the queues and flush some evict/restore work in its suspend callback. So the first eviction, before the kfd callback, likely fails.
>
> -----Original Message-----
> From: Christian König <ckoenig.leichtzumerken@gmail.com>
> Sent: Friday, September 8, 2023 2:49 PM
> To: Pan, Xinhui <Xinhui.Pan@amd.com>; amd-gfx@lists.freedesktop.org
> Cc: Deucher, Alexander <Alexander.Deucher@amd.com>; Koenig, Christian <Christian.Koenig@amd.com>; Fan, Shikang <Shikang.Fan@amd.com>
> Subject: Re: [PATCH] drm/amdgpu: Ignore first evction failure during suspend
>
> Am 08.09.23 um 05:39 schrieb xinhui pan:
>> Some BOs might be pinned. So the first eviction's failure will abort
>> the suspend sequence. These pinned BOs will be unpinned afterwards
>> during suspend.
> That doesn't make much sense since pinned BOs don't cause eviction failure here.
>
> What exactly is the error code you see?
>
> Christian.
>
>> Actually it has evicted most BOs, so that should still work fine in
>> SR-IOV full access mode.
>>
>> Fixes: 47ea20762bb7 ("drm/amdgpu: Add an extra evict_resource call
>> during device_suspend.")
>> Signed-off-by: xinhui pan <xinhui.pan@amd.com>
>> ---
>>    drivers/gpu/drm/amd/amdgpu/amdgpu_device.c | 9 +++++----
>>    1 file changed, 5 insertions(+), 4 deletions(-)
>>
>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
>> b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
>> index 5c0e2b766026..39af526cdbbe 100644
>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
>> @@ -4148,10 +4148,11 @@ int amdgpu_device_suspend(struct drm_device
>> *dev, bool fbcon)
>>
>>        adev->in_suspend = true;
>>
>> -     /* Evict the majority of BOs before grabbing the full access */
>> -     r = amdgpu_device_evict_resources(adev);
>> -     if (r)
>> -             return r;
>> +     /* Try to evict the majority of BOs before grabbing the full access
>> +      * Ignore the ret val at first place as we will unpin some BOs if any
>> +      * afterwards.
>> +      */
>> +     (void)amdgpu_device_evict_resources(adev);
>>
>>        if (amdgpu_sriov_vf(adev)) {
>>                amdgpu_virt_fini_data_exchange(adev);

 

 

