From: Andrey Grodzovsky <Andrey.Grodzovsky@amd.com>
To: Daniel Vetter <daniel@ffwll.ch>
Cc: "Michel Dänzer" <michel@daenzer.net>,
"amd-gfx list" <amd-gfx@lists.freedesktop.org>,
"Pekka Paalanen" <ppaalanen@gmail.com>,
dri-devel <dri-devel@lists.freedesktop.org>,
"Alex Deucher" <alexdeucher@gmail.com>,
christian.koenig@amd.com
Subject: Re: [PATCH v2 8/8] drm/amdgpu: Prevent any job recoveries after device is unplugged.
Date: Thu, 19 Nov 2020 16:24:57 -0500
Message-ID: <3d780dd0-e7c0-ffa5-c8fa-ba1c320fe390@amd.com>
In-Reply-To: <20201119152951.GD401619@phenom.ffwll.local>

On 11/19/20 10:29 AM, Daniel Vetter wrote:
> On Thu, Nov 19, 2020 at 10:02:28AM -0500, Andrey Grodzovsky wrote:
>> On 11/19/20 2:55 AM, Christian König wrote:
>>> On 18.11.20 at 17:20, Andrey Grodzovsky wrote:
>>>> On 11/18/20 7:01 AM, Christian König wrote:
>>>>> On 18.11.20 at 08:39, Daniel Vetter wrote:
>>>>>> On Tue, Nov 17, 2020 at 9:07 PM Andrey Grodzovsky
>>>>>> <Andrey.Grodzovsky@amd.com> wrote:
>>>>>>> On 11/17/20 2:49 PM, Daniel Vetter wrote:
>>>>>>>> On Tue, Nov 17, 2020 at 02:18:49PM -0500, Andrey Grodzovsky wrote:
>>>>>>>>> On 11/17/20 1:52 PM, Daniel Vetter wrote:
>>>>>>>>>> On Tue, Nov 17, 2020 at 01:38:14PM -0500, Andrey Grodzovsky wrote:
>>>>>>>>>>> On 6/22/20 5:53 AM, Daniel Vetter wrote:
>>>>>>>>>>>> On Sun, Jun 21, 2020 at 02:03:08AM -0400, Andrey Grodzovsky wrote:
>>>>>>>>>>>>> No point in trying recovery if the device is gone, it just messes things up.
>>>>>>>>>>>>>
>>>>>>>>>>>>> Signed-off-by: Andrey Grodzovsky <andrey.grodzovsky@amd.com>
>>>>>>>>>>>>> ---
>>>>>>>>>>>>> drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c | 16 ++++++++++++++++
>>>>>>>>>>>>> drivers/gpu/drm/amd/amdgpu/amdgpu_job.c | 8 ++++++++
>>>>>>>>>>>>> 2 files changed, 24 insertions(+)
>>>>>>>>>>>>>
>>>>>>>>>>>>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
>>>>>>>>>>>>> index 6932d75..5d6d3d9 100644
>>>>>>>>>>>>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
>>>>>>>>>>>>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
>>>>>>>>>>>>> @@ -1129,12 +1129,28 @@ static int amdgpu_pci_probe(struct pci_dev *pdev,
>>>>>>>>>>>>>  	return ret;
>>>>>>>>>>>>>  }
>>>>>>>>>>>>> +static void amdgpu_cancel_all_tdr(struct amdgpu_device *adev)
>>>>>>>>>>>>> +{
>>>>>>>>>>>>> +	int i;
>>>>>>>>>>>>> +
>>>>>>>>>>>>> +	for (i = 0; i < AMDGPU_MAX_RINGS; ++i) {
>>>>>>>>>>>>> +		struct amdgpu_ring *ring = adev->rings[i];
>>>>>>>>>>>>> +
>>>>>>>>>>>>> +		if (!ring || !ring->sched.thread)
>>>>>>>>>>>>> +			continue;
>>>>>>>>>>>>> +
>>>>>>>>>>>>> +		cancel_delayed_work_sync(&ring->sched.work_tdr);
>>>>>>>>>>>>> +	}
>>>>>>>>>>>>> +}
>>>>>>>>>>>> I think this is a function that's supposed to be in drm/scheduler,
>>>>>>>>>>>> not here. Might also just be your cleanup code being ordered
>>>>>>>>>>>> wrongly, or your split in one of the earlier patches not done
>>>>>>>>>>>> quite right.
>>>>>>>>>>>> -Daniel
>>>>>>>>>>> This function iterates across all the schedulers per amdgpu
>>>>>>>>>>> device and accesses amdgpu-specific structures; drm/scheduler
>>>>>>>>>>> deals with a single scheduler at most, so this looks to me like
>>>>>>>>>>> the right place for this function.
>>>>>>>>>> I guess we could keep track of all schedulers somewhere in a list in
>>>>>>>>>> struct drm_device and wrap this up. That was kinda the idea.
>>>>>>>>>>
>>>>>>>>>> Minimally I think we need a tiny wrapper with docs for the
>>>>>>>>>> cancel_delayed_work_sync(&sched->work_tdr), which explains what you
>>>>>>>>>> must observe to make sure there's no race.
>>>>>>>>> Will do
>>>>>>>>>
>>>>>>>>>
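Just to capture the idea, a minimal sketch of what such a documented wrapper
in drm/scheduler could look like (the name drm_sched_cancel_timeout is only
illustrative here, nothing below is taken from an actual patch):

/**
 * drm_sched_cancel_timeout - cancel pending timeout work (sketch only)
 * @sched: scheduler instance
 *
 * Callers must guarantee that no new timeout can be armed on @sched after
 * this returns, e.g. by making the timeout handler bail out early via
 * drm_dev_enter()/drm_dev_exit(), otherwise the timeout work can simply be
 * re-armed right after the cancel.
 */
static inline void drm_sched_cancel_timeout(struct drm_gpu_scheduler *sched)
{
	cancel_delayed_work_sync(&sched->work_tdr);
}
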
>>>>>>>>> I'm not exactly sure there's any guarantee here that we won't get a
>>>>>>>>> new tdr work launched right afterwards at least, so this looks a bit
>>>>>>>>> like a hack.
>>>>>>>>> Note that for any TDR work happening after amdgpu_cancel_all_tdr,
>>>>>>>>> the drm_dev_is_unplugged check in amdgpu_job_timedout will return
>>>>>>>>> true and so it will return early. To make it watertight against the
>>>>>>>>> race I can switch from drm_dev_is_unplugged to drm_dev_enter/exit.
>>>>>>>> Hm that's confusing. You do a cancel_delayed_work_sync, so that at
>>>>>>>> least looks like "tdr work must not run after this point".
>>>>>>>>
>>>>>>>> If you only rely on the drm_dev_enter/exit check within the tdr work,
>>>>>>>> then there's no need to cancel anything.
>>>>>>> Agreed, synchronize_srcu from drm_dev_unplug should play the role of
>>>>>>> 'flushing' any earlier (in-progress) tdr work which is using the
>>>>>>> drm_dev_enter/exit pair. Any tdr arising later will terminate early
>>>>>>> when drm_dev_enter returns false.
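
For reference, this is roughly what drm_dev_unplug does today (quoting from
memory, the details may differ slightly):

void drm_dev_unplug(struct drm_device *dev)
{
	/*
	 * After synchronizing, any critical read section is guaranteed to
	 * see the new value of ->unplugged, and any critical section which
	 * might still have seen the old value of ->unplugged is guaranteed
	 * to have finished.
	 */
	dev->unplugged = true;
	synchronize_srcu(&drm_unplug_srcu);

	drm_dev_put(dev);
}
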
>>>>>> Nope, anything you put into the work itself cannot close this race.
>>>>>> It's the schedule_work that matters here. Or I'm missing something ...
>>>>>> I thought that the tdr work you're cancelling here is launched by
>>>>>> drm/scheduler code, not by the amd callback?
>>>>
>>>> My bad, you are right, I am supposed to put the drm_dev_enter/exit pair
>>>> into drm_sched_job_timedout.
>>>>
>>>>
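Roughly the pattern I have in mind for drm_sched_job_timedout (sketch only;
how the scheduler gets at its drm_device is still an open question, the
sched_to_drm_dev() helper below is just a placeholder and doesn't exist):

static void drm_sched_job_timedout(struct work_struct *work)
{
	struct drm_gpu_scheduler *sched =
		container_of(work, struct drm_gpu_scheduler, work_tdr.work);
	int idx;

	/*
	 * If the device is unplugged there is nothing to recover; bail out
	 * early instead of starting a reset that cannot succeed and then
	 * waiting for piles of MMIO accesses to time out.
	 */
	if (!drm_dev_enter(sched_to_drm_dev(sched), &idx))
		return;

	/* ... existing timeout handling and re-arming of the timer ... */

	drm_dev_exit(idx);
}
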
>>>>> Yes that is correct. Canceling the work item is not the right
>>>>> approach at all, nor is adding dev_enter/exit pair in the
>>>>> recovery handler.
>>>>
>>>> Without adding the dev_enter/exit guarding pair in the recovery handler
>>>> you end up with a GPU reset starting while the device is already
>>>> unplugged, which leads to multiple errors and general mess.
>>>>
>>>>
>>>>> What we need to do here is to stop the scheduler thread and then
>>>>> wait for any timeout handling to have finished.
>>>>>
>>>>> Otherwise it can schedule a new timeout just after we have canceled
>>>>> this one.
>>>>>
>>>>> Regards,
>>>>> Christian.
>>>>
>>>> Schedulers are stopped from amdgpu_driver_unload_kms, which indeed
>>>> happens after drm_dev_unplug, so yes, there is still a chance for new
>>>> work being scheduled and a timeout armed after that. But once I fix the
>>>> code to place the drm_dev_enter/exit pair into drm_sched_job_timedout,
>>>> I don't see why that's not a good solution?
>>> Yeah that should work as well, but then you also don't need to cancel
>>> the work item from the driver.
>>
>> Indeed, as Daniel pointed out there is no need and I dropped it. One
>> correction - I previously said that w/o the dev_enter/exit guarding pair
>> in the scheduler's TO handler you will get a GPU reset starting while the
>> device is already gone. Of course this does not fully prevent that, as the
>> device can be extracted at any moment just after we have already entered
>> GPU recovery. But it does save us from processing a futile GPU recovery,
>> which always starts once you unplug the device if there are active jobs in
>> progress at the moment, so I think it's still justifiable to keep the
>> dev_enter/exit guarding pair there.
> Yeah sprinkling drm_dev_enter/exit over the usual suspect code paths like
> tdr to make the entire unloading much faster makes sense. Waiting for
> enormous amounts of mmio ops to time out isn't fun. A comment might be
> good for that though, to explain why we're doing that.
> -Daniel
Will do. I also tried to insert drm_dev_enter/exit into all MMIO accessors in
amdgpu, to try to avoid the timeouts at that level, but didn't get good
results for an unclear reason; I will probably get to this as follow-up work,
again to avoid expanding the scope of the current work too much.
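
The pattern I experimented with looked roughly like this (sketch only, the
real amdgpu_mm_rreg has more cases to handle and the exact way to get at the
drm_device may differ):

uint32_t amdgpu_mm_rreg(struct amdgpu_device *adev, uint32_t reg,
			uint32_t acc_flags)
{
	uint32_t ret;
	int idx;

	/* Device is gone - return a dummy value instead of waiting for the
	 * read to the missing hardware to time out.
	 */
	if (!drm_dev_enter(adev_to_drm(adev), &idx))
		return 0;

	ret = readl(((void __iomem *)adev->rmmio) + (reg * 4));

	drm_dev_exit(idx);
	return ret;
}
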
Andrey
>
>> Andrey
>>
>>
>>>
>>>> Any tdr work started after drm_dev_unplug has finished will simply abort
>>>> on entry to drm_sched_job_timedout, because drm_dev_enter will return
>>>> false and the function will return without rearming the timeout timer,
>>>> and so will have no impact.
>>>>
>>>> The only issue I see here now is a possible use-after-free if some late
>>>> tdr work tries to execute after the drm device is already gone; for this
>>>> we should probably add cancel_delayed_work_sync(&sched->work_tdr) to
>>>> drm_sched_fini after sched->thread is stopped there.
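
Something along these lines (sketch only, based on how drm_sched_fini looks
today, details may differ):

void drm_sched_fini(struct drm_gpu_scheduler *sched)
{
	if (sched->thread)
		kthread_stop(sched->thread);

	/*
	 * Flush any timeout handler that might still be running so it cannot
	 * touch the scheduler or the device after they are freed.
	 */
	cancel_delayed_work_sync(&sched->work_tdr);

	sched->ready = false;
}
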
>>> Good point, that is indeed missing as far as I can see.
>>>
>>> Christian.
>>>
>>>> Andrey