AMD-GFX Archive on lore.kernel.org
From: Andrey Grodzovsky <Andrey.Grodzovsky@amd.com>
To: Luben Tuikov <luben.tuikov@amd.com>,
	christian.koenig@amd.com, amd-gfx@lists.freedesktop.org,
	dri-devel@lists.freedesktop.org, daniel.vetter@ffwll.ch,
	robh@kernel.org, l.stach@pengutronix.de, yuq825@gmail.com,
	eric@anholt.net
Cc: Alexander.Deucher@amd.com, gregkh@linuxfoundation.org
Subject: Re: [PATCH v3 07/12] drm/sched: Prevent any job recoveries after device is unplugged.
Date: Tue, 24 Nov 2020 12:17:17 -0500	[thread overview]
Message-ID: <54f9dd60-6a7a-d8f1-044c-2fe93929a7f9@amd.com> (raw)
In-Reply-To: <0ddcc645-ef0e-90af-6212-93967175cac2@amd.com>


On 11/24/20 12:11 PM, Luben Tuikov wrote:
> On 2020-11-24 2:50 a.m., Christian König wrote:
>> Am 24.11.20 um 02:12 schrieb Luben Tuikov:
>>> On 2020-11-23 3:06 a.m., Christian König wrote:
>>>> Am 23.11.20 um 06:37 schrieb Andrey Grodzovsky:
>>>>> On 11/22/20 6:57 AM, Christian König wrote:
>>>>>> Am 21.11.20 um 06:21 schrieb Andrey Grodzovsky:
>>>>>>> No point in trying recovery if the device is gone; it's meaningless.
>>>>>> I think that this should go into the device specific recovery
>>>>>> function and not in the scheduler.
>>>>> The timeout timer is rearmed here, so this prevents any new recovery
>>>>> work from restarting here after drm_dev_unplug was executed from
>>>>> amdgpu_pci_remove. It will not cover other places like job cleanup or
>>>>> starting a new job, but those should stop once the scheduler thread is
>>>>> stopped later.
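
In essence, the guard added below is the drm_dev_enter()/drm_dev_exit()
pattern. A simplified sketch of how it reads in the timeout handler (using
the ddev field this patch adds to struct drm_gpu_scheduler; not the full
handler from the hunk below):

    static void sched_timedout_sketch(struct work_struct *work)
    {
            struct drm_gpu_scheduler *sched =
                    container_of(work, struct drm_gpu_scheduler, work_tdr.work);
            int idx;

            /* drm_dev_enter() returns false once drm_dev_unplug() has been
             * called, so the handler bails out instead of touching a device
             * that is already gone.
             */
            if (!drm_dev_enter(sched->ddev, &idx))
                    return;

            /* ... normal timeout handling and timer rearm would go here ... */

            drm_dev_exit(idx);
    }
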
>>>> Yeah, but this is rather unclean. We should probably return an error
>>>> code instead, indicating whether the timer should be rearmed or not.
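
Just to illustrate one possible shape of that suggestion (the names below
are hypothetical, not an existing drm_sched interface):

    /* Let the driver's timeout handler report back whether the scheduler
     * should rearm the timeout timer, instead of the scheduler guessing
     * device state on its own.
     */
    enum sched_timeout_status {
            SCHED_TIMEOUT_REARM,    /* recovery done, keep watching the ring */
            SCHED_TIMEOUT_STOP,     /* e.g. device unplugged, don't rearm    */
    };

    /* drm_sched_job_timedout() would then end with something like:
     *
     *     if (status == SCHED_TIMEOUT_REARM)
     *             drm_sched_start_timeout(sched);
     */
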
>>> Christian, this is exactly the work I told you about
>>> last Wednesday in our weekly meeting, and which I
>>> wrote to you about in an email around this time last
>>> year.
>> Yeah, that's why I'm suggesting it here as well.
> It seems you're suggesting that Andrey do it, while
> you know all too well that I've been working on this
> for some time now.
>
> I wrote to you about this in an email around the same
> time last year, and I discussed it at the Wednesday
> meeting.
>
> You could've mentioned that here the first time.


Luben, I actually strongly prefer that you do it and share your patch with me,
since I don't want to do unneeded refactoring that will conflict with your
work. Also, please use drm-misc for this, since it's not amdgpu-specific work,
and it will be easier for me.

Andrey


>
>>> So what do we do now?
>> Split your patches into smaller parts and submit them chunk by chunk.
>>
>> E.g. renames first and then functional changes grouped by area they change.
> I have, but my final patch, a tiny one that implements
> the core reason for the change, seems buggy, and I'm
> looking for a way to debug it.
>
> Regards,
> Luben
>
>
>> Regards,
>> Christian.
>>
>>> I can submit those changes without the last part,
>>> which builds on this change.
>>>
>>> I'm still testing the last part and was hoping
>>> to submit it all in one sequence of patches,
>>> after my testing.
>>>
>>> Regards,
>>> Luben
>>>
>>>> Christian.
>>>>
>>>>> Andrey
>>>>>
>>>>>
>>>>>> Christian.
>>>>>>
>>>>>>> Signed-off-by: Andrey Grodzovsky <andrey.grodzovsky@amd.com>
>>>>>>> ---
>>>>>>>     drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c |  2 +-
>>>>>>>     drivers/gpu/drm/etnaviv/etnaviv_sched.c   |  3 ++-
>>>>>>>     drivers/gpu/drm/lima/lima_sched.c         |  3 ++-
>>>>>>>     drivers/gpu/drm/panfrost/panfrost_job.c   |  2 +-
>>>>>>>     drivers/gpu/drm/scheduler/sched_main.c    | 15 ++++++++++++++-
>>>>>>>     drivers/gpu/drm/v3d/v3d_sched.c           | 15 ++++++++++-----
>>>>>>>     include/drm/gpu_scheduler.h               |  6 +++++-
>>>>>>>     7 files changed, 35 insertions(+), 11 deletions(-)
>>>>>>>
>>>>>>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c
>>>>>>> index d56f402..d0b0021 100644
>>>>>>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c
>>>>>>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c
>>>>>>> @@ -487,7 +487,7 @@ int amdgpu_fence_driver_init_ring(struct amdgpu_ring *ring,
>>>>>>>               r = drm_sched_init(&ring->sched, &amdgpu_sched_ops,
>>>>>>>                        num_hw_submission, amdgpu_job_hang_limit,
>>>>>>> -                   timeout, ring->name);
>>>>>>> +                   timeout, ring->name, &adev->ddev);
>>>>>>>             if (r) {
>>>>>>>                 DRM_ERROR("Failed to create scheduler on ring %s.\n",
>>>>>>>                       ring->name);
>>>>>>> diff --git a/drivers/gpu/drm/etnaviv/etnaviv_sched.c b/drivers/gpu/drm/etnaviv/etnaviv_sched.c
>>>>>>> index cd46c88..7678287 100644
>>>>>>> --- a/drivers/gpu/drm/etnaviv/etnaviv_sched.c
>>>>>>> +++ b/drivers/gpu/drm/etnaviv/etnaviv_sched.c
>>>>>>> @@ -185,7 +185,8 @@ int etnaviv_sched_init(struct etnaviv_gpu *gpu)
>>>>>>>           ret = drm_sched_init(&gpu->sched, &etnaviv_sched_ops,
>>>>>>>                      etnaviv_hw_jobs_limit, etnaviv_job_hang_limit,
>>>>>>> -                 msecs_to_jiffies(500), dev_name(gpu->dev));
>>>>>>> +                 msecs_to_jiffies(500), dev_name(gpu->dev),
>>>>>>> +                 gpu->drm);
>>>>>>>         if (ret)
>>>>>>>             return ret;
>>>>>>> diff --git a/drivers/gpu/drm/lima/lima_sched.c b/drivers/gpu/drm/lima/lima_sched.c
>>>>>>> index dc6df9e..8a7e5d7ca 100644
>>>>>>> --- a/drivers/gpu/drm/lima/lima_sched.c
>>>>>>> +++ b/drivers/gpu/drm/lima/lima_sched.c
>>>>>>> @@ -505,7 +505,8 @@ int lima_sched_pipe_init(struct lima_sched_pipe *pipe, const char *name)
>>>>>>>           return drm_sched_init(&pipe->base, &lima_sched_ops, 1,
>>>>>>>                       lima_job_hang_limit, msecs_to_jiffies(timeout),
>>>>>>> -                  name);
>>>>>>> +                  name,
>>>>>>> +                  pipe->ldev->ddev);
>>>>>>>     }
>>>>>>>       void lima_sched_pipe_fini(struct lima_sched_pipe *pipe)
>>>>>>> diff --git a/drivers/gpu/drm/panfrost/panfrost_job.c b/drivers/gpu/drm/panfrost/panfrost_job.c
>>>>>>> index 30e7b71..37b03b01 100644
>>>>>>> --- a/drivers/gpu/drm/panfrost/panfrost_job.c
>>>>>>> +++ b/drivers/gpu/drm/panfrost/panfrost_job.c
>>>>>>> @@ -520,7 +520,7 @@ int panfrost_job_init(struct panfrost_device *pfdev)
>>>>>>>             ret = drm_sched_init(&js->queue[j].sched,
>>>>>>>                          &panfrost_sched_ops,
>>>>>>>                          1, 0, msecs_to_jiffies(500),
>>>>>>> -                     "pan_js");
>>>>>>> +                     "pan_js", pfdev->ddev);
>>>>>>>             if (ret) {
>>>>>>>                 dev_err(pfdev->dev, "Failed to create scheduler: %d.", ret);
>>>>>>>                 goto err_sched;
>>>>>>> diff --git a/drivers/gpu/drm/scheduler/sched_main.c b/drivers/gpu/drm/scheduler/sched_main.c
>>>>>>> index c3f0bd0..95db8c6 100644
>>>>>>> --- a/drivers/gpu/drm/scheduler/sched_main.c
>>>>>>> +++ b/drivers/gpu/drm/scheduler/sched_main.c
>>>>>>> @@ -53,6 +53,7 @@
>>>>>>>     #include <drm/drm_print.h>
>>>>>>>     #include <drm/gpu_scheduler.h>
>>>>>>>     #include <drm/spsc_queue.h>
>>>>>>> +#include <drm/drm_drv.h>
>>>>>>>       #define CREATE_TRACE_POINTS
>>>>>>>     #include "gpu_scheduler_trace.h"
>>>>>>> @@ -283,8 +284,16 @@ static void drm_sched_job_timedout(struct work_struct *work)
>>>>>>>         struct drm_gpu_scheduler *sched;
>>>>>>>         struct drm_sched_job *job;
>>>>>>>     +    int idx;
>>>>>>> +
>>>>>>>         sched = container_of(work, struct drm_gpu_scheduler, work_tdr.work);
>>>>>>>     +    if (!drm_dev_enter(sched->ddev, &idx)) {
>>>>>>> +        DRM_INFO("%s - device unplugged skipping recovery on scheduler:%s",
>>>>>>> +             __func__, sched->name);
>>>>>>> +        return;
>>>>>>> +    }
>>>>>>> +
>>>>>>>         /* Protects against concurrent deletion in drm_sched_get_cleanup_job */
>>>>>>>         spin_lock(&sched->job_list_lock);
>>>>>>>         job = list_first_entry_or_null(&sched->ring_mirror_list,
>>>>>>> @@ -316,6 +325,8 @@ static void drm_sched_job_timedout(struct work_struct *work)
>>>>>>>         spin_lock(&sched->job_list_lock);
>>>>>>>         drm_sched_start_timeout(sched);
>>>>>>>         spin_unlock(&sched->job_list_lock);
>>>>>>> +
>>>>>>> +    drm_dev_exit(idx);
>>>>>>>     }
>>>>>>>        /**
>>>>>>> @@ -845,7 +856,8 @@ int drm_sched_init(struct drm_gpu_scheduler *sched,
>>>>>>>                unsigned hw_submission,
>>>>>>>                unsigned hang_limit,
>>>>>>>                long timeout,
>>>>>>> -           const char *name)
>>>>>>> +           const char *name,
>>>>>>> +           struct drm_device *ddev)
>>>>>>>     {
>>>>>>>         int i, ret;
>>>>>>>         sched->ops = ops;
>>>>>>> @@ -853,6 +865,7 @@ int drm_sched_init(struct drm_gpu_scheduler *sched,
>>>>>>>         sched->name = name;
>>>>>>>         sched->timeout = timeout;
>>>>>>>         sched->hang_limit = hang_limit;
>>>>>>> +    sched->ddev = ddev;
>>>>>>>         for (i = DRM_SCHED_PRIORITY_MIN; i < DRM_SCHED_PRIORITY_COUNT; i++)
>>>>>>>             drm_sched_rq_init(sched, &sched->sched_rq[i]);
>>>>>>> diff --git a/drivers/gpu/drm/v3d/v3d_sched.c b/drivers/gpu/drm/v3d/v3d_sched.c
>>>>>>> index 0747614..f5076e5 100644
>>>>>>> --- a/drivers/gpu/drm/v3d/v3d_sched.c
>>>>>>> +++ b/drivers/gpu/drm/v3d/v3d_sched.c
>>>>>>> @@ -401,7 +401,8 @@ v3d_sched_init(struct v3d_dev *v3d)
>>>>>>>                      &v3d_bin_sched_ops,
>>>>>>>                      hw_jobs_limit, job_hang_limit,
>>>>>>>                      msecs_to_jiffies(hang_limit_ms),
>>>>>>> -                 "v3d_bin");
>>>>>>> +                 "v3d_bin",
>>>>>>> +                 &v3d->drm);
>>>>>>>         if (ret) {
>>>>>>>         dev_err(v3d->drm.dev, "Failed to create bin scheduler: %d.", ret);
>>>>>>>             return ret;
>>>>>>> @@ -411,7 +412,8 @@ v3d_sched_init(struct v3d_dev *v3d)
>>>>>>>                      &v3d_render_sched_ops,
>>>>>>>                      hw_jobs_limit, job_hang_limit,
>>>>>>>                      msecs_to_jiffies(hang_limit_ms),
>>>>>>> -                 "v3d_render");
>>>>>>> +                 "v3d_render",
>>>>>>> +                 &v3d->drm);
>>>>>>>         if (ret) {
>>>>>>>         dev_err(v3d->drm.dev, "Failed to create render scheduler: %d.",
>>>>>>>                 ret);
>>>>>>> @@ -423,7 +425,8 @@ v3d_sched_init(struct v3d_dev *v3d)
>>>>>>>                      &v3d_tfu_sched_ops,
>>>>>>>                      hw_jobs_limit, job_hang_limit,
>>>>>>>                      msecs_to_jiffies(hang_limit_ms),
>>>>>>> -                 "v3d_tfu");
>>>>>>> +                 "v3d_tfu",
>>>>>>> +                 &v3d->drm);
>>>>>>>         if (ret) {
>>>>>>>             dev_err(v3d->drm.dev, "Failed to create TFU scheduler: %d.",
>>>>>>>                 ret);
>>>>>>> @@ -436,7 +439,8 @@ v3d_sched_init(struct v3d_dev *v3d)
>>>>>>>                          &v3d_csd_sched_ops,
>>>>>>>                          hw_jobs_limit, job_hang_limit,
>>>>>>>                          msecs_to_jiffies(hang_limit_ms),
>>>>>>> -                     "v3d_csd");
>>>>>>> +                     "v3d_csd",
>>>>>>> +                     &v3d->drm);
>>>>>>>             if (ret) {
>>>>>>>             dev_err(v3d->drm.dev, "Failed to create CSD scheduler: %d.",
>>>>>>>                     ret);
>>>>>>> @@ -448,7 +452,8 @@ v3d_sched_init(struct v3d_dev *v3d)
>>>>>>>                          &v3d_cache_clean_sched_ops,
>>>>>>>                          hw_jobs_limit, job_hang_limit,
>>>>>>>                          msecs_to_jiffies(hang_limit_ms),
>>>>>>> -                     "v3d_cache_clean");
>>>>>>> +                     "v3d_cache_clean",
>>>>>>> +                     &v3d->drm);
>>>>>>>             if (ret) {
>>>>>>>             dev_err(v3d->drm.dev, "Failed to create CACHE_CLEAN scheduler: %d.",
>>>>>>>                     ret);
>>>>>>> diff --git a/include/drm/gpu_scheduler.h b/include/drm/gpu_scheduler.h
>>>>>>> index 9243655..a980709 100644
>>>>>>> --- a/include/drm/gpu_scheduler.h
>>>>>>> +++ b/include/drm/gpu_scheduler.h
>>>>>>> @@ -32,6 +32,7 @@
>>>>>>>       struct drm_gpu_scheduler;
>>>>>>>     struct drm_sched_rq;
>>>>>>> +struct drm_device;
>>>>>>>       /* These are often used as an (initial) index
>>>>>>>      * to an array, and as such should start at 0.
>>>>>>> @@ -267,6 +268,7 @@ struct drm_sched_backend_ops {
>>>>>>>      * @score: score to help loadbalancer pick a idle sched
>>>>>>>      * @ready: marks if the underlying HW is ready to work
>>>>>>>      * @free_guilty: A hit to time out handler to free the guilty job.
>>>>>>> + * @ddev: Pointer to drm device of this scheduler.
>>>>>>>      *
>>>>>>>      * One scheduler is implemented for each hardware ring.
>>>>>>>      */
>>>>>>> @@ -288,12 +290,14 @@ struct drm_gpu_scheduler {
>>>>>>>         atomic_t                        score;
>>>>>>>         bool                ready;
>>>>>>>         bool                free_guilty;
>>>>>>> +    struct drm_device        *ddev;
>>>>>>>     };
>>>>>>>       int drm_sched_init(struct drm_gpu_scheduler *sched,
>>>>>>>                const struct drm_sched_backend_ops *ops,
>>>>>>>                uint32_t hw_submission, unsigned hang_limit, long timeout,
>>>>>>> -           const char *name);
>>>>>>> +           const char *name,
>>>>>>> +           struct drm_device *ddev);
>>>>>>>       void drm_sched_fini(struct drm_gpu_scheduler *sched);
>>>>>>>     int drm_sched_job_init(struct drm_sched_job *job,

