Subject: Re: [PATCH v4] drm/scheduler: Avoid accessing freed bad job.
From: Andrey Grodzovsky <Andrey.Grodzovsky@amd.com>
Date: Tue, 3 Dec 2019 14:57:59 -0500
Message-ID: <0137aad4-bd70-2abf-d321-e9c88101480a@amd.com>
References: <1574715089-14875-1-git-send-email-andrey.grodzovsky@amd.com> <40f1020c-fccf-99f9-33ff-f82ef1f5f360@amd.com>
To: "Deucher, Alexander" <Alexander.Deucher@amd.com>, "Deng, Emily" <Emily.Deng@amd.com>
Cc: steven.price@arm.com, amd-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org, "Koenig, Christian" <Christian.Koenig@amd.com>

I don't think I can apply this patch 'as is', since it depends on a patch by Steven that also hasn't been applied yet: 588b982 (Steven Price, 6 weeks ago) drm: Don't free jobs in wait_event_interruptible()


Andrey


On 12/3/19 2:44 PM, Deucher, Alexander wrote:

[AMD Official Use Only - Internal Distribution Only]


Please go ahead and apply whatever version is necessary for amd-staging-drm-next.

Alex


From: Grodzovsky, Andrey <Andrey.Grodzovsky@amd.com>
Sent: Tuesday, December 3, 2019 2:10 PM
To: Deng, Emily <Emily.Deng@amd.com>; Deucher, Alexander <Alexander.Deucher@amd.com>
Cc: dri-devel@lists.freedesktop.org <dri-devel@lists.freedesktop.org>; amd-gfx@lists.freedesktop.org <amd-gfx@lists.freedesktop.org>; Koenig, Christian <Christian.Koenig@amd.com>; steven.price@arm.com <steven.price@arm.com>
Subject: Re: [PATCH v4] drm/scheduler: Avoid accessing freed bad job.
 
Yes - Christian just pushed it to drm-misc-next - I guess Alex/Christian
didn't pull to amd-staging-drm-next yet.

Andrey

On 12/2/19 2:24 PM, Deng, Emily wrote:
> [AMD Official Use Only - Internal Distribution Only]
>
> Hi Andrey,
>      Seems this patch is still not in amd-staging-drm-next?
>
> Best wishes
> Emily Deng
>
>
>
>> -----Original Message-----
>> From: Deng, Emily
>> Sent: Tuesday, November 26, 2019 4:41 PM
>> To: Grodzovsky, Andrey <Andrey.Grodzovsky@amd.com>
>> Cc: dri-devel@lists.freedesktop.org; amd-gfx@lists.freedesktop.org; Koenig,
>> Christian <Christian.Koenig@amd.com>; steven.price@arm.com
>> Subject: RE: [PATCH v4] drm/scheduler: Avoid accessing freed bad job.
>>
>> [AMD Official Use Only - Internal Distribution Only]
>>
>> Reviewed-by: Emily Deng <Emily.Deng@amd.com>
>>
>>> -----Original Message-----
>>> From: Grodzovsky, Andrey <Andrey.Grodzovsky@amd.com>
>>> Sent: Tuesday, November 26, 2019 7:37 AM
>>> Cc: dri-devel@lists.freedesktop.org; amd-gfx@lists.freedesktop.org;
>>> Koenig, Christian <Christian.Koenig@amd.com>; Deng, Emily
>>> <Emily.Deng@amd.com>; steven.price@arm.com
>>> Subject: Re: [PATCH v4] drm/scheduler: Avoid accessing freed bad job.
>>>
>>> Ping
>>>
>>> Andrey
>>>
>>> On 11/25/19 3:51 PM, Andrey Grodzovsky wrote:
>>>> Problem:
>>>> Due to a race between drm_sched_cleanup_jobs in the sched thread and
>>>> drm_sched_job_timedout in the timeout work, there is a possibility that
>>>> the bad job was already freed while still being accessed from the
>>>> timeout thread.
>>>>
>>>> Fix:
>>>> Instead of just peeking at the bad job in the mirror list, remove it
>>>> from the list under the lock and then put it back later, when we are
>>>> guaranteed that no race with the main sched thread is possible, which
>>>> is after the thread is parked.
>>>>
>>>> v2: Lock around processing ring_mirror_list in drm_sched_cleanup_jobs.
>>>>
>>>> v3: Rebase on top of drm-misc-next. v2 is not needed anymore as
>>>> drm_sched_get_cleanup_job already has a lock there.
>>>>
>>>> v4: Fix comments to reflect latest code in drm-misc.
>>>>
>>>> Signed-off-by: Andrey Grodzovsky <andrey.grodzovsky@amd.com>
>>>> Reviewed-by: Christian König <christian.koenig@amd.com>
>>>> Tested-by: Emily Deng <Emily.Deng@amd.com>
>>>> ---
>>>>    drivers/gpu/drm/scheduler/sched_main.c | 27 +++++++++++++++++++++++++++
>>>>    1 file changed, 27 insertions(+)
>>>>
>>>> diff --git a/drivers/gpu/drm/scheduler/sched_main.c b/drivers/gpu/drm/scheduler/sched_main.c
>>>> index 6774955..1bf9c40 100644
>>>> --- a/drivers/gpu/drm/scheduler/sched_main.c
>>>> +++ b/drivers/gpu/drm/scheduler/sched_main.c
>>>> @@ -284,10 +284,21 @@ static void drm_sched_job_timedout(struct work_struct *work)
>>>>     unsigned long flags;
>>>>
>>>>     sched = container_of(work, struct drm_gpu_scheduler, work_tdr.work);
>>>> +
>>>> +  /* Protects against concurrent deletion in drm_sched_get_cleanup_job */
>>>> +  spin_lock_irqsave(&sched->job_list_lock, flags);
>>>>     job = list_first_entry_or_null(&sched->ring_mirror_list,
>>>>                                    struct drm_sched_job, node);
>>>>
>>>>     if (job) {
>>>> +          /*
>>>> +           * Remove the bad job so it cannot be freed by concurrent
>>>> +           * drm_sched_cleanup_jobs. It will be reinserted back after sched->thread
>>>> +           * is parked at which point it's safe.
>>>> +           */
>>>> +          list_del_init(&job->node);
>>>> +          spin_unlock_irqrestore(&sched->job_list_lock, flags);
>>>> +
>>>>             job->sched->ops->timedout_job(job);
>>>>
>>>>             /*
>>>> @@ -298,6 +309,8 @@ static void drm_sched_job_timedout(struct work_struct *work)
>>>>                     job->sched->ops->free_job(job);
>>>>                     sched->free_guilty = false;
>>>>             }
>>>> +  } else {
>>>> +          spin_unlock_irqrestore(&sched->job_list_lock, flags);
>>>>     }
>>>>
>>>>     spin_lock_irqsave(&sched->job_list_lock, flags);
>>>> @@ -370,6 +383,20 @@ void drm_sched_stop(struct drm_gpu_scheduler *sched, struct drm_sched_job *bad)
>>>>     kthread_park(sched->thread);
>>>>
>>>>     /*
>>>> +   * Reinsert back the bad job here - now it's safe as
>>>> +   * drm_sched_get_cleanup_job cannot race against us and release the
>>>> +   * bad job at this point - we parked (waited for) any in progress
>>>> +   * (earlier) cleanups and drm_sched_get_cleanup_job will not be called
>>>> +   * now until the scheduler thread is unparked.
>>>> +   */
>>>> +  if (bad && bad->sched == sched)
>>>> +          /*
>>>> +           * Add at the head of the queue to reflect it was the earliest
>>>> +           * job extracted.
>>>> +           */
>>>> +          list_add(&bad->node, &sched->ring_mirror_list);
>>>> +
>>>> +  /*
>>>>      * Iterate the job list from later to  earlier one and either deactive
>>>>      * their HW callbacks or remove them from mirror list if they already
>>>>      * signaled.
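
For readers skimming the thread, the pattern in the patch above can be boiled down to a short self-contained sketch. This is a simplified userspace model under obvious assumptions: plain C structs and a pthread mutex stand in for the real drm_sched structures and spinlock, a singly linked list stands in for ring_mirror_list, and thread parking is only indicated by a comment. It is not the kernel code itself.

/* Sketch (hypothetical, simplified): the timeout path detaches the
 * suspected-bad job from the shared list under the lock, so a concurrent
 * cleanup cannot free it; it is reinserted at the head only once the
 * scheduler thread is parked. */
#include <pthread.h>
#include <stdio.h>

struct job {
	struct job *next;
	int id;
};

struct scheduler {
	pthread_mutex_t job_list_lock;   /* stands in for sched->job_list_lock */
	struct job *mirror_list;         /* stands in for sched->ring_mirror_list */
};

/* Timeout path: remove the first (oldest, presumed bad) job under the lock
 * so the cleanup path cannot free it while we still use it. */
static struct job *timedout_take_bad_job(struct scheduler *sched)
{
	pthread_mutex_lock(&sched->job_list_lock);
	struct job *bad = sched->mirror_list;
	if (bad)
		sched->mirror_list = bad->next;   /* the list_del_init() step */
	pthread_mutex_unlock(&sched->job_list_lock);
	return bad;   /* now unreachable from the list, so it cannot be freed */
}

/* Stop path: called only after the scheduler thread is parked, i.e. no
 * cleanup can run concurrently, so plain reinsertion at the head is safe
 * (head, because it was the earliest job extracted). */
static void stop_reinsert_bad_job(struct scheduler *sched, struct job *bad)
{
	/* kthread_park(sched->thread) has already happened at this point */
	if (bad) {
		bad->next = sched->mirror_list;   /* the list_add() step */
		sched->mirror_list = bad;
	}
}

int main(void)
{
	struct scheduler sched = { PTHREAD_MUTEX_INITIALIZER, NULL };
	struct job bad_job = { NULL, 1 };
	sched.mirror_list = &bad_job;

	struct job *bad = timedout_take_bad_job(&sched);
	printf("detached job %d; list head is now %p\n",
	       bad->id, (void *)sched.mirror_list);
	/* ... the timedout_job() callback would run here, racing nothing ... */
	stop_reinsert_bad_job(&sched, bad);
	printf("reinserted; list head is job %d again\n", sched.mirror_list->id);
	return 0;
}

The invariant is the same one the patch establishes: between the take and the reinsert, the bad job is unreachable from the list, so a concurrent cleanup path that frees whatever it pops off the list can never free the job the timeout handler is still using.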
_______________________________________________
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx