Date: Thu, 26 Mar 2026 10:54:07 +0900
From: Sergey Senozhatsky
To: Rob Clark, Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, David Airlie, Simona Vetter
Cc: Sean Paul, Konrad Dybcio, Akhil P Oommen, linux-arm-msm@vger.kernel.org, dri-devel@lists.freedesktop.org, freedreno@lists.freedesktop.org, linux-kernel@vger.kernel.org, Tomasz Figa, Sergey Senozhatsky
Subject: Re: [RFC PATCH] drm: gpu: msm: forbid mem reclaim from reset
References: <20260127073341.2862078-1-senozhatsky@chromium.org>
In-Reply-To: <20260127073341.2862078-1-senozhatsky@chromium.org>

On (26/01/27 16:33), Sergey Senozhatsky wrote:
> We sometimes get into a situation where GPU hangcheck fails to
> recover the GPU:
>
> [..]
> msm_dpu ae01000.display-controller: [drm:hangcheck_handler] *ERROR* (IPv4: 1): hangcheck detected gpu lockup rb 0!
> msm_dpu ae01000.display-controller: [drm:hangcheck_handler] *ERROR* (IPv4: 1): completed fence: 7840161
> msm_dpu ae01000.display-controller: [drm:hangcheck_handler] *ERROR* (IPv4: 1): submitted fence: 7840162
> msm_dpu ae01000.display-controller: [drm:hangcheck_handler] *ERROR* (IPv4: 1): hangcheck detected gpu lockup rb 0!
> msm_dpu ae01000.display-controller: [drm:hangcheck_handler] *ERROR* (IPv4: 1): completed fence: 7840162
> msm_dpu ae01000.display-controller: [drm:hangcheck_handler] *ERROR* (IPv4: 1): submitted fence: 7840163
> [..]
>
> The problem is that the msm_job worker is blocked on gpu->lock:
>
> INFO: task ring0:155 blocked for more than 122 seconds.
>       Not tainted 6.6.99-08727-gaac38b365d2c #1
> task:ring0 state:D stack:0 pid:155 ppid:2 flags:0x00000008
> Call trace:
>  __switch_to+0x108/0x208
>  schedule+0x544/0x11f0
>  schedule_preempt_disabled+0x30/0x50
>  __mutex_lock_common+0x410/0x850
>  __mutex_lock_slowpath+0x28/0x40
>  mutex_lock+0x5c/0x90
>  msm_job_run+0x9c/0x140
>  drm_sched_main+0x514/0x938
>  kthread+0x114/0x138
>  ret_from_fork+0x10/0x20
>
> which is owned by the recover worker, which in turn is waiting for
> DMA fences from a memory reclaim path, under the very same gpu->lock:
>
> INFO: task ring0:155 is blocked on a mutex likely owned by task gpu-worker:154.
> task:gpu-worker state:D stack:0 pid:154 ppid:2 flags:0x00000008
> Call trace:
>  __switch_to+0x108/0x208
>  schedule+0x544/0x11f0
>  schedule_timeout+0x1f8/0x770
>  dma_fence_default_wait+0x108/0x218
>  dma_fence_wait_timeout+0x6c/0x1c0
>  dma_resv_wait_timeout+0xe4/0x118
>  active_purge+0x34/0x98
>  drm_gem_lru_scan+0x1d0/0x388
>  msm_gem_shrinker_scan+0x1cc/0x2e8
>  shrink_slab+0x228/0x478
>  shrink_node+0x380/0x730
>  try_to_free_pages+0x204/0x510
>  __alloc_pages_direct_reclaim+0x90/0x158
>  __alloc_pages_slowpath+0x1d4/0x4a0
>  __alloc_pages+0x9f0/0xc88
>  vm_area_alloc_pages+0x17c/0x260
>  __vmalloc_node_range+0x1c0/0x420
>  kvmalloc_node+0xe8/0x108
>  msm_gpu_crashstate_capture+0x1e4/0x280
>  recover_worker+0x1c0/0x638
>  kthread_worker_fn+0x150/0x2d8
>  kthread+0x114/0x138
>
> So no one can make any further progress.
>
> Forbid the recover/fault workers from entering memory reclaim (under
> gpu->lock) to address this deadlock scenario.
>
> Cc: Tomasz Figa
> Signed-off-by: Sergey Senozhatsky

Folks, can somebody please review/pick up this patch? It solves a real
deadlock that we observe in the field.
// keeping the patch body just in case

> ---
>  drivers/gpu/drm/msm/msm_gpu.c | 11 +++++++++++
>  1 file changed, 11 insertions(+)
>
> diff --git a/drivers/gpu/drm/msm/msm_gpu.c b/drivers/gpu/drm/msm/msm_gpu.c
> index 995549d0bbbc..ddcd9e1c217a 100644
> --- a/drivers/gpu/drm/msm/msm_gpu.c
> +++ b/drivers/gpu/drm/msm/msm_gpu.c
> @@ -17,6 +17,7 @@
>  #include
>  #include
>  #include
> +#include
>
>  /*
>   * Power Management:
> @@ -469,6 +470,7 @@ static void recover_worker(struct kthread_work *work)
>  	struct msm_gem_submit *submit;
>  	struct msm_ringbuffer *cur_ring = gpu->funcs->active_ring(gpu);
>  	char *comm = NULL, *cmd = NULL;
> +	unsigned int noreclaim_flag;
>  	struct task_struct *task;
>  	int i;
>
> @@ -506,6 +508,8 @@
>  		msm_gem_vm_unusable(submit->vm);
>  	}
>
> +	noreclaim_flag = memalloc_noreclaim_save();
> +
>  	get_comm_cmdline(submit, &comm, &cmd);
>
>  	if (comm && cmd) {
> @@ -524,6 +528,8 @@
>  	pm_runtime_get_sync(&gpu->pdev->dev);
>  	msm_gpu_crashstate_capture(gpu, submit, NULL, comm, cmd);
>
> +	memalloc_noreclaim_restore(noreclaim_flag);
> +
>  	kfree(cmd);
>  	kfree(comm);
>
> @@ -588,6 +594,7 @@ void msm_gpu_fault_crashstate_capture(struct msm_gpu *gpu, struct msm_gpu_fault_
>  	struct msm_gem_submit *submit;
>  	struct msm_ringbuffer *cur_ring = gpu->funcs->active_ring(gpu);
>  	char *comm = NULL, *cmd = NULL;
> +	unsigned int noreclaim_flag;
>
>  	mutex_lock(&gpu->lock);
>
> @@ -595,6 +602,8 @@ void msm_gpu_fault_crashstate_capture(struct msm_gpu *gpu, struct msm_gpu_fault_
>  	if (submit && submit->fault_dumped)
>  		goto resume_smmu;
>
> +	noreclaim_flag = memalloc_noreclaim_save();
> +
>  	if (submit) {
>  		get_comm_cmdline(submit, &comm, &cmd);
>
> @@ -610,6 +619,8 @@ void msm_gpu_fault_crashstate_capture(struct msm_gpu *gpu, struct msm_gpu_fault_
>  	msm_gpu_crashstate_capture(gpu, submit, fault_info, comm, cmd);
>  	pm_runtime_put_sync(&gpu->pdev->dev);
>
> +	memalloc_noreclaim_restore(noreclaim_flag);
> +
>  	kfree(cmd);
>  	kfree(comm);
>
> --
> 2.53.0.rc1.217.geba53bf80e-goog
>