From mboxrd@z Thu Jan 1 00:00:00 1970
From: Sasha Levin
To: linux-kernel@vger.kernel.org, stable@vger.kernel.org
Cc: Erico Nunes, Qiang Yu, Sasha Levin, maarten.lankhorst@linux.intel.com,
	mripard@kernel.org, tzimmermann@suse.de, airlied@gmail.com,
	daniel@ffwll.ch, dri-devel@lists.freedesktop.org,
	lima@lists.freedesktop.org
Subject: [PATCH AUTOSEL 6.8 10/20] drm/lima: mask irqs in timeout path before hard reset
Date: Mon, 27 May 2024 11:52:53 -0400
Message-ID: <20240527155349.3864778-10-sashal@kernel.org>
X-Mailer: git-send-email 2.43.0
In-Reply-To: <20240527155349.3864778-1-sashal@kernel.org>
References: <20240527155349.3864778-1-sashal@kernel.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-stable: review
X-stable-base: Linux 6.8.11

From: Erico Nunes

[ Upstream commit a421cc7a6a001b70415aa4f66024fa6178885a14 ]

There is a race condition in which a rendering job might take just long
enough to trigger the drm sched job timeout handler but also still
complete before the hard reset is done by the timeout handler. This
runs into race conditions not expected by the timeout handler.

In some very specific cases it currently may result in a refcount
imbalance on lima_pm_idle, with a stack dump such as:

[10136.669170] WARNING: CPU: 0 PID: 0 at drivers/gpu/drm/lima/lima_devfreq.c:205 lima_devfreq_record_idle+0xa0/0xb0
...
[10136.669459] pc : lima_devfreq_record_idle+0xa0/0xb0
...
[10136.669628] Call trace:
[10136.669634]  lima_devfreq_record_idle+0xa0/0xb0
[10136.669646]  lima_sched_pipe_task_done+0x5c/0xb0
[10136.669656]  lima_gp_irq_handler+0xa8/0x120
[10136.669666]  __handle_irq_event_percpu+0x48/0x160
[10136.669679]  handle_irq_event+0x4c/0xc0

We can prevent that race condition entirely by masking the irqs at the
beginning of the timeout handler, at which point we give up on waiting
for that job entirely. The irqs will be enabled again at the next hard
reset which is already done as a recovery by the timeout handler.

Signed-off-by: Erico Nunes
Reviewed-by: Qiang Yu
Signed-off-by: Qiang Yu
Link: https://patchwork.freedesktop.org/patch/msgid/20240405152951.1531555-4-nunes.erico@gmail.com
Signed-off-by: Sasha Levin
---
 drivers/gpu/drm/lima/lima_sched.c | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/drivers/gpu/drm/lima/lima_sched.c b/drivers/gpu/drm/lima/lima_sched.c
index c3bf8cda84982..5ba60fe756167 100644
--- a/drivers/gpu/drm/lima/lima_sched.c
+++ b/drivers/gpu/drm/lima/lima_sched.c
@@ -402,6 +402,13 @@ static enum drm_gpu_sched_stat lima_sched_timedout_job(struct drm_sched_job *job
 	struct lima_sched_task *task = to_lima_task(job);
 	struct lima_device *ldev = pipe->ldev;
 
+	/*
+	 * The task might still finish while this timeout handler runs.
+	 * To prevent a race condition on its completion, mask all irqs
+	 * on the running core until the next hard reset completes.
+	 */
+	pipe->task_mask_irq(pipe);
+
 	if (!pipe->error)
 		DRM_ERROR("lima job timeout\n");
 
-- 
2.43.0