From: Sasha Levin <sashal@kernel.org>
To: patches@lists.linux.dev, stable@vger.kernel.org
Cc: "Christian König" <christian.koenig@amd.com>,
"Alex Deucher" <alexander.deucher@amd.com>,
"Sunil Khatri" <sunil.khatri@amd.com>,
"Sasha Levin" <sashal@kernel.org>,
airlied@gmail.com, simona@ffwll.ch,
amd-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org,
linux-kernel@vger.kernel.org
Subject: [PATCH AUTOSEL 7.0-6.18] drm/amdgpu: fix amdgpu_userq_evict
Date: Mon, 20 Apr 2026 09:18:28 -0400 [thread overview]
Message-ID: <20260420132314.1023554-114-sashal@kernel.org> (raw)
In-Reply-To: <20260420132314.1023554-1-sashal@kernel.org>
From: Christian König <christian.koenig@amd.com>
[ Upstream commit 98dc529a27c57e184ab47453993ba9cfcfcec0ca ]
Canceling the resume worker synchronously can deadlock because it can in
turn wait for the eviction worker through the userq_mutex.
Signed-off-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Reviewed-by: Sunil Khatri <sunil.khatri@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
---
LLM Generated explanations, may be completely bogus:
---
## PHASE 1: COMMIT MESSAGE FORENSICS
**Step 1.1: Subject Line**
- Subsystem: `drm/amdgpu` (AMD GPU DRM driver)
- Action verb: "fix"
- Summary: Fix deadlock in `amdgpu_userq_evict` function
- Record: [drm/amdgpu] [fix] [deadlock in userqueue eviction path]
**Step 1.2: Tags**
- Signed-off-by: Christian König (author, AMD's senior DRM/GPU
maintainer)
- Reviewed-by: Alex Deucher (AMD's kernel graphics lead maintainer)
- Reviewed-by: Sunil Khatri
- Signed-off-by: Alex Deucher (committer)
- No Fixes: tag (expected - that's why manual review is needed)
- No Cc: stable (expected)
- Record: Author is subsystem maintainer. Two Reviewed-by tags from AMD
developers. Strong quality signal.
**Step 1.3: Commit Body**
- Bug: Canceling the resume worker synchronously
(`cancel_delayed_work_sync`) can deadlock because the resume worker
waits for the eviction worker via `userq_mutex`.
- Record: Circular wait between suspend_worker and resume_worker: one
side holds `userq_mutex` while waiting synchronously for the other,
which in turn needs `userq_mutex` (or flushes the suspend work).
**Step 1.4: Hidden Bug Fix Detection**
- This is explicitly labeled "fix" and describes a deadlock. Not hidden
at all.
- Record: Obvious deadlock fix.
---
## PHASE 2: DIFF ANALYSIS
**Step 2.1: Inventory**
- 1 file changed: `drivers/gpu/drm/amd/amdgpu/amdgpu_userq.c`
- Lines removed: ~7, lines added: ~2 (net -5 lines)
- Function modified: `amdgpu_userq_evict()`
- Scope: single-file surgical fix in one function
- Record: Very small, contained change.
**Step 2.2: Code Flow Change**
BEFORE:
```c
if (evf_mgr->fd_closing) {
cancel_delayed_work_sync(&uq_mgr->resume_work);
return;
}
schedule_delayed_work(&uq_mgr->resume_work, 0);
```
AFTER:
```c
if (!evf_mgr->fd_closing)
schedule_delayed_work(&uq_mgr->resume_work, 0);
```
Before: When `fd_closing`, synchronously cancel any pending resume work
and return. Otherwise, schedule resume work.
After: Simply don't schedule resume work when `fd_closing`. No
synchronous cancel.
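The behavioral difference can be isolated in a tiny userspace model (a
sketch; the `evict_before`/`evict_after` helpers and the action codes
are hypothetical, not kernel API). The only input whose outcome changes
is `fd_closing == true`, where the old code blocked and the new code
simply does nothing:

```c
#include <stdbool.h>

/* Hypothetical action codes modeling what amdgpu_userq_evict() does
 * with the resume work; illustrative only, not kernel API. */
enum action { ACT_NONE, ACT_CANCEL_SYNC, ACT_SCHEDULE };

/* Before the fix: when the fd is closing, wait synchronously for
 * resume_work under the caller's userq_mutex (the deadlock risk). */
static enum action evict_before(bool fd_closing)
{
	if (fd_closing)
		return ACT_CANCEL_SYNC;
	return ACT_SCHEDULE;
}

/* After the fix: never wait; just skip scheduling when closing. */
static enum action evict_after(bool fd_closing)
{
	if (!fd_closing)
		return ACT_SCHEDULE;
	return ACT_NONE;
}
```

Only the `fd_closing` branch differs: the synchronous wait becomes a
no-op, so nothing in the eviction path ever blocks under `userq_mutex`.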
**Step 2.3: Bug Mechanism**
This is a **deadlock** fix. The verified call chain:
1. `amdgpu_eviction_fence_suspend_worker()` acquires
`uq_mgr->userq_mutex` (line 110 in `amdgpu_eviction_fence.c`), then
calls `amdgpu_userq_evict()` (line 119)
2. `amdgpu_userq_evict()` calls
`cancel_delayed_work_sync(&uq_mgr->resume_work)` when `fd_closing` -
this waits for resume_work to finish
3. `amdgpu_userq_restore_worker()` (the resume_work callback) first
calls `flush_delayed_work(&fpriv->evf_mgr.suspend_work)` (line 1277),
which waits for the suspend_worker, then tries to acquire
`userq_mutex` (line 1279)
Result: suspend_worker holds `userq_mutex` and waits for resume_worker;
resume_worker either flushes suspend_worker (direct circular wait) or
waits for `userq_mutex` (held by suspend_worker). Classic deadlock.
Record: [Deadlock] [suspend_worker holds userq_mutex ->
cancel_delayed_work_sync waits for resume_worker -> resume_worker
flushes suspend_worker or waits for userq_mutex = DEADLOCK]
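The circular wait can also be checked mechanically. Below is a minimal
userspace sketch (assumption: the two-node wait-for graph and all
helper names are illustrative, not kernel code) in which the buggy edge
set contains a cycle and the fixed one does not:

```c
#include <stdbool.h>

/* Two tasks: the eviction-fence suspend worker and the userq resume
 * worker. waits_for[a][b] means task a cannot proceed until task b
 * finishes or releases what it holds. */
enum { SUSPEND, RESUME, NODES };

/* With only two nodes, a deadlock is exactly a mutual wait. */
static bool has_cycle(const bool w[NODES][NODES])
{
	for (int a = 0; a < NODES; a++)
		for (int b = 0; b < NODES; b++)
			if (a != b && w[a][b] && w[b][a])
				return true;
	return false;
}

static bool buggy_deadlocks(void)
{
	bool w[NODES][NODES] = { { false } };

	/* cancel_delayed_work_sync(): suspend waits for resume */
	w[SUSPEND][RESUME] = true;
	/* mutex_lock(userq_mutex): resume waits for the mutex holder */
	w[RESUME][SUSPEND] = true;
	return has_cycle(w);
}

static bool fixed_deadlocks(void)
{
	bool w[NODES][NODES] = { { false } };

	/* the fix removes the synchronous cancel: no suspend->resume edge */
	w[RESUME][SUSPEND] = true;
	return has_cycle(w);
}
```

The fix breaks the cycle by deleting the suspend-to-resume edge rather
than by reordering locks, which is why it can be so small.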
**Step 2.4: Fix Quality**
- Obviously correct: Removing the `cancel_delayed_work_sync` eliminates
the deadlock
- Minimal/surgical: Only removes the dangerous synchronous cancel, keeps
the logic of not resuming when fd is closing
- Regression risk: Very low. The only concern would be if a stale
resume_work runs after fd_closing, but other cleanup paths
(`amdgpu_userq_destroy` at line 632 does `cancel_delayed_work_sync`
safely before taking mutex) handle this properly.
- Record: High quality fix. No regression risk.
---
## PHASE 3: GIT HISTORY
**Step 3.1: Blame**
- The buggy `cancel_delayed_work_sync` in `amdgpu_userq_evict` was
introduced by commit `f10eb185ad0552` (Arvind Yadav, 2025-05-07, "Fix
NULL dereference in amdgpu_userq_restore_worker"), though the original
`fd_closing` branch was from `44cfdf368fb72c` (Shashank Sharma,
2024-11-20, "resume gfx userqueues").
- Record: Buggy code introduced in the v6.16 development cycle.
**Step 3.2:** No Fixes: tag to follow.
**Step 3.3: Related Changes**
- Commit `648a0dc0d78c3` fixed a different deadlock in the same file
(mutex ordering between `adev->userq_mutex` and `uqm->userq_mutex`)
- Record: There have been multiple deadlock fixes in this subsystem,
indicating active locking issues being resolved.
**Step 3.4: Author**
- Christian König is one of the most senior AMD DRM kernel developers
and a core maintainer of the AMDGPU driver.
- Record: Author is THE subsystem expert. Very high trust signal.
**Step 3.5: Dependencies**
- The fix is self-contained and standalone. It modifies one conditional
block in one function.
- Record: No dependencies. Clean standalone fix.
---
## PHASE 4: MAILING LIST
- b4 dig could not find the fix commit directly (it appears to be very
recent, possibly not yet indexed)
- lore.kernel.org search was blocked by anti-scraping protection
- Record: Could not verify mailing list discussion, but the commit has
two Reviewed-by tags confirming peer review.
---
## PHASE 5: CODE SEMANTIC ANALYSIS
**Step 5.1: Functions modified**
- Only `amdgpu_userq_evict()` is modified.
**Step 5.2: Callers**
- `amdgpu_userq_evict()` is called from
`amdgpu_eviction_fence_suspend_worker()` in `amdgpu_eviction_fence.c`
(line 119). This is a workqueue callback triggered by
`amdgpu_eviction_fence_enable_signaling()` (line 141), which is a
dma_fence_ops callback. This means eviction happens automatically when
BO resources need to be moved, making this a common code path during
normal GPU operation.
**Step 5.3-5.4: Call chains**
- The eviction path is triggered when dma_fence signaling is enabled on
eviction fences attached to BOs. This happens during VM page table
operations, memory allocation, etc. - very common GPU operations.
- Record: The buggy path is reachable during normal GPU usage by any
userspace GPU application.
---
## PHASE 6: STABLE TREE ANALYSIS
**Step 6.1: Does the buggy code exist in stable trees?**
- Verified `amdgpu_userq.c` does NOT exist in v6.12, v6.13, v6.14, or
v6.15
- File first appears in v6.16
- The buggy `cancel_delayed_work_sync` in `amdgpu_userq_evict` exists in
v6.16, v6.17, v6.18, v6.19, and v7.0
- This workspace is `linux-autosel-7.0`, evaluating for the 7.0.y stable
tree
- Record: Bug exists in v7.0 (the target tree) and v6.19.y (current
active stable).
**Step 6.2: Backport difficulty**
- The v7.0 version of the function is identical to the current HEAD -
the patch should apply cleanly.
- Record: Clean apply expected.
---
## PHASE 7: SUBSYSTEM CONTEXT
**Step 7.1: Subsystem**
- `drivers/gpu/drm/amd/amdgpu` - AMD GPU driver, one of the most widely
used GPU drivers
- Criticality: IMPORTANT - affects all AMD GPU users
- Record: [drm/amdgpu] [IMPORTANT]
**Step 7.2: Activity**
- Very active subsystem with frequent commits
- Multiple deadlock fixes in the userqueue code recently, indicating
that this is new code under active development and stabilization
- Record: Very active, new code with multiple recent fixes.
---
## PHASE 8: IMPACT AND RISK ASSESSMENT
**Step 8.1: Who is affected**
- All users of AMD GPUs with userqueue support (modern AMD hardware)
- Record: Driver-specific but large user population (all AMD GPU users
with newer hardware)
**Step 8.2: Trigger conditions**
- The deadlock triggers when: (1) an eviction fence signals while (2)
`fd_closing` is true and (3) a resume_work is pending or running
- This can happen during normal application shutdown/close while GPU
operations are in progress
- Record: Triggered during fd close with concurrent GPU eviction -
realistic scenario during application exit.
**Step 8.3: Failure mode severity**
- DEADLOCK = system hang (at minimum the GPU tasks hang, potentially
wider if other kernel threads wait on the locked mutex)
- Severity: CRITICAL
- Record: [Deadlock -> system hang] [CRITICAL]
**Step 8.4: Risk-Benefit Ratio**
- BENEFIT: Very high - prevents deadlocks during normal GPU operation
- RISK: Very low - 2 lines added, 7 removed, obviously correct, written
and reviewed by subsystem maintainers
- Record: [Very high benefit] [Very low risk] [Strongly favorable]
---
## PHASE 9: FINAL SYNTHESIS
**Step 9.1: Evidence Compilation**
FOR backporting:
- Fixes a real deadlock (CRITICAL severity)
- Very small, surgical fix (net -5 lines)
- Written by Christian König (senior AMD DRM maintainer)
- Reviewed by Alex Deucher (AMD's kernel graphics lead) and Sunil Khatri
- Bug exists in v7.0 (the target stable tree)
- Deadlock is triggered during normal operations (fd close with GPU
activity)
- Obviously correct - removes the synchronous cancel that causes the
deadlock
- Self-contained, no dependencies
AGAINST backporting:
- No items identified
Unresolved:
- Could not verify mailing list discussion (lore blocked)
**Step 9.2: Stable Rules Checklist**
1. Obviously correct and tested? **YES** - The deadlock mechanism is
clear and verifiable from code. Fix is minimal. Two Reviewed-by tags.
2. Fixes a real bug? **YES** - Deadlock during GPU eviction with
fd_closing.
3. Important issue? **YES** - Deadlock = system hang (CRITICAL).
4. Small and contained? **YES** - Net -5 lines in a single function.
5. No new features or APIs? **YES** - Pure bug fix.
6. Can apply to stable trees? **YES** - Code in v7.0 is identical; clean
apply expected.
**Step 9.3: Exception Categories**
- Not applicable - this is a standard bug fix, not an exception
category.
**Step 9.4: Decision**
All evidence strongly supports backporting. This is a textbook deadlock
fix: small, surgical, obviously correct, fixes a critical issue, written
and reviewed by the subsystem maintainers.
---
## Verification
- [Phase 1] Parsed tags: Signed-off-by Christian König (author),
Reviewed-by Alex Deucher, Reviewed-by Sunil Khatri, Signed-off-by Alex
Deucher (committer)
- [Phase 2] Diff analysis: Removes the `cancel_delayed_work_sync()`
call in `amdgpu_userq_evict()` (which runs under `userq_mutex`),
replacing the seven-line if/cancel/return/comment/schedule block with a
two-line guarded schedule
- [Phase 3] git blame: `cancel_delayed_work_sync` at line 1391 was
introduced by commit `f10eb185ad0552` (2025-05-07), fd_closing branch
by `44cfdf368fb72c` (2024-11-20)
- [Phase 3] git log: Found related deadlock fix `648a0dc0d78c3` in same
file, confirming pattern of locking issues
- [Phase 3] Author check: Christian König is a senior AMD DRM maintainer
with extensive commit history
- [Phase 4] b4 dig: Could not find the specific fix commit (likely too
recent); found original buggy series at lore
- [Phase 4] UNVERIFIED: Could not access lore.kernel.org due to anti-
scraping protection
- [Phase 5] Caller analysis: `amdgpu_userq_evict()` called from
`amdgpu_eviction_fence_suspend_worker()` which holds `userq_mutex`
(verified in amdgpu_eviction_fence.c lines 110-119)
- [Phase 5] Deadlock chain verified: suspend_worker(holds userq_mutex)
-> cancel_delayed_work_sync(resume_work) -> resume_worker calls
flush_delayed_work(suspend_work) at line 1277 AND
mutex_lock(userq_mutex) at line 1279 = DEADLOCK
- [Phase 6] File existence check: `amdgpu_userq.c` does NOT exist in
v6.12, v6.13, v6.14, v6.15; EXISTS in v6.16, v6.17, v6.18, v6.19, v7.0
- [Phase 6] Verified buggy `cancel_delayed_work_sync` in
`amdgpu_userq_evict` exists in v6.16 through v7.0 (all versions
checked)
- [Phase 6] Verified v7.0 code is identical to current HEAD - clean
apply expected
- [Phase 8] Failure mode: Deadlock -> system hang during GPU fd close,
severity CRITICAL
**YES**
drivers/gpu/drm/amd/amdgpu/amdgpu_userq.c | 9 ++-------
1 file changed, 2 insertions(+), 7 deletions(-)
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_userq.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_userq.c
index 09f1d05328897..e8d12556d690a 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_userq.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_userq.c
@@ -1389,13 +1389,8 @@ amdgpu_userq_evict(struct amdgpu_userq_mgr *uq_mgr,
/* Signal current eviction fence */
amdgpu_eviction_fence_signal(evf_mgr, ev_fence);
- if (evf_mgr->fd_closing) {
- cancel_delayed_work_sync(&uq_mgr->resume_work);
- return;
- }
-
- /* Schedule a resume work */
- schedule_delayed_work(&uq_mgr->resume_work, 0);
+ if (!evf_mgr->fd_closing)
+ schedule_delayed_work(&uq_mgr->resume_work, 0);
}
int amdgpu_userq_mgr_init(struct amdgpu_userq_mgr *userq_mgr, struct drm_file *file_priv,
--
2.53.0