From: Wayne Lin <Wayne.Lin@amd.com>
To: <dri-devel@lists.freedesktop.org>, <amd-gfx@lists.freedesktop.org>
Cc: <lyude@redhat.com>, <imre.deak@intel.com>,
<jani.nikula@intel.com>, <ville.syrjala@linux.intel.com>,
<harry.wentland@amd.com>, <jerry.zuo@amd.com>,
Wayne Lin <Wayne.Lin@amd.com>, <stable@vger.kernel.org>
Subject: [PATCH] drm/dp_mst: Clear MSG_RDY flag before sending new message
Date: Tue, 18 Apr 2023 14:09:05 +0800
Message-ID: <20230418060905.4078976-1-Wayne.Lin@amd.com>
[Why & How]
The sequence for collecting down_reply/up_request from source
perspective should be:
Request_n -> repeat (get a partial reply for Request_n -> clear the
message-ready flag to ack the DPRX that the message was received) until
all partial replies for Request_n are received -> new Request_n+1.
While assembling partial reply packets, reading out DPCD DOWN_REP
Sideband MSG buffer + clearing DOWN_REP_MSG_RDY flag should be
wrapped up as a complete operation for reading out a reply packet.
Kicking off a new request before clearing the DOWN_REP_MSG_RDY flag is
risky: if the reply to the new request overwrites the DPRX DOWN_REP
sideband MSG buffer before the source writes the ack to clear
DOWN_REP_MSG_RDY, the source unintentionally flushes the reply to the
new request. Up requests should be handled the same way.

In drm_dp_mst_hpd_irq(), we don't clear the MSG_RDY flags before
calling drm_dp_mst_kick_tx(). Fix that.
Signed-off-by: Wayne Lin <Wayne.Lin@amd.com>
Cc: stable@vger.kernel.org
---
.../gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c | 2 ++
drivers/gpu/drm/display/drm_dp_mst_topology.c | 22 +++++++++++++++++++
drivers/gpu/drm/i915/display/intel_dp.c | 3 +++
drivers/gpu/drm/nouveau/dispnv50/disp.c | 2 ++
4 files changed, 29 insertions(+)
diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
index 77277d90b6e2..5313a5656598 100644
--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
@@ -3166,6 +3166,8 @@ static void dm_handle_mst_sideband_msg(struct amdgpu_dm_connector *aconnector)
for (retry = 0; retry < 3; retry++) {
uint8_t wret;
+ /* MSG_RDY ack is done in drm */
+ esi[1] &= ~(DP_DOWN_REP_MSG_RDY | DP_UP_REQ_MSG_RDY);
wret = drm_dp_dpcd_write(
&aconnector->dm_dp_aux.aux,
dpcd_addr + 1,
diff --git a/drivers/gpu/drm/display/drm_dp_mst_topology.c b/drivers/gpu/drm/display/drm_dp_mst_topology.c
index 51a46689cda7..02aad713c67c 100644
--- a/drivers/gpu/drm/display/drm_dp_mst_topology.c
+++ b/drivers/gpu/drm/display/drm_dp_mst_topology.c
@@ -4054,6 +4054,9 @@ int drm_dp_mst_hpd_irq(struct drm_dp_mst_topology_mgr *mgr, u8 *esi, bool *handl
{
int ret = 0;
int sc;
+ const int tosend = 1;
+ int retries = 0;
+ u8 buf = 0;
*handled = false;
sc = DP_GET_SINK_COUNT(esi[0]);
@@ -4072,6 +4075,25 @@ int drm_dp_mst_hpd_irq(struct drm_dp_mst_topology_mgr *mgr, u8 *esi, bool *handl
*handled = true;
}
+ if (*handled) {
+ buf = esi[1] & (DP_DOWN_REP_MSG_RDY | DP_UP_REQ_MSG_RDY);
+ do {
+ ret = drm_dp_dpcd_write(mgr->aux,
+ DP_DEVICE_SERVICE_IRQ_VECTOR_ESI0,
+ &buf,
+ tosend);
+
+ if (ret == tosend)
+ break;
+
+ retries++;
+ } while (retries < 5);
+
+ if (ret != tosend)
+ drm_dbg_kms(mgr->dev, "failed to write dpcd 0x%x\n",
+ DP_DEVICE_SERVICE_IRQ_VECTOR_ESI0);
+ }
+
drm_dp_mst_kick_tx(mgr);
return ret;
}
diff --git a/drivers/gpu/drm/i915/display/intel_dp.c b/drivers/gpu/drm/i915/display/intel_dp.c
index bf80f296a8fd..abec3de38b66 100644
--- a/drivers/gpu/drm/i915/display/intel_dp.c
+++ b/drivers/gpu/drm/i915/display/intel_dp.c
@@ -3939,6 +3939,9 @@ intel_dp_check_mst_status(struct intel_dp *intel_dp)
if (!memchr_inv(ack, 0, sizeof(ack)))
break;
+ /* MSG_RDY ack is done in drm */
+ ack[1] &= ~(DP_DOWN_REP_MSG_RDY | DP_UP_REQ_MSG_RDY);
+
if (!intel_dp_ack_sink_irq_esi(intel_dp, ack))
drm_dbg_kms(&i915->drm, "Failed to ack ESI\n");
}
diff --git a/drivers/gpu/drm/nouveau/dispnv50/disp.c b/drivers/gpu/drm/nouveau/dispnv50/disp.c
index edcb2529b402..e905987104ed 100644
--- a/drivers/gpu/drm/nouveau/dispnv50/disp.c
+++ b/drivers/gpu/drm/nouveau/dispnv50/disp.c
@@ -1336,6 +1336,8 @@ nv50_mstm_service(struct nouveau_drm *drm,
if (!handled)
break;
+ /* MSG_RDY ack is done in drm */
+ esi[1] &= ~(DP_DOWN_REP_MSG_RDY | DP_UP_REQ_MSG_RDY);
rc = drm_dp_dpcd_write(aux, DP_SINK_COUNT_ESI + 1, &esi[1],
3);
if (rc != 3) {
--
2.37.3