From mboxrd@z Thu Jan 1 00:00:00 1970
Subject: FAILED: patch "[PATCH] drm/dp_mst: Increase ACT retry timeout to 3s" failed to apply to 5.4-stable tree
To: lyude@redhat.com, sean@poorly.run, stable@vger.kernel.org
Cc:
From:
Date: Mon, 22 Jun 2020 20:05:31 +0200
Message-ID: <15928491317149@kroah.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

The patch below does not apply to the 5.4-stable tree.
If someone wants it applied there, or to any other stable or longterm
tree, then please email the backport, including the original git commit
id to .

thanks,

greg k-h

------------------ original commit in Linus's tree ------------------

>From 873a95e0d59ac06901ae261dda0b7165ffd002b8 Mon Sep 17 00:00:00 2001
From: Lyude Paul
Date: Fri, 3 Apr 2020 15:47:15 -0400
Subject: [PATCH] drm/dp_mst: Increase ACT retry timeout to 3s
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

Currently we only poll for an ACT up to 30 times, with a busy-wait delay
of 100µs between each attempt - giving us a timeout of 2900µs. While
this might seem sensible, it would appear that in certain scenarios it
can take dramatically longer than that for us to receive an ACT. On one
of the EVGA MST hubs that I have available, I observed said hub
sometimes taking longer than a second before signalling the ACT. These
delays mostly seem to occur when previous sideband messages we've sent
are NAKd by the hub; however, it wouldn't be particularly surprising if
it's possible to reproduce times like this simply by introducing branch
devices with large LCTs, since payload allocations have to take effect
on every downstream device up to the payload's target.

So, instead of just retrying 30 times we poll for the ACT for up to
3 seconds, and additionally use usleep_range() to avoid a very long and
rude busy-wait. Note that the previous retry count of 30 appears to
have been arbitrarily chosen, as I can't find any mention of a
recommended timeout or retry count for ACTs in the DisplayPort 2.0
specification. This also goes for the range we were previously using
for udelay(), although I suspect that was just copied from the
recommended delay for link training on SST devices.

Changes since v1:
* Use readx_poll_timeout() instead of open-coding timeout loop - Sean Paul

Changes since v2:
* Increase poll interval to 200us - Sean Paul
* Print status in hex when we timeout waiting for ACT - Sean Paul

Signed-off-by: Lyude Paul
Fixes: ad7f8a1f9ced ("drm/helper: add Displayport multi-stream helper (v0.6)")
Cc: Sean Paul
Cc:  # v3.17+
Reviewed-by: Sean Paul
Link: https://patchwork.freedesktop.org/patch/msgid/20200406221253.1307209-4-lyude@redhat.com

diff --git a/drivers/gpu/drm/drm_dp_mst_topology.c b/drivers/gpu/drm/drm_dp_mst_topology.c
index e7a5bd3e6015..8942ab98ab64 100644
--- a/drivers/gpu/drm/drm_dp_mst_topology.c
+++ b/drivers/gpu/drm/drm_dp_mst_topology.c
@@ -27,6 +27,7 @@
 #include
 #include
 #include
+#include
 
 #if IS_ENABLED(CONFIG_DRM_DEBUG_DP_MST_TOPOLOGY_REFS)
 #include
@@ -4438,43 +4439,53 @@ static int drm_dp_dpcd_write_payload(struct drm_dp_mst_topology_mgr *mgr,
 	return ret;
 }
 
+static int do_get_act_status(struct drm_dp_aux *aux)
+{
+	int ret;
+	u8 status;
+
+	ret = drm_dp_dpcd_readb(aux, DP_PAYLOAD_TABLE_UPDATE_STATUS, &status);
+	if (ret < 0)
+		return ret;
+
+	return status;
+}
+
 /**
  * drm_dp_check_act_status() - Polls for ACT handled status.
  * @mgr: manager to use
  *
  * Tries waiting for the MST hub to finish updating it's payload table by
- * polling for the ACT handled bit.
+ * polling for the ACT handled bit for up to 3 seconds (yes-some hubs really
+ * take that long).
  *
  * Returns:
  * 0 if the ACT was handled in time, negative error code on failure.
  */
 int drm_dp_check_act_status(struct drm_dp_mst_topology_mgr *mgr)
 {
-	int count = 0, ret;
-	u8 status;
-
-	do {
-		ret = drm_dp_dpcd_readb(mgr->aux,
-					DP_PAYLOAD_TABLE_UPDATE_STATUS,
-					&status);
-		if (ret < 0) {
-			DRM_DEBUG_KMS("failed to read payload table status %d\n",
-				      ret);
-			return ret;
-		}
-
-		if (status & DP_PAYLOAD_ACT_HANDLED)
-			break;
-		count++;
-		udelay(100);
-	} while (count < 30);
-
-	if (!(status & DP_PAYLOAD_ACT_HANDLED)) {
-		DRM_DEBUG_KMS("failed to get ACT bit %d after %d retries\n",
-			      status, count);
+	/*
+	 * There doesn't seem to be any recommended retry count or timeout in
+	 * the MST specification. Since some hubs have been observed to take
+	 * over 1 second to update their payload allocations under certain
+	 * conditions, we use a rather large timeout value.
+	 */
+	const int timeout_ms = 3000;
+	int ret, status;
+
+	ret = readx_poll_timeout(do_get_act_status, mgr->aux, status,
+				 status & DP_PAYLOAD_ACT_HANDLED || status < 0,
+				 200, timeout_ms * USEC_PER_MSEC);
+	if (ret < 0 && status >= 0) {
+		DRM_DEBUG_KMS("Failed to get ACT after %dms, last status: %02x\n",
+			      timeout_ms, status);
 		return -EINVAL;
+	} else if (status < 0) {
+		DRM_DEBUG_KMS("Failed to read payload table status: %d\n",
+			      status);
+		return status;
 	}
+
 	return 0;
 }
 EXPORT_SYMBOL(drm_dp_check_act_status);
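
For anyone preparing the 5.4 backport: the readx_poll_timeout() call above is just a
read/check/sleep loop with an overall deadline. The stand-alone C sketch below mirrors
that pattern with the same knobs the patch uses (200us between polls, 3 s total). It is
an illustration only, not kernel code; read_status(), ACT_HANDLED, sleep_us() and
poll_act_handled() are hypothetical stand-ins for drm_dp_dpcd_readb(),
DP_PAYLOAD_ACT_HANDLED and the readx_poll_timeout() helper from <linux/iopoll.h>.

/*
 * Stand-alone sketch of the poll-with-timeout pattern the patch adopts.
 * NOT kernel code; all names below are hypothetical stand-ins.
 */
#define _POSIX_C_SOURCE 200809L
#include <stdio.h>
#include <time.h>

#define ACT_HANDLED 0x02	/* stand-in for DP_PAYLOAD_ACT_HANDLED */

/* Mock DPCD read: pretend the hub signals the ACT on the fifth poll. */
static int read_status(void)
{
	static int calls;

	return ++calls >= 5 ? ACT_HANDLED : 0;
}

static long long now_us(void)
{
	struct timespec ts;

	clock_gettime(CLOCK_MONOTONIC, &ts);
	return ts.tv_sec * 1000000LL + ts.tv_nsec / 1000;
}

static void sleep_us(long us)
{
	struct timespec req = { us / 1000000L, (us % 1000000L) * 1000L };

	nanosleep(&req, NULL);
}

/*
 * Read status until the ACT bit is set, a read error occurs, or the total
 * timeout expires, sleeping between attempts instead of busy-waiting.
 */
static int poll_act_handled(long interval_us, long long timeout_us)
{
	long long deadline = now_us() + timeout_us;
	int status;

	for (;;) {
		status = read_status();
		if (status < 0)
			return status;	/* propagate the read error */
		if (status & ACT_HANDLED)
			return 0;	/* ACT handled in time */
		if (now_us() > deadline)
			return -1;	/* timed out, like -ETIMEDOUT */
		sleep_us(interval_us);
	}
}

int main(void)
{
	/* Same knobs as the patch: 200us poll interval, 3000ms total timeout. */
	int ret = poll_act_handled(200, 3000LL * 1000);

	printf("ACT poll result: %d\n", ret);
	return ret ? 1 : 0;
}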