From: Michal Schmidt <mschmidt@redhat.com>
To: intel-wired-lan@lists.osuosl.org
Cc: netdev@vger.kernel.org,
	Jesse Brandeburg <jesse.brandeburg@intel.com>,
	Karol Kolacinski <karol.kolacinski@intel.com>,
	Tony Nguyen <anthony.l.nguyen@intel.com>,
	Simon Horman <simon.horman@corigine.com>,
	Michal Michalik <michal.michalik@intel.com>,
	Arkadiusz Kubalewski <arkadiusz.kubalewski@intel.com>,
	Petr Oros <poros@redhat.com>, Andrew Lunn <andrew@lunn.ch>,
	Brent Rowsell <browsell@redhat.com>
Subject: [PATCH net-next v2 4/6] ice: sleep, don't busy-wait, for ICE_CTL_Q_SQ_CMD_TIMEOUT
Date: Wed, 12 Apr 2023 10:19:27 +0200	[thread overview]
Message-ID: <20230412081929.173220-5-mschmidt@redhat.com> (raw)
In-Reply-To: <20230412081929.173220-1-mschmidt@redhat.com>

The driver polls for ice_sq_done() with a 100 µs period for up to 1 s,
using udelay (a busy-wait) for the delay.

Let's use usleep_range instead. Sleeping is allowed here, because we
are holding a mutex (cq->sq_lock), so we are in process context. To
preserve the total maximum waiting time, measure the timeout in
jiffies instead of counting loop iterations.
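
For reference, the resulting polling pattern looks roughly like this
(a minimal sketch; the hypothetical done() check stands in for
ice_sq_done()):

	unsigned long timeout = jiffies + ICE_CTL_Q_SQ_CMD_TIMEOUT;

	do {
		if (done())	/* hypothetical completion check */
			break;
		/* sleep between polls instead of busy-waiting */
		usleep_range(ICE_CTL_Q_SQ_CMD_USEC,
			     ICE_CTL_Q_SQ_CMD_USEC * 3 / 2);
	} while (time_before(jiffies, timeout));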

ICE_CTL_Q_SQ_CMD_TIMEOUT is also used in ice_release_res(), but there
the polling period is 1 ms (i.e. 10 times longer). Since the timeout
was expressed as a number of loop iterations, the effective total
timeout in that function is 10 s. I do not know whether this is
intentional; this patch preserves it.
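
For clarity, the arithmetic behind the preserved timeout (a worked
example, not code from the patch):

	/*
	 * Old code: ICE_CTL_Q_SQ_CMD_TIMEOUT == 10000 iterations with a
	 * 1 ms mdelay between attempts => 10000 * 1 ms = 10 s total.
	 * New code: ICE_CTL_Q_SQ_CMD_TIMEOUT == HZ (1 s in jiffies) and
	 * the deadline is jiffies + 10 * HZ, i.e. still 10 s total.
	 */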

The patch lowers the CPU usage of the ice-gnss-<dev_name> kernel thread
on my system from ~8 % to less than 1 %.

I received a report of high CPU usage with ptp4l, where the
busy-waiting in ice_sq_send_cmd() dominated the profile. This patch has
been tested in that use case too and brought a large improvement there.

Tested-by: Brent Rowsell <browsell@redhat.com>
Signed-off-by: Michal Schmidt <mschmidt@redhat.com>
---
 drivers/net/ethernet/intel/ice/ice_common.c   | 14 +++++++-------
 drivers/net/ethernet/intel/ice/ice_controlq.c |  9 +++++----
 drivers/net/ethernet/intel/ice/ice_controlq.h |  2 +-
 3 files changed, 13 insertions(+), 12 deletions(-)

diff --git a/drivers/net/ethernet/intel/ice/ice_common.c b/drivers/net/ethernet/intel/ice/ice_common.c
index f4c256563248..3638598d732b 100644
--- a/drivers/net/ethernet/intel/ice/ice_common.c
+++ b/drivers/net/ethernet/intel/ice/ice_common.c
@@ -1992,19 +1992,19 @@ ice_acquire_res(struct ice_hw *hw, enum ice_aq_res_ids res,
  */
 void ice_release_res(struct ice_hw *hw, enum ice_aq_res_ids res)
 {
-	u32 total_delay = 0;
+	unsigned long timeout;
 	int status;
 
-	status = ice_aq_release_res(hw, res, 0, NULL);
-
 	/* there are some rare cases when trying to release the resource
 	 * results in an admin queue timeout, so handle them correctly
 	 */
-	while ((status == -EIO) && (total_delay < ICE_CTL_Q_SQ_CMD_TIMEOUT)) {
-		mdelay(1);
+	timeout = jiffies + 10 * ICE_CTL_Q_SQ_CMD_TIMEOUT;
+	do {
 		status = ice_aq_release_res(hw, res, 0, NULL);
-		total_delay++;
-	}
+		if (status != -EIO)
+			break;
+		usleep_range(1000, 2000);
+	} while (time_before(jiffies, timeout));
 }
 
 /**
diff --git a/drivers/net/ethernet/intel/ice/ice_controlq.c b/drivers/net/ethernet/intel/ice/ice_controlq.c
index c8fb10106ec3..d2faf1baad2f 100644
--- a/drivers/net/ethernet/intel/ice/ice_controlq.c
+++ b/drivers/net/ethernet/intel/ice/ice_controlq.c
@@ -964,7 +964,7 @@ ice_sq_send_cmd(struct ice_hw *hw, struct ice_ctl_q_info *cq,
 	struct ice_aq_desc *desc_on_ring;
 	bool cmd_completed = false;
 	struct ice_sq_cd *details;
-	u32 total_delay = 0;
+	unsigned long timeout;
 	int status = 0;
 	u16 retval = 0;
 	u32 val = 0;
@@ -1057,13 +1057,14 @@ ice_sq_send_cmd(struct ice_hw *hw, struct ice_ctl_q_info *cq,
 		cq->sq.next_to_use = 0;
 	wr32(hw, cq->sq.tail, cq->sq.next_to_use);
 
+	timeout = jiffies + ICE_CTL_Q_SQ_CMD_TIMEOUT;
 	do {
 		if (ice_sq_done(hw, cq))
 			break;
 
-		udelay(ICE_CTL_Q_SQ_CMD_USEC);
-		total_delay++;
-	} while (total_delay < ICE_CTL_Q_SQ_CMD_TIMEOUT);
+		usleep_range(ICE_CTL_Q_SQ_CMD_USEC,
+			     ICE_CTL_Q_SQ_CMD_USEC * 3 / 2);
+	} while (time_before(jiffies, timeout));
 
 	/* if ready, copy the desc back to temp */
 	if (ice_sq_done(hw, cq)) {
diff --git a/drivers/net/ethernet/intel/ice/ice_controlq.h b/drivers/net/ethernet/intel/ice/ice_controlq.h
index e790b2f4e437..950b7f4a7a05 100644
--- a/drivers/net/ethernet/intel/ice/ice_controlq.h
+++ b/drivers/net/ethernet/intel/ice/ice_controlq.h
@@ -34,7 +34,7 @@ enum ice_ctl_q {
 };
 
 /* Control Queue timeout settings - max delay 1s */
-#define ICE_CTL_Q_SQ_CMD_TIMEOUT	10000 /* Count 10000 times */
+#define ICE_CTL_Q_SQ_CMD_TIMEOUT	HZ    /* Wait max 1s */
 #define ICE_CTL_Q_SQ_CMD_USEC		100   /* Check every 100usec */
 #define ICE_CTL_Q_ADMIN_INIT_TIMEOUT	10    /* Count 10 times */
 #define ICE_CTL_Q_ADMIN_INIT_MSEC	100   /* Check every 100msec */
-- 
2.39.2

