From: Przemek Kitszel <przemyslaw.kitszel@intel.com>
To: Mohamed Khalfella <mkhalfella@purestorage.com>,
Moshe Shemesh <moshe@nvidia.com>
Cc: Yuanyuan Zhong <yzhong@purestorage.com>,
Saeed Mahameed <saeedm@nvidia.com>,
Leon Romanovsky <leon@kernel.org>,
Tariq Toukan <tariqt@nvidia.com>,
"David S. Miller" <davem@davemloft.net>,
Eric Dumazet <edumazet@google.com>,
Jakub Kicinski <kuba@kernel.org>, Paolo Abeni <pabeni@redhat.com>,
Shay Drori <shayd@nvidia.com>, <netdev@vger.kernel.org>,
<linux-rdma@vger.kernel.org>, <linux-kernel@vger.kernel.org>
Subject: Re: [PATCH] net/mlx5: Added cond_resched() to crdump collection
Date: Fri, 23 Aug 2024 06:08:19 +0200 [thread overview]
Message-ID: <1d9d555f-33b7-4d95-8fbd-87709386583c@intel.com> (raw)
In-Reply-To: <Zsdwe0lAl9xldLHK@apollo.purestorage.com>
On 8/22/24 19:08, Mohamed Khalfella wrote:
> On 2024-08-22 09:40:21 +0300, Moshe Shemesh wrote:
>>
>>
>> On 8/21/2024 1:27 AM, Mohamed Khalfella wrote:
>>>
>>> On 2024-08-20 12:09:37 +0200, Przemek Kitszel wrote:
>>>> On 8/19/24 23:42, Mohamed Khalfella wrote:
>>>
>>> Putting a cond_resched() every 16 register reads, similar to
>>> mlx5_vsc_wait_on_flag(), should be okay. With the numbers above, this
>>> will result in cond_resched() every ~0.56ms, which is okay IMO.
>>
>> Sorry for the late response, I just got back from vacation.
>> All your measurements look right.
>> crdump is the devlink health dump of the mlx5 FW fatal health reporter.
>> In the common case, since auto-dump and auto-recover are the defaults
>> for this health reporter, the crdump is collected on a fatal error of
>> the mlx5 device, and the recovery flow waits for it and runs right
>> after the crdump finishes.
>> I agree with adding cond_resched(), but I would reduce the frequency,
>> e.g. to once every 1024 register-read iterations.
>> mlx5_vsc_wait_on_flag() is a somewhat different case, as the usleep
>> there comes after 16 retries while waiting for the value to change.
>> Thanks.
>
> Thanks for taking a look. Once every 1024 iterations approximately
> translates to 35284.4ns * 1024 ~= 36.1ms, which is a relatively long
> time IMO. How about a power of two <= 128 (~4.51ms)?
Such a tune-up would matter for interactive use of a machine with very
few cores; is that the case? Otherwise I see no point [in making it
overall a little slower, as that is the tradeoff].
Thread overview: 9+ messages
2024-08-19 21:42 [PATCH] net/mlx5: Added cond_resched() to crdump collection Mohamed Khalfella
2024-08-20 10:09 ` Przemek Kitszel
2024-08-20 22:27 ` Mohamed Khalfella
2024-08-22 6:40 ` Moshe Shemesh
2024-08-22 17:08 ` Mohamed Khalfella
2024-08-23 4:08 ` Przemek Kitszel [this message]
2024-08-23 5:16 ` Moshe Shemesh
2024-08-23 17:41 ` Mohamed Khalfella
2024-08-25 5:11 ` Moshe Shemesh