netdev.vger.kernel.org archive mirror
* [PATCH] net/mlx5: Added cond_resched() to crdump collection
@ 2024-08-19 21:42 Mohamed Khalfella
  2024-08-20 10:09 ` Przemek Kitszel
  0 siblings, 1 reply; 9+ messages in thread
From: Mohamed Khalfella @ 2024-08-19 21:42 UTC (permalink / raw)
  To: Saeed Mahameed, Leon Romanovsky, Tariq Toukan, David S. Miller,
	Eric Dumazet, Jakub Kicinski, Paolo Abeni
  Cc: Mohamed Khalfella, Yuanyuan Zhong, Shay Drori, Moshe Shemesh,
	netdev, linux-rdma, linux-kernel

Collecting crdump involves dumping vsc registers from the pci config space
of the mlx5 device. The code can run for a long time, starving other
threads that want to run on the cpu. Add a conditional reschedule between
register reads and while waiting for a register value, to release the cpu
more often.

Reviewed-by: Yuanyuan Zhong <yzhong@purestorage.com>
Signed-off-by: Mohamed Khalfella <mkhalfella@purestorage.com>
---
 drivers/net/ethernet/mellanox/mlx5/core/lib/pci_vsc.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/lib/pci_vsc.c b/drivers/net/ethernet/mellanox/mlx5/core/lib/pci_vsc.c
index d0b595ba6110..377cc39643b4 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/lib/pci_vsc.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/lib/pci_vsc.c
@@ -191,6 +191,7 @@ static int mlx5_vsc_wait_on_flag(struct mlx5_core_dev *dev, u8 expected_val)
 		if ((retries & 0xf) == 0)
 			usleep_range(1000, 2000);
 
+		cond_resched();
 	} while (flag != expected_val);
 
 	return 0;
@@ -280,6 +281,7 @@ int mlx5_vsc_gw_read_block_fast(struct mlx5_core_dev *dev, u32 *data,
 			return read_addr;
 
 		read_addr = next_read_addr;
+		cond_resched();
 	}
 	return length;
 }
-- 
2.45.2


^ permalink raw reply related	[flat|nested] 9+ messages in thread

* Re: [PATCH] net/mlx5: Added cond_resched() to crdump collection
  2024-08-19 21:42 [PATCH] net/mlx5: Added cond_resched() to crdump collection Mohamed Khalfella
@ 2024-08-20 10:09 ` Przemek Kitszel
  2024-08-20 22:27   ` Mohamed Khalfella
  0 siblings, 1 reply; 9+ messages in thread
From: Przemek Kitszel @ 2024-08-20 10:09 UTC (permalink / raw)
  To: Mohamed Khalfella
  Cc: Yuanyuan Zhong, Saeed Mahameed, Leon Romanovsky, Tariq Toukan,
	David S. Miller, Eric Dumazet, Jakub Kicinski, Paolo Abeni,
	Shay Drori, Moshe Shemesh, netdev, linux-rdma, linux-kernel

On 8/19/24 23:42, Mohamed Khalfella wrote:
> Collecting crdump involves dumping vsc registers from the pci config space
> of the mlx5 device. The code can run for a long time, starving other
> threads that want to run on the cpu. Add a conditional reschedule between
> register reads and while waiting for a register value, to release the cpu
> more often.
> 
> Reviewed-by: Yuanyuan Zhong <yzhong@purestorage.com>
> Signed-off-by: Mohamed Khalfella <mkhalfella@purestorage.com>
> ---
>   drivers/net/ethernet/mellanox/mlx5/core/lib/pci_vsc.c | 2 ++
>   1 file changed, 2 insertions(+)
> 
> diff --git a/drivers/net/ethernet/mellanox/mlx5/core/lib/pci_vsc.c b/drivers/net/ethernet/mellanox/mlx5/core/lib/pci_vsc.c
> index d0b595ba6110..377cc39643b4 100644
> --- a/drivers/net/ethernet/mellanox/mlx5/core/lib/pci_vsc.c
> +++ b/drivers/net/ethernet/mellanox/mlx5/core/lib/pci_vsc.c
> @@ -191,6 +191,7 @@ static int mlx5_vsc_wait_on_flag(struct mlx5_core_dev *dev, u8 expected_val)
>   		if ((retries & 0xf) == 0)
>   			usleep_range(1000, 2000);
>   
> +		cond_resched();

the sleeping logic above (including what is out of git diff context) is
a bit weird (a tight loop that sleeps after every 16 attempts, with an
upper bound of 2k attempts!)

My understanding of usleep_range() is that it puts the process to sleep
(and can even lead to a schedule() call).
So cond_resched() looks redundant here.

>   	} while (flag != expected_val);
>   
>   	return 0;
> @@ -280,6 +281,7 @@ int mlx5_vsc_gw_read_block_fast(struct mlx5_core_dev *dev, u32 *data,
>   			return read_addr;
>   
>   		read_addr = next_read_addr;
> +		cond_resched();

It would be great to see in the commit message how many registers there
are and how long it takes to dump them.
My guess is that a single mlx5_vsc_gw_read_fast() call is very short and
there are many of them. With that, cond_resched() should rather go under
some if (iterator % XXX == 0) condition.

>   	}
>   	return length;
>   }



* Re: [PATCH] net/mlx5: Added cond_resched() to crdump collection
  2024-08-20 10:09 ` Przemek Kitszel
@ 2024-08-20 22:27   ` Mohamed Khalfella
  2024-08-22  6:40     ` Moshe Shemesh
  0 siblings, 1 reply; 9+ messages in thread
From: Mohamed Khalfella @ 2024-08-20 22:27 UTC (permalink / raw)
  To: Przemek Kitszel
  Cc: Yuanyuan Zhong, Saeed Mahameed, Leon Romanovsky, Tariq Toukan,
	David S. Miller, Eric Dumazet, Jakub Kicinski, Paolo Abeni,
	Shay Drori, Moshe Shemesh, netdev, linux-rdma, linux-kernel

On 2024-08-20 12:09:37 +0200, Przemek Kitszel wrote:
> On 8/19/24 23:42, Mohamed Khalfella wrote:
> > diff --git a/drivers/net/ethernet/mellanox/mlx5/core/lib/pci_vsc.c b/drivers/net/ethernet/mellanox/mlx5/core/lib/pci_vsc.c
> > index d0b595ba6110..377cc39643b4 100644
> > --- a/drivers/net/ethernet/mellanox/mlx5/core/lib/pci_vsc.c
> > +++ b/drivers/net/ethernet/mellanox/mlx5/core/lib/pci_vsc.c
> > @@ -191,6 +191,7 @@ static int mlx5_vsc_wait_on_flag(struct mlx5_core_dev *dev, u8 expected_val)
> >   		if ((retries & 0xf) == 0)
> >   			usleep_range(1000, 2000);
> >   
> > +		cond_resched();
> 
> the sleeping logic above (including what is out of git diff context) is
> a bit weird (tight loop with a sleep after each 16 attempts, with an
> upper bound of 2k attempts!)
> 
> My understanding of usleep_range() is that it puts process to sleep
> (and even leads to sched() call).
> So cond_resched() looks redundant here.

This matches my understanding too. usleep_range() should put the thread
to sleep, effectively releasing the cpu to do other work. The reason I
put cond_resched() here is that pci_read_config_dword() might take a long
time when the card sees fatal errors. I was not able to reproduce this,
so I am okay with removing this cond_resched().

> 
> >   	} while (flag != expected_val);
> >   
> >   	return 0;
> > @@ -280,6 +281,7 @@ int mlx5_vsc_gw_read_block_fast(struct mlx5_core_dev *dev, u32 *data,
> >   			return read_addr;
> >   
> >   		read_addr = next_read_addr;
> > +		cond_resched();
> 
> Would be great to see how many registers there are/how long it takes to
> dump them in commit message.
> My guess is that a single mlx5_vsc_gw_read_fast() call is very short and
> there are many. With that cond_resched() should be rather under some

I did some testing on ConnectX-5 Ex MCX516A-CDAT and here is what I saw:

- mlx5_vsc_gw_read_block_fast() was called with length = 1310716
- mlx5_vsc_gw_read_fast() does 4 bytes at a time, but it did not read the
  full 1310716 bytes. Instead it was called only 53813 times; there are
  jumps in read_addr.
- On average mlx5_vsc_gw_read_fast() took 35284.4ns
- In total mlx5_vsc_wait_on_flag() called vsc_read() 54707 times with an
  average runtime of 17548.3ns per call. In some instances vsc_read()
  was called more than once before mlx5_vsc_wait_on_flag() returned. Mostly
  one time, but I saw 5, 8, and in one instance 16 times. As expected,
  the thread released the cpu after 16 iterations.
- Total time to read the dump was 35284.4ns * 53813 ~= 1.898s

> if (iterator % XXX == 0) condition.

Putting a cond_resched() every 16 register reads, similar to
mlx5_vsc_wait_on_flag(), should be okay. With the numbers above, this
will result in cond_resched() every ~0.56ms, which is okay IMO.


* Re: [PATCH] net/mlx5: Added cond_resched() to crdump collection
  2024-08-20 22:27   ` Mohamed Khalfella
@ 2024-08-22  6:40     ` Moshe Shemesh
  2024-08-22 17:08       ` Mohamed Khalfella
  0 siblings, 1 reply; 9+ messages in thread
From: Moshe Shemesh @ 2024-08-22  6:40 UTC (permalink / raw)
  To: Mohamed Khalfella, Przemek Kitszel
  Cc: Yuanyuan Zhong, Saeed Mahameed, Leon Romanovsky, Tariq Toukan,
	David S. Miller, Eric Dumazet, Jakub Kicinski, Paolo Abeni,
	Shay Drori, netdev, linux-rdma, linux-kernel



On 8/21/2024 1:27 AM, Mohamed Khalfella wrote:
> 
> On 2024-08-20 12:09:37 +0200, Przemek Kitszel wrote:
>> On 8/19/24 23:42, Mohamed Khalfella wrote:
>>> diff --git a/drivers/net/ethernet/mellanox/mlx5/core/lib/pci_vsc.c b/drivers/net/ethernet/mellanox/mlx5/core/lib/pci_vsc.c
>>> index d0b595ba6110..377cc39643b4 100644
>>> --- a/drivers/net/ethernet/mellanox/mlx5/core/lib/pci_vsc.c
>>> +++ b/drivers/net/ethernet/mellanox/mlx5/core/lib/pci_vsc.c
>>> @@ -191,6 +191,7 @@ static int mlx5_vsc_wait_on_flag(struct mlx5_core_dev *dev, u8 expected_val)
>>>              if ((retries & 0xf) == 0)
>>>                      usleep_range(1000, 2000);
>>>
>>> +           cond_resched();
>>
>> the sleeping logic above (including what is out of git diff context) is
>> a bit weird (tight loop with a sleep after each 16 attempts, with an
>> upper bound of 2k attempts!)
>>
>> My understanding of usleep_range() is that it puts process to sleep
>> (and even leads to sched() call).
>> So cond_resched() looks redundant here.
> 
> This matches my understanding too. usleep_range() should put the thread
> to sleep, effectively releasing the cpu to do other work. The reason I
> put cond_resched() here is that pci_read_config_dword() might take long
> time when that card sees fatal errors. I was not able to reproduce this
> so I am okay with removing this cond_resched().
> 
>>
>>>      } while (flag != expected_val);
>>>
>>>      return 0;
>>> @@ -280,6 +281,7 @@ int mlx5_vsc_gw_read_block_fast(struct mlx5_core_dev *dev, u32 *data,
>>>                      return read_addr;
>>>
>>>              read_addr = next_read_addr;
>>> +           cond_resched();
>>
>> Would be great to see how many registers there are/how long it takes to
>> dump them in commit message.
>> My guess is that a single mlx5_vsc_gw_read_fast() call is very short and
>> there are many. With that cond_resched() should be rather under some
> 
> I did some testing on ConnectX-5 Ex MCX516A-CDAT and here is what I saw:
> 
> - mlx5_vsc_gw_read_block_fast() was called with length = 1310716
> - mlx5_vsc_gw_read_fast() does 4 bytes at a time but the did not read
>    full 1310716 bytes. Instead it was called 53813 times only. There are
>    jumps in read_addr.
> - On average mlx5_vsc_gw_read_fast() took 35284.4ns
> - In total mlx5_vsc_wait_on_flag() called vsc_read() 54707 times with
>    average runtime of 17548.3ns for each call. In some instances vsc_read()
>    was called more than once until mlx5_vsc_wait_on_flag() returned. Mostly
>    one time, but I saw 5, 8, and in one instance 16 times. As expected,
>    the thread released the cpu after 16 iterations.
> - Total time to read the dump was 35284.4ns * 53813 ~= 1.898s
> 
>> if (iterator % XXX == 0) condition.
> 
> Putting a cond_resched() every 16 register reads, similar to
> mlx5_vsc_wait_on_flag(), should be okay. With the numbers above, this
> will result in cond_resched() every ~0.56ms, which is okay IMO.

Sorry for the late response, I just got back from vacation.
All your measurements look right.
crdump is the devlink health dump of the mlx5 FW fatal health reporter.
In the common case, since auto-dump and auto-recover are the defaults for
this health reporter, the crdump will be collected on a fatal error of the
mlx5 device, and the recovery flow waits for it and runs right after the
crdump finishes.
I agree with adding cond_resched(), but I would reduce the frequency to
something like once every 1024 register-read iterations.
mlx5_vsc_wait_on_flag() is a bit of a different case, as the usleep there
happens after 16 retries waiting for the value to change.
Thanks.


* Re: [PATCH] net/mlx5: Added cond_resched() to crdump collection
  2024-08-22  6:40     ` Moshe Shemesh
@ 2024-08-22 17:08       ` Mohamed Khalfella
  2024-08-23  4:08         ` Przemek Kitszel
  0 siblings, 1 reply; 9+ messages in thread
From: Mohamed Khalfella @ 2024-08-22 17:08 UTC (permalink / raw)
  To: Moshe Shemesh
  Cc: Przemek Kitszel, Yuanyuan Zhong, Saeed Mahameed, Leon Romanovsky,
	Tariq Toukan, David S. Miller, Eric Dumazet, Jakub Kicinski,
	Paolo Abeni, Shay Drori, netdev, linux-rdma, linux-kernel

On 2024-08-22 09:40:21 +0300, Moshe Shemesh wrote:
> 
> 
> On 8/21/2024 1:27 AM, Mohamed Khalfella wrote:
> > 
> > On 2024-08-20 12:09:37 +0200, Przemek Kitszel wrote:
> >> On 8/19/24 23:42, Mohamed Khalfella wrote:
> >>> diff --git a/drivers/net/ethernet/mellanox/mlx5/core/lib/pci_vsc.c b/drivers/net/ethernet/mellanox/mlx5/core/lib/pci_vsc.c
> >>> index d0b595ba6110..377cc39643b4 100644
> >>> --- a/drivers/net/ethernet/mellanox/mlx5/core/lib/pci_vsc.c
> >>> +++ b/drivers/net/ethernet/mellanox/mlx5/core/lib/pci_vsc.c
> >>> @@ -191,6 +191,7 @@ static int mlx5_vsc_wait_on_flag(struct mlx5_core_dev *dev, u8 expected_val)
> >>>              if ((retries & 0xf) == 0)
> >>>                      usleep_range(1000, 2000);
> >>>
> >>> +           cond_resched();
> >>
> >> the sleeping logic above (including what is out of git diff context) is
> >> a bit weird (tight loop with a sleep after each 16 attempts, with an
> >> upper bound of 2k attempts!)
> >>
> >> My understanding of usleep_range() is that it puts process to sleep
> >> (and even leads to sched() call).
> >> So cond_resched() looks redundant here.
> > 
> > This matches my understanding too. usleep_range() should put the thread
> > to sleep, effectively releasing the cpu to do other work. The reason I
> > put cond_resched() here is that pci_read_config_dword() might take long
> > time when that card sees fatal errors. I was not able to reproduce this
> > so I am okay with removing this cond_resched().
> > 
> >>
> >>>      } while (flag != expected_val);
> >>>
> >>>      return 0;
> >>> @@ -280,6 +281,7 @@ int mlx5_vsc_gw_read_block_fast(struct mlx5_core_dev *dev, u32 *data,
> >>>                      return read_addr;
> >>>
> >>>              read_addr = next_read_addr;
> >>> +           cond_resched();
> >>
> >> Would be great to see how many registers there are/how long it takes to
> >> dump them in commit message.
> >> My guess is that a single mlx5_vsc_gw_read_fast() call is very short and
> >> there are many. With that cond_resched() should be rather under some
> > 
> > I did some testing on ConnectX-5 Ex MCX516A-CDAT and here is what I saw:
> > 
> > - mlx5_vsc_gw_read_block_fast() was called with length = 1310716
> > - mlx5_vsc_gw_read_fast() does 4 bytes at a time but the did not read
> >    full 1310716 bytes. Instead it was called 53813 times only. There are
> >    jumps in read_addr.
> > - On average mlx5_vsc_gw_read_fast() took 35284.4ns
> > - In total mlx5_vsc_wait_on_flag() called vsc_read() 54707 times with
> >    average runtime of 17548.3ns for each call. In some instances vsc_read()
> >    was called more than once until mlx5_vsc_wait_on_flag() returned. Mostly
> >    one time, but I saw 5, 8, and in one instance 16 times. As expected,
> >    the thread released the cpu after 16 iterations.
> > - Total time to read the dump was 35284.4ns * 53813 ~= 1.898s
> > 
> >> if (iterator % XXX == 0) condition.
> > 
> > Putting a cond_resched() every 16 register reads, similar to
> > mlx5_vsc_wait_on_flag(), should be okay. With the numbers above, this
> > will result in cond_resched() every ~0.56ms, which is okay IMO.
> 
> Sorry for the late response, I just got back from vacation.
> All your measures looks right.
> crdump is the devlink health dump of mlx5 FW fatal health reporter.
> In the common case since auto-dump and auto-recover are default for this 
> health reporter, the crdump will be collected on fatal error of the mlx5 
> device and the recovery flow waits for it and run right after crdump 
> finished.
> I agree with adding cond_resched(), but I would reduce the frequency, 
> like once in 1024 iterations of register read.
> mlx5_vsc_wait_on_flag() is a bit different case as the usleep there is 
> after 16 retries waiting for the value to change.
> Thanks.

Thanks for taking a look. Once every 1024 iterations approximately
translates to 35284.4ns * 1024 ~= 36.1ms, which is a relatively long time
IMO. How about a power of two <= 128 (~4.51ms)?


* Re: [PATCH] net/mlx5: Added cond_resched() to crdump collection
  2024-08-22 17:08       ` Mohamed Khalfella
@ 2024-08-23  4:08         ` Przemek Kitszel
  2024-08-23  5:16           ` Moshe Shemesh
  0 siblings, 1 reply; 9+ messages in thread
From: Przemek Kitszel @ 2024-08-23  4:08 UTC (permalink / raw)
  To: Mohamed Khalfella, Moshe Shemesh
  Cc: Yuanyuan Zhong, Saeed Mahameed, Leon Romanovsky, Tariq Toukan,
	David S. Miller, Eric Dumazet, Jakub Kicinski, Paolo Abeni,
	Shay Drori, netdev, linux-rdma, linux-kernel

On 8/22/24 19:08, Mohamed Khalfella wrote:
> On 2024-08-22 09:40:21 +0300, Moshe Shemesh wrote:
>>
>>
>> On 8/21/2024 1:27 AM, Mohamed Khalfella wrote:
>>>
>>> On 2024-08-20 12:09:37 +0200, Przemek Kitszel wrote:
>>>> On 8/19/24 23:42, Mohamed Khalfella wrote:


>>>
>>> Putting a cond_resched() every 16 register reads, similar to
>>> mlx5_vsc_wait_on_flag(), should be okay. With the numbers above, this
>>> will result in cond_resched() every ~0.56ms, which is okay IMO.
>>
>> Sorry for the late response, I just got back from vacation.
>> All your measures looks right.
>> crdump is the devlink health dump of mlx5 FW fatal health reporter.
>> In the common case since auto-dump and auto-recover are default for this
>> health reporter, the crdump will be collected on fatal error of the mlx5
>> device and the recovery flow waits for it and run right after crdump
>> finished.
>> I agree with adding cond_resched(), but I would reduce the frequency,
>> like once in 1024 iterations of register read.
>> mlx5_vsc_wait_on_flag() is a bit different case as the usleep there is
>> after 16 retries waiting for the value to change.
>> Thanks.
> 
> Thanks for taking a look. Once in every 1024 iterations approximately
> translates to 35284.4ns * 1024 ~= 36.1ms, which is relatively long time
> IMO. How about any power-of-two <= 128 (~4.51ms)?

Such a tune-up would matter for interactive use of a machine with very
few cores; is that the case? Otherwise I see no point [in making it
overall a little slower, as that is the tradeoff].


* Re: [PATCH] net/mlx5: Added cond_resched() to crdump collection
  2024-08-23  4:08         ` Przemek Kitszel
@ 2024-08-23  5:16           ` Moshe Shemesh
  2024-08-23 17:41             ` Mohamed Khalfella
  0 siblings, 1 reply; 9+ messages in thread
From: Moshe Shemesh @ 2024-08-23  5:16 UTC (permalink / raw)
  To: Przemek Kitszel, Mohamed Khalfella
  Cc: Yuanyuan Zhong, Saeed Mahameed, Leon Romanovsky, Tariq Toukan,
	David S. Miller, Eric Dumazet, Jakub Kicinski, Paolo Abeni,
	Shay Drori, netdev, linux-rdma, linux-kernel



On 8/23/2024 7:08 AM, Przemek Kitszel wrote:
> 
> On 8/22/24 19:08, Mohamed Khalfella wrote:
>> On 2024-08-22 09:40:21 +0300, Moshe Shemesh wrote:
>>>
>>>
>>> On 8/21/2024 1:27 AM, Mohamed Khalfella wrote:
>>>>
>>>> On 2024-08-20 12:09:37 +0200, Przemek Kitszel wrote:
>>>>> On 8/19/24 23:42, Mohamed Khalfella wrote:
> 
> 
>>>>
>>>> Putting a cond_resched() every 16 register reads, similar to
>>>> mlx5_vsc_wait_on_flag(), should be okay. With the numbers above, this
>>>> will result in cond_resched() every ~0.56ms, which is okay IMO.
>>>
>>> Sorry for the late response, I just got back from vacation.
>>> All your measures looks right.
>>> crdump is the devlink health dump of mlx5 FW fatal health reporter.
>>> In the common case since auto-dump and auto-recover are default for this
>>> health reporter, the crdump will be collected on fatal error of the mlx5
>>> device and the recovery flow waits for it and run right after crdump
>>> finished.
>>> I agree with adding cond_resched(), but I would reduce the frequency,
>>> like once in 1024 iterations of register read.
>>> mlx5_vsc_wait_on_flag() is a bit different case as the usleep there is
>>> after 16 retries waiting for the value to change.
>>> Thanks.
>>
>> Thanks for taking a look. Once in every 1024 iterations approximately
>> translates to 35284.4ns * 1024 ~= 36.1ms, which is relatively long time
>> IMO. How about any power-of-two <= 128 (~4.51ms)?

OK
> 
> Such tune-up would matter for interactive use of the machine with very
> little cores, is that the case? Otherwise I see no point [to make it
> overall a little slower, as that is the tradeoff].

Yes, as I see it, the point here is a host with very few cores.


* Re: [PATCH] net/mlx5: Added cond_resched() to crdump collection
  2024-08-23  5:16           ` Moshe Shemesh
@ 2024-08-23 17:41             ` Mohamed Khalfella
  2024-08-25  5:11               ` Moshe Shemesh
  0 siblings, 1 reply; 9+ messages in thread
From: Mohamed Khalfella @ 2024-08-23 17:41 UTC (permalink / raw)
  To: Moshe Shemesh
  Cc: Przemek Kitszel, Yuanyuan Zhong, Saeed Mahameed, Leon Romanovsky,
	Tariq Toukan, David S. Miller, Eric Dumazet, Jakub Kicinski,
	Paolo Abeni, Shay Drori, netdev, linux-rdma, linux-kernel

On 2024-08-23 08:16:32 +0300, Moshe Shemesh wrote:
> 
> 
> On 8/23/2024 7:08 AM, Przemek Kitszel wrote:
> > 
> > On 8/22/24 19:08, Mohamed Khalfella wrote:
> >> On 2024-08-22 09:40:21 +0300, Moshe Shemesh wrote:
> >>>
> >>>
> >>> On 8/21/2024 1:27 AM, Mohamed Khalfella wrote:
> >>>>
> >>>> On 2024-08-20 12:09:37 +0200, Przemek Kitszel wrote:
> >>>>> On 8/19/24 23:42, Mohamed Khalfella wrote:
> > 
> > 
> >>>>
> >>>> Putting a cond_resched() every 16 register reads, similar to
> >>>> mlx5_vsc_wait_on_flag(), should be okay. With the numbers above, this
> >>>> will result in cond_resched() every ~0.56ms, which is okay IMO.
> >>>
> >>> Sorry for the late response, I just got back from vacation.
> >>> All your measures looks right.
> >>> crdump is the devlink health dump of mlx5 FW fatal health reporter.
> >>> In the common case since auto-dump and auto-recover are default for this
> >>> health reporter, the crdump will be collected on fatal error of the mlx5
> >>> device and the recovery flow waits for it and run right after crdump
> >>> finished.
> >>> I agree with adding cond_resched(), but I would reduce the frequency,
> >>> like once in 1024 iterations of register read.
> >>> mlx5_vsc_wait_on_flag() is a bit different case as the usleep there is
> >>> after 16 retries waiting for the value to change.
> >>> Thanks.
> >>
> >> Thanks for taking a look. Once in every 1024 iterations approximately
> >> translates to 35284.4ns * 1024 ~= 36.1ms, which is relatively long time
> >> IMO. How about any power-of-two <= 128 (~4.51ms)?
> 
> OK
> > 
> > Such tune-up would matter for interactive use of the machine with very
> > little cores, is that the case? Otherwise I see no point [to make it
> > overall a little slower, as that is the tradeoff].
> 
> Yes, as I see it, the point here is host with very few cores.

It should make a difference for systems with few cores. Add to that, the
numbers above are what I was able to get in the lab. It has been seen
in the field that collecting crdump takes more than 5 seconds, causing
issues. If this makes sense, I will submit v2 with an updated commit
message and cond_resched() every 128 register-read iterations.


* Re: [PATCH] net/mlx5: Added cond_resched() to crdump collection
  2024-08-23 17:41             ` Mohamed Khalfella
@ 2024-08-25  5:11               ` Moshe Shemesh
  0 siblings, 0 replies; 9+ messages in thread
From: Moshe Shemesh @ 2024-08-25  5:11 UTC (permalink / raw)
  To: Mohamed Khalfella
  Cc: Przemek Kitszel, Yuanyuan Zhong, Saeed Mahameed, Leon Romanovsky,
	Tariq Toukan, David S. Miller, Eric Dumazet, Jakub Kicinski,
	Paolo Abeni, Shay Drori, netdev, linux-rdma, linux-kernel



On 8/23/2024 8:41 PM, Mohamed Khalfella wrote:
> 
> On 2024-08-23 08:16:32 +0300, Moshe Shemesh wrote:
>>
>>
>> On 8/23/2024 7:08 AM, Przemek Kitszel wrote:
>>>
>>> On 8/22/24 19:08, Mohamed Khalfella wrote:
>>>> On 2024-08-22 09:40:21 +0300, Moshe Shemesh wrote:
>>>>>
>>>>>
>>>>> On 8/21/2024 1:27 AM, Mohamed Khalfella wrote:
>>>>>>
>>>>>> On 2024-08-20 12:09:37 +0200, Przemek Kitszel wrote:
>>>>>>> On 8/19/24 23:42, Mohamed Khalfella wrote:
>>>
>>>
>>>>>>
>>>>>> Putting a cond_resched() every 16 register reads, similar to
>>>>>> mlx5_vsc_wait_on_flag(), should be okay. With the numbers above, this
>>>>>> will result in cond_resched() every ~0.56ms, which is okay IMO.
>>>>>
>>>>> Sorry for the late response, I just got back from vacation.
>>>>> All your measures looks right.
>>>>> crdump is the devlink health dump of mlx5 FW fatal health reporter.
>>>>> In the common case since auto-dump and auto-recover are default for this
>>>>> health reporter, the crdump will be collected on fatal error of the mlx5
>>>>> device and the recovery flow waits for it and run right after crdump
>>>>> finished.
>>>>> I agree with adding cond_resched(), but I would reduce the frequency,
>>>>> like once in 1024 iterations of register read.
>>>>> mlx5_vsc_wait_on_flag() is a bit different case as the usleep there is
>>>>> after 16 retries waiting for the value to change.
>>>>> Thanks.
>>>>
>>>> Thanks for taking a look. Once in every 1024 iterations approximately
>>>> translates to 35284.4ns * 1024 ~= 36.1ms, which is relatively long time
>>>> IMO. How about any power-of-two <= 128 (~4.51ms)?
>>
>> OK
>>>
>>> Such tune-up would matter for interactive use of the machine with very
>>> little cores, is that the case? Otherwise I see no point [to make it
>>> overall a little slower, as that is the tradeoff].
>>
>> Yes, as I see it, the point here is host with very few cores.
> 
> It should make a difference for systems with few cores. Add to that the
> numbers above is what I was able to get from the lab. It has been seen
> in the field that collecting crdump takes more than 5 seconds causing
> issues. If this makes sense I will submit v2 with the updated commit
> message and cond_resched() every 128 iterations of register read.

Fine with me.
Thanks.



end of thread, other threads:[~2024-08-25  5:12 UTC | newest]

Thread overview: 9+ messages
2024-08-19 21:42 [PATCH] net/mlx5: Added cond_resched() to crdump collection Mohamed Khalfella
2024-08-20 10:09 ` Przemek Kitszel
2024-08-20 22:27   ` Mohamed Khalfella
2024-08-22  6:40     ` Moshe Shemesh
2024-08-22 17:08       ` Mohamed Khalfella
2024-08-23  4:08         ` Przemek Kitszel
2024-08-23  5:16           ` Moshe Shemesh
2024-08-23 17:41             ` Mohamed Khalfella
2024-08-25  5:11               ` Moshe Shemesh
