Date: Thu, 22 Aug 2024 10:08:11 -0700
From: Mohamed Khalfella <mkhalfella@purestorage.com>
To: Moshe Shemesh
Cc: Przemek Kitszel, Yuanyuan Zhong, Saeed Mahameed, Leon Romanovsky,
    Tariq Toukan, "David S. Miller", Eric Dumazet, Jakub Kicinski,
    Paolo Abeni, Shay Drori, netdev@vger.kernel.org,
    linux-rdma@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH] net/mlx5: Added cond_resched() to crdump collection
References: <20240819214259.38259-1-mkhalfella@purestorage.com>

On 2024-08-22 09:40:21 +0300, Moshe Shemesh wrote:
>
> On 8/21/2024 1:27 AM, Mohamed Khalfella wrote:
> >
> > On 2024-08-20 12:09:37 +0200, Przemek Kitszel wrote:
> >> On 8/19/24 23:42, Mohamed Khalfella wrote:
> >>> diff --git a/drivers/net/ethernet/mellanox/mlx5/core/lib/pci_vsc.c b/drivers/net/ethernet/mellanox/mlx5/core/lib/pci_vsc.c
> >>> index d0b595ba6110..377cc39643b4 100644
> >>> --- a/drivers/net/ethernet/mellanox/mlx5/core/lib/pci_vsc.c
> >>> +++ b/drivers/net/ethernet/mellanox/mlx5/core/lib/pci_vsc.c
> >>> @@ -191,6 +191,7 @@ static int mlx5_vsc_wait_on_flag(struct mlx5_core_dev *dev, u8 expected_val)
> >>>  		if ((retries & 0xf) == 0)
> >>>  			usleep_range(1000, 2000);
> >>>  
> >>> +		cond_resched();
> >>
> >> The sleeping logic above (including what is outside the git diff
> >> context) is a bit weird: a tight loop with a sleep after every 16
> >> attempts, with an upper bound of 2k attempts!
> >>
> >> My understanding of usleep_range() is that it puts the process to
> >> sleep (and even leads to a sched() call).
> >> So cond_resched() looks redundant here.
> >
> > This matches my understanding too. usleep_range() should put the thread
> > to sleep, effectively releasing the CPU to do other work. The reason I
> > put cond_resched() here is that pci_read_config_dword() might take a
> > long time when the card sees fatal errors. I was not able to reproduce
> > this, so I am okay with removing this cond_resched().
> >
> >>
> >>>  	} while (flag != expected_val);
> >>>  
> >>>  	return 0;
> >>> @@ -280,6 +281,7 @@ int mlx5_vsc_gw_read_block_fast(struct mlx5_core_dev *dev, u32 *data,
> >>>  			return read_addr;
> >>>  
> >>>  		read_addr = next_read_addr;
> >>> +		cond_resched();
> >>
> >> It would be great to see in the commit message how many registers
> >> there are and how long it takes to dump them.
> >> My guess is that a single mlx5_vsc_gw_read_fast() call is very short
> >> and there are many. With that, cond_resched() should rather be under
> >> some
> >
> > I did some testing on a ConnectX-5 Ex MCX516A-CDAT and here is what I
> > saw:
> >
> > - mlx5_vsc_gw_read_block_fast() was called with length = 1310716.
> > - mlx5_vsc_gw_read_fast() reads 4 bytes at a time, but it did not read
> >   the full 1310716 bytes. Instead it was called only 53813 times; there
> >   are jumps in read_addr.
> > - On average, mlx5_vsc_gw_read_fast() took 35284.4ns.
> > - In total, mlx5_vsc_wait_on_flag() called vsc_read() 54707 times, with
> >   an average runtime of 17548.3ns per call. In some instances
> >   vsc_read() was called more than once before mlx5_vsc_wait_on_flag()
> >   returned -- mostly once, but I saw 5, 8, and in one instance 16
> >   calls. As expected, the thread released the CPU after 16 iterations.
> > - Total time to read the dump was 35284.4ns * 53813 ~= 1.898s.
> >
> >> if (iterator % XXX == 0) condition.
> >
> > Putting a cond_resched() every 16 register reads, similar to
> > mlx5_vsc_wait_on_flag(), should be okay. With the numbers above, this
> > will result in a cond_resched() every ~0.56ms, which is okay IMO.
>
> Sorry for the late response, I just got back from vacation.
> All your measurements look right.
> crdump is the devlink health dump of the mlx5 FW fatal health reporter.
> In the common case, since auto-dump and auto-recover are the defaults
> for this health reporter, the crdump is collected on a fatal error of
> the mlx5 device, and the recovery flow waits for it and runs right after
> the crdump finishes.
> I agree with adding cond_resched(), but I would reduce the frequency,
> e.g. to once every 1024 iterations of the register read loop.
> mlx5_vsc_wait_on_flag() is a bit of a different case, as the usleep
> there happens after 16 retries waiting for the value to change.
> Thanks.

Thanks for taking a look. Once every 1024 iterations translates to
approximately 35284.4ns * 1024 ~= 36.1ms, which is a relatively long time
IMO. How about any power of two <= 128 (~4.51ms)?
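
Something like this is what I have in mind (untested sketch; the "reads"
counter and the loop shape are illustrative, not the exact code in
pci_vsc.c):

	/* In the mlx5_vsc_gw_read_block_fast() read loop */
	unsigned int reads = 0;

	do {
		/* ... mlx5_vsc_gw_read_fast() and error handling ... */
		read_addr = next_read_addr;
		/*
		 * Yield at most once per 128 reads: ~4.51ms between
		 * cond_resched() calls at ~35284.4ns per read, compared
		 * to ~36.1ms if we only yielded once per 1024 reads.
		 */
		if (!(++reads & 127))
			cond_resched();
	} while (read_addr < length);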