From: Cosmin Ratiu <cratiu@nvidia.com>
To: Tariq Toukan <tariqt@nvidia.com>, "horms@kernel.org" <horms@kernel.org>
Cc: "davem@davemloft.net" <davem@davemloft.net>,
"netdev@vger.kernel.org" <netdev@vger.kernel.org>,
Gal Pressman <gal@nvidia.com>,
Leon Romanovsky <leonro@nvidia.com>,
"kuba@kernel.org" <kuba@kernel.org>,
"edumazet@google.com" <edumazet@google.com>,
Saeed Mahameed <saeedm@nvidia.com>,
"pabeni@redhat.com" <pabeni@redhat.com>
Subject: Re: [PATCH net-next V2 3/6] net/mlx5: hw counters: Replace IDR+lists with xarray
Date: Fri, 4 Oct 2024 09:32:11 +0000 [thread overview]
Message-ID: <66ccbb841794c98b91d9e8aba48b90c63caa45e7.camel@nvidia.com> (raw)
In-Reply-To: <20241004085844.GA1310185@kernel.org>
On Fri, 2024-10-04 at 09:58 +0100, Simon Horman wrote:
> On Tue, Oct 01, 2024 at 01:37:06PM +0300, Tariq Toukan wrote:
> > From: Cosmin Ratiu <cratiu@nvidia.com>
>
> ...
>
> > +/* Synchronization notes
> > + *
> > + * Access to counter array:
> > + * - create - mlx5_fc_create() (user context)
> > + * - inserts the counter into the xarray.
> > + *
> > + * - destroy - mlx5_fc_destroy() (user context)
> > + * - erases the counter from the xarray and releases it.
> > + *
> > + * - query mlx5_fc_query(), mlx5_fc_query_cached{,_raw}() (user context)
> > + * - user should not access a counter after destroy.
> > + *
> > + * - bulk query (single thread workqueue context)
> > + * - create: query relies on 'lastuse' to avoid updating counters added
> > + * around the same time as the current bulk cmd.
> > + * - destroy: destroyed counters will not be accessed, even if they are
> > + * destroyed during a bulk query command.
> > + */
> > +static void mlx5_fc_stats_query_all_counters(struct mlx5_core_dev *dev)
> > {
> > struct mlx5_fc_stats *fc_stats = dev->priv.fc_stats;
> > - bool query_more_counters = (first->id <= last_id);
> > - int cur_bulk_len = fc_stats->bulk_query_len;
> > + u32 bulk_len = fc_stats->bulk_query_len;
> > + XA_STATE(xas, &fc_stats->counters, 0);
> > u32 *data = fc_stats->bulk_query_out;
> > - struct mlx5_fc *counter = first;
> > + struct mlx5_fc *counter;
> > + u32 last_bulk_id = 0;
> > + u64 bulk_query_time;
> > u32 bulk_base_id;
> > - int bulk_len;
> > int err;
> >
> > - while (query_more_counters) {
> > - /* first id must be aligned to 4 when using bulk query */
> > - bulk_base_id = counter->id & ~0x3;
> > -
> > - /* number of counters to query inc. the last counter */
> > - bulk_len = min_t(int, cur_bulk_len,
> > - ALIGN(last_id - bulk_base_id + 1, 4));
> > -
> > - err = mlx5_cmd_fc_bulk_query(dev, bulk_base_id, bulk_len,
> > - data);
> > - if (err) {
> > - mlx5_core_err(dev, "Error doing bulk query: %d\n", err);
> > - return;
> > - }
> > - query_more_counters = false;
> > -
> > - list_for_each_entry_from(counter, &fc_stats->counters, list) {
> > - int counter_index = counter->id - bulk_base_id;
> > - struct mlx5_fc_cache *cache = &counter->cache;
> > -
> > - if (counter->id >= bulk_base_id + bulk_len) {
> > - query_more_counters = true;
> > - break;
> > + xas_lock(&xas);
> > + xas_for_each(&xas, counter, U32_MAX) {
> > + if (xas_retry(&xas, counter))
> > + continue;
> > + if (unlikely(counter->id >= last_bulk_id)) {
> > + /* Start new bulk query. */
> > + /* First id must be aligned to 4 when using bulk query. */
> > + bulk_base_id = counter->id & ~0x3;
> > + last_bulk_id = bulk_base_id + bulk_len;
> > + /* The lock is released while querying the hw and reacquired after. */
> > + xas_unlock(&xas);
> > + /* The same id needs to be processed again in the next loop iteration. */
> > + xas_reset(&xas);
> > + bulk_query_time = jiffies;
> > + err = mlx5_cmd_fc_bulk_query(dev, bulk_base_id, bulk_len, data);
> > + if (err) {
> > + mlx5_core_err(dev, "Error doing bulk query: %d\n", err);
> > + return;
> > }
> > -
> > - update_counter_cache(counter_index, data, cache);
> > + xas_lock(&xas);
> > + continue;
> > }
> > + /* Do not update counters added after bulk query was started. */
>
> Hi Cosmin and Tariq,
>
> I'm sorry if it is obvious, but I'm wondering if you could explain further
> the relationship between the if block above, where bulk_query_time (and
> bulk_base_id) is initialised and if block below, which is conditional on
> bulk_query_time.
>
> > + if (time_after64(bulk_query_time, counter->cache.lastuse))
> > + update_counter_cache(counter->id - bulk_base_id, data,
> > + &counter->cache);
> > }
> > + xas_unlock(&xas);
> > }
>
> ...
Hi Simon. Of course.

The first if (the one with 'unlikely') is the one that starts a bulk query.
The second if is the one that updates a counter's cached value with the
output from the bulk query. Bulks are usually ~32K counters, if I
remember correctly; in any case, a large number.
The first if sets up the bulk query params and executes the query without
the lock held. During that time, counters could be added or removed. We
don't want to update cached values for counters added between when the
bulk query was executed and when the lock was reacquired; bulk_query_time,
with jiffy granularity, is used for that purpose. When a counter is
added, its 'cache.lastuse' is initialized to jiffies. Only counters with
ids in [bulk_base_id, last_bulk_id) that were added strictly before the
jiffy when bulk_query_time was set will be updated, because for newer
counters the hw might not have set their values in the bulk result and
those values might be garbage.
I also have this blurb in the commit description, but it is probably
lost in the wall of text:
"
Counters could be added/deleted while the HW is queried. This is safe,
as the HW API simply returns unknown values for counters not in HW, but
those values won't be accessed. Only counters present in xarray before
bulk query will actually read queried cache values.
"
There's also a comment about this in the "Synchronization notes" section:
 * - bulk query (single thread workqueue context)
 *   - create: query relies on 'lastuse' to avoid updating counters added
 *     around the same time as the current bulk cmd.
Hope this clears things up; let us know if you'd like something
improved.
Cosmin.
Thread overview: 11+ messages
2024-10-01 10:37 [PATCH net-next V2 0/6] net/mlx5: hw counters refactor Tariq Toukan
2024-10-01 10:37 ` [PATCH net-next V2 1/6] net/mlx5: hw counters: Make fc_stats & fc_pool private Tariq Toukan
2024-10-01 10:37 ` [PATCH net-next V2 2/6] net/mlx5: hw counters: Use kvmalloc for bulk query buffer Tariq Toukan
2024-10-01 10:37 ` [PATCH net-next V2 3/6] net/mlx5: hw counters: Replace IDR+lists with xarray Tariq Toukan
2024-10-04 8:58 ` Simon Horman
2024-10-04 9:32 ` Cosmin Ratiu [this message]
2024-10-04 10:34 ` Simon Horman
2024-10-01 10:37 ` [PATCH net-next V2 4/6] net/mlx5: hw counters: Drop unneeded cacheline alignment Tariq Toukan
2024-10-01 10:37 ` [PATCH net-next V2 5/6] net/mlx5: hw counters: Don't maintain a counter count Tariq Toukan
2024-10-01 10:37 ` [PATCH net-next V2 6/6] net/mlx5: hw counters: Remove mlx5_fc_create_ex Tariq Toukan
2024-10-04 18:50 ` [PATCH net-next V2 0/6] net/mlx5: hw counters refactor patchwork-bot+netdevbpf