From: Leon Romanovsky <leon@kernel.org>
To: Jason Gunthorpe <jgg@ziepe.ca>
Cc: Doug Ledford <dledford@redhat.com>,
RDMA mailing list <linux-rdma@vger.kernel.org>,
Majd Dibbiny <majd@mellanox.com>, Mark Zhang <markz@mellanox.com>,
Saeed Mahameed <saeedm@mellanox.com>,
linux-netdev <netdev@vger.kernel.org>
Subject: Re: [PATCH rdma-next v2 13/17] RDMA/core: Get sum value of all counters when perform a sysfs stat read
Date: Wed, 29 May 2019 14:15:44 +0300
Message-ID: <20190529111544.GV4633@mtr-leonro.mtl.com>
In-Reply-To: <20190522171042.GA15023@ziepe.ca>

On Wed, May 22, 2019 at 02:10:42PM -0300, Jason Gunthorpe wrote:
> On Mon, Apr 29, 2019 at 11:34:49AM +0300, Leon Romanovsky wrote:
> > From: Mark Zhang <markz@mellanox.com>
> >
> > Since a QP can only be bound to one counter, when it is bound to a
> > separate counter then, for backward compatibility, the statistic
> > value reported through sysfs must be:
> >   stat of the default counter
> > + stats of all running allocated counters
> > + stats of all deallocated counters (history stats)
> >
> > Signed-off-by: Mark Zhang <markz@mellanox.com>
> > Reviewed-by: Majd Dibbiny <majd@mellanox.com>
> > Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
> > ---
> >  drivers/infiniband/core/counters.c | 99 +++++++++++++++++++++++++++++-
> >  drivers/infiniband/core/device.c   |  8 ++-
> >  drivers/infiniband/core/sysfs.c    | 10 ++-
> >  include/rdma/rdma_counter.h        |  5 +-
> >  4 files changed, 113 insertions(+), 9 deletions(-)
> >
> > diff --git a/drivers/infiniband/core/counters.c b/drivers/infiniband/core/counters.c
> > index 36cd9eca1e46..f598b1cdb241 100644
> > --- a/drivers/infiniband/core/counters.c
> > +++ b/drivers/infiniband/core/counters.c
> > @@ -146,6 +146,20 @@ static int __rdma_counter_bind_qp(struct rdma_counter *counter,
> > return ret;
> > }
> >
> > +static void counter_history_stat_update(const struct rdma_counter *counter)
> > +{
> > + struct ib_device *dev = counter->device;
> > + struct rdma_port_counter *port_counter;
> > + int i;
> > +
> > + port_counter = &dev->port_data[counter->port].port_counter;
> > + if (!port_counter->hstats)
> > + return;
> > +
> > + for (i = 0; i < counter->stats->num_counters; i++)
> > + port_counter->hstats->value[i] += counter->stats->value[i];
> > +}
> > +
> > static int __rdma_counter_unbind_qp(struct ib_qp *qp, bool force)
> > {
> > struct rdma_counter *counter = qp->counter;
> > @@ -285,8 +299,10 @@ int rdma_counter_unbind_qp(struct ib_qp *qp, bool force)
> > return ret;
> >
> > rdma_restrack_put(&counter->res);
> > - if (atomic_dec_and_test(&counter->usecnt))
> > + if (atomic_dec_and_test(&counter->usecnt)) {
> > + counter_history_stat_update(counter);
> > rdma_counter_dealloc(counter);
> > + }
> >
> > return 0;
> > }
> > @@ -307,21 +323,98 @@ int rdma_counter_query_stats(struct rdma_counter *counter)
> > return ret;
> > }
> >
> > -void rdma_counter_init(struct ib_device *dev)
> > +static u64 get_running_counters_hwstat_sum(struct ib_device *dev,
> > + u8 port, u32 index)
> > +{
> > + struct rdma_restrack_entry *res;
> > + struct rdma_restrack_root *rt;
> > + struct rdma_counter *counter;
> > + unsigned long id = 0;
> > + u64 sum = 0;
> > +
> > + rt = &dev->res[RDMA_RESTRACK_COUNTER];
> > + xa_lock(&rt->xa);
> > + xa_for_each(&rt->xa, id, res) {
> > + if (!rdma_restrack_get(res))
> > + continue;
>
> Why do we need to get refcounts if we are holding the xa_lock?
Don't we need to protect the entry itself from disappearing?
>
> > +
> > + counter = container_of(res, struct rdma_counter, res);
> > + if ((counter->device != dev) || (counter->port != port))
> > + goto next;
> > +
> > + if (rdma_counter_query_stats(counter))
> > + goto next;
>
> And rdma_counter_query_stats does
>
> + mutex_lock(&counter->lock);
>
> So this was never tested as it will insta-crash with lockdep.
>
> Presumably this is why it is using xa_for_each and restrack_get - but
> it needs to drop the lock after successful get.
>
> This sort of comment applies to nearly evey place in this series that
> uses xa_for_each.
>
> This needs to be tested with lockdep.
I use LOCKDEP.
>
> Jason