From: Jason Gunthorpe <jgg@nvidia.com>
To: Bob Pearson <rpearsonhpe@gmail.com>
Cc: zyjzyj2000@gmail.com, linux-rdma@vger.kernel.org
Subject: Re: [RFC PATCH v9 04/26] RDMA/rxe: Enforce IBA o10-2.2.3
Date: Fri, 28 Jan 2022 12:42:36 -0400
Message-ID: <20220128164236.GB1786498@nvidia.com>
In-Reply-To: <14f2d91e-729d-6be0-a2c7-0175db27d293@gmail.com>
On Fri, Jan 28, 2022 at 10:18:45AM -0600, Bob Pearson wrote:
> On 1/28/22 06:53, Jason Gunthorpe wrote:
> > On Thu, Jan 27, 2022 at 03:37:33PM -0600, Bob Pearson wrote:
> >> Add code to check if a QP is attached to one or more multicast groups
> >> when destroy_qp is called and return an error if so.
> >
> > The core code already does some of this anyhow..
> >
> >> diff --git a/drivers/infiniband/sw/rxe/rxe_mcast.c b/drivers/infiniband/sw/rxe/rxe_mcast.c
> >> index 949784198d80..34e3c52f0b72 100644
> >> --- a/drivers/infiniband/sw/rxe/rxe_mcast.c
> >> +++ b/drivers/infiniband/sw/rxe/rxe_mcast.c
> >> @@ -114,6 +114,7 @@ static int rxe_mcast_add_grp_elem(struct rxe_dev *rxe, struct rxe_qp *qp,
> >>  	grp->num_qp++;
> >>  	elem->qp = qp;
> >>  	elem->grp = grp;
> >> +	atomic_inc(&qp->mcg_num);
> >
> > eg what prevents qp from being concurrently destroyed here?
> >
> > The core code, because it doesn't allow a multicast group to be added
> > concurrently with destruction of a QP.
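To expand on that: the guarantee can be modeled roughly as below (an
illustrative userspace sketch only, not the actual ib_core code or
locking). Attach/detach hold the QP object in shared mode while destroy
needs exclusive access, so the mcg_num update can never race with
teardown:

#include <errno.h>
#include <pthread.h>
#include <stdatomic.h>

/* Model only: uobj_lock stands in for however the core serializes
 * attach/detach against destroy on the same QP object.
 */
struct model_qp {
	pthread_rwlock_t uobj_lock;
	atomic_int mcg_num;		/* groups this QP is attached to */
};

static int model_attach_mcast(struct model_qp *qp)
{
	pthread_rwlock_rdlock(&qp->uobj_lock);	/* shared: excludes destroy */
	atomic_fetch_add(&qp->mcg_num, 1);
	pthread_rwlock_unlock(&qp->uobj_lock);
	return 0;
}

static int model_destroy_qp(struct model_qp *qp)
{
	int ret = 0;

	pthread_rwlock_wrlock(&qp->uobj_lock);	/* exclusive: waits out attach */
	if (atomic_load(&qp->mcg_num))
		ret = -EBUSY;			/* IBA o10-2.2.3 */
	pthread_rwlock_unlock(&qp->uobj_lock);
	return ret;
}

int main(void)
{
	struct model_qp qp = { .uobj_lock = PTHREAD_RWLOCK_INITIALIZER };

	model_attach_mcast(&qp);
	return model_destroy_qp(&qp) == -EBUSY ? 0 : 1;	/* expect -EBUSY */
}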
> >
> >> +int rxe_qp_chk_destroy(struct rxe_qp *qp)
> >> +{
> >> +	/* See IBA o10-2.2.3
> >> +	 * An attempt to destroy a QP while attached to a mcast group
> >> +	 * will fail immediately.
> >> +	 */
> >> +	if (atomic_read(&qp->mcg_num)) {
> >> +		pr_warn_once("Attempt to destroy QP while attached to multicast group\n");
> >> +		return -EBUSY;
> >
> > Don't print
> >
> > But yes, I think drivers are expected to do this, though most likely
> > this is already happening for other reasons and this is merely
> > protective against bugs.
> >
> > Jason
>
> The real reason for this patch becomes apparent in the next one or two
> patches. With this no longer an issue, half the complexity of rxe_mcast
> goes away. I'll get rid of the print.
> Personally I find them helpful when debugging user code. Maybe a
> pr_debug?
Sure
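
A pr_debug() version of the check would be just (assuming the helper
ends up in rxe_qp.c):

	if (atomic_read(&qp->mcg_num)) {
		pr_debug("Attempt to destroy QP while attached to multicast group\n");
		return -EBUSY;
	}

It compiles away unless DEBUG or CONFIG_DYNAMIC_DEBUG is set, and with
dynamic debug it can be turned on at runtime with something like:

	echo 'file rxe_qp.c +p' > /sys/kernel/debug/dynamic_debug/control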
Jason
Thread overview: 39+ messages
2022-01-27 21:37 [RFC PATCH v9 00/26] Bob Pearson
2022-01-27 21:37 ` [RFC PATCH v9 01/26] RDMA/rxe: Move rxe_mcast_add/delete to rxe_mcast.c Bob Pearson
2022-01-27 21:37 ` [RFC PATCH v9 02/26] RDMA/rxe: Move rxe_mcast_attach/detach " Bob Pearson
2022-01-27 21:37 ` [RFC PATCH v9 03/26] RDMA/rxe: Rename rxe_mc_grp and rxe_mc_elem Bob Pearson
2022-01-27 21:37 ` [RFC PATCH v9 04/26] RDMA/rxe: Enforce IBA o10-2.2.3 Bob Pearson
2022-01-28 12:53 ` Jason Gunthorpe
2022-01-28 16:18 ` Bob Pearson
2022-01-28 16:42 ` Jason Gunthorpe [this message]
2022-01-27 21:37 ` [RFC PATCH v9 05/26] RDMA/rxe: Remove rxe_drop_all_macst_groups Bob Pearson
2022-01-27 21:37 ` [RFC PATCH v9 06/26] RDMA/rxe: Remove qp->grp_lock and qp->grp_list Bob Pearson
2022-01-27 21:37 ` [RFC PATCH v9 07/26] RDMA/rxe: Use kzmalloc/kfree for mca Bob Pearson
2022-01-28 18:00 ` Jason Gunthorpe
2022-01-27 21:37 ` [RFC PATCH v9 08/26] RDMA/rxe: Rename grp to mcg and mce to mca Bob Pearson
2022-01-27 21:37 ` [RFC PATCH v9 09/26] RDMA/rxe: Introduce RXECB(skb) Bob Pearson
2022-01-28 18:29 ` Jason Gunthorpe
2022-01-30 17:47 ` Bob Pearson
2022-01-27 21:37 ` [RFC PATCH v9 10/26] RDMA/rxe: Split rxe_rcv_mcast_pkt into two phases Bob Pearson
2022-01-27 21:37 ` [RFC PATCH v9 11/26] RDMA/rxe: Replace locks by rxe->mcg_lock Bob Pearson
2022-01-27 21:37 ` [RFC PATCH v9 12/26] RDMA/rxe: Replace pool key by rxe->mcg_tree Bob Pearson
2022-01-28 18:32 ` Jason Gunthorpe
2022-01-30 23:23 ` Bob Pearson
2022-01-27 21:37 ` [RFC PATCH v9 13/26] RDMA/rxe: Remove key'ed object support Bob Pearson
2022-01-27 21:37 ` [RFC PATCH v9 14/26] RDMA/rxe: Remove mcg from rxe pools Bob Pearson
2022-01-27 21:37 ` [RFC PATCH v9 15/26] RDMA/rxe: Add code to cleanup mcast memory Bob Pearson
2022-01-27 21:37 ` [RFC PATCH v9 16/26] RDMA/rxe: Add comments to rxe_mcast.c Bob Pearson
2022-01-27 21:37 ` [RFC PATCH v9 17/26] RDMA/rxe: Separate code into subroutines Bob Pearson
2022-01-27 21:37 ` [RFC PATCH v9 18/26] RDMA/rxe: Convert mca read locking to RCU Bob Pearson
2022-01-28 18:39 ` Jason Gunthorpe
2022-01-27 21:37 ` [RFC PATCH v9 19/26] RDMA/rxe: Reverse the sense of RXE_POOL_NO_ALLOC Bob Pearson
2022-01-27 21:37 ` [RFC PATCH v9 20/26] RDMA/rxe: Delete _locked() APIs for pool objects Bob Pearson
2022-01-27 21:37 ` [RFC PATCH v9 21/26] RDMA/rxe: Replace obj by elem in declaration Bob Pearson
2022-01-27 21:37 ` [RFC PATCH v9 22/26] RDMA/rxe: Replace red-black trees by xarrays Bob Pearson
2022-01-27 21:37 ` [RFC PATCH v9 23/26] RDMA/rxe: Change pool locking to RCU Bob Pearson
2022-01-27 21:37 ` [RFC PATCH v9 24/26] RDMA/rxe: Add wait_for_completion to pool objects Bob Pearson
2022-01-27 21:37 ` [RFC PATCH v9 25/26] RDMA/rxe: Fix ref error in rxe_av.c Bob Pearson
2022-01-27 21:37 ` [RFC PATCH v9 26/26] RDMA/rxe: Replace mr by rkey in responder resources Bob Pearson
2022-01-28 18:42 ` [RFC PATCH v9 00/26] Jason Gunthorpe
2022-02-07 19:20 ` Bob Pearson
2022-02-07 19:38 ` Jason Gunthorpe