From: Leon Romanovsky <leon@kernel.org>
To: Jinpu Wang <jinpu.wang@cloud.ionos.com>
Cc: linux-rdma@vger.kernel.org, Bart Van Assche <bvanassche@acm.org>,
Doug Ledford <dledford@redhat.com>,
Jason Gunthorpe <jgg@ziepe.ca>,
Danil Kipnis <danil.kipnis@cloud.ionos.com>,
Md Haris Iqbal <haris.iqbal@cloud.ionos.com>,
Lutz Pogrell <lutz.pogrell@cloud.ionos.com>
Subject: Re: [PATCH for-next 2/4] RDMA/rtrs: Only allow addition of path to an already established session
Date: Thu, 11 Feb 2021 11:36:58 +0200 [thread overview]
Message-ID: <20210211093658.GE1275163@unreal> (raw)
In-Reply-To: <CAMGffEnxs4NXODsYKyXyRvfUUwyF+bX_XO7JfA_pyFSawwo-fQ@mail.gmail.com>
On Thu, Feb 11, 2021 at 10:23:54AM +0100, Jinpu Wang wrote:
> On Thu, Feb 11, 2021 at 9:43 AM Leon Romanovsky <leon@kernel.org> wrote:
> >
> > On Thu, Feb 11, 2021 at 07:55:24AM +0100, Jack Wang wrote:
> > > From: Md Haris Iqbal <haris.iqbal@cloud.ionos.com>
> > >
> > > While adding a path from the client side to an already established
> > > session, it was possible to provide the destination IP of a different
> > > server. This is dangerous.
> > >
> > > This commit adds an extra member to the rtrs_msg_conn_req structure,
> > > named first_conn, which indicates whether the connection request is the
> > > first one for that session.
> > >
> > > On the server side, if a session does not exist but the first_conn
> > > received inside the rtrs_msg_conn_req structure is 0, the connection
> > > request is rejected. A cleared first_conn means the request targets an
> > > already existing session, and since the server did not find one, it is
> > > a wrong connection request.
> > >
> > > Fixes: 6a98d71daea1 ("RDMA/rtrs: client: main functionality")
> > > Fixes: 9cb837480424 ("RDMA/rtrs: server: main functionality")
> > > Signed-off-by: Md Haris Iqbal <haris.iqbal@cloud.ionos.com>
> > > Reviewed-by: Lutz Pogrell <lutz.pogrell@cloud.ionos.com>
> > > Signed-off-by: Jack Wang <jinpu.wang@cloud.ionos.com>
> > > ---
> > > drivers/infiniband/ulp/rtrs/rtrs-clt.c | 5 +++++
> > > drivers/infiniband/ulp/rtrs/rtrs-clt.h | 1 +
> > > drivers/infiniband/ulp/rtrs/rtrs-pri.h | 4 +++-
> > > drivers/infiniband/ulp/rtrs/rtrs-srv.c | 21 ++++++++++++++++-----
> > > 4 files changed, 25 insertions(+), 6 deletions(-)
<...>
> > >
> > > mutex_lock(&ctx->srv_mutex);
> > > list_for_each_entry(srv, &ctx->srv_list, ctx_list) {
> > > @@ -1346,12 +1348,20 @@ static struct rtrs_srv *get_or_create_srv(struct rtrs_srv_ctx *ctx,
> > > return srv;
> > > }
> > > }
> > > + /*
> > > + * If this request is not the first connection request from the
> > > + * client for this session then fail and return error.
> > > + */
> > > + if (!first_conn) {
> > > + err = -ENXIO;
> > > + goto err;
> > > + }
> >
> > Are you sure that this check is not racy?
> I can't see how a function parameter check can be racy; can you elaborate?
get_or_create_srv() itself is protected with a mutex, but it can be
called in parallel from rtrs_rdma_connect(); this is why I asked.
Thanks
> >
> > Thanks
> Thanks for the review!
> >
> > >
> > > /* need to allocate a new srv */
> > > srv = kzalloc(sizeof(*srv), GFP_KERNEL);
> > > if (!srv) {
> > > mutex_unlock(&ctx->srv_mutex);
> > > - return NULL;
> > > + goto err;
> > > }
> > >
> > > INIT_LIST_HEAD(&srv->paths_list);
> > > @@ -1386,7 +1396,8 @@ static struct rtrs_srv *get_or_create_srv(struct rtrs_srv_ctx *ctx,
> > >
> > > err_free_srv:
> > > kfree(srv);
> > > - return NULL;
> > > +err:
> > > + return ERR_PTR(err);
> > > }
> > >
> > > static void put_srv(struct rtrs_srv *srv)
> > > @@ -1787,12 +1798,12 @@ static int rtrs_rdma_connect(struct rdma_cm_id *cm_id,
> > > goto reject_w_econnreset;
> > > }
> > > recon_cnt = le16_to_cpu(msg->recon_cnt);
> > > - srv = get_or_create_srv(ctx, &msg->paths_uuid);
> > > + srv = get_or_create_srv(ctx, &msg->paths_uuid, msg->first_conn);
> > > /*
> > > * "refcount == 0" happens if a previous thread calls get_or_create_srv
> > > * allocate srv, but chunks of srv are not allocated yet.
> > > */
> > > - if (!srv || refcount_read(&srv->refcount) == 0) {
> > > + if (IS_ERR(srv) || refcount_read(&srv->refcount) == 0) {
> > > err = -ENOMEM;
> > > goto reject_w_err;
> > > }
> > > --
> > > 2.25.1
> > >
Thread overview: 12+ messages (as of 2021-02-11 9:47 UTC)
2021-02-11 6:55 [PATCH for-next 0/4] A few bugfix for RTRS Jack Wang
2021-02-11 6:55 ` [PATCH for-next 1/4] RDMA/rtrs-srv: Fix BUG: KASAN: stack-out-of-bounds Jack Wang
2021-02-11 8:33 ` Leon Romanovsky
2021-02-11 6:55 ` [PATCH for-next 2/4] RDMA/rtrs: Only allow addition of path to an already established session Jack Wang
2021-02-11 8:43 ` Leon Romanovsky
2021-02-11 9:23 ` Jinpu Wang
2021-02-11 9:36 ` Leon Romanovsky [this message]
2021-02-11 9:48 ` Jinpu Wang
2021-02-11 9:51 ` Leon Romanovsky
2021-02-11 6:55 ` [PATCH for-next 3/4] RDMA/rtrs-srv: fix memory leak by missing kobject free Jack Wang
2021-02-11 6:55 ` [PATCH for-next 4/4] RDMA/rtrs-srv-sysfs: fix missing put_device Jack Wang
2021-02-11 8:48 ` Leon Romanovsky