public inbox for linux-kernel@vger.kernel.org
From: Kees Cook <keescook@chromium.org>
To: Jason Gunthorpe <jgg@nvidia.com>
Cc: Leon Romanovsky <leon@kernel.org>,
	linux-kernel@vger.kernel.org, linux-rdma@vger.kernel.org,
	linux-hardening@vger.kernel.org
Subject: Re: [PATCH] RDMA/mlx5: Use memset_after() to zero struct mlx5_ib_mr
Date: Tue, 7 Dec 2021 11:41:07 -0800	[thread overview]
Message-ID: <202112071138.64C168D@keescook> (raw)
In-Reply-To: <20211207184729.GA118570@nvidia.com>

On Tue, Dec 07, 2021 at 02:47:29PM -0400, Jason Gunthorpe wrote:
> On Sun, Nov 21, 2021 at 03:54:55PM +0200, Leon Romanovsky wrote:
> > On Thu, Nov 18, 2021 at 12:31:38PM -0800, Kees Cook wrote:
> > > In preparation for FORTIFY_SOURCE performing compile-time and run-time
> > > field bounds checking for memset(), avoid intentionally writing across
> > > neighboring fields.
> > > 
> > > Use memset_after() to zero the end of struct mlx5_ib_mr that should
> > > be initialized.
> > > 
> > > Signed-off-by: Kees Cook <keescook@chromium.org>
> > > ---
> > >  drivers/infiniband/hw/mlx5/mlx5_ib.h | 5 ++---
> > >  1 file changed, 2 insertions(+), 3 deletions(-)
> > > 
> > > diff --git a/drivers/infiniband/hw/mlx5/mlx5_ib.h b/drivers/infiniband/hw/mlx5/mlx5_ib.h
> > > index e636e954f6bf..af94c9fe8753 100644
> > > --- a/drivers/infiniband/hw/mlx5/mlx5_ib.h
> > > +++ b/drivers/infiniband/hw/mlx5/mlx5_ib.h
> > > @@ -665,8 +665,7 @@ struct mlx5_ib_mr {
> > >  	/* User MR data */
> > >  	struct mlx5_cache_ent *cache_ent;
> > >  	struct ib_umem *umem;
> > > -
> > > -	/* This is zero'd when the MR is allocated */
> > > +	/* Everything after umem is zero'd when the MR is allocated */
> > >  	union {
> > >  		/* Used only while the MR is in the cache */
> > >  		struct {
> > > @@ -718,7 +717,7 @@ struct mlx5_ib_mr {
> > >  /* Zero the fields in the mr that are variant depending on usage */
> > >  static inline void mlx5_clear_mr(struct mlx5_ib_mr *mr)
> > >  {
> > > -	memset(mr->out, 0, sizeof(*mr) - offsetof(struct mlx5_ib_mr, out));
> > > +	memset_after(mr, 0, umem);
> > 
> > I think that it is not equivalent change and you need "memset_after(mr, 0, cache_ent);"
> > to clear umem pointer too.
> 
> Kees?

Oops, sorry, I missed the earlier reply!

I don't think that matches -- the original code wipes from the start of
"out" to the end of the struct. "out" is the first thing in the union
after "umem", so "umem" was not wiped before. I retained that behavior
("wipe everything after umem").

Am I misunderstanding the desired behavior here?

Thanks!

-- 
Kees Cook


Thread overview: 7+ messages
2021-11-18 20:31 [PATCH] RDMA/mlx5: Use memset_after() to zero struct mlx5_ib_mr Kees Cook
2021-11-21 13:54 ` Leon Romanovsky
2021-12-07 18:47   ` Jason Gunthorpe
2021-12-07 19:41     ` Kees Cook [this message]
2021-12-07 19:45       ` Jason Gunthorpe
2022-01-12 20:49         ` Kees Cook
2022-01-13  0:32           ` Jason Gunthorpe
