From: Paolo Abeni <pabeni@redhat.com>
To: Willem de Bruijn <willemdebruijn.kernel@gmail.com>
Cc: Jordan Rife <jrife@google.com>, davem@davemloft.net,
	edumazet@google.com, kuba@kernel.org, netdev@vger.kernel.org,
	dborkman@kernel.org, philipp.reisner@linbit.com,
	lars.ellenberg@linbit.com, christoph.boehmwalder@linbit.com,
	axboe@kernel.dk, airlied@redhat.com, chengyou@linux.alibaba.com,
	kaishen@linux.alibaba.com, jgg@ziepe.ca, leon@kernel.org,
	bmt@zurich.ibm.com, isdn@linux-pingi.de, ccaulfie@redhat.com,
	teigland@redhat.com, mark@fasheh.com, jlbec@evilplan.org,
	joseph.qi@linux.alibaba.com, sfrench@samba.org, pc@manguebit.com,
	lsahlber@redhat.com, sprasad@microsoft.com, tom@talpey.com,
	horms@verge.net.au, ja@ssi.bg, pablo@netfilter.org,
	kadlec@netfilter.org, fw@strlen.de, santosh.shilimkar@oracle.com,
	stable@vger.kernel.org
Subject: Re: [PATCH net v4 3/3] net: prevent address rewrite in kernel_bind()
Date: Thu, 21 Sep 2023 17:25:56 +0200
Message-ID: <b822f1246a35682ad6f2351d451191825416af58.camel@redhat.com>
In-Reply-To: <CAF=yD-K3oLn++V_zJMjGRXdiPh2qi+Fit6uOh4z4HxuuyCOyog@mail.gmail.com>

On Thu, 2023-09-21 at 09:30 -0400, Willem de Bruijn wrote:
> On Thu, Sep 21, 2023 at 4:35 AM Paolo Abeni <pabeni@redhat.com> wrote:
> > 
> > On Wed, 2023-09-20 at 09:30 -0400, Willem de Bruijn wrote:
> > > Jordan Rife wrote:
> > > > Similar to the change in commit 0bdf399342c5 ("net: Avoid address
> > > > overwrite in kernel_connect"), BPF hooks run on bind may rewrite the
> > > > address passed to kernel_bind(). This change
> > > > 
> > > > 1) Makes a copy of the bind address in kernel_bind() to insulate
> > > >    callers.
> > > > 2) Replaces direct calls to sock->ops->bind() with kernel_bind().
> > > > 
> > > > Link: https://lore.kernel.org/netdev/20230912013332.2048422-1-jrife@google.com/
> > > > Fixes: 4fbac77d2d09 ("bpf: Hooks for sys_bind")
> > > > Cc: stable@vger.kernel.org
> > > > Signed-off-by: Jordan Rife <jrife@google.com>
> > > 
> > > Reviewed-by: Willem de Bruijn <willemb@google.com>
> > 
> > I fear this is going to cause a few conflicts with other trees. We can
> > still take it, but at the very least we will need some acks from the
> > relevant maintainers.
> > 
> > I *think* it would be easier to split this and patch 1/3 into
> > individual patches targeting the different trees; hopefully not many
> > additional patches will be required. What do you think?
> 
> Roughly how many patches would result from this one patch? From the
> stat line I count { block/drbd, char/agp, infiniband, isdn, fs/dlm,
> fs/ocfs2, fs/smb, netfilter, rds }. That's worst case nine callers
> plus the core patch to net/socket.c?

I think there should be no problem taking the rds and nf/ipvs changes
directly.

Additionally, I think the non-networking trees could consolidate the
bind and connect changes into a single patch each.

That should be 7 non-networking patches overall.
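
For reference, the net/socket.c core that stays in net regardless of
how the conversions are split is roughly the following (a minimal
sketch based on the patch description above; the exact diff may
differ, and addrlen validation is omitted):

	int kernel_bind(struct socket *sock, struct sockaddr *addr,
			int addrlen)
	{
		struct sockaddr_storage address;

		/* Copy the caller's address so a BPF bind hook can
		 * only rewrite the on-stack copy, never the caller's
		 * buffer.
		 */
		memcpy(&address, addr, addrlen);

		return sock->ops->bind(sock, (struct sockaddr *)&address,
				       addrlen);
	}

Each per-tree conversion is then mechanical: a direct
sock->ops->bind(sock, addr, addrlen) call becomes
kernel_bind(sock, addr, addrlen).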

> If it is logistically simpler and you prefer that approach, we can
> also revisit Jordan's original approach, which embedded the memcpy
> inside the BPF branches.
> 
> That has the slight benefit for in-kernel callers of limiting the
> cost of the memcpy to the cgroup_bpf_enabled case. But it adds a
> superfluous second copy for the more common userspace callers, again
> only when cgroup_bpf_enabled.
> 
> If so, it should at least move the whole logic around those BPF hooks
> into helper functions.

IMHO the approach implemented here is preferable; I suggest going
forward with it.
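
For comparison, the alternative would scope the copy to each hook
site, schematically (purely illustrative, not the original series'
actual diff; cgroup_bpf_enabled() and CGROUP_INET4_BIND are the
existing cgroup-BPF attach-type checks):

	struct sockaddr_storage address;
	struct sockaddr *uaddr = addr;

	/* Copy only when a cgroup BPF bind hook may run and
	 * rewrite the address.
	 */
	if (cgroup_bpf_enabled(CGROUP_INET4_BIND)) {
		memcpy(&address, addr, addrlen);
		/* ... run the cgroup BPF bind hook on &address ... */
		uaddr = (struct sockaddr *)&address;
	}
	/* ... continue the bind with uaddr ... */

That pattern would need repeating at every hook site, while the
kernel_bind() variant pays one unconditional memcpy and keeps the
protection in a single place.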

Thanks,

Paolo



Thread overview: 7+ messages
2023-09-19 17:53 [PATCH net v4 3/3] net: prevent address rewrite in kernel_bind() Jordan Rife
2023-09-20 13:30 ` Willem de Bruijn
2023-09-21  8:35   ` Paolo Abeni
2023-09-21 13:30     ` Willem de Bruijn
2023-09-21 15:25       ` Paolo Abeni [this message]
2023-09-21 17:01         ` Jordan Rife
2023-09-21 18:08           ` Paolo Abeni
