netdev.vger.kernel.org archive mirror
From: Cong Wang <xiyou.wangcong@gmail.com>
To: Gao Feng <gfree.wind@vip.163.com>
Cc: xeb@mail.ru, David Miller <davem@davemloft.net>,
	Linux Kernel Network Developers <netdev@vger.kernel.org>
Subject: Re: Re:Re: Re: [PATCH net] ppp: Fix a scheduling-while-atomic bug in del_chan
Date: Tue, 8 Aug 2017 12:45:53 -0700	[thread overview]
Message-ID: <CAM_iQpW8W24=2atSyStwKPYJ9zmOO5XiznktT3V_3qn00R7r=Q@mail.gmail.com> (raw)
In-Reply-To: <16ae6009.7a67.15dbf64b398.Coremail.gfree.wind@vip.163.com>

On Mon, Aug 7, 2017 at 6:10 PM, Gao Feng <gfree.wind@vip.163.com> wrote:
>
> Sorry, I don't follow you. Why isn't the sock_hold() helpful?

I already told you, the dereference happens before sock_hold().

        sock = rcu_dereference(callid_sock[call_id]);
        if (sock) {
                opt = &sock->proto.pptp;
                if (opt->dst_addr.sin_addr.s_addr != s_addr) <=== HERE
                        sock = NULL;
                else
                        sock_hold(sk_pppox(sock));
        }

If we don't wait for readers properly, the sock could be freed at the
same time we dereference it.
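
To make the pairing concrete, here is a rough sketch of what the reader
and the writer each have to do (names taken from drivers/net/ppp/pptp.c,
details abridged, so treat it as an illustration rather than the exact
code):

        /* Reader (lookup_chan): the dereference and the refcount bump
         * must both sit inside the RCU read-side critical section. */
        rcu_read_lock();
        sock = rcu_dereference(callid_sock[call_id]);
        if (sock) {
                opt = &sock->proto.pptp;
                if (opt->dst_addr.sin_addr.s_addr != s_addr)
                        sock = NULL;
                else
                        sock_hold(sk_pppox(sock));
        }
        rcu_read_unlock();

        /* Writer: unpublish the entry, wait for every reader that may
         * still be looking at the old pointer, and only then drop the
         * reference that keeps the socket alive. */
        del_chan(po);                   /* clears callid_sock[call_id] */
        synchronize_rcu();
        sock_put(sk_pppox(po));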

> pptp_release invokes synchronize_rcu after del_chan; this makes sure that
> anyone who needed to has already increased the sock refcnt and that their
> lookup is over.
> No one can get the sock after the synchronize_rcu in pptp_release.


If this were true, then this code in pptp_sock_destruct()
would be unneeded:

        if (!(sk->sk_state & PPPOX_DEAD)) {
                del_chan(pppox_sk(sk));
                pppox_unbind_sock(sk);
        }
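
To put the two paths side by side, the ordering pptp_release() is
supposed to provide is roughly this (a sketch based on my reading of
the thread, not a quote of the final code):

        /* pptp_release() (abridged) */
        lock_sock(sk);
        po = pppox_sk(sk);
        del_chan(po);                   /* unpublish from callid_sock[] */
        synchronize_rcu();              /* wait for concurrent lookup_chan() */
        pppox_unbind_sock(sk);
        sk->sk_state = PPPOX_DEAD;
        sock_orphan(sk);
        release_sock(sk);
        sock_put(sk);

The PPPOX_DEAD check quoted above only exists because del_chan() can
also be reached from the destructor, i.e. on a path that never went
through this synchronize_rcu().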


>
>
> But I am thinking about another problem.
> It seems pptp_sock_destruct should not invoke del_chan and pppox_unbind_sock,
> because when the sock refcnt is 0, pptp_release must have been invoked already.
>


I don't know. It looks like sock_orphan() is only called
in pptp_release(), but I am not sure whether there is a case
where we call the sock destructor before release.
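
Roughly (from memory of the generic socket code, so double-check), the
destructor is reached from the last sock_put():

        sock_put(sk)
          -> sk_free(sk)
            -> __sk_free(sk)
              -> sk_destruct(sk)
                -> sk->sk_destruct(sk)  /* pptp_sock_destruct */

so the question is whether any path other than pptp_release() can drop
the last reference.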

Also note, this socket is very special: it doesn't support
poll(), sendmsg() or recvmsg().
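
From memory of pptp.c (roughly, the exact struct may differ):

        static const struct proto_ops pptp_ops = {
                ...
                .poll    = sock_no_poll,
                .sendmsg = sock_no_sendmsg,
                .recvmsg = sock_no_recvmsg,
                ...
        };

i.e. the socket only carries the call id and control state; the actual
data path goes through the ppp channel.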


Thread overview: 15+ messages
2017-07-31 10:07 [PATCH net] ppp: Fix a scheduling-while-atomic bug in del_chan gfree.wind
2017-08-01  4:59 ` David Miller
2017-08-01 20:39 ` Cong Wang
2017-08-02 17:13   ` Cong Wang
2017-08-07  1:32     ` Gao Feng
2017-08-07 17:17       ` Cong Wang
2017-08-07 17:34         ` Cong Wang
     [not found]         ` <697dbbd.7911.15dbf5ca3a6.Coremail.gfree.wind@vip.163.com>
2017-08-08  1:10           ` Gao Feng
2017-08-08 19:45             ` Cong Wang [this message]
2017-08-09  5:13               ` Re:Re: " Gao Feng
2017-08-09  7:17                 ` Gao Feng
2017-08-09 21:00                   ` Cong Wang
2017-08-10  2:41                     ` Gao Feng
2017-08-09 18:18                 ` Cong Wang
2017-08-10  1:25                   ` Gao Feng

Reply instructions:

You may reply publicly to this message via plain-text email
using any one of the following methods:

* Save the following mbox file, import it into your mail client,
  and reply-to-all from there: mbox

  Avoid top-posting and favor interleaved quoting:
  https://en.wikipedia.org/wiki/Posting_style#Interleaved_style

* Reply using the --to, --cc, and --in-reply-to
  switches of git-send-email(1):

  git send-email \
    --in-reply-to='CAM_iQpW8W24=2atSyStwKPYJ9zmOO5XiznktT3V_3qn00R7r=Q@mail.gmail.com' \
    --to=xiyou.wangcong@gmail.com \
    --cc=davem@davemloft.net \
    --cc=gfree.wind@vip.163.com \
    --cc=netdev@vger.kernel.org \
    --cc=xeb@mail.ru \
    /path/to/YOUR_REPLY

  https://kernel.org/pub/software/scm/git/docs/git-send-email.html
