From: David Miller <davem@davemloft.net>
To: gorcunov@gmail.com
Cc: xiyou.wangcong@gmail.com, alexei.starovoitov@gmail.com,
eric.dumazet@gmail.com, netdev@vger.kernel.org,
solar@openwall.com, vvs@virtuozzo.com, avagin@virtuozzo.com,
xemul@virtuozzo.com, vdavydov@virtuozzo.com,
khorenko@virtuozzo.com, pablo@netfilter.org,
netfilter-devel@vger.kernel.org
Subject: Re: [RFC] net: ipv4 -- Introduce ifa limit per net
Date: Thu, 10 Mar 2016 16:05:21 -0500 (EST) [thread overview]
Message-ID: <20160310.160521.1642655131932337300.davem@davemloft.net> (raw)
In-Reply-To: <20160310201351.GB1989@uranus.lan>
From: Cyrill Gorcunov <gorcunov@gmail.com>
Date: Thu, 10 Mar 2016 23:13:51 +0300
> On Thu, Mar 10, 2016 at 03:03:11PM -0500, David Miller wrote:
>> From: Cyrill Gorcunov <gorcunov@gmail.com>
>> Date: Thu, 10 Mar 2016 23:01:34 +0300
>>
>> > On Thu, Mar 10, 2016 at 02:55:43PM -0500, David Miller wrote:
>> >> >
>> >> > Hmm, but inetdev_destroy() is only called when NETDEV_UNREGISTER
>> >> > is happening and masq already registers a netdev notifier...
>> >>
>> >> Indeed, good catch. Therefore:
>> >>
>> >> 1) Keep the masq netdev notifier. That will flush the conntrack table
>> >> for the inetdev_destroy event.
>> >>
>> >> 2) Make the inetdev notifier only do something if inetdev->dead is
>> >> false. (ie. we are flushing an individual address)
>> >>
>> >> And then we don't need the NETDEV_UNREGISTER thing at all:
>> >>
>> >> diff --git a/net/ipv4/netfilter/nf_nat_masquerade_ipv4.c b/net/ipv4/netfilter/nf_nat_masquerade_ipv4.c
>> >> index c6eb421..f71841a 100644
>> >> --- a/net/ipv4/netfilter/nf_nat_masquerade_ipv4.c
>> >> +++ b/net/ipv4/netfilter/nf_nat_masquerade_ipv4.c
>> >> @@ -108,10 +108,20 @@ static int masq_inet_event(struct notifier_block *this,
>> >>  			   unsigned long event,
>> >>  			   void *ptr)
>> >>  {
>> >> -	struct net_device *dev = ((struct in_ifaddr *)ptr)->ifa_dev->dev;
>> >>  	struct netdev_notifier_info info;
>> >> +	struct in_ifaddr *ifa = ptr;
>> >> +	struct in_device *idev;
>> >>  
>> >> -	netdev_notifier_info_init(&info, dev);
>> >> +	/* The masq_dev_notifier will catch the case of the device going
>> >> +	 * down.  So if the inetdev is dead and being destroyed we have
>> >> +	 * no work to do.  Otherwise this is an individual address removal
>> >> +	 * and we have to perform the flush.
>> >> +	 */
>> >> +	idev = ifa->ifa_dev;
>> >> +	if (idev->dead)
>> >> +		return NOTIFY_DONE;
>> >> +
>> >> +	netdev_notifier_info_init(&info, idev->dev);
>> >>  	return masq_device_event(this, event, &info);
>> >>  }
>> >
>> > Guys, I'm lost. Currently masq_device_event triggers the conntrack
>> > cleanup by device index, so that once a device goes down, the
>> > matching conntrack entries get dropped. Now, if the device is dead,
>> > nobody will clean up the conntracks?
>>
>> Both notifiers are run in the inetdev_destroy() case.
>>
>> Maybe that's what you are missing.
>
> No :) Look, here is what I mean. Previously, with your two patches,
> we called the nf cleanup for every address, so we had to make the
> code perform the cleanup only once. Now with the patch above the
> code flow is the following:
>
> inetdev_destroy
>   in_dev->dead = 1;
>   ...
>   inet_del_ifa
>     ...
>     blocking_notifier_call_chain(&inetaddr_chain, NETDEV_DOWN, ifa1);
>       ...
>       masq_inet_event
>         if (idev->dead)
>           return NOTIFY_DONE;   /* masq_device_event is never reached */
>
> and nobody calls nf_ct_iterate_cleanup, no?

Oh yes they do, from masq's non-inet notifier. masq registers two
notifiers, one for generic netdev and one for inetdev.