* [PATCH nf] netfilter: conntrack: resched in nf_ct_iterate_cleanup
From: Florian Westphal @ 2015-12-09 17:30 UTC
To: netfilter-devel; +Cc: Florian Westphal
Ulrich reports a soft lockup with the following (shortened) callchain:

NMI watchdog: BUG: soft lockup - CPU#1 stuck for 22s!
__netif_receive_skb_core+0x6e4/0x774
process_backlog+0x94/0x160
net_rx_action+0x88/0x178
call_do_softirq+0x24/0x3c
do_softirq+0x54/0x6c
__local_bh_enable_ip+0x7c/0xbc
nf_ct_iterate_cleanup+0x11c/0x22c [nf_conntrack]
masq_inet_event+0x20/0x30 [nf_nat_masquerade_ipv6]
atomic_notifier_call_chain+0x1c/0x2c
ipv6_del_addr+0x1bc/0x220 [ipv6]
The problem is that nf_ct_iterate_cleanup can run for a very long time
since it can be interrupted by softirq processing.
Moreover, atomic_notifier_call_chain runs with the rcu read lock held.

So let's call cond_resched() in the nf_ct_iterate_cleanup loop and defer
the call to a work queue for the atomic_notifier_call_chain case.
Reported-by: Ulrich Weber <uw@ocedo.com>
Signed-off-by: Florian Westphal <fw@strlen.de>
---
Patch also applies to nf-next tree in case you think nf tree isn't appropriate.
net/ipv6/netfilter/nf_nat_masquerade_ipv6.c | 46 +++++++++++++++++++++++++++--
net/netfilter/nf_conntrack_core.c | 3 ++
2 files changed, 46 insertions(+), 3 deletions(-)
diff --git a/net/ipv6/netfilter/nf_nat_masquerade_ipv6.c b/net/ipv6/netfilter/nf_nat_masquerade_ipv6.c
index 31ba7ca..a877dee 100644
--- a/net/ipv6/netfilter/nf_nat_masquerade_ipv6.c
+++ b/net/ipv6/netfilter/nf_nat_masquerade_ipv6.c
@@ -78,14 +78,54 @@ static struct notifier_block masq_dev_notifier = {
 	.notifier_call	= masq_device_event,
 };
 
+struct masq_dev_slow_work {
+	struct work_struct work;
+	struct net *net;
+	int ifindex;
+};
+
+static void iterate_cleanup_work(struct work_struct *work)
+{
+	struct masq_dev_slow_work *w;
+	struct net *net;
+	int ifindex;
+
+	w = container_of(work, struct masq_dev_slow_work, work);
+
+	net = w->net;
+	ifindex = w->ifindex;
+	kfree(w);
+
+	nf_ct_iterate_cleanup(net, device_cmp, (void *)(long)ifindex, 0, 0);
+
+	put_net(net);
+	module_put(THIS_MODULE);
+}
+
 static int masq_inet_event(struct notifier_block *this,
 			   unsigned long event, void *ptr)
 {
 	struct inet6_ifaddr *ifa = ptr;
-	struct netdev_notifier_info info;
+	struct masq_dev_slow_work *w;
+
+	if (event != NETDEV_DOWN || !try_module_get(THIS_MODULE))
+		return NOTIFY_DONE;
+
+	/* can't call nf_ct_iterate_cleanup in atomic context */
+	w = kmalloc(sizeof(*w), GFP_ATOMIC);
+	if (w) {
+		const struct net_device *dev = ifa->idev->dev;
 
-	netdev_notifier_info_init(&info, ifa->idev->dev);
-	return masq_device_event(this, event, &info);
+		INIT_WORK(&w->work, iterate_cleanup_work);
+
+		w->ifindex = dev->ifindex;
+		w->net = get_net(dev_net(dev));
+		schedule_work(&w->work);
+	} else {
+		module_put(THIS_MODULE);
+	}
+
+	return NOTIFY_DONE;
 }
 
 static struct notifier_block masq_inet_notifier = {
diff --git a/net/netfilter/nf_conntrack_core.c b/net/netfilter/nf_conntrack_core.c
index 3cb3cb8..cffeb68 100644
--- a/net/netfilter/nf_conntrack_core.c
+++ b/net/netfilter/nf_conntrack_core.c
@@ -1422,6 +1422,8 @@ void nf_ct_iterate_cleanup(struct net *net,
 	struct nf_conn *ct;
 	unsigned int bucket = 0;
 
+	might_sleep();
+
 	while ((ct = get_next_corpse(net, iter, data, &bucket)) != NULL) {
 		/* Time to push up daises... */
 		if (del_timer(&ct->timeout))
@@ -1430,6 +1432,7 @@ void nf_ct_iterate_cleanup(struct net *net,
 		/* ... else the timer will get him soon. */
 		nf_ct_put(ct);
+		cond_resched();
 	}
 }
 EXPORT_SYMBOL_GPL(nf_ct_iterate_cleanup);
--
2.4.10
* Re: [PATCH nf] netfilter: conntrack: resched in nf_ct_iterate_cleanup
From: Pablo Neira Ayuso @ 2015-12-11 11:42 UTC
To: Florian Westphal; +Cc: netfilter-devel
On Wed, Dec 09, 2015 at 06:30:09PM +0100, Florian Westphal wrote:
> Ulrich reports a soft lockup with the following (shortened) callchain:
>
> NMI watchdog: BUG: soft lockup - CPU#1 stuck for 22s!
> __netif_receive_skb_core+0x6e4/0x774
> process_backlog+0x94/0x160
> net_rx_action+0x88/0x178
> call_do_softirq+0x24/0x3c
> do_softirq+0x54/0x6c
> __local_bh_enable_ip+0x7c/0xbc
> nf_ct_iterate_cleanup+0x11c/0x22c [nf_conntrack]
> masq_inet_event+0x20/0x30 [nf_nat_masquerade_ipv6]
> atomic_notifier_call_chain+0x1c/0x2c
> ipv6_del_addr+0x1bc/0x220 [ipv6]
>
> The problem is that nf_ct_iterate_cleanup can run for a very long time
> since it can be interrupted by softirq processing.
> Moreover, atomic_notifier_call_chain runs with the rcu read lock held.
>
> So let's call cond_resched() in the nf_ct_iterate_cleanup loop and defer
> the call to a work queue for the atomic_notifier_call_chain case.
Don't we potentially have the same problem in IPv4? If so, then it's
probably a good idea to add a nf_ct_iterate_cleanup_defered().
Thanks!
* Re: [PATCH nf] netfilter: conntrack: resched in nf_ct_iterate_cleanup
From: Florian Westphal @ 2015-12-11 11:53 UTC
To: Pablo Neira Ayuso; +Cc: Florian Westphal, netfilter-devel
Pablo Neira Ayuso <pablo@netfilter.org> wrote:
> On Wed, Dec 09, 2015 at 06:30:09PM +0100, Florian Westphal wrote:
> > Ulrich reports a soft lockup with the following (shortened) callchain:
> >
> > NMI watchdog: BUG: soft lockup - CPU#1 stuck for 22s!
> > __netif_receive_skb_core+0x6e4/0x774
> > process_backlog+0x94/0x160
> > net_rx_action+0x88/0x178
> > call_do_softirq+0x24/0x3c
> > do_softirq+0x54/0x6c
> > __local_bh_enable_ip+0x7c/0xbc
> > nf_ct_iterate_cleanup+0x11c/0x22c [nf_conntrack]
> > masq_inet_event+0x20/0x30 [nf_nat_masquerade_ipv6]
> > atomic_notifier_call_chain+0x1c/0x2c
> > ipv6_del_addr+0x1bc/0x220 [ipv6]
> >
> > The problem is that nf_ct_iterate_cleanup can run for a very long time
> > since it can be interrupted by softirq processing.
> > Moreover, atomic_notifier_call_chain runs with the rcu read lock held.
> >
> > So let's call cond_resched() in the nf_ct_iterate_cleanup loop and defer
> > the call to a work queue for the atomic_notifier_call_chain case.
>
> Don't we potentially have the same problem in IPv4?
No, the inet notifier appears to be fine (it is a blocking notifier).

The only nf_ct_iterate_cleanup callsite that I found to be problematic
(i.e., not preemptible) is the ipv6 address deletion notifier in ipv6
masquerading.

I also tried with nf-next + CONFIG_DEBUG_ATOMIC_SLEEP + this patch
and saw no error on ipv4 address deletion with the ipv4 masquerade
module loaded.
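
For reference, a minimal sketch of the difference between the two
notifier chains (placeholder names, not from any patch):

#include <linux/inetdevice.h>	/* register_inetaddr_notifier() */
#include <net/addrconf.h>	/* register_inet6addr_notifier() */

/* sketch only -- placeholder notifier blocks */
static struct notifier_block masq_inet4_nb = {
	.notifier_call = masq_device_event,
};

static struct notifier_block masq_inet6_nb = {
	.notifier_call = masq_inet_event,
};

static int __init masq_sketch_init(void)
{
	/* ipv4: the inetaddr chain is a *blocking* notifier, so
	 * callbacks run in process context and may sleep, i.e. they
	 * can call nf_ct_iterate_cleanup directly.
	 */
	register_inetaddr_notifier(&masq_inet4_nb);

	/* ipv6: the inet6addr chain is an *atomic* notifier, so
	 * callbacks run under rcu_read_lock() (here from
	 * ipv6_del_addr) and must not sleep -- hence the work
	 * queue deferral in the patch.
	 */
	register_inet6addr_notifier(&masq_inet6_nb);
	return 0;
}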
* Re: [PATCH nf] netfilter: conntrack: resched in nf_ct_iterate_cleanup
From: Florian Westphal @ 2015-12-11 14:43 UTC
To: Florian Westphal; +Cc: netfilter-devel
Florian Westphal <fw@strlen.de> wrote:
> Ulrich reports a soft lockup with the following (shortened) callchain:
>
> NMI watchdog: BUG: soft lockup - CPU#1 stuck for 22s!
> __netif_receive_skb_core+0x6e4/0x774
> process_backlog+0x94/0x160
> net_rx_action+0x88/0x178
> call_do_softirq+0x24/0x3c
> do_softirq+0x54/0x6c
> __local_bh_enable_ip+0x7c/0xbc
> nf_ct_iterate_cleanup+0x11c/0x22c [nf_conntrack]
> masq_inet_event+0x20/0x30 [nf_nat_masquerade_ipv6]
> atomic_notifier_call_chain+0x1c/0x2c
> ipv6_del_addr+0x1bc/0x220 [ipv6]
>
> The problem is that nf_ct_iterate_cleanup can run for a very long time
> since it can be interrupted by softirq processing.
> Moreover, atomic_notifier_call_chain runs with the rcu read lock held.
Ulrich just reported another soft lockup even with this patch applied.

One explanation would be a non-matching iter(); in that case
get_next_corpse can take forever since it will walk the entire conntrack
table, rendering the cond_resched moot.
A V2 patch will be coming to also add a lock break + resched to
get_next_corpse.
I'll mark it as 'changes requested' in patchwork.
* Re: [PATCH nf] netfilter: conntrack: resched in nf_ct_iterate_cleanup
From: Pablo Neira Ayuso @ 2015-12-11 17:16 UTC
To: Florian Westphal; +Cc: netfilter-devel
On Fri, Dec 11, 2015 at 03:43:13PM +0100, Florian Westphal wrote:
> Florian Westphal <fw@strlen.de> wrote:
> > Ulrich reports a soft lockup with the following (shortened) callchain:
> >
> > NMI watchdog: BUG: soft lockup - CPU#1 stuck for 22s!
> > __netif_receive_skb_core+0x6e4/0x774
> > process_backlog+0x94/0x160
> > net_rx_action+0x88/0x178
> > call_do_softirq+0x24/0x3c
> > do_softirq+0x54/0x6c
> > __local_bh_enable_ip+0x7c/0xbc
> > nf_ct_iterate_cleanup+0x11c/0x22c [nf_conntrack]
> > masq_inet_event+0x20/0x30 [nf_nat_masquerade_ipv6]
> > atomic_notifier_call_chain+0x1c/0x2c
> > ipv6_del_addr+0x1bc/0x220 [ipv6]
> >
> > The problem is that nf_ct_iterate_cleanup can run for a very long time
> > since it can be interrupted by softirq processing.
> > Moreover, atomic_notifier_call_chain runs with the rcu read lock held.
>
> Ulrich just reported another soft lockup even with this patch applied.
>
> One explanation would be a non-matching iter(); in that case
> get_next_corpse can take forever since it will walk the entire conntrack
> table, rendering the cond_resched moot.
Probably another reincarnation of 0838aa7fcfcd? Is Ulrich using
conntrack templates?
> A V2 patch will be coming to also add a lock break + resched to
> get_next_corpse.
BTW, the atomic chain notifier in IPv6 seems to be there to handle
this update from the packet path:
ndisc_rcv()
  ndisc_router_discovery()
    addrconf_prefix_rcv()
      manage_tempaddrs()
        ipv6_add_addr()
          inet6addr_notifier_call_chain()
Probably we can get Hannes to have a look into this; I think we can
convert this chain to a blocking one through a workqueue, since
addrconf_prefix_rcv() returns void. The remaining call sites of
inet6addr_notifier_call_chain() that I could track come from paths
where I can see ASSERT_RTNL(), so user context is guaranteed.
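
Roughly something like this, perhaps (sketch only, invented names and
untested; the ifa needs a refcount so it survives until the work runs):

struct inet6addr_defer_work {
	struct work_struct work;
	struct inet6_ifaddr *ifa;
	unsigned long event;
};

static void inet6addr_defer_fn(struct work_struct *work)
{
	struct inet6addr_defer_work *dw;

	dw = container_of(work, struct inet6addr_defer_work, work);
	/* assumes inet6addr_chain was converted to a blocking
	 * notifier head, so callees may sleep here */
	blocking_notifier_call_chain(&inet6addr_chain, dw->event, dw->ifa);
	in6_ifa_put(dw->ifa);	/* drop the ref taken when queueing */
	kfree(dw);
}

static int inet6addr_call_chain_deferred(unsigned long event,
					 struct inet6_ifaddr *ifa)
{
	struct inet6addr_defer_work *dw;

	dw = kmalloc(sizeof(*dw), GFP_ATOMIC);
	if (!dw)
		return -ENOMEM;

	INIT_WORK(&dw->work, inet6addr_defer_fn);
	in6_ifa_hold(ifa);	/* keep ifa alive until the work runs */
	dw->ifa = ifa;
	dw->event = event;
	schedule_work(&dw->work);
	return 0;
}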
I mention this because I remember we discussed at netconf'14 in
Chicago that it would be good to get rid of this kind of asymmetry
between IPv4 and IPv6.
* Re: [PATCH nf] netfilter: conntrack: resched in nf_ct_iterate_cleanup
From: Florian Westphal @ 2015-12-11 17:21 UTC
To: Pablo Neira Ayuso; +Cc: Florian Westphal, netfilter-devel
Pablo Neira Ayuso <pablo@netfilter.org> wrote:
> BTW, the atomic chain notifier in IPv6 seems to be there to handle
> this update from the packet path:
>
> ndisc_rcv()
>   ndisc_router_discovery()
>     addrconf_prefix_rcv()
>       manage_tempaddrs()
>         ipv6_add_addr()
>           inet6addr_notifier_call_chain()
>
> Probably we can get Hannes to have a look into this; I think we can
> convert this chain to a blocking one through a workqueue, since
> addrconf_prefix_rcv() returns void. The remaining call sites of
> inet6addr_notifier_call_chain() that I could track come from paths
> where I can see ASSERT_RTNL(), so user context is guaranteed.
>
> I mention this because I remember we discussed at netconf'14 in
> Chicago that it would be good to get rid of this kind of asymmetry
> between IPv4 and IPv6.
Ok, I agree, that would be a lot nicer.
I'll have a look.
* [PATCH nf] netfilter: conntrack: resched in nf_ct_iterate_cleanup
From: Florian Westphal @ 2016-01-20 10:16 UTC
To: netfilter-devel; +Cc: Florian Westphal
Ulrich reports a soft lockup with the following (shortened) callchain:

NMI watchdog: BUG: soft lockup - CPU#1 stuck for 22s!
__netif_receive_skb_core+0x6e4/0x774
process_backlog+0x94/0x160
net_rx_action+0x88/0x178
call_do_softirq+0x24/0x3c
do_softirq+0x54/0x6c
__local_bh_enable_ip+0x7c/0xbc
nf_ct_iterate_cleanup+0x11c/0x22c [nf_conntrack]
masq_inet_event+0x20/0x30 [nf_nat_masquerade_ipv6]
atomic_notifier_call_chain+0x1c/0x2c
ipv6_del_addr+0x1bc/0x220 [ipv6]
The problem is that nf_ct_iterate_cleanup can run for a very long time
since it can be interrupted by softirq processing.
Moreover, atomic_notifier_call_chain runs with the rcu read lock held.

So let's call cond_resched() in nf_ct_iterate_cleanup and defer
the call to a work queue for the atomic_notifier_call_chain case.

We also need another cond_resched in get_next_corpse, since we
have to deal with iter() always returning false; in that case
get_next_corpse will walk the entire conntrack table.
Reported-by: Ulrich Weber <uw@ocedo.com>
Tested-by: Ulrich Weber <uw@ocedo.com>
Signed-off-by: Florian Westphal <fw@strlen.de>
---
I had a look at converting the ipv6 notifier to a blocking one,
but I found this too difficult (RTNL held? How to defer notifier calls
from the packet path?). Just doing it for masquerade is a lot simpler:
- we only care about NETDEV_DOWN, so no extra work is needed in most cases
- we can just ignore the notification if too much work is already queued
net/ipv6/netfilter/nf_nat_masquerade_ipv6.c | 74 +++++++++++++++++++++++++++--
net/netfilter/nf_conntrack_core.c | 5 ++
2 files changed, 76 insertions(+), 3 deletions(-)
diff --git a/net/ipv6/netfilter/nf_nat_masquerade_ipv6.c b/net/ipv6/netfilter/nf_nat_masquerade_ipv6.c
index 31ba7ca..3878ac2 100644
--- a/net/ipv6/netfilter/nf_nat_masquerade_ipv6.c
+++ b/net/ipv6/netfilter/nf_nat_masquerade_ipv6.c
@@ -21,6 +21,10 @@
 #include <net/ipv6.h>
 #include <net/netfilter/ipv6/nf_nat_masquerade.h>
 
+#define MAX_WORK_COUNT	16
+
+static atomic_t v6_worker_count;
+
 unsigned int
 nf_nat_masquerade_ipv6(struct sk_buff *skb, const struct nf_nat_range *range,
 		       const struct net_device *out)
@@ -78,14 +82,78 @@ static struct notifier_block masq_dev_notifier = {
 	.notifier_call	= masq_device_event,
 };
 
+struct masq_dev_work {
+	struct work_struct work;
+	struct net *net;
+	int ifindex;
+};
+
+static void iterate_cleanup_work(struct work_struct *work)
+{
+	struct masq_dev_work *w;
+	long index;
+
+	w = container_of(work, struct masq_dev_work, work);
+
+	index = w->ifindex;
+	nf_ct_iterate_cleanup(w->net, device_cmp, (void *)index, 0, 0);
+
+	put_net(w->net);
+	kfree(w);
+	atomic_dec(&v6_worker_count);
+	module_put(THIS_MODULE);
+}
+
+/* ipv6 inet notifier is an atomic notifier, i.e. we cannot
+ * schedule.
+ *
+ * Unfortunately, nf_ct_iterate_cleanup can run for a long
+ * time if there are lots of conntracks and the system
+ * handles high softirq load, so it frequently calls cond_resched
+ * while iterating the conntrack table.
+ *
+ * So we defer nf_ct_iterate_cleanup walk to the system workqueue.
+ *
+ * As we can have 'a lot' of inet_events (depending on amount
+ * of ipv6 addresses being deleted), we also need to add an upper
+ * limit to the number of queued work items.
+ */
 static int masq_inet_event(struct notifier_block *this,
 			   unsigned long event, void *ptr)
 {
 	struct inet6_ifaddr *ifa = ptr;
-	struct netdev_notifier_info info;
+	const struct net_device *dev;
+	struct masq_dev_work *w;
+	struct net *net;
+
+	if (event != NETDEV_DOWN ||
+	    atomic_read(&v6_worker_count) >= MAX_WORK_COUNT)
+		return NOTIFY_DONE;
+
+	dev = ifa->idev->dev;
+	net = maybe_get_net(dev_net(dev));
+	if (!net)
+		return NOTIFY_DONE;
 
-	netdev_notifier_info_init(&info, ifa->idev->dev);
-	return masq_device_event(this, event, &info);
+	if (!try_module_get(THIS_MODULE))
+		goto err_module;
+
+	w = kmalloc(sizeof(*w), GFP_ATOMIC);
+	if (w) {
+		atomic_inc(&v6_worker_count);
+
+		INIT_WORK(&w->work, iterate_cleanup_work);
+		w->ifindex = dev->ifindex;
+		w->net = net;
+		schedule_work(&w->work);
+
+		return NOTIFY_DONE;
+	}
+
+	module_put(THIS_MODULE);
+err_module:
+	put_net(net);
+	return NOTIFY_DONE;
 }
 
 static struct notifier_block masq_inet_notifier = {
diff --git a/net/netfilter/nf_conntrack_core.c b/net/netfilter/nf_conntrack_core.c
index 3cb3cb8..25f1696 100644
--- a/net/netfilter/nf_conntrack_core.c
+++ b/net/netfilter/nf_conntrack_core.c
@@ -1394,6 +1394,7 @@ get_next_corpse(struct net *net, int (*iter)(struct nf_conn *i, void *data),
 		}
 		spin_unlock(lockp);
 		local_bh_enable();
+		cond_resched();
 	}
 
 	for_each_possible_cpu(cpu) {
@@ -1406,6 +1407,7 @@ get_next_corpse(struct net *net, int (*iter)(struct nf_conn *i, void *data),
 				set_bit(IPS_DYING_BIT, &ct->status);
 		}
 		spin_unlock_bh(&pcpu->lock);
+		cond_resched();
 	}
 	return NULL;
 found:
@@ -1422,6 +1424,8 @@ void nf_ct_iterate_cleanup(struct net *net,
 	struct nf_conn *ct;
 	unsigned int bucket = 0;
 
+	might_sleep();
+
 	while ((ct = get_next_corpse(net, iter, data, &bucket)) != NULL) {
 		/* Time to push up daises... */
 		if (del_timer(&ct->timeout))
@@ -1430,6 +1434,7 @@ void nf_ct_iterate_cleanup(struct net *net,
 		/* ... else the timer will get him soon. */
 		nf_ct_put(ct);
+		cond_resched();
 	}
 }
 EXPORT_SYMBOL_GPL(nf_ct_iterate_cleanup);
--
2.4.10
* Re: [PATCH nf] netfilter: conntrack: resched in nf_ct_iterate_cleanup
From: Pablo Neira Ayuso @ 2016-02-01 17:38 UTC
To: Florian Westphal; +Cc: netfilter-devel
On Wed, Jan 20, 2016 at 11:16:43AM +0100, Florian Westphal wrote:
> Ulrich reports a soft lockup with the following (shortened) callchain:
>
> NMI watchdog: BUG: soft lockup - CPU#1 stuck for 22s!
> __netif_receive_skb_core+0x6e4/0x774
> process_backlog+0x94/0x160
> net_rx_action+0x88/0x178
> call_do_softirq+0x24/0x3c
> do_softirq+0x54/0x6c
> __local_bh_enable_ip+0x7c/0xbc
> nf_ct_iterate_cleanup+0x11c/0x22c [nf_conntrack]
> masq_inet_event+0x20/0x30 [nf_nat_masquerade_ipv6]
> atomic_notifier_call_chain+0x1c/0x2c
> ipv6_del_addr+0x1bc/0x220 [ipv6]
>
> The problem is that nf_ct_iterate_cleanup can run for a very long time
> since it can be interrupted by softirq processing.
> Moreover, atomic_notifier_call_chain runs with the rcu read lock held.
>
> So let's call cond_resched() in nf_ct_iterate_cleanup and defer
> the call to a work queue for the atomic_notifier_call_chain case.
>
> We also need another cond_resched in get_next_corpse, since we
> have to deal with iter() always returning false; in that case
> get_next_corpse will walk the entire conntrack table.
Applied, thanks.
> Reported-by: Ulrich Weber <uw@ocedo.com>
> Tested-by: Ulrich Weber <uw@ocedo.com>
> Signed-off-by: Florian Westphal <fw@strlen.de>
> ---
> I had a look at converting the ipv6 notifier to a blocking one,
> but I found this too difficult (RTNL held? How to defer notifier calls
> from the packet path?). Just doing it for masquerade is a lot simpler:
> - we only care about NETDEV_DOWN, so no extra work is needed in most cases
> - we can just ignore the notification if too much work is already queued
Perhaps we could add a deferred notifier chain variant that allows
blocking, i.e. move this code to core infrastructure; just an idea.
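
Something like this, maybe (pure sketch, invented API):

#include <linux/notifier.h>
#include <linux/slab.h>
#include <linux/workqueue.h>

struct notifier_defer_event {
	struct work_struct work;
	struct blocking_notifier_head *nh;
	unsigned long event;
	void *ptr;	/* caller must guarantee lifetime until work runs */
};

static void notifier_defer_fn(struct work_struct *work)
{
	struct notifier_defer_event *e;

	e = container_of(work, struct notifier_defer_event, work);
	blocking_notifier_call_chain(e->nh, e->event, e->ptr);
	kfree(e);
}

/* callable from atomic context; the chain then runs later in
 * process context where callees may sleep */
int deferred_notifier_call_chain(struct blocking_notifier_head *nh,
				 unsigned long event, void *ptr)
{
	struct notifier_defer_event *e;

	e = kmalloc(sizeof(*e), GFP_ATOMIC);
	if (!e)
		return -ENOMEM;

	INIT_WORK(&e->work, notifier_defer_fn);
	e->nh = nh;
	e->event = event;
	e->ptr = ptr;
	schedule_work(&e->work);
	return 0;
}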