* [PATCH v2 net-next] veth: Free queues on link delete
From: dsahern @ 2018-08-15 1:04 UTC
To: netdev; +Cc: davem, makita.toshiaki, David Ahern
From: David Ahern <dsahern@gmail.com>
kmemleak reported new suspected memory leaks.
$ cat /sys/kernel/debug/kmemleak
unreferenced object 0xffff8800354d5c00 (size 1024):
comm "ip", pid 836, jiffies 4294722952 (age 25.904s)
hex dump (first 32 bytes):
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
backtrace:
[<(____ptrval____)>] kmemleak_alloc+0x70/0x94
[<(____ptrval____)>] slab_post_alloc_hook+0x42/0x52
[<(____ptrval____)>] __kmalloc+0x101/0x142
[<(____ptrval____)>] kmalloc_array.constprop.20+0x1e/0x26 [veth]
[<(____ptrval____)>] veth_newlink+0x147/0x3ac [veth]
...
unreferenced object 0xffff88002e009c00 (size 1024):
comm "ip", pid 836, jiffies 4294722958 (age 25.898s)
hex dump (first 32 bytes):
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
backtrace:
[<(____ptrval____)>] kmemleak_alloc+0x70/0x94
[<(____ptrval____)>] slab_post_alloc_hook+0x42/0x52
[<(____ptrval____)>] __kmalloc+0x101/0x142
[<(____ptrval____)>] kmalloc_array.constprop.20+0x1e/0x26 [veth]
[<(____ptrval____)>] veth_newlink+0x219/0x3ac [veth]
The allocations in question are from veth_alloc_queues() for the dev and its peer.
Free the queues on a delete.
Fixes: 638264dc90227 ("veth: Support per queue XDP ring")
Signed-off-by: David Ahern <dsahern@gmail.com>
---
v2
- free peer dev queues as well
drivers/net/veth.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a/drivers/net/veth.c b/drivers/net/veth.c
index e3202af72df5..2a3ce60631ef 100644
--- a/drivers/net/veth.c
+++ b/drivers/net/veth.c
@@ -1205,6 +1205,7 @@ static void veth_dellink(struct net_device *dev, struct list_head *head)
 	struct veth_priv *priv;
 	struct net_device *peer;
 
+	veth_free_queues(dev);
 	priv = netdev_priv(dev);
 	peer = rtnl_dereference(priv->peer);
 
@@ -1216,6 +1217,7 @@ static void veth_dellink(struct net_device *dev, struct list_head *head)
 	unregister_netdevice_queue(dev, head);
 
 	if (peer) {
+		veth_free_queues(peer);
 		priv = netdev_priv(peer);
 		RCU_INIT_POINTER(priv->peer, NULL);
 		unregister_netdevice_queue(peer, head);
--
2.11.0
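For context, the helpers named in the patch manage the per-rx-queue array introduced by the Fixes commit. A rough sketch of what they do, inferred from the backtrace above (the priv->rq field and the use of num_rx_queues are assumptions, not quoted from the driver):

static int veth_alloc_queues(struct net_device *dev)
{
	struct veth_priv *priv = netdev_priv(dev);

	/* One per-queue entry per rx queue; this kcalloc() is the
	 * allocation kmemleak flags above (kcalloc ends up in
	 * kmalloc_array).
	 */
	priv->rq = kcalloc(dev->num_rx_queues, sizeof(*priv->rq), GFP_KERNEL);
	if (!priv->rq)
		return -ENOMEM;

	return 0;
}

static void veth_free_queues(struct net_device *dev)
{
	struct veth_priv *priv = netdev_priv(dev);

	/* Release the per-queue array; the patch above adds calls to
	 * this for both the device and its peer on dellink.
	 */
	kfree(priv->rq);
}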
* Re: [PATCH v2 net-next] veth: Free queues on link delete
From: Toshiaki Makita @ 2018-08-15 1:16 UTC
To: dsahern, netdev; +Cc: davem, David Ahern
On 2018/08/15 10:04, dsahern@kernel.org wrote:
> From: David Ahern <dsahern@gmail.com>
>
> kmemleak reported new suspected memory leaks.
> $ cat /sys/kernel/debug/kmemleak
> unreferenced object 0xffff8800354d5c00 (size 1024):
> comm "ip", pid 836, jiffies 4294722952 (age 25.904s)
> hex dump (first 32 bytes):
> 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
> 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
> backtrace:
> [<(____ptrval____)>] kmemleak_alloc+0x70/0x94
> [<(____ptrval____)>] slab_post_alloc_hook+0x42/0x52
> [<(____ptrval____)>] __kmalloc+0x101/0x142
> [<(____ptrval____)>] kmalloc_array.constprop.20+0x1e/0x26 [veth]
> [<(____ptrval____)>] veth_newlink+0x147/0x3ac [veth]
> ...
> unreferenced object 0xffff88002e009c00 (size 1024):
> comm "ip", pid 836, jiffies 4294722958 (age 25.898s)
> hex dump (first 32 bytes):
> 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
> 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
> backtrace:
> [<(____ptrval____)>] kmemleak_alloc+0x70/0x94
> [<(____ptrval____)>] slab_post_alloc_hook+0x42/0x52
> [<(____ptrval____)>] __kmalloc+0x101/0x142
> [<(____ptrval____)>] kmalloc_array.constprop.20+0x1e/0x26 [veth]
> [<(____ptrval____)>] veth_newlink+0x219/0x3ac [veth]
>
> The allocations in question are from veth_alloc_queues() for the dev and its peer.
>
> Free the queues on a delete.
>
> Fixes: 638264dc90227 ("veth: Support per queue XDP ring")
> Signed-off-by: David Ahern <dsahern@gmail.com>
> ---
> v2
> - free peer dev queues as well
>
> drivers/net/veth.c | 2 ++
> 1 file changed, 2 insertions(+)
>
> diff --git a/drivers/net/veth.c b/drivers/net/veth.c
> index e3202af72df5..2a3ce60631ef 100644
> --- a/drivers/net/veth.c
> +++ b/drivers/net/veth.c
> @@ -1205,6 +1205,7 @@ static void veth_dellink(struct net_device *dev, struct list_head *head)
> struct veth_priv *priv;
> struct net_device *peer;
>
> + veth_free_queues(dev);
> priv = netdev_priv(dev);
> peer = rtnl_dereference(priv->peer);
>
> @@ -1216,6 +1217,7 @@ static void veth_dellink(struct net_device *dev, struct list_head *head)
> unregister_netdevice_queue(dev, head);
>
> if (peer) {
> + veth_free_queues(peer);
> priv = netdev_priv(peer);
> RCU_INIT_POINTER(priv->peer, NULL);
> unregister_netdevice_queue(peer, head);
Hmm, on second thought these queues need to be freed after veth_close()
to make sure no packets still reference them. That means we need to free
them in .ndo_uninit() or the destructor.
(rtnl_delete_link() calls dellink() before unregister_netdevice_many(),
which calls dev_close_many() through rollback_registered_many())
Currently veth has the destructor veth_dev_free() for vstats, so we can
free the queues in that function.
To be in line with vstats, the allocation should also be moved to
veth_dev_init().
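Concretely, a minimal sketch of that arrangement (not the actual follow-up patch; the vstats calls are shown only to mirror the existing init/destructor pairing and may not match the driver exactly):

static int veth_dev_init(struct net_device *dev)
{
	int err;

	/* Existing vstats allocation kept as-is (illustrative). */
	dev->vstats = netdev_alloc_pcpu_stats(struct pcpu_vstats);
	if (!dev->vstats)
		return -ENOMEM;

	/* Move the queue allocation here from veth_newlink(). */
	err = veth_alloc_queues(dev);
	if (err) {
		free_percpu(dev->vstats);
		return err;
	}

	return 0;
}

static void veth_dev_free(struct net_device *dev)
{
	/* The destructor runs only after the device has been closed and
	 * unregistered, so no packets can still reference the queues.
	 */
	veth_free_queues(dev);
	free_percpu(dev->vstats);
}

Since veth already wires up .ndo_init = veth_dev_init and dev->priv_destructor = veth_dev_free for vstats, no extra hook-up should be needed.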
--
Toshiaki Makita
* Re: [PATCH v2 net-next] veth: Free queues on link delete
From: David Ahern @ 2018-08-15 1:29 UTC
To: Toshiaki Makita, dsahern, netdev; +Cc: davem
On 8/14/18 7:16 PM, Toshiaki Makita wrote:
> Hmm, on second thought these queues need to be freed after veth_close()
> to make sure no packets still reference them. That means we need to free
> them in .ndo_uninit() or the destructor.
> (rtnl_delete_link() calls dellink() before unregister_netdevice_many(),
> which calls dev_close_many() through rollback_registered_many())
>
> Currently veth has the destructor veth_dev_free() for vstats, so we can
> free the queues in that function.
> To be in line with vstats, the allocation should also be moved to
> veth_dev_init().
Given that, can you take care of the free in the proper location?
* Re: [PATCH v2 net-next] veth: Free queues on link delete
From: Toshiaki Makita @ 2018-08-15 1:32 UTC
To: David Ahern, dsahern, netdev; +Cc: davem
On 2018/08/15 10:29, David Ahern wrote:
> On 8/14/18 7:16 PM, Toshiaki Makita wrote:
>> Hmm, on second thought these queues need to be freed after veth_close()
>> to make sure no packets still reference them. That means we need to free
>> them in .ndo_uninit() or the destructor.
>> (rtnl_delete_link() calls dellink() before unregister_netdevice_many(),
>> which calls dev_close_many() through rollback_registered_many())
>>
>> Currently veth has the destructor veth_dev_free() for vstats, so we can
>> free the queues in that function.
>> To be in line with vstats, the allocation should also be moved to
>> veth_dev_init().
>
> Given that, can you take care of the free in the proper location?
Sure, will cook a patch.
Thanks!
--
Toshiaki Makita