dri-devel.lists.freedesktop.org archive mirror
* [PATCH 0/5] drm: fix resources leak in error handle code path
@ 2013-10-30 23:04 Wang YanQing
  2013-11-06  3:45 ` Dave Airlie
  0 siblings, 1 reply; 3+ messages in thread
From: Wang YanQing @ 2013-10-30 23:04 UTC (permalink / raw)
  To: airlied; +Cc: dri-devel, linux-kernel

This patch series fixes resource leaks (a vmalloc leak, among
others) that I hit in DRM's error handling paths.

I ran into the issue below because DRM does not release its
resources (including about 40K of vmalloc space per failed module
insert) in the error path, and a buggy nvidia driver kept retrying
module insertion after failing to probe a device already claimed by
the built-in nouveau driver, until DRM had leaked all of the vmalloc
space:

[ 2075.508077] vmap allocation for size 9637888 failed: use vmalloc=<size> to increase size.
[ 2075.508083] vmalloc: allocation failure: 9633737 bytes
[ 2075.508085] modprobe: page allocation failure: order:0, mode:0xd2
[ 2075.508089] CPU: 0 PID: 11113 Comm: modprobe Tainted: P           O 3.10.16+ #13
[ 2075.508091] Hardware name: Acer            Aspire 4741                    /Aspire 4741                    , BIOS V1.04           03/02/2010
[ 2075.508093]  c17295c4 f5283eac c15df443 f5283edc c10b3311 c1727ec4 f2c15ac0 00000000
[ 2075.508097]  000000d2 c17295c4 f5283ecc f5283ef0 000000d2 0092ffc9 ffffffff f5283f0c
[ 2075.508101]  c10d86e7 000000d2 00000000 c17295c4 0092ffc9 c1077408 f56979c0 f5283f28
[ 2075.508106] Call Trace:
[ 2075.508113]  [<c15df443>] dump_stack+0x16/0x1b
[ 2075.508118]  [<c10b3311>] warn_alloc_failed+0xa1/0x100
[ 2075.508123]  [<c10d86e7>] __vmalloc_node_range+0x177/0x1e0
[ 2075.508128]  [<c1077408>] ? SyS_init_module+0x78/0xc0
[ 2075.508131]  [<c10d87b0>] __vmalloc_node+0x60/0x70
[ 2075.508134]  [<c1077408>] ? SyS_init_module+0x78/0xc0
[ 2075.508137]  [<c10d88c8>] vmalloc+0x38/0x40
[ 2075.508140]  [<c1077408>] ? SyS_init_module+0x78/0xc0
[ 2075.508142]  [<c1077408>] SyS_init_module+0x78/0xc0
[ 2075.508147]  [<c15e291e>] sysenter_do_call+0x12/0x26
[ 2075.508149] Mem-Info:
[ 2075.508151] DMA per-cpu:
[ 2075.508153] CPU    0: hi:    0, btch:   1 usd:   0
[ 2075.508154] CPU    1: hi:    0, btch:   1 usd:   0
[ 2075.508156] CPU    2: hi:    0, btch:   1 usd:   0
[ 2075.508157] CPU    3: hi:    0, btch:   1 usd:   0
[ 2075.508158] Normal per-cpu:
[ 2075.508160] CPU    0: hi:  186, btch:  31 usd: 125
[ 2075.508162] CPU    1: hi:  186, btch:  31 usd: 138
[ 2075.508163] CPU    2: hi:  186, btch:  31 usd: 155
[ 2075.508165] CPU    3: hi:  186, btch:  31 usd: 144
[ 2075.508166] HighMem per-cpu:
[ 2075.508167] CPU    0: hi:  186, btch:  31 usd: 132
[ 2075.508169] CPU    1: hi:  186, btch:  31 usd: 170
[ 2075.508171] CPU    2: hi:  186, btch:  31 usd: 155
[ 2075.508172] CPU    3: hi:  186, btch:  31 usd: 153
[ 2075.508177] active_anon:2889 inactive_anon:27 isolated_anon:0
[ 2075.508177]  active_file:66897 inactive_file:117009 isolated_file:0
[ 2075.508177]  unevictable:0 dirty:47 writeback:0 unstable:0
[ 2075.508177]  free:292466 slab_reclaimable:4051 slab_unreclaimable:2244
[ 2075.508177]  mapped:2241 shmem:206 pagetables:189 bounce:0
[ 2075.508177]  free_cma:0
[ 2075.508185] DMA free:15896kB min:64kB low:80kB high:96kB active_anon:0kB inactive_anon:0kB active_file:0kB inactive_file:0kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:15980kB managed:15904kB mlocked:0kB dirty:0kB writeback:0kB mapped:0kB shmem:0kB slab_reclaimable:0kB slab_unreclaimable:8kB kernel_stack:0kB pagetables:0kB unstable:0kB bounce:0kB free_cma:0kB writeback_tmp:0kB pages_scanned:0 all_unreclaimable? no
[ 2075.508186] lowmem_reserve[]: 0 842 1923 1923
[ 2075.508193] Normal free:803896kB min:3680kB low:4600kB high:5520kB active_anon:0kB inactive_anon:0kB active_file:14052kB inactive_file:18684kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:897016kB managed:863160kB mlocked:0kB dirty:72kB writeback:0kB mapped:4kB shmem:0kB slab_reclaimable:16204kB slab_unreclaimable:8968kB kernel_stack:1272kB pagetables:756kB unstable:0kB bounce:0kB free_cma:0kB writeback_tmp:0kB pages_scanned:0 all_unreclaimable? no
[ 2075.508195] lowmem_reserve[]: 0 0 8647 8647
[ 2075.508202] HighMem free:350072kB min:512kB low:1688kB high:2868kB active_anon:11556kB inactive_anon:108kB active_file:253536kB inactive_file:449352kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:1106844kB managed:1106844kB mlocked:0kB dirty:116kB writeback:0kB mapped:8960kB shmem:824kB slab_reclaimable:0kB slab_unreclaimable:0kB kernel_stack:0kB pagetables:0kB unstable:0kB bounce:0kB free_cma:0kB writeback_tmp:0kB pages_scanned:0 all_unreclaimable? no
[ 2075.508203] lowmem_reserve[]: 0 0 0 0
[ 2075.508206] DMA: 0*4kB 1*8kB (U) 1*16kB (U) 0*32kB 2*64kB (U) 1*128kB (U) 1*256kB (U) 0*512kB 1*1024kB (U) 1*2048kB (R) 3*4096kB (M) = 15896kB
[ 2075.508216] Normal: 50*4kB (UM) 180*8kB (UM) 159*16kB (UEM) 65*32kB (UEM) 29*64kB (UEM) 1*128kB (M) 0*256kB 0*512kB 1*1024kB (M) 2*2048kB (UR) 193*4096kB (MR) = 803896kB
[ 2075.508228] HighMem: 1270*4kB (U) 942*8kB (UM) 467*16kB (UM) 1012*32kB (UM) 632*64kB (UM) 199*128kB (UM) 61*256kB (UM) 24*512kB (M) 5*1024kB (M) 3*2048kB (UM) 47*4096kB (MR) = 350072kB
[ 2075.508240] Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=2048kB
[ 2075.508242] 184112 total pagecache pages
[ 2075.508243] 0 pages in swap cache
[ 2075.508245] Swap cache stats: add 0, delete 0, find 0/0
[ 2075.508246] Free swap  = 4551676kB
[ 2075.508247] Total swap = 4551676kB
[ 2075.512956] 505855 pages RAM
[ 2075.512959] 277506 pages HighMem
[ 2075.512961] 7415 pages reserved
[ 2075.512962] 1086176 pages shared
[ 2075.512964] 172994 pages non-shared

The reason is that the resources allocated in drm_fill_in_dev are
not freed when dev->driver->load returns an error. We currently rely
on drm_put_dev, called from module_exit, to free the resources
allocated in drm_fill_in_dev, but in the error path of
drm_get_[pci|platform|usb]_dev the line

"list_add_tail(&dev->driver_item, &driver->device_list);"

is skipped, so drm_[pci|platform|usb]_exit will not free those
resources.

This series fixes that, and also adjusts and improves the error
handling paths of drm_fill_in_dev and drm_get_[pci|platform|usb]_dev.
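The leaky pattern described above can be sketched in a small,
self-contained userspace model. All names here are hypothetical
stand-ins for the real DRM code (drm_fill_in_dev, drm_put_dev,
driver->device_list); this is an illustration of the bug class, not
the actual kernel code:

```c
#include <stdlib.h>

struct fake_dev {
	void *resources;          /* stands in for what drm_fill_in_dev allocates */
	struct fake_dev *next;    /* stands in for dev->driver_item */
};

static struct fake_dev *device_list;  /* stands in for driver->device_list */
static int live_allocations;          /* bookkeeping for the illustration */

static struct fake_dev *fill_in_dev(void)
{
	struct fake_dev *dev = calloc(1, sizeof(*dev));
	dev->resources = malloc(64);
	live_allocations++;
	return dev;
}

static void cleanup_in_dev(struct fake_dev *dev)
{
	/* models what patch 1's drm_cleanup_in_dev would do */
	free(dev->resources);
	free(dev);
	live_allocations--;
}

/* Buggy pattern: on load failure, return without freeing or listing,
 * so the module-exit path (which only walks device_list) never sees
 * the allocation. */
static int get_dev_buggy(int load_ok)
{
	struct fake_dev *dev = fill_in_dev();
	if (!load_ok)
		return -1;            /* dev->resources leaks here */
	dev->next = device_list;  /* only reached on success */
	device_list = dev;
	return 0;
}

/* Fixed pattern: unwind the setup allocations in the error path. */
static int get_dev_fixed(int load_ok)
{
	struct fake_dev *dev = fill_in_dev();
	if (!load_ok) {
		cleanup_in_dev(dev);
		return -1;
	}
	dev->next = device_list;
	device_list = dev;
	return 0;
}
```

Each failed call to get_dev_buggy strands one allocation, which is
how repeated probe failures can exhaust a finite pool such as the
vmalloc space in the log above.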

It includes 5 patches:

1:drm:drm_stub: add new function drm_cleanup_in_dev
2:drm:drm_pci: fix resource leak in drm_get_pci_dev
3:drm:drm_platform: fix resource leak in drm_get_platform_dev
4:drm:drm_usb: fix resource leak in drm_get_usb_dev
5:drm:drm_stub: rewrite error handle code for drm_fill_in_dev

Thanks.


* Re: [PATCH 0/5] drm: fix resources leak in error handle code path
  2013-10-30 23:04 [PATCH 0/5] drm: fix resources leak in error handle code path Wang YanQing
@ 2013-11-06  3:45 ` Dave Airlie
  2013-11-06 10:16   ` Daniel Vetter
  0 siblings, 1 reply; 3+ messages in thread
From: Dave Airlie @ 2013-11-06  3:45 UTC (permalink / raw)
  To: Wang YanQing, Dave Airlie, dri-devel, LKML, Daniel Vetter

On Thu, Oct 31, 2013 at 9:04 AM, Wang YanQing <udknight@gmail.com> wrote:
> This patch series fixes resource leaks (a vmalloc leak, among
> others) that I hit in DRM's error handling paths.

Daniel, this seems to cross over a bit with your cleanups.

I also am not sure that the drm_cleanup_in_dev isn't going to cause
issues as it reorders the unload operation with respect to other
operations, and I wonder if legacy drivers might start crashing in
this situation.

Dave.


* Re: [PATCH 0/5] drm: fix resources leak in error handle code path
  2013-11-06  3:45 ` Dave Airlie
@ 2013-11-06 10:16   ` Daniel Vetter
  0 siblings, 0 replies; 3+ messages in thread
From: Daniel Vetter @ 2013-11-06 10:16 UTC (permalink / raw)
  To: Dave Airlie; +Cc: Wang YanQing, Dave Airlie, dri-devel, LKML, Daniel Vetter

On Wed, Nov 06, 2013 at 01:45:31PM +1000, Dave Airlie wrote:
> On Thu, Oct 31, 2013 at 9:04 AM, Wang YanQing <udknight@gmail.com> wrote:
> > This patch series fixes resource leaks (a vmalloc leak, among
> > others) that I hit in DRM's error handling paths.
> 
> Daniel this seems to cross over a bit with your cleanups,

Oops, missed this, thanks for the poke. This indeed conflicts rather
badly - my master plan will completely abolish drm_platform.c and
drm_usb.c. drm_pci.c needs to stick around for legacy drivers; I'm
not going to touch that (and imo it's also totally not worth the
pain).

All modern (non-shadow-attach) drivers should eventually move to a
driver-controlled load sequence. Once that's possible, imo the best
approach would be to use devres.c for the resource cleanups;
everything else seems to be a giant game of whack-a-mole.
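The devres idea mentioned above can be sketched in a simplified
userspace model (hypothetical names and structures; the real
implementation is the kernel's drivers/base/devres.c): every
allocation is recorded against the device, and a single release-all
call frees whatever was attached, so an error path needs only one
cleanup call no matter how far setup got.

```c
#include <stdlib.h>

struct devres {
	void *data;
	struct devres *next;
};

struct managed_dev {
	struct devres *res;       /* list of resources owned by this device */
};

static int devres_live;           /* bookkeeping for the illustration */

/* Allocate memory whose lifetime is tied to the device. */
static void *devm_alloc(struct managed_dev *dev, size_t n)
{
	struct devres *dr = malloc(sizeof(*dr));
	dr->data = malloc(n);
	dr->next = dev->res;
	dev->res = dr;
	devres_live++;
	return dr->data;
}

/* Free everything attached to the device, most recent first. */
static void devres_release_all(struct managed_dev *dev)
{
	while (dev->res) {
		struct devres *dr = dev->res;
		dev->res = dr->next;
		free(dr->data);
		free(dr);
		devres_live--;
	}
}

/* A probe routine never has to unwind step by step: whatever it
 * allocated before failing is released in one call. */
static int probe(struct managed_dev *dev, int fail_step)
{
	if (!devm_alloc(dev, 32) || fail_step == 1)
		goto err;
	if (!devm_alloc(dev, 64) || fail_step == 2)
		goto err;
	return 0;
err:
	devres_release_all(dev);  /* one call, no per-step cleanup labels */
	return -1;
}
```

This is why the whack-a-mole of hand-written error-path unwinding
goes away: the cleanup logic lives in one place instead of being
repeated (and forgotten) in every probe function.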

One thing I haven't yet figured out is how to combine the devres.c
stuff with the legacy shadow-attach mode in i915. Maybe I need to
split out ums first ...

Oh, obligatory link to my current wip stuff:

http://cgit.freedesktop.org/~danvet/drm/log/?h=drm-init-cleanup

Atm it doesn't work for tegra since I haven't finished the rebase
yet; just some trivial stuff left to fix up.

> I also am not sure that the drm_cleanup_in_dev isn't going to cause
> issues as it reorders the unload operation with respect to other
> operations, and I wonder if legacy drivers might start crashing in
> this situation.

Agreed, whacking the cleanup code for legacy drivers is imo pointless.
Better to hide this all in a dark corner somewhere and forget about it.
-Daniel
-- 
Daniel Vetter
Software Engineer, Intel Corporation
+41 (0) 79 365 57 48 - http://blog.ffwll.ch

