linux-mm.kvack.org archive mirror
* memcg Can't switch between v1 and v2 because css->refcnt not released
@ 2017-08-09  7:06 wang Yu
  2017-08-10  7:10 ` Michal Hocko
  0 siblings, 1 reply; 9+ messages in thread
From: wang Yu @ 2017-08-09  7:06 UTC (permalink / raw)
  To: Johannes Weiner, Michal Hocko, Tejun Heo, cgroups, linux-mm

Hello Johannes, Michal, and Tejun:

  I am using memcg v1, but for some reason I want to switch to memcg v2.
However, I can't. Here are my steps:
#cat /proc/cgroups
#subsys_name hierarchy num_cgroups enabled
 memory 5 1 1
#cd /sys/fs/cgroup/memory
#mkdir a
#echo 0 > a/cgroup.procs
#sleep 1
#echo 0 > cgroup.procs
#cat /proc/cgroups
#subsys_name hierarchy num_cgroups enabled
 memory 5 2 1
num_cgroups does not go back to "1",
which is what prevents switching the memory controller to cgroup v2.
#cd ..
#umount memory
umount: /sys/fs/cgroup/memory: target is busy.
        (In some cases useful info about processes that use
         the device is found by lsof(8) or fuser(1))
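
For reference, the switch I am trying to perform looks roughly like the
following. This is only a sketch of my intent, not a recommendation: the
/sys/fs/cgroup/unified mount point is an arbitrary choice of mine, and it
assumes nothing else (other mounts, leftover cgroups, a process still in
the hierarchy) keeps the v1 memory hierarchy pinned:
#umount /sys/fs/cgroup/memory                   # this is the step that fails with "target is busy" above
#mount -t cgroup2 none /sys/fs/cgroup/unified   # mount the unified (v2) hierarchy
#cat /sys/fs/cgroup/unified/cgroup.controllers  # I expect "memory" to be listed here once v1 no longer uses it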

  I have tracked down the root cause: commit b2052564e66d ("mm:
memcontrol: continue cache reclaim from offlined groups") from Johannes
Weiner removed the mem_cgroup_reparent_charges() call from
mem_cgroup_css_offline(), so css->refcnt does not drop to "0",
css_release is not called when the cgroup is removed with rmdir, and
num_cgroups is not released.
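  A rough way I use to check that it is really the leftover charges that
keep the css pinned is to force them out and watch num_cgroups. This is
only a sketch of the idea, not a fix; it assumes the remaining references
come from reclaimable page cache, and the cgroup name "a" is the one from
the steps above:
#echo 0 > /sys/fs/cgroup/memory/a/memory.force_empty   # v1 knob: reclaim whatever is still charged to "a"
#rmdir /sys/fs/cgroup/memory/a
#cat /proc/cgroups                                      # I expect num_cgroups to drop back to 1 once css_release runs
#echo 3 > /proc/sys/vm/drop_caches                      # after rmdir, dropping clean caches may also release a zombie memcg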
  So I want to ask: is it intended that switching between v1 and v2 is
not possible in this situation?

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org.  For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: email@kvack.org

^ permalink raw reply	[flat|nested] 9+ messages in thread

end of thread, other threads:[~2017-08-21 13:08 UTC | newest]

Thread overview: 9+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2017-08-09  7:06 memcg Can't switch between v1 and v2 because css->refcnt not released wang Yu
2017-08-10  7:10 ` Michal Hocko
2017-08-10  8:10   ` wang Yu
2017-08-10  8:19     ` Michal Hocko
2017-08-10  8:26       ` wang Yu
2017-08-10  9:28         ` wang Yu
     [not found]           ` <CADK2Bfwxp3gSDrYXAxhgoYne2T=1_RyPXqQt_cGHz86dfWgsqg@mail.gmail.com>
2017-08-10 10:34             ` Michal Hocko
2017-08-21 13:08               ` Johannes Weiner
  -- strict thread matches above, loose matches on Subject: below --
2017-08-09  6:44 喻望
