public inbox for linux-kernel@vger.kernel.org
From: Gioh Kim <gioh.kim@lge.com>
To: gregkh@linuxfoundation.org, john.stultz@linaro.org,
	rebecca@android.com, lauraa@codeaurora.org,
	dan.carpenter@oracle.com, minchan@kernel.org,
	iamjoonsoo.kim@lge.com
Cc: devel@driverdev.osuosl.org, linux-kernel@vger.kernel.org,
	gunho.lee@lge.com
Subject: Re: [PATCHv2 0/3] staging: ion: enable pool shrinking in page unit
Date: Wed, 29 Oct 2014 17:40:48 +0900	[thread overview]
Message-ID: <5450A810.6030108@lge.com> (raw)
In-Reply-To: <1414560960-21130-1-git-send-email-gioh.kim@lge.com>



On 2014-10-29 2:35 PM, Gioh Kim wrote:
> Hello,
> 
> Currently the pool is shrunk in block units, not page units,
> but the shrinker reports the pool size in page units,
> so the numbers are confusing.
> 
> And there is no way to control pool size and shrink pool directly.
> 
> This series has 3 patches, as follows.
> 
> 1. Patch 1/3: make the pool shrink in page units
> This patch shrinks the pool in page units.
> 
> 2. Patch 2/3: limit pool size
> This patch makes the pool size limit configurable via debugfs.
> The default limit is 0 (no limit), so
> cat /sys/kernel/debug/ion/heaps/system_limit returns 0.
> If you want to create 4 pools and limit each pool to 10MB,
> write 40MB (= 41943040 bytes) to the system_limit debugfs file
> like this:
> echo 41943040 > /sys/kernel/debug/ion/heaps/system_limit
> 
> 3. Patch 3/3: enable debugfs to shrink the pool directly
> This patch lets debugfs specify the amount to shrink.
> For instance, this shrinks all pages in every pool:
> echo 0 > /sys/kernel/debug/ion/heaps/system_shrink
> And this shrinks 300 pages from the pools:
> echo 30 > /sys/kernel/debug/ion/heaps/system_shrink
> It tries to shrink the high-order pools first, because high-order pages
> are needed more than low-order pages when the system is low on memory.
> 
> This patchset is based on linux-next-20141023.
> 
> 
> Gioh Kim (3):
>    staging: ion: shrink page-pool by page unit
>    staging: ion: limit pool size
>    staging: ion: debugfs to shrink pool
> 
>   drivers/staging/android/ion/ion.c             |   62 ++++++++++++++++---------
>   drivers/staging/android/ion/ion_page_pool.c   |   32 ++++++++-----
>   drivers/staging/android/ion/ion_system_heap.c |   20 ++++++--
>   3 files changed, 75 insertions(+), 39 deletions(-)
> 

The following is my test result.
I set the orders to 4, 3, 2, 0 for the test.
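
As a sanity check before the transcript, the cover letter's sizes can be
reproduced in shell (assuming a 4 KiB page size, which the byte totals in
the transcript imply):

```shell
PAGE_SIZE=4096                        # assumed 4 KiB pages

# 4 pools limited to 10 MB each => the value written to system_limit
limit=$((4 * 10 * 1024 * 1024))
echo "$limit"                         # 41943040

# bytes held by N pooled pages of a given order: N * 2^order * PAGE_SIZE
bytes() { echo $(( $1 * (1 << $2) * PAGE_SIZE )); }
bytes 176 4                           # 11534336, the order-4 lowmem total below
```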

# mount -t debugfs none /sys/kernel/debug

...... activate driver that calls ion-alloc ......

# cat /sys/kernel/debug/ion/heaps/system  ===================> no limit
          client              pid             size
----------------------------------------------------
----------------------------------------------------
orphaned allocations (info is from last known client):
----------------------------------------------------
  total orphaned                0
          total                 0
   deferred free                0
----------------------------------------------------
0 order 4 highmem pages in pool = 0 total
176 order 4 lowmem pages in pool = 11534336 total
0 order 3 highmem pages in pool = 0 total
0 order 3 lowmem pages in pool = 0 total
0 order 2 highmem pages in pool = 0 total
704 order 2 lowmem pages in pool = 11534336 total
0 order 0 highmem pages in pool = 0 total
2816 order 0 lowmem pages in pool = 11534336 total
# cat /sys/kernel/debug/ion/heaps/system_limit
0
# echo 41943040 > /sys/kernel/debug/ion/heaps/system_limit
# cat /sys/kernel/debug/ion/heaps/system_limit
41943040

...... activate driver that calls ion-alloc ......

# cat /sys/kernel/debug/ion/heaps/system     ====================> 10MB limit
          client              pid             size
----------------------------------------------------
----------------------------------------------------
orphaned allocations (info is from last known client):
----------------------------------------------------
  total orphaned                0
          total                 0
   deferred free                0
----------------------------------------------------
0 order 4 highmem pages in pool = 0 total
161 order 4 lowmem pages in pool = 10551296 total
0 order 3 highmem pages in pool = 0 total
0 order 3 lowmem pages in pool = 0 total
0 order 2 highmem pages in pool = 0 total
641 order 2 lowmem pages in pool = 10502144 total
0 order 0 highmem pages in pool = 0 total
2561 order 0 lowmem pages in pool = 10489856 total
# cat /sys/kernel/debug/ion/heaps/system_shrink ===============> count total pages
7701
# echo 0 > /sys/kernel/debug/ion/heaps/system_shrink  =========> shrink all pages
# cat /sys/kernel/debug/ion/heaps/system
          client              pid             size
----------------------------------------------------
----------------------------------------------------
orphaned allocations (info is from last known client):
----------------------------------------------------
  total orphaned                0
          total                 0
   deferred free                0
----------------------------------------------------
0 order 4 highmem pages in pool = 0 total
0 order 4 lowmem pages in pool = 0 total
0 order 3 highmem pages in pool = 0 total
0 order 3 lowmem pages in pool = 0 total
0 order 2 highmem pages in pool = 0 total
0 order 2 lowmem pages in pool = 0 total
0 order 0 highmem pages in pool = 0 total
0 order 0 lowmem pages in pool = 0 total
# cat /sys/kernel/debug/ion/heaps/system_shrink
0
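
The 7701 read back from system_shrink above is a page count, not bytes;
with the assumed 4 KiB page size it follows from the per-order pool counts
in the limited run (161 order-4, 641 order-2, 2561 order-0 pages):

```shell
# order-0-page equivalents: count * 2^order, summed over the pools
total=$(( 161 * (1 << 4) + 641 * (1 << 2) + 2561 ))
echo "$total"   # 7701
```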



Thread overview: 6+ messages
2014-10-29  5:35 [PATCHv2 0/3] staging: ion: enable pool shrinking in page unit Gioh Kim
2014-10-29  5:35 ` [PATCHv2 1/3] staging: ion: shrink page-pool by " Gioh Kim
2014-10-29  6:45   ` Gioh Kim
2014-10-29  5:35 ` [PATCHv2 2/3] staging: ion: limit pool size Gioh Kim
2014-10-29  5:36 ` [PATCHv2 3/3] staging: ion: debugfs to shrink pool Gioh Kim
2014-10-29  8:40 ` Gioh Kim [this message]
