linux-ext4.vger.kernel.org archive mirror
From: Minchan Kim <minchan@kernel.org>
To: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: Theodore Ts'o <tytso@mit.edu>, Gioh Kim <gioh.kim@lge.com>,
	Andrew Morton <akpm@linux-foundation.org>,
	jack@suse.cz, linux-fsdevel@vger.kernel.org,
	linux-ext4@vger.kernel.org, linux-kernel@vger.kernel.org,
	viro@zeniv.linux.org.uk, paulmck@linux.vnet.ibm.com,
	peterz@infradead.org, adilger.kernel@dilger.ca,
	gunho.lee@lge.com, Mel Gorman <mgorman@suse.de>,
	Marek Szyprowski <m.szyprowski@samsung.com>,
	Michal Nazarewicz <mina86@mina86.com>
Subject: Re: [PATCHv4 0/3] new APIs to allocate buffer-cache with user specific flag
Date: Mon, 15 Sep 2014 15:37:54 +0900	[thread overview]
Message-ID: <20140915063754.GK2160@bbox> (raw)
In-Reply-To: <20140915011018.GA2676@js1304-P5Q-DELUXE>

On Mon, Sep 15, 2014 at 10:10:18AM +0900, Joonsoo Kim wrote:
> On Fri, Sep 05, 2014 at 10:14:16AM -0400, Theodore Ts'o wrote:
> > On Fri, Sep 05, 2014 at 04:32:48PM +0900, Joonsoo Kim wrote:
> > > I also tested another approach, allocating free pages from the CMA
> > > reserved region as late as possible, which is similar to your
> > > suggestion, and it doesn't work well. When reclaim starts, too many
> > > pages are reclaimed at once, because the LRU list holds long runs of
> > > pages from the CMA region and freeing those doesn't help kswapd.
> > > kswapd stops reclaiming once the free-page count recovers, but CMA
> > > pages aren't counted as free pages for kswapd because they can't be
> > > used for unmovable or reclaimable allocations. So kswapd reclaims far
> > > more pages at once than necessary.
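
A minimal standalone model of that accounting may make the effect
clearer; it is not kernel code and the numbers are made up, but the
discount of free CMA pages is roughly what the watermark check in
mm/page_alloc.c (__zone_watermark_ok()) applies for callers that cannot
use CMA pageblocks:

/*
 * Toy model, not kernel code: why kswapd over-reclaims when most of the
 * pages it frees are CMA pages that do not count toward the watermark
 * seen by unmovable/reclaimable allocations.
 */
#include <stdbool.h>
#include <stdio.h>

struct zone_model {
	long nr_free;       /* all free pages, including CMA */
	long nr_free_cma;   /* free pages sitting in CMA pageblocks */
	long min_watermark; /* target kswapd reclaims toward */
};

/* Watermark check as seen by an allocation that cannot use CMA pages. */
static bool watermark_ok_no_cma(const struct zone_model *z)
{
	return z->nr_free - z->nr_free_cma >= z->min_watermark;
}

int main(void)
{
	struct zone_model z = { .nr_free = 500, .nr_free_cma = 0,
				.min_watermark = 1000 };
	long usable0 = z.nr_free - z.nr_free_cma;
	long reclaimed = 0;

	/*
	 * Each pass frees 32 pages from the LRU.  With long runs of CMA
	 * pages on the LRU, assume 30 of the 32 are CMA pages, so only 2
	 * raise the free count that the check above actually looks at.
	 */
	while (!watermark_ok_no_cma(&z)) {
		z.nr_free += 32;
		z.nr_free_cma += 30;
		reclaimed += 32;
	}
	printf("reclaimed %ld pages to gain %ld usable free pages\n",
	       reclaimed, z.nr_free - z.nr_free_cma - usable0);
	return 0;
}
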
> > 
> > Have you considered putting the pages in a CMA region in a separate
> > zone?  After all, that's what we originally did with brain-damaged
> > hardware that could only DMA into the low 16M of memory.  We just
> > reserved a separate zone for that.  That way, we could do
> > zone-directed reclaim and free pages in that zone, if that was what
> > was actually needed.
> 
> Sorry for the long delay; it was a long holiday.
> 
> No, I haven't considered it. Placing the pages of a CMA region into a
> separate zone sounds like a good idea. Perhaps we could remove one of the
> migratetypes, MIGRATE_CMA, this way, and it would make a good long-term
> architecture for CMA.

IIRC, Mel suggested two options: the ZONE_MOVABLE zone and MIGRATE_ISOLATE.
The movable zone option is clearly the better solution if we consider the
interaction with reclaim, but one problem was that CMA has a specific
requirement for memory in the middle of an existing zone.
And his concern has come true.
Look at https://lkml.org/lkml/2014/5/28/64.
It starts adding more stuff to the allocator's fast path to work around
the problem. :(

Let's rethink. We already have logic to handle overlapping nodes/zones in
compaction.c, so isn't it possible to have discrete address ranges in a
movable zone? If so, the movable zone could include the specific ranges
that these horrible devices want, which would make the allocation/reclaim
logic simpler than it is now and push the overhead into the slow path
(i.e., linear pfn scanning of a zone, as in compaction).
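
For illustration only, here is a standalone toy (not kernel code) of
such a slow-path scan over a movable zone assembled from discrete
ranges; in the kernel the tolerance would boil down to something like a
pfn_valid() / page_zone(page) == zone check inside the pfn walker:

/*
 * Toy model, not kernel code: a linear pfn scan over a "movable zone"
 * built from discrete ranges, e.g. CMA ranges reserved in the middle of
 * what is otherwise another zone.  The holes are skipped here, in the
 * slow path, so the allocator fast path pays nothing for them.
 */
#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

struct pfn_range { unsigned long start, end; };  /* [start, end) */

/* The discrete pieces this movable zone actually owns. */
static const struct pfn_range movable_ranges[] = {
	{ 0x1000, 0x3000 },
	{ 0x6000, 0x9000 },
};

static bool pfn_in_movable_zone(unsigned long pfn)
{
	for (size_t i = 0; i < sizeof(movable_ranges) / sizeof(movable_ranges[0]); i++)
		if (pfn >= movable_ranges[i].start && pfn < movable_ranges[i].end)
			return true;
	return false;
}

int main(void)
{
	unsigned long zone_start = 0x1000, zone_end = 0x9000;
	unsigned long scanned = 0, skipped = 0;

	for (unsigned long pfn = zone_start; pfn < zone_end; pfn++) {
		if (!pfn_in_movable_zone(pfn)) {
			skipped++;   /* hole: this pfn belongs to another zone */
			continue;
		}
		scanned++;           /* inspect/isolate the page here */
	}
	printf("scanned %lu pfns, skipped %lu pfns in holes\n",
	       scanned, skipped);
	return 0;
}
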

> 
> I don't know the exact history or the reason why CMA was implemented in
> its current form. Ccing some experts in this area.
> 
> Thanks.

-- 
Kind regards,
Minchan Kim


Thread overview: 12+ messages
     [not found] <1409815781-28011-1-git-send-email-gioh.kim@lge.com>
2014-09-04 22:16 ` [PATCHv4 0/3] new APIs to allocate buffer-cache with user specific flag Andrew Morton
2014-09-05  0:37   ` Gioh Kim
2014-09-05  1:14     ` Theodore Ts'o
2014-09-05  1:48       ` Joonsoo Kim
2014-09-05  3:17         ` Theodore Ts'o
2014-09-05  7:32           ` Joonsoo Kim
2014-09-05 14:14             ` Theodore Ts'o
2014-09-15  1:10               ` Joonsoo Kim
2014-09-15  6:37                 ` Minchan Kim [this message]
     [not found] ` <1409815781-28011-2-git-send-email-gioh.kim@lge.com>
2014-09-05  2:37   ` [PATCHv4 1/3] fs.c: support buffer cache allocations with gfp modifiers Theodore Ts'o
     [not found] ` <1409815781-28011-3-git-send-email-gioh.kim@lge.com>
2014-09-05  2:37   ` [PATCHv4 2/3] ext4: use non-movable memory for the ext4 superblock Theodore Ts'o
     [not found] ` <1409815781-28011-4-git-send-email-gioh.kim@lge.com>
2014-09-05  2:37   ` [PATCHv4 3/3] jbd/jbd2: use non-movable memory for the jbd superblock Theodore Ts'o
