linux-mm.kvack.org archive mirror
From: Minchan Kim <minchan@kernel.org>
To: Sergey Senozhatsky <sergey.senozhatsky.work@gmail.com>
Cc: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>,
	Andrew Morton <akpm@linux-foundation.org>,
	Joonsoo Kim <js1304@gmail.com>,
	linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: Re: [RFC][PATCH v2 2/3] zram: use zs_get_huge_class_size_watermark()
Date: Mon, 22 Feb 2016 13:54:58 +0900	[thread overview]
Message-ID: <20160222045458.GF27829@bbox> (raw)
In-Reply-To: <20160222035448.GB11961@swordfish>

On Mon, Feb 22, 2016 at 12:54:48PM +0900, Sergey Senozhatsky wrote:
> On (02/22/16 11:57), Minchan Kim wrote:
> [..]
> > > > Yes, I mean if we have backing storage, we could mitigate the problem
> > > > like the mentioned approach. Otherwise, we should solve it in allocator
> > > > itself and you suggested the idea and I commented first step.
> > > > What's the problem, now?
> > > 
> > > well, I didn't say I have problems.
> > > so you want a backing device that will keep only 'bad compression'
> > > objects and use zsmalloc to keep there only 'good compression' objects?
> > > IOW, no huge classes in zsmalloc at all? well, that can work out. it's
> > > a bit strange though that to solve zram-zsmalloc issues we would ask
> > > someone to create an additional device. it looks (at least for now) that
> > > we can address those issues in zram-zsmalloc entirely; w/o user
> > > intervention or a 3rd party device.
> > 
> > Agree. That's what I want. zram shouldn't be aware of allocator's
> > internal implementation. IOW, zsmalloc should handle it without
> > exposing any internal limitation.
> 
> well, at the same time zram must not dictate what to do. zram simply spoils
> zsmalloc; it does not offer guaranteed good compression, and it does not let
> zsmalloc do its job. zram has only excuses to be the way it is.
> the existing zram->zsmalloc dependency looks worse than zsmalloc->zram to me.

I don't get why you think it's a zram->zsmalloc dependency.
I already explained. Here it goes, again.

A long time ago, zram (i.e., ramzswap) could fall back to a backing
device for incompressible pages if one was present, and the threshold
was PAGE_SIZE / 2. IOW, if the compression ratio was worse than 50%,
zram passed the page through to the backing storage for memory
efficiency.
If zram had no backing storage and the compression saved less than 25%
(i.e., too little memory saving), it stored the page uncompressed to
avoid the additional *decompress* overhead.
Of course, the trade-off between memory efficiency and CPU consumption
is arguable, so we should handle it as a separate topic.
What I want to say here is that this is not a dependency between zram
and zsmalloc; it has been a zram policy for a long time.
If the policy is not good, we can fix it.
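
To make that policy concrete, here is a small standalone sketch (not
zram code) of the decision described above. The thresholds mirror the
50%/25% figures from this mail; treat them, and the helper names, as
illustrative assumptions rather than the actual zram implementation:

  /*
   * Illustration only: decide where a page goes based on how well it
   * compressed.  With a backing device, pages saving less than 50% are
   * passed through; without one, pages saving less than 25% are stored
   * uncompressed so reads skip the extra decompress.
   */
  #include <stdio.h>

  #define PAGE_SIZE 4096UL

  enum placement { STORE_COMPRESSED, STORE_UNCOMPRESSED, STORE_BACKING_DEV };

  static enum placement place_page(unsigned long clen, int have_backing_dev)
  {
          if (have_backing_dev && clen > PAGE_SIZE / 2)
                  return STORE_BACKING_DEV;       /* saves < 50% */
          if (!have_backing_dev && clen > PAGE_SIZE / 4 * 3)
                  return STORE_UNCOMPRESSED;      /* saves < 25% */
          return STORE_COMPRESSED;
  }

  int main(void)
  {
          printf("%d\n", place_page(3500, 0));    /* ~15% saving -> uncompressed */
          printf("%d\n", place_page(1800, 0));    /* ~56% saving -> compressed */
          return 0;
  }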
 
> 
> > The backing device issue is orthogonal, but my point was that it
> > could solve the issue too without exposing zsmalloc's limitation
> > to zram.
> 
> well, backing device would not reduce the amount of pages we request.
> and that's the priority issue, especially if we are talking about
> embedded system with a low free pages capability. we would just move huge
> objects from zsmalloc to backing device. other than that we would still
> request 1000 (for example) pages to store 1000 objects. it's zsmalloc's
> "page sharing" that permits us to request less than 1000 pages to store
> 1000 objects.
> 
> so yes, I agree, increasing ZS_MAX_ZSPAGE_ORDER and doing more tests is
> the first step to take.
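
As a rough, standalone illustration of the page-sharing point (this is
not zsmalloc code; the size-class geometry below is a simplification of
how mm/zsmalloc.c chooses pages per zspage, and the object size and
page limits are made-up numbers):

  /*
   * Illustration only: zsmalloc links up to max_pages pages into one
   * zspage and lets objects of a size class straddle page boundaries,
   * so N objects can need far fewer than N pages.
   */
  #include <stdio.h>

  #define PAGE_SIZE 4096UL

  /* pick the zspage size (in pages) that wastes the least space */
  static unsigned long pages_per_zspage(unsigned long obj_size,
                                        unsigned long max_pages)
  {
          unsigned long best = 1, best_waste = PAGE_SIZE % obj_size;

          for (unsigned long p = 2; p <= max_pages; p++) {
                  unsigned long waste = (p * PAGE_SIZE) % obj_size;

                  if (waste < best_waste) {
                          best_waste = waste;
                          best = p;
                  }
          }
          return best;
  }

  static void report(unsigned long obj_size, unsigned long max_pages,
                     unsigned long nr_objs)
  {
          unsigned long pages = pages_per_zspage(obj_size, max_pages);
          unsigned long objs_per_zspage = pages * PAGE_SIZE / obj_size;
          unsigned long zspages = (nr_objs + objs_per_zspage - 1) / objs_per_zspage;

          printf("obj=%lu max=%lu: %lu objs/zspage, %lu objs -> %lu pages\n",
                 obj_size, max_pages, objs_per_zspage, nr_objs,
                 zspages * pages);
  }

  int main(void)
  {
          report(3600, 4, 1000);  /* 4-page cap: one object per page    */
          report(3600, 8, 1000);  /* 8-page cap: much denser packing    */
          return 0;
  }

With these made-up 3600-byte objects, a 4-page cap leaves each object
on its own page (1000 objects -> 1000 pages), while an 8-page zspage
packs 9 objects and brings 1000 objects down to roughly 896 pages,
which is the kind of density win being discussed.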
> 
> > Let me summarize my points here.
> > 
> > Let's make zsmalloc smarter to reduce wasted space. One option is
> > dynamic page creation, which I agreed to.
> >
> > Before adding that feature, we should test how much bigger the memory
> > footprint becomes without it if we increase ZS_MAX_ZSPAGE_ORDER.
> > If the difference is not big, we could simply go with your patch
> > without adding more complex stuff (i.e., dynamic page creation).
> 
> yes, agree. alloc_zspage()/init_zspage() and friends must be the last
> thing to touch, and only if an increased ZS_MAX_ZSPAGE_ORDER turns out
> not to be good enough.
> 
> > Please check max_used_pages rather than mem_used_total to see the
> > peak memory footprint, and test a very fragmented scenario (creating
> > files and then freeing parts of them) rather than just a full copy.
> 
> sure, more tests will follow.
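
(For reference, a minimal userspace sketch for sampling those counters.
The sysfs attribute paths below, mem_used_total and mem_used_max under
/sys/block/zram0/, are my assumption of the zram interface of this
period, with mem_used_max presumably backed by the max_used_pages
counter mentioned above and reported in bytes:)

  /* Print zram's current and peak memory usage from sysfs. */
  #include <stdio.h>

  static void show(const char *path)
  {
          char buf[64];
          FILE *f = fopen(path, "r");

          if (!f) {
                  perror(path);
                  return;
          }
          if (fgets(buf, sizeof(buf), f))
                  printf("%s: %s", path, buf);
          fclose(f);
  }

  int main(void)
  {
          show("/sys/block/zram0/mem_used_total");
          show("/sys/block/zram0/mem_used_max");
          return 0;
  }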

Thanks.



Thread overview: 26+ messages
2016-02-21 13:27 [RFC][PATCH v2 0/3] mm/zsmalloc: increase objects density and reduce memory wastage Sergey Senozhatsky
2016-02-21 13:27 ` [RFC][PATCH v2 1/3] mm/zsmalloc: introduce zs_get_huge_class_size_watermark() Sergey Senozhatsky
2016-02-21 13:27 ` [RFC][PATCH v2 2/3] zram: use zs_get_huge_class_size_watermark() Sergey Senozhatsky
2016-02-22  0:04   ` Minchan Kim
2016-02-22  0:40     ` Sergey Senozhatsky
2016-02-22  1:27       ` Minchan Kim
2016-02-22  1:59         ` Sergey Senozhatsky
2016-02-22  2:05           ` Sergey Senozhatsky
2016-02-22  2:57           ` Minchan Kim
2016-02-22  3:54             ` Sergey Senozhatsky
2016-02-22  4:54               ` Minchan Kim [this message]
2016-02-22  5:05                 ` Sergey Senozhatsky
2016-02-21 13:27 ` [RFC][PATCH v2 3/3] mm/zsmalloc: increase ZS_MAX_PAGES_PER_ZSPAGE Sergey Senozhatsky
2016-02-22  0:25   ` Minchan Kim
2016-02-22  0:47     ` Sergey Senozhatsky
2016-02-22  1:34       ` Minchan Kim
2016-02-22  2:01         ` Sergey Senozhatsky
2016-02-22  2:34           ` Minchan Kim
2016-02-22  3:59             ` Sergey Senozhatsky
2016-02-22  4:41               ` Minchan Kim
2016-02-22 10:43                 ` Sergey Senozhatsky
2016-02-23  8:25                   ` Minchan Kim
2016-02-23 10:35                     ` Sergey Senozhatsky
2016-02-23 16:05                       ` Minchan Kim
2016-02-27  6:31                         ` Sergey Senozhatsky
2016-02-22  2:24         ` Sergey Senozhatsky

Reply instructions:

You may reply publicly to this message via plain-text email
using any one of the following methods:

* Save the following mbox file, import it into your mail client,
  and reply-to-all from there: mbox

  Avoid top-posting and favor interleaved quoting:
  https://en.wikipedia.org/wiki/Posting_style#Interleaved_style

* Reply using the --to, --cc, and --in-reply-to
  switches of git-send-email(1):

  git send-email \
    --in-reply-to=20160222045458.GF27829@bbox \
    --to=minchan@kernel.org \
    --cc=akpm@linux-foundation.org \
    --cc=js1304@gmail.com \
    --cc=linux-kernel@vger.kernel.org \
    --cc=linux-mm@kvack.org \
    --cc=sergey.senozhatsky.work@gmail.com \
    --cc=sergey.senozhatsky@gmail.com \
    /path/to/YOUR_REPLY

  https://kernel.org/pub/software/scm/git/docs/git-send-email.html

* If your mail client supports setting the In-Reply-To header
  via mailto: links, try the mailto: link

Be sure your reply has a Subject: header at the top and a blank line
before the message body.