linux-mm.kvack.org archive mirror
From: Sergey Senozhatsky <sergey.senozhatsky.work@gmail.com>
To: Joonsoo Kim <js1304@gmail.com>
Cc: Sergey Senozhatsky <sergey.senozhatsky.work@gmail.com>,
	Andrew Morton <akpm@linux-foundation.org>,
	Minchan Kim <minchan@kernel.org>,
	Linux Memory Management List <linux-mm@kvack.org>,
	LKML <linux-kernel@vger.kernel.org>,
	Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
Subject: Re: [RFC PATCH 3/3] mm/zsmalloc: change ZS_MAX_PAGES_PER_ZSPAGE
Date: Thu, 18 Feb 2016 18:55:36 +0900	[thread overview]
Message-ID: <20160218095536.GA503@swordfish> (raw)
In-Reply-To: <CAAmzW4O-yQ5GBTE-6WvCL-hZeqyW=k3Fzn4_9G2qkMmp=ceuJg@mail.gmail.com>

Hello Joonsoo,

On (02/18/16 17:28), Joonsoo Kim wrote:
> 2016-02-18 12:02 GMT+09:00 Sergey Senozhatsky
> <sergey.senozhatsky.work@gmail.com>:
> > ZS_MAX_PAGES_PER_ZSPAGE does not have to be a power of 2. The existing
> > limit of 4 pages per zspage sets a tight limit on ->huge classes, which
> > results in increased memory wastage and consumption.
> 
> There is a reason that it is a power of 2. Increasing ZS_MAX_PAGES_PER_ZSPAGE
> is related to ZS_MIN_ALLOC_SIZE. If we don't have enough OBJ_INDEX_BITS,
> ZS_MIN_ALLOC_SIZE would increase, and that causes a regression on some
> systems.

Thanks!
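
To spell out the chain you are referring to, a rough sketch of the macros
as I read mm/zsmalloc.c (the BITS_PER_LONG / PAGE_SHIFT / MAX_PHYSMEM_BITS
values, ZS_MAX_ZSPAGE_ORDER and the single tag bit below are assumptions
for a 32-bit PAE config, not quoted from the patch):

/* sketch, assuming a 32-bit PAE config; not a verbatim copy of mm/zsmalloc.c */
#define BITS_PER_LONG		32	/* assumed: 32-bit kernel */
#define PAGE_SHIFT		12	/* assumed: 4K pages */
#define MAX_PHYSMEM_BITS	36	/* assumed: PAE */

#define ZS_MAX_ZSPAGE_ORDER	2
#define ZS_MAX_PAGES_PER_ZSPAGE	(1UL << ZS_MAX_ZSPAGE_ORDER)	/* 4 */

/* handle bits spent on the page frame number */
#define _PFN_BITS		(MAX_PHYSMEM_BITS - PAGE_SHIFT)
/* what is left for the object index, minus one tag bit */
#define OBJ_INDEX_BITS		(BITS_PER_LONG - _PFN_BITS - 1)

#define MAX(a, b)		((a) >= (b) ? (a) : (b))
/*
 * Fewer index bits means fewer objects addressable within a zspage,
 * so the minimum object size has to grow accordingly.
 */
#define ZS_MIN_ALLOC_SIZE \
	MAX(32, (ZS_MAX_PAGES_PER_ZSPAGE << PAGE_SHIFT >> OBJ_INDEX_BITS))

_Static_assert(OBJ_INDEX_BITS == 7, "only 7 index bits left with PAE");
_Static_assert(ZS_MIN_ALLOC_SIZE == 128, "min alloc size is 128, not 32");

So shrinking OBJ_INDEX_BITS (or growing ZS_MAX_PAGES_PER_ZSPAGE) pushes
ZS_MIN_ALLOC_SIZE up.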

Do you mean MAX_PHYSMEM_BITS != BITS_PER_LONG systems? PAE/LPAE? Isn't it
the case that on those systems ZS_MIN_ALLOC_SIZE is already bigger than 32?

MAX_PHYSMEM_BITS	36
_PFN_BITS		36 - 12 = 24
OBJ_INDEX_BITS		(32 - (36 - 12) - 1) = 7
ZS_MIN_ALLOC_SIZE	MAX(32, 4 << 12 >> (32 - (36 - 12) - 1)) = 128  !=  32
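
Just to illustrate why this only bites the PAE/LPAE-style configs, a quick
userspace sketch of the same formula (the 4K page size, 32-bit handle and
single tag bit are assumptions on my side):

/* Quick userspace sketch (not kernel code): how ZS_MIN_ALLOC_SIZE reacts
 * to ZS_MAX_PAGES_PER_ZSPAGE, assuming a 32-bit kernel with 4K pages. */
#include <stdio.h>

#define MAX(a, b)	((a) >= (b) ? (a) : (b))

static int min_alloc_size(int max_physmem_bits, int max_pages_per_zspage)
{
	int page_shift     = 12;			/* assumed 4K pages */
	int pfn_bits       = max_physmem_bits - page_shift;
	int obj_index_bits = 32 - pfn_bits - 1;		/* 32-bit handle, 1 tag bit */

	return MAX(32, (max_pages_per_zspage << page_shift) >> obj_index_bits);
}

int main(void)
{
	int pages[] = { 4, 8, 16 };

	for (unsigned i = 0; i < sizeof(pages) / sizeof(pages[0]); i++) {
		/* plain 32-bit: stays at 32; PAE (36 bits): 128, 256, 512 */
		printf("pages %2d: !PAE %3d  PAE %3d\n", pages[i],
		       min_alloc_size(32, pages[i]),
		       min_alloc_size(36, pages[i]));
	}
	return 0;
}

That is, on a plain 32-bit config ZS_MIN_ALLOC_SIZE stays pinned at 32,
while with MAX_PHYSMEM_BITS of 36 it is already 128 and grows with every
extra page per zspage.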

	-ss



Thread overview: 16+ messages
2016-02-18  3:02 [RFC PATCH 0/3] mm/zsmalloc: increase density and reduce memory wastage Sergey Senozhatsky
2016-02-18  3:02 ` [RFC PATCH 1/3] mm/zsmalloc: introduce zs_get_huge_class_size_watermark() Sergey Senozhatsky
2016-02-18  3:02 ` [RFC PATCH 2/3] zram: use zs_get_huge_class_size_watermark() Sergey Senozhatsky
2016-02-18  3:02 ` [RFC PATCH 3/3] mm/zsmalloc: change ZS_MAX_PAGES_PER_ZSPAGE Sergey Senozhatsky
2016-02-18  4:41   ` Sergey Senozhatsky
2016-02-18  4:46     ` Sergey Senozhatsky
2016-02-18  5:03     ` Sergey Senozhatsky
2016-02-18  8:28   ` Joonsoo Kim
2016-02-18  9:55     ` Sergey Senozhatsky [this message]
2016-02-18 10:19       ` Sergey Senozhatsky
2016-02-19  1:19         ` Joonsoo Kim
2016-02-19  4:16           ` Sergey Senozhatsky
2016-02-19  4:19             ` Sergey Senozhatsky
2016-02-19  4:46             ` Sergey Senozhatsky
2016-02-19  5:38               ` Sergey Senozhatsky
2016-02-19  5:55                 ` Sergey Senozhatsky
