From: Minchan Kim <minchan@kernel.org>
To: zhouxianrong <zhouxianrong@huawei.com>
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org,
akpm@linux-foundation.org, sergey.senozhatsky@gmail.com,
willy@infradead.org, iamjoonsoo.kim@lge.com, ngupta@vflare.org,
Mi.Sophia.Wang@huawei.com, zhouxiyu@huawei.com,
weidu.du@huawei.com, zhangshiming5@huawei.com,
won.ho.park@huawei.com
Subject: Re: [PATCH] mm: extend zero pages to same element pages for zram
Date: Tue, 7 Feb 2017 08:48:05 +0900
Message-ID: <20170206234805.GA12188@bbox>
In-Reply-To: <2f6e188c-5358-eeab-44ab-7634014af651@huawei.com>
Hi,

On Mon, Feb 06, 2017 at 09:28:18AM +0800, zhouxianrong wrote:
>
>
> On 2017/2/5 22:21, Minchan Kim wrote:
> >Hi zhouxianrong,
> >
> >On Fri, Feb 03, 2017 at 04:42:27PM +0800, zhouxianrong@huawei.com wrote:
> >>From: zhouxianrong <zhouxianrong@huawei.com>
> >>
> >>Test results are listed below:
> >>
> >>  zero  pattern_char  pattern_short  pattern_int  pattern_long    total  (unit)
> >>162989         14454           3534        23516          2769  3294399  (page)
> >>
> >>Statistics for the results:
> >>
> >>         zero         pattern_char  pattern_short  pattern_int  pattern_long
> >>AVERAGE  0.745696298  0.085937175   0.015957701    0.131874915  0.020533911
> >>STDEV    0.035623777  0.016892402   0.004454534    0.021657123  0.019420072
> >>MAX      0.973813421  0.222222222   0.021409518    0.211812245  0.176512625
> >>MIN      0.645431905  0.004634398   0              0            0
> >
> >The description in the old version was better at justifying the
> >same-page merging feature.
> >
> >>
> >>Signed-off-by: zhouxianrong <zhouxianrong@huawei.com>
> >>---
> >> drivers/block/zram/zram_drv.c | 124 +++++++++++++++++++++++++++++++----------
> >> drivers/block/zram/zram_drv.h | 11 ++--
> >> 2 files changed, 103 insertions(+), 32 deletions(-)
> >>
> >>diff --git a/drivers/block/zram/zram_drv.c b/drivers/block/zram/zram_drv.c
> >>index e5ab7d9..6a8c9c5 100644
> >>--- a/drivers/block/zram/zram_drv.c
> >>+++ b/drivers/block/zram/zram_drv.c
> >>@@ -95,6 +95,17 @@ static void zram_clear_flag(struct zram_meta *meta, u32 index,
> >> meta->table[index].value &= ~BIT(flag);
> >> }
> >>
> >>+static inline void zram_set_element(struct zram_meta *meta, u32 index,
> >>+ unsigned long element)
> >>+{
> >>+ meta->table[index].element = element;
> >>+}
> >>+
> >>+static inline void zram_clear_element(struct zram_meta *meta, u32 index)
> >>+{
> >>+ meta->table[index].element = 0;
> >>+}
> >>+
> >> static size_t zram_get_obj_size(struct zram_meta *meta, u32 index)
> >> {
> >> return meta->table[index].value & (BIT(ZRAM_FLAG_SHIFT) - 1);
> >>@@ -167,31 +178,78 @@ static inline void update_used_max(struct zram *zram,
> >> } while (old_max != cur_max);
> >> }
> >>
> >>-static bool page_zero_filled(void *ptr)
> >>+static inline void zram_fill_page(char *ptr, unsigned long value)
> >>+{
> >>+ int i;
> >>+ unsigned long *page = (unsigned long *)ptr;
> >>+
> >>+ if (likely(value == 0)) {
> >>+ clear_page(ptr);
> >>+ } else {
> >>+ for (i = 0; i < PAGE_SIZE / sizeof(*page); i++)
> >>+ page[i] = value;
> >>+ }
> >>+}
> >>+
> >>+static inline void zram_fill_page_partial(char *ptr, unsigned int size,
> >>+ unsigned long value)
> >>+{
> >>+ int i;
> >>+ unsigned long *page;
> >>+
> >>+ if (likely(value == 0)) {
> >>+ memset(ptr, 0, size);
> >>+ return;
> >>+ }
> >>+
> >>+ i = ((unsigned long)ptr) % sizeof(*page);
> >>+ if (i) {
> >>+ while (i < sizeof(*page)) {
> >>+ *ptr++ = (value >> (i * 8)) & 0xff;
> >>+ --size;
> >>+ ++i;
> >>+ }
> >>+ }
> >>+
> >
> >I don't think we need this part because block layer works with sector
> >size or multiple times of it so it must be aligned unsigned long.
>
> Minchan and Matthew Wilcox:
>
> 1. Right, but users could open the /dev/block/zram0 file and do arbitrary read operations.
Could you make that happen?
I don't think it's possible, as Matthew already pointed out.
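
FWIW, even reads on the raw device node reach zram block-aligned:
buffered reads are serviced through the page cache in page-sized
units, and O_DIRECT requires logical-block alignment, so an unaligned
request is rejected before zram ever sees it. A minimal userspace
sketch (illustrative only; assumes /dev/zram0 exists, error handling
trimmed):

#define _GNU_SOURCE	/* for O_DIRECT */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	char buf[512];
	int fd = open("/dev/zram0", O_RDONLY | O_DIRECT);

	if (fd < 0) {
		perror("open");
		return 1;
	}

	/* Offset 1 is not logical-block aligned, so the block layer
	 * fails this with EINVAL before it reaches zram. */
	if (pread(fd, buf, sizeof(buf), 1) < 0)
		perror("unaligned O_DIRECT pread");

	close(fd);
	return 0;
}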
>
> 2. About the endianness handling for long: the modification is trivial but inefficient.
> I have no better method. Do you have any good idea for this?
So if assumption 1 is wrong, we don't need 2, either.
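
If that holds, everything reaching zram is at least word-aligned, so
the detection (and fill) can stay on native unsigned longs, and the
per-byte shifting goes away along with the endian question. A minimal
sketch of such a detector (illustrative only, not the code in your
patch):

static bool page_same_filled(void *ptr, unsigned long *element)
{
	unsigned int pos;
	unsigned long *page = ptr;

	/* A page is "same-filled" iff every native word equals the
	 * first one; the pattern is kept in native byte order. */
	for (pos = 1; pos < PAGE_SIZE / sizeof(*page); pos++) {
		if (page[0] != page[pos])
			return false;
	}

	*element = page[0];
	return true;
}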
>
> 3. The code below should be modified.
>
> static inline bool zram_meta_get(struct zram *zram)
> @@ -495,11 +553,17 @@ static void zram_meta_free(struct zram_meta *meta, u64 disksize)
>
> /* Free all pages that are still in this zram device */
> for (index = 0; index < num_pages; index++) {
> - unsigned long handle = meta->table[index].handle;
> + unsigned long handle;
> +
> + bit_spin_lock(ZRAM_ACCESS, &meta->table[index].value);
> + handle = meta->table[index].handle;
>
> - if (!handle)
> + if (!handle || zram_test_flag(meta, index, ZRAM_SAME)) {
> + bit_spin_unlock(ZRAM_ACCESS, &meta->table[index].value);
> continue;
> + }
>
> + bit_spin_unlock(ZRAM_ACCESS, &meta->table[index].value);
> zs_free(meta->mem_pool, handle);
Could you explain why we need this modification?
> }
>
> @@ -511,7 +575,7 @@ static void zram_meta_free(struct zram_meta *meta, u64 disksize)
> static struct zram_meta *zram_meta_alloc(char *pool_name, u64 disksize)
> {
> size_t num_pages;
> - struct zram_meta *meta = kmalloc(sizeof(*meta), GFP_KERNEL);
> + struct zram_meta *meta = kzalloc(sizeof(*meta), GFP_KERNEL);
Ditto