From: Minchan Kim <minchan@kernel.org>
To: Andrew Morton <akpm@linux-foundation.org>
Cc: <kernel-team@lge.com>, <linux-kernel@vger.kernel.org>,
Minchan Kim <minchan@kernel.org>
Subject: [PATCH v2 0/6] zram clean up
Date: Thu, 13 Apr 2017 11:59:19 +0900
Message-ID: <1492052365-16169-1-git-send-email-minchan@kernel.org>
This patchset aims to clean up zram.
[1] cleans up bvec handling for bios that carry multiple pages.
[2] cleans up partial IO handling.
[3-6] clean up zram by introducing accessors and removing a pointless structure.
With [2-6] applied, we save a few hundred bytes of code as well as
greatly improving readability.
This patchset is based on 2017-04-11-15-23 + recent ppc64 fixes[1]
[1] https://lkml.org/lkml/2017/4/12/924
* Changes from v1
* more clean up - Sergey
* Fix zram_reset metadata overwrite - Sergey
* Fix bvec handling in __zram_make_request
* use zram_free_page in reset rather than zs_free
x86: 708 bytes saved
add/remove: 1/1 grow/shrink: 0/11 up/down: 478/-1186 (-708)
function old new delta
zram_special_page_read - 478 +478
zram_reset_device 317 314 -3
mem_used_max_store 131 128 -3
compact_store 96 93 -3
mm_stat_show 203 197 -6
zram_add 719 712 -7
zram_slot_free_notify 229 214 -15
zram_make_request 819 803 -16
zram_meta_free 128 111 -17
zram_free_page 180 151 -29
disksize_store 432 361 -71
zram_decompress_page.isra 504 - -504
zram_bvec_rw 2592 2080 -512
Total: Before=25350773, After=25350065, chg -0.00%
ppc64: 231 bytes saved
add/remove: 2/0 grow/shrink: 1/9 up/down: 681/-912 (-231)
function old new delta
zram_special_page_read - 480 +480
zram_slot_lock - 200 +200
vermagic 39 40 +1
mm_stat_show 256 248 -8
zram_meta_free 200 184 -16
zram_add 944 912 -32
zram_free_page 348 308 -40
disksize_store 572 492 -80
zram_decompress_page 664 564 -100
zram_slot_free_notify 292 160 -132
zram_make_request 1132 1000 -132
zram_bvec_rw 2768 2396 -372
Total: Before=17565825, After=17565594, chg -0.00%
Minchan Kim (6):
zram: handle multiple pages attached bio's bvec
zram: partial IO refactoring
zram: use zram_slot_lock instead of raw bit_spin_lock op
zram: remove zram_meta structure
zram: introduce zram data accessor
zram: use zram_free_page instead of open-coded
drivers/block/zram/zram_drv.c | 567 +++++++++++++++++++++---------------------
drivers/block/zram/zram_drv.h | 6 +-
2 files changed, 281 insertions(+), 292 deletions(-)
--
2.7.4
Thread overview: 9+ messages
2017-04-13 2:59 Minchan Kim [this message]
2017-04-13 2:59 ` [PATCH v2 1/6] zram: handle multiple pages attached bio's bvec Minchan Kim
2017-04-13 7:15 ` Johannes Thumshirn
2017-04-13 13:40 ` Minchan Kim
2017-04-13 2:59 ` [PATCH v2 2/6] zram: partial IO refactoring Minchan Kim
2017-04-13 2:59 ` [PATCH v2 3/6] zram: use zram_slot_lock instead of raw bit_spin_lock op Minchan Kim
2017-04-13 2:59 ` [PATCH v2 4/6] zram: remove zram_meta structure Minchan Kim
2017-04-13 2:59 ` [PATCH v2 5/6] zram: introduce zram data accessor Minchan Kim
2017-04-13 2:59 ` [PATCH v2 6/6] zram: use zram_free_page instead of open-coded Minchan Kim