From: Minchan Kim <minchan@kernel.org>
To: Sergey Senozhatsky <sergey.senozhatsky.work@gmail.com>
Cc: Andrew Morton <akpm@linux-foundation.org>,
LKML <linux-kernel@vger.kernel.org>
Subject: Re: [PATCH 4/6] zram: support idle page writeback
Date: Wed, 21 Nov 2018 05:34:08 -0800 [thread overview]
Message-ID: <20181121133408.GA103278@google.com> (raw)
In-Reply-To: <20181121045551.GC599@jagdpanzerIV>

On Wed, Nov 21, 2018 at 01:55:51PM +0900, Sergey Senozhatsky wrote:
> On (11/16/18 16:20), Minchan Kim wrote:
> > + zram_set_flag(zram, index, ZRAM_UNDER_WB);
> > + zram_slot_unlock(zram, index);
> > + if (zram_bvec_read(zram, &bvec, index, 0, NULL)) {
> > + zram_slot_lock(zram, index);
> > + zram_clear_flag(zram, index, ZRAM_UNDER_WB);
> > + zram_slot_unlock(zram, index);
> > + continue;
> > + }
> > +
> > + bio_init(&bio, &bio_vec, 1);
> > + bio_set_dev(&bio, zram->bdev);
> > + bio.bi_iter.bi_sector = blk_idx * (PAGE_SIZE >> 9);
> > + bio.bi_opf = REQ_OP_WRITE | REQ_SYNC;
> > +
> > + bio_add_page(&bio, bvec.bv_page, bvec.bv_len,
> > + bvec.bv_offset);
> > +		/*
> > +		 * XXX: A single-page IO would be inefficient for write,
> > +		 * but it is not bad as a starter.
> > +		 */
> > + ret = submit_bio_wait(&bio);
> > + if (ret) {
> > + zram_slot_lock(zram, index);
> > + zram_clear_flag(zram, index, ZRAM_UNDER_WB);
> > + zram_slot_unlock(zram, index);
> > + continue;
> > + }
>
> Just a thought,
>
> I wonder if it would make sense (and if it would be possible) to write
> back idle _compressed_ objects. Right now we decompress, say, a perfectly
> fine 400-byte compressed object into a PAGE_SIZE-d object and then push
> it to the WB device. In this particular case that puts roughly 10x more
> IO pressure on the flash. If we could write/read the compressed object,
> we would write and read 400 bytes instead of PAGE_SIZE.
Although it has pros and cons, that is my final goal, too, although it would
add a lot of complexity. At some point we should have that feature.
However, I want to start with the simple approach first, which is valuable
on its own.

Thread overview: 19+ messages
2018-11-16 7:20 [PATCH 0/6] zram idle page writeback Minchan Kim
2018-11-16 7:20 ` [PATCH 1/6] zram: fix lockdep warning of free block handling Minchan Kim
2018-11-16 7:20 ` [PATCH 2/6] zram: refactoring flags and writeback stuff Minchan Kim
2018-11-16 7:20 ` [PATCH 3/6] zram: introduce ZRAM_IDLE flag Minchan Kim
2018-11-20 2:46 ` Sergey Senozhatsky
2018-11-22 5:11 ` Minchan Kim
2018-11-22 5:45 ` Sergey Senozhatsky
2018-11-16 7:20 ` [PATCH 4/6] zram: support idle page writeback Minchan Kim
2018-11-21 4:55 ` Sergey Senozhatsky
2018-11-21 13:34 ` Minchan Kim [this message]
2018-11-22 2:14 ` Sergey Senozhatsky
2018-11-22 5:04 ` Minchan Kim
2018-11-22 5:40 ` Sergey Senozhatsky
2018-11-22 6:15 ` Minchan Kim
2018-11-22 6:31 ` Minchan Kim
2018-11-22 6:59 ` Sergey Senozhatsky
2018-11-23 6:23 ` Minchan Kim
2018-11-16 7:20 ` [PATCH 5/6] zram: add bd_stat statistics Minchan Kim
2018-11-16 7:20 ` [PATCH 6/6] zram: writeback throttle Minchan Kim