* [PATCH 14/17] bcache: back to cache all readahead I/Os
[not found] <20200123170142.98974-1-colyli@suse.de>
@ 2020-01-23 17:01 ` colyli
2020-01-23 17:19 ` Michael Lyle
0 siblings, 1 reply; 7+ messages in thread
From: colyli @ 2020-01-23 17:01 UTC (permalink / raw)
To: axboe; +Cc: linux-bcache, linux-block, Coly Li, stable
From: Coly Li <colyli@suse.de>
In 2007, high-performance SSDs were still expensive, so in order to
save more space for real workload data and metadata, readahead I/Os
for non-metadata were bypassed and not cached on the SSD.
Nowadays SSD prices have dropped a lot, and people can find larger
SSDs at more comfortable prices. It is no longer necessary to bypass
normal readahead I/Os to save SSD space.
This patch removes the code that checks the REQ_RAHEAD flag of the bio
in check_should_bypass(), so all readahead I/Os will be cached on the SSD.
NOTE: this patch still keeps the check of "REQ_META|REQ_PRIO" in
should_writeback(), because we still want to cache metadata I/Os
even when they are asynchronous.
Cc: stable@vger.kernel.org
Signed-off-by: Coly Li <colyli@suse.de>
Acked-by: Eric Wheeler <bcache@linux.ewheeler.net>
---
drivers/md/bcache/request.c | 9 ---------
1 file changed, 9 deletions(-)
diff --git a/drivers/md/bcache/request.c b/drivers/md/bcache/request.c
index 73478a91a342..acc07c4f27ae 100644
--- a/drivers/md/bcache/request.c
+++ b/drivers/md/bcache/request.c
@@ -378,15 +378,6 @@ static bool check_should_bypass(struct cached_dev *dc, struct bio *bio)
op_is_write(bio_op(bio))))
goto skip;
- /*
- * Flag for bypass if the IO is for read-ahead or background,
- * unless the read-ahead request is for metadata
- * (eg, for gfs2 or xfs).
- */
- if (bio->bi_opf & (REQ_RAHEAD|REQ_BACKGROUND) &&
- !(bio->bi_opf & (REQ_META|REQ_PRIO)))
- goto skip;
-
if (bio->bi_iter.bi_sector & (c->sb.block_size - 1) ||
bio_sectors(bio) & (c->sb.block_size - 1)) {
pr_debug("skipping unaligned io");
--
2.16.4
* Re: [PATCH 14/17] bcache: back to cache all readahead I/Os
2020-01-23 17:01 ` [PATCH 14/17] bcache: back to cache all readahead I/Os colyli
@ 2020-01-23 17:19 ` Michael Lyle
2020-01-23 17:27 ` Coly Li
0 siblings, 1 reply; 7+ messages in thread
From: Michael Lyle @ 2020-01-23 17:19 UTC (permalink / raw)
To: Coly Li; +Cc: Jens Axboe, linux-bcache, linux-block, stable
Hi Coly and Jens--
One concern I have with this is that it's going to wear out
limited-lifetime SSDs a -lot- faster. Was any thought given to making
this a tunable instead of just changing the behavior? Even if we have
an anecdote or two that it seems to have increased performance for
some workloads, I don't expect it will have increased performance in
general and it may even be costly for some workloads (it all comes
down to what is more useful in the cache-- somewhat-recently readahead
data, or the data that it is displacing).
Regards,
Mike
On Thu, Jan 23, 2020 at 9:03 AM <colyli@suse.de> wrote:
>
> From: Coly Li <colyli@suse.de>
>
> In 2007, high-performance SSDs were still expensive, so in order to
> save more space for real workload data and metadata, readahead I/Os
> for non-metadata were bypassed and not cached on the SSD.
>
> Nowadays SSD prices have dropped a lot, and people can find larger
> SSDs at more comfortable prices. It is no longer necessary to bypass
> normal readahead I/Os to save SSD space.
>
> This patch removes the code that checks the REQ_RAHEAD flag of the bio
> in check_should_bypass(), so all readahead I/Os will be cached on the SSD.
>
> NOTE: this patch still keeps the check of "REQ_META|REQ_PRIO" in
> should_writeback(), because we still want to cache metadata I/Os
> even when they are asynchronous.
>
> Cc: stable@vger.kernel.org
> Signed-off-by: Coly Li <colyli@suse.de>
> Acked-by: Eric Wheeler <bcache@linux.ewheeler.net>
> [snipped]
>
* Re: [PATCH 14/17] bcache: back to cache all readahead I/Os
2020-01-23 17:19 ` Michael Lyle
@ 2020-01-23 17:27 ` Coly Li
2020-01-23 18:31 ` Jens Axboe
2020-01-24 16:48 ` Michael Lyle
0 siblings, 2 replies; 7+ messages in thread
From: Coly Li @ 2020-01-23 17:27 UTC (permalink / raw)
To: Michael Lyle; +Cc: Jens Axboe, linux-bcache, linux-block, stable
On 2020/1/24 1:19 AM, Michael Lyle wrote:
> Hi Coly and Jens--
>
> One concern I have with this is that it's going to wear out
> limited-lifetime SSDs a -lot- faster. Was any thought given to making
> this a tunable instead of just changing the behavior? Even if we have
> an anecdote or two that it seems to have increased performance for
> some workloads, I don't expect it will have increased performance in
> general and it may even be costly for some workloads (it all comes
> down to what is more useful in the cache-- somewhat-recently readahead
> data, or the data that it is displacing).
Hi Mike,
Copied. This is a good suggestion. I will do it after I am back from the
Lunar New Year vacation, and submit it with other tested patches in the
following v5.6-rc versions.
Thanks.
Coly Li
[snipped]
--
Coly Li
* Re: [PATCH 14/17] bcache: back to cache all readahead I/Os
2020-01-23 17:27 ` Coly Li
@ 2020-01-23 18:31 ` Jens Axboe
2020-01-24 0:49 ` Coly Li
2020-01-24 16:48 ` Michael Lyle
1 sibling, 1 reply; 7+ messages in thread
From: Jens Axboe @ 2020-01-23 18:31 UTC (permalink / raw)
To: Coly Li, Michael Lyle; +Cc: linux-bcache, linux-block, stable
On 1/23/20 10:27 AM, Coly Li wrote:
> On 2020/1/24 1:19 AM, Michael Lyle wrote:
>> [snipped]
>
> Hi Mike,
>
> Copied. This is a good suggestion. I will do it after I am back from the
> Lunar New Year vacation, and submit it with other tested patches in the
> following v5.6-rc versions.
Do you want me to just drop this patch for now from the series?
--
Jens Axboe
* Re: [PATCH 14/17] bcache: back to cache all readahead I/Os
2020-01-23 18:31 ` Jens Axboe
@ 2020-01-24 0:49 ` Coly Li
2020-01-24 1:14 ` Jens Axboe
0 siblings, 1 reply; 7+ messages in thread
From: Coly Li @ 2020-01-24 0:49 UTC (permalink / raw)
To: Jens Axboe; +Cc: Michael Lyle, linux-bcache, linux-block, stable
On 2020/1/24 2:31 AM, Jens Axboe wrote:
> On 1/23/20 10:27 AM, Coly Li wrote:
>> On 2020/1/24 1:19 AM, Michael Lyle wrote:
>>> [snipped]
>>
>> Hi Mike,
>>
>> Copied. This is a good suggestion. I will do it after I am back from the
>> Lunar New Year vacation, and submit it with other tested patches in the
>> following v5.6-rc versions.
>
> Do you want me to just drop this patch for now from the series?
>
Hi Jens,
OK, please drop this patch from this series. I will re-submit the patch
with sysfs interface later with other patches.
Thanks.
--
Coly Li
* Re: [PATCH 14/17] bcache: back to cache all readahead I/Os
2020-01-24 0:49 ` Coly Li
@ 2020-01-24 1:14 ` Jens Axboe
0 siblings, 0 replies; 7+ messages in thread
From: Jens Axboe @ 2020-01-24 1:14 UTC (permalink / raw)
To: Coly Li; +Cc: Michael Lyle, linux-bcache, linux-block, stable
On 1/23/20 5:49 PM, Coly Li wrote:
> On 2020/1/24 2:31 AM, Jens Axboe wrote:
>> On 1/23/20 10:27 AM, Coly Li wrote:
>>> On 2020/1/24 1:19 AM, Michael Lyle wrote:
>>>> [snipped]
>>>
>>> Hi Mike,
>>>
>>> Copied. This is a good suggestion. I will do it after I am back from the
>>> Lunar New Year vacation, and submit it with other tested patches in the
>>> following v5.6-rc versions.
>>
>> Do you want me to just drop this patch for now from the series?
>>
> Hi Jens,
>
> OK, please drop this patch from this series. I will re-submit the patch
> with sysfs interface later with other patches.
Sounds good, I queued up the rest for 5.6.
--
Jens Axboe
* Re: [PATCH 14/17] bcache: back to cache all readahead I/Os
2020-01-23 17:27 ` Coly Li
2020-01-23 18:31 ` Jens Axboe
@ 2020-01-24 16:48 ` Michael Lyle
1 sibling, 0 replies; 7+ messages in thread
From: Michael Lyle @ 2020-01-24 16:48 UTC (permalink / raw)
To: Coly Li; +Cc: Jens Axboe, linux-bcache, linux-block, stable
Hi Coly---
Thank you for holding the patch. I'm sorry for the late review (I was
travelling).
(We sure have a lot of settings and a lot of code dealing with them
all, which is unfortunate... but workloads / hardware used with bcache
are so varied).
Mike
On Thu, Jan 23, 2020 at 9:28 AM Coly Li <colyli@suse.de> wrote:
>
> On 2020/1/24 1:19 AM, Michael Lyle wrote:
> > [snipped]
>
> Hi Mike,
>
> Copied. This is a good suggestion. I will do it after I am back from the
> Lunar New Year vacation, and submit it with other tested patches in the
> following v5.6-rc versions.
>
> Thanks.
>
> Coly Li
>
> [snipped]
>
> --
>
> Coly Li