From: Kai Krakow <hurikhan77@gmail.com>
To: linux-bcache@vger.kernel.org
Subject: Re: SSD usage for bcache - Read and Writeback
Date: Thu, 14 Sep 2017 13:43:11 +0200 [thread overview]
Message-ID: <20170914134311.25fd67aa@jupiter.sol.kaishome.de> (raw)
In-Reply-To: <51852c79-bee0-19c3-92d8-6044f3e3e2a7@coly.li>
On Thu, 14 Sep 2017 09:58:25 +0200, Coly Li <i@coly.li> wrote:
> > On 2017/9/11 4:04 PM, FERNANDO FREDIANI wrote:
> > Hi folks
> >
> > With bcache, people normally use a single SSD for both the read and
> > write cache. This seems to work pretty well, at least for the load
> > we have been running here.
> >
> > However, in other environments, especially with ZFS, people tend to
> > suggest using dedicated SSDs for writes (ZIL) and for reads
> > (L2ARC). Some say that performance will be much better this way,
> > and they mainly point out that the two workloads wear the SSDs
> > differently.
> > The issue nowadays is that an SSD used as a write cache (writeback)
> > doesn't need much space (8 GB is normally more than enough), just
> > enough to hold data until it is committed to the pool (or the
> > slower disks), so it is hard to find a suitable SSD to dedicate to
> > this purpose alone without heavily overprovisioning it.
> > On top of that, newer SSDs have changed a lot recently, using
> > different memory technologies which tend to be much more durable.
> >
> > Given that, I personally think that using a single SSD for both the
> > write and read cache doesn't impose any significant loss on the
> > storage in any scenario, provided you use newer-technology SSDs,
> > which you will hardly ever saturate anyway. Does anyone agree or
> > disagree with that?
>
> If there were real performance numbers, it would be much easier to
> respond to this idea. What confuses me is: if a user reads a data
> block which was just written to the SSD, what is the benefit of
> separate SSDs?
>
> Yes, I agree with you that a single SSD as a cache device is
> sometimes inefficient. As far as I know, multiple cache devices per
> cache set are a not-yet-implemented feature in bcache.
Does bcache support more than one cache device in a cset? If yes, the
best idea would be to implement a way to define one SSD as read-mostly
and another as write-mostly.
This would be a non-strict policy which allows reading from the other
device if the block is already there, or writing to the read-mostly
device to update data already in the cache. Thoughts?
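To make the idea concrete, the device-selection rule could look roughly
like the sketch below. This is pure illustration, not actual bcache
code: the struct, the role names, and the fallback rules are my
assumptions about how such a non-strict policy might behave.

```c
#include <stdbool.h>

/* Hypothetical roles for the two cache devices in the proposal. */
enum role { READ_MOSTLY, WRITE_MOSTLY };

struct cache_dev {
    enum role role;
    /* Does this device currently cache the given block? */
    bool (*has_block)(const struct cache_dev *dev, unsigned long block);
};

/*
 * Pick a target device for an I/O under the non-strict policy:
 * writes normally go to the write-mostly device, but update the
 * read-mostly device in place when it already holds the block (so the
 * cached copy never goes stale); reads prefer the read-mostly device
 * but may be served from the write-mostly one on a hit there.
 */
static const struct cache_dev *
pick_device(const struct cache_dev *rd, const struct cache_dev *wr,
            bool is_write, unsigned long block)
{
    if (is_write) {
        if (rd->has_block(rd, block))
            return rd;      /* update the read-mostly copy in place */
        return wr;          /* normal case: write-mostly device */
    }
    if (rd->has_block(rd, block))
        return rd;          /* normal case: read-mostly device */
    if (wr->has_block(wr, block))
        return wr;          /* non-strict: hit on the other device */
    return rd;              /* miss: allocate on the read-mostly side */
}
```

The point of the non-strict part is the last two branches: neither
device is exclusive, so a freshly written block can still be read back
from the write-mostly SSD without waiting for it to migrate.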
--
Regards,
Kai
Replies to list-only preferred.
Thread overview: 18+ messages
2017-09-11 14:04 SSD usage for bcache - Read and Writeback FERNANDO FREDIANI
2017-09-14 7:58 ` Coly Li
2017-09-14 11:43 ` Kai Krakow [this message]
2017-09-14 13:10 ` FERNANDO FREDIANI
2017-09-14 14:40 ` Emmanuel Florac
2017-09-14 14:46 ` FERNANDO FREDIANI
2017-09-14 15:04 ` Emmanuel Florac
2017-09-14 15:11 ` FERNANDO FREDIANI
2017-09-14 14:45 ` Coly Li
2017-09-14 14:54 ` FERNANDO FREDIANI
2017-09-14 15:04 ` Coly Li
2017-09-14 15:14 ` FERNANDO FREDIANI
2017-09-26 19:28 ` FERNANDO FREDIANI
2017-09-26 19:51 ` Michael Lyle
2017-09-26 20:02 ` FERNANDO FREDIANI
2017-09-26 20:27 ` Kai Krakow
2017-09-14 15:31 ` Kai Krakow
2017-09-14 15:49 ` Coly Li