From: Kai Krakow <hurikhan77@gmail.com>
To: linux-bcache@vger.kernel.org
Subject: Re: SSD usage for bcache - Read and Writeback
Date: Thu, 14 Sep 2017 17:31:12 +0200 [thread overview]
Message-ID: <20170914173112.376c03ef@jupiter.sol.kaishome.de> (raw)
In-Reply-To: <ad61e116-e69b-d62e-b4de-f20198d7a3fe@coly.li>
Am Thu, 14 Sep 2017 16:45:25 +0200
schrieb Coly Li <i@coly.li>:
> On 2017/9/14 下午3:10, FERNANDO FREDIANI wrote:
> > Hello Coly.
> >
> > If a user reads a piece of data that was just written to the SSD
> > (unlikely), it should first, in any case, be committed to the
> > permanent storage, then read from there and cached in another
> > area of the SSD. The writeback cache is very volatile and holds the
> > data only for a few seconds before it is committed to permanent
> > storage.
> >
> > In fact, multiple-device support is not implemented yet; that is
> > why I am asking about it and comparing with other well-established
> > technologies such as ZFS.
> >
>
> Hi Fernando,
>
> Do you have performance numbers comparing combined and separated
> configurations on ZFS? If the performance improvement does not come
> from adding one more SSD device, I don't see why dedicated read/write
> SSDs would help performance. In my understanding, if either SSD has
> spare throughput capacity for reads or writes, mixing both workloads
> across both SSDs should give better performance numbers.
I could imagine that one may want to use a fast, more expensive disk as
a read cache, while using a smaller SLC SSD as a write cache for better
longevity and reliability. Because: when your write-cache SSD breaks,
things go really bad. If your read cache breaks: no problem, it just
slows down.
So, in conclusion: the recommendation may not be about performance...
Better performance may just be a (small) side effect.
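For context, bcache today attaches exactly one cache set to a backing
device, so reads and writeback share the same SSD. A minimal sketch of
the current single-cache setup (device names are placeholders, run as
root):

```shell
# Format the backing device (slow disk) and the cache device (SSD).
make-bcache -B /dev/sdb
make-bcache -C /dev/nvme0n1

# Attach the cache set to the backing device using its cache-set UUID
# (shown by bcache-super-show /dev/nvme0n1).
echo <cset-uuid> > /sys/block/bcache0/bcache/attach

# Enable writeback caching; the same SSD then serves reads and writes.
echo writeback > /sys/block/bcache0/bcache/cache_mode
```

Splitting read and writeback caches onto different SSDs, as discussed
above, would require the multiple-cache-device support that is not yet
implemented.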
> >
> > On 14/09/2017 04:58, Coly Li wrote:
> >> On 2017/9/11 下午4:04, FERNANDO FREDIANI wrote:
> [...]
> >> Hi Fernando,
> >>
> >> If there were real performance numbers, it would be much easier to
> >> respond to this idea. What confuses me is: if a user reads a data
> >> block which was just written to the SSD, what is the benefit of
> >> separated SSDs?
> >>
> >> Yes, I agree with you that sometimes a single SSD as cache device
> >> is inefficient. Multiple cache devices on bcache are a
> >> not-yet-implemented feature, as far as I know.
> >>
> >> Thanks.
>
--
Regards,
Kai
Replies to list-only preferred.