From: Mike Snitzer <snitzer@redhat.com>
To: Konstantin Ryabitsev <konstantin@linuxfoundation.org>
Cc: linux-lvm@redhat.com
Subject: Re: [linux-lvm] Network-attached block storage and local SSDs for dm-cache
Date: Mon, 22 Apr 2019 14:25:44 -0400
Message-ID: <20190422182544.GA27249@redhat.com>
In-Reply-To: <20190419193036.GA24986@chatter.i7.local>
On Fri, Apr 19 2019 at 3:30pm -0400,
Konstantin Ryabitsev <konstantin@linuxfoundation.org> wrote:
> Hi, all:
>
> I know it's possible to set up dm-cache to combine network-attached
> block devices and local SSDs, but I'm having a hard time finding any
> first-hand evidence of this being done anywhere -- so I'm wondering
> if it's because there are reasons why this is a Bad Idea, or merely
> because there aren't many reasons for folks to do that.
>
> The reason why I'm trying to do it, in particular, is for
> mirrors.kernel.org systems where we already rely on dm-cache to
> combine large slow spinning disks with SSDs to a great advantage.
> Most hits on those systems are to the same set of files (latest
> distro package updates), so dm-cache hit-to-miss ratio is very
> advantageous. However, we need to build the newest iterations of those
> systems, and being able to use network-attached storage at providers
> like Packet with local SSD drives would remove the need for us to
> purchase and host huge drive arrays.
>
> Thanks for any insights you may offer.
The only thing that could present itself as a new challenge is the
reliability of the network-attached block devices (e.g. whether network
outages compromise dm-cache's ability to function).
I've not done any focused testing for, or thinking about, the impact
unreliable block devices might have on dm-cache (or dm-thinp, etc).
Usually we advise people to ensure the devices that they layer upon are
adequately robust/reliable. Short of that, you'll need to create your
own luck by engineering a solution that provides network storage
recovery.
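
For reference, a rough LVM-level sketch of the layering in question
(untested; all device names and sizes below are hypothetical):

  # Hypothetical devices: /dev/mapper/netlun is the network-attached
  # block device, /dev/nvme0n1 is the local SSD.
  pvcreate /dev/mapper/netlun /dev/nvme0n1
  vgcreate vg_mirrors /dev/mapper/netlun /dev/nvme0n1

  # Origin LV placed only on the slow network-attached PV.
  lvcreate -n origin -L 10T vg_mirrors /dev/mapper/netlun

  # Cache pool on the local SSD, then attach it to the origin.
  # writethrough keeps the origin consistent if the SSD is lost.
  lvcreate --type cache-pool -n fastpool -L 800G vg_mirrors /dev/nvme0n1
  lvconvert --type cache --cachemode writethrough \
            --cachepool vg_mirrors/fastpool vg_mirrors/origin
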
If the "origin" device is network-attached and proves unreliable, you
can expect dm-cache to experience errors; dm-cache is not RAID. So if
you are concerned about network outages, you might want to (ab)use
dm-multipath's "queue_if_no_path" mode to queue IO for retry once the
network-based device is available again (dm-multipath isn't RAID
either, but for your purposes you need some way to insulate against
network-based faults). Or do you think you might be able to RAID1 or
RAID5 N of these network-attached drives together?
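
To make that concrete, a rough sketch of both options (untested; the
multipath.conf fragment and device names are placeholders):

  # /etc/multipath.conf (fragment): "no_path_retry queue" gives the
  # queue_if_no_path behaviour, i.e. IO is queued rather than errored
  # when all paths to the network-attached LUN are down.
  defaults {
      no_path_retry    queue
  }

  # Or mirror the origin across two network-attached LUNs with LVM
  # raid1 (hypothetical names), then layer the cache on top as above.
  lvcreate --type raid1 -m 1 -L 10T -n origin vg_mirrors \
           /dev/mapper/netlun1 /dev/mapper/netlun2
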
Mike
Thread overview: 4+ messages
2019-04-19 19:30 [linux-lvm] Network-attached block storage and local SSDs for dm-cache Konstantin Ryabitsev
2019-04-22 18:25 ` Mike Snitzer [this message]
2019-04-23 13:58 ` Konstantin Ryabitsev
2019-04-23 10:20 ` Zdenek Kabelac