From: David Disseldorp <ddiss@suse.de>
To: Josh Durgin <jdurgin@redhat.com>
Cc: Mike Christie <mchristi@redhat.com>, ceph-devel@vger.kernel.org
Subject: Re: [PATCH 0/2] ceph osd: initial VMware VAAI support
Date: Fri, 11 Mar 2016 11:03:45 +0100 [thread overview]
Message-ID: <20160311110345.33d8028c@echidna.suse> (raw)
In-Reply-To: <56E1F912.1000509@redhat.com>
Hi Josh,
On Thu, 10 Mar 2016 14:45:38 -0800, Josh Durgin wrote:
> On 03/10/2016 04:04 AM, David Disseldorp wrote:
> > On Thu, 10 Mar 2016 00:36:55 -0600, Mike Christie wrote:
> >
> > ...
> >>> This does not include support for XCOPY/extended copy. I
> >>> am still looking into this, but it seems it might be
> >>> difficult to support due to rbd being more tuned to cloning
> >>> entire devices. When we implement VASA, the cloneVirtualVolume
> >>> might be something we can support though.
> >
> > I suppose the src-and-dest-in-same-pg requirement would complicate
> > things quite a bit, but wouldn't clonerange be an option for XCOPY
> > offloads?
>
> It's not a good fit, since with multiple clones putting data on the
> same set of osds, the workload and space utilization gets skewed for
> that set of osds compared to the rest of the cluster.
>
> It also won't give you fast cloning - it's a full copy on xfs, and
> you'd need to do one for every object affected.
Currently the copy is being done on the LIO iSCSI gateway, so offloading
any of that to the OSDs would save a lot of network traffic.
Also, as Ric mentioned, XFS has clone-range support coming, so Ceph's
dedupe/COW optimisations need not be limited to the Btrfs FileStore.
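To make the traffic argument concrete, here's a toy back-of-the-envelope
model (not Ceph code; the 4 MiB object size and per-op overhead are
assumptions) comparing bytes crossing the gateway-to-cluster link for a
gateway-mediated copy versus a hypothetical OSD-side clonerange offload:

```python
# Toy model: bytes crossing the iSCSI-gateway <-> cluster link for an
# XCOPY of `length` bytes. Assumes 4 MiB RADOS objects and a rough
# 512-byte per-op message overhead (both made-up round numbers).

OP_OVERHEAD = 512          # assumed per-op message overhead, in bytes
OBJECT_SIZE = 4 * 2**20    # assumed RADOS object size (4 MiB)

def gateway_copy_traffic(length):
    """Gateway-mediated copy: the gateway reads every affected object
    from the source and writes it back to the destination, so the
    payload crosses the gateway link twice."""
    nobjs = -(-length // OBJECT_SIZE)  # ceiling division
    return 2 * length + 2 * nobjs * OP_OVERHEAD

def offloaded_copy_traffic(length):
    """Clonerange-style offload: the gateway sends only one small op
    per affected object; the data itself moves OSD-side."""
    nobjs = -(-length // OBJECT_SIZE)
    return nobjs * OP_OVERHEAD

if __name__ == "__main__":
    gib = 2**30
    print(gateway_copy_traffic(gib))   # dominated by 2x the payload
    print(offloaded_copy_traffic(gib)) # op messages only
```

For a 1 GiB copy the gateway-mediated path moves a bit over 2 GiB across
the gateway link, while the offloaded path moves only the op messages,
which is the saving being argued for above. (This deliberately ignores
the OSD-side replication traffic, which is the same in both cases.)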
> Due to these limitations, lack of existing clonerange use, and the
> complications it brings to the osd as the only op affecting more than
> one object, we've talked about removing the clonerange op.
Okay, fair enough. Thanks for the details.
Cheers, David
Thread overview: 12+ messages
[not found] <[PATCH 0/2] ceph osd: initial VMware VAAI support>
2016-03-10 6:34 ` (unknown), Mike Christie
2016-03-10 6:34 ` [PATCH 1/2] ceph osd: add support for new op writesame Mike Christie
2016-03-10 12:03 ` David Disseldorp
2016-03-10 6:34 ` [PATCH 2/2] ceph osd: add support for new op cmpext Mike Christie
2016-03-10 12:03 ` David Disseldorp
2016-03-10 17:06 ` Mike Christie
2016-03-10 17:12 ` David Disseldorp
2016-03-10 6:36 ` [PATCH 0/2] ceph osd: initial VMware VAAI support Mike Christie
2016-03-10 12:04 ` David Disseldorp
2016-03-10 22:45 ` Josh Durgin
2016-03-11 4:46 ` Ric Wheeler
2016-03-11 10:03 ` David Disseldorp [this message]