* [RFC PATCH 0/3] parallel 'copy-from' Ops in copy_file_range
From: Luis Henriques @ 2020-01-27 16:43 UTC
  To: Jeff Layton, Sage Weil, Ilya Dryomov, Yan, Zheng, Gregory Farnum
  Cc: ceph-devel, linux-kernel, Luis Henriques

Hi,

As discussed here[1], I'm sending an RFC patchset that parallelizes the
requests sent to the OSDs during a copy_file_range syscall in CephFS.

  [1] https://lore.kernel.org/lkml/20200108100353.23770-1-lhenriques@suse.com/
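
The basic idea is to stop waiting for each 'copy-from' request to complete
before sending the next one, i.e. split the blocking call into a
non-blocking submit plus an explicit wait.  Roughly (argument lists
simplified, and the _nowait helper name below is only illustrative, not
necessarily what patch 0001 introduces):

  struct ceph_osd_request *req;
  int ret;

  /* non-blocking submit: the copy-from request is now in flight */
  req = ceph_osdc_copy_from_nowait(osdc, &src_oid, &src_oloc,
  				   &dst_oid, &dst_oloc, len);
  if (IS_ERR(req))
  	return PTR_ERR(req);

  /* ... more copy-from submissions for other objects can happen here ... */

  /* wait for completion later, instead of right after the submit */
  ret = ceph_osdc_wait_request(osdc, req);
  ceph_osdc_put_request(req);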

I also have some performance numbers that I wanted to share. Here's a
description of the very simple tests I ran:

 - create a file with 200 objects in it
   * i.e. tests with different object sizes mean different file sizes
 - drop all caches and umount the filesystem
 - Measure:
   * mount filesystem
   * full file copy (with copy_file_range)
   * umount filesystem

Tests were repeated several times and the average value was used for
comparison.
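
For reference, the copy step itself is just a full-file copy_file_range(2)
loop in userspace, with both files on the CephFS mount.  A minimal sketch
(not the exact harness used for the numbers below; it needs glibc >= 2.27
for the copy_file_range() wrapper):

  #define _GNU_SOURCE
  #include <fcntl.h>
  #include <stdio.h>
  #include <sys/stat.h>
  #include <unistd.h>

  int main(int argc, char **argv)
  {
  	struct stat st;
  	off_t remaining;
  	int fd_in, fd_out;

  	if (argc != 3) {
  		fprintf(stderr, "usage: %s <src> <dst>\n", argv[0]);
  		return 1;
  	}
  	fd_in = open(argv[1], O_RDONLY);
  	fd_out = open(argv[2], O_WRONLY | O_CREAT | O_TRUNC, 0644);
  	if (fd_in < 0 || fd_out < 0 || fstat(fd_in, &st) < 0) {
  		perror("open/fstat");
  		return 1;
  	}

  	remaining = st.st_size;
  	while (remaining > 0) {
  		/* NULL offsets: the kernel uses and updates the file offsets */
  		ssize_t n = copy_file_range(fd_in, NULL, fd_out, NULL,
  					    remaining, 0);
  		if (n < 0) {
  			perror("copy_file_range");
  			return 1;
  		}
  		if (n == 0)	/* source shorter than expected */
  			break;
  		remaining -= n;
  	}
  	close(fd_in);
  	close(fd_out);
  	return 0;
  }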

  DISCLAIMER:
  These numbers are only indicative; different clusters and client
  configs will certainly show different performance!  More rigorous tests
  would be required to validate these results.

Taking as a baseline a full read+write (basically, a copy_file_range
operation within a filesystem mounted without the 'copyfrom' option),
here are some values for different object sizes:

			  8M	  4M	  1M	  65k
read+write		100%	100%	100%	 100%
sequential		 51%	 52%	 83%	>100%
parallel (throttle=1)	 51%	 52%	 83%	>100%
parallel (throttle=0)	 17%	 17%	 83%	>100%

Notes:

- 'parallel (throttle=0)' was a test where *all* the requests (i.e. 200
  requests to copy the 200 objects in the file) were sent to the OSDs, and
  waiting for the requests to complete was done only at the end.

- 'parallel (throttle=1)' was just a control test, where the wait for
  completion is done immediately after each request is sent.  It was
  expected to be very similar to the non-optimized ('sequential') tests
  (see the sketch after these notes for the pattern being measured).

- These tests were executed on a cluster with 40 OSDs, spread across 5
  (bare-metal) nodes.

- The tests with an object size of 65k show that copy_file_range definitely
  doesn't scale to files with small object sizes.  '> 100%' actually means
  more than 10x slower.
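
To make the throttle semantics above concrete, the pattern being measured
is roughly the following.  This is a sketch only: submit_copy_from_async()
and wait_copy_from_completions() are hypothetical stand-ins for the request
submission and completion wait done by the real patches.

  static int copy_objects(unsigned int nr_objects, unsigned int throttle)
  {
  	unsigned int i, in_flight = 0;
  	int ret;

  	for (i = 0; i < nr_objects; i++) {
  		ret = submit_copy_from_async(i);
  		if (ret)
  			return ret;

  		/* throttle == 0 means "no limit": never wait inside the loop */
  		if (throttle && ++in_flight >= throttle) {
  			ret = wait_copy_from_completions();
  			if (ret)
  				return ret;
  			in_flight = 0;
  		}
  	}

  	/* wait for whatever is still in flight */
  	return wait_copy_from_completions();
  }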

Measuring the mount+copy+umount masks the actual difference between
different throttle values due to the time spent in mount+umount.  Thus,
there was no real difference between throttle=0 (send all and wait) and
throttle=20 (send 20, wait, send 20, ...).  But here's what I observed
when measuring only the copy operation (4M object size):

read+write		100%
parallel (throttle=1)	 56%
parallel (throttle=5)	 23%
parallel (throttle=10)	 14%
parallel (throttle=20)	  9%
parallel (throttle=0)	  5%

Anyway, I still need to revisit patch 0003, as it doesn't follow Jeff's
suggestion to *not* add another knob for fine-tuning the throttle value --
this patch adds a module parameter that I wanted to use in my testing to
try different values for this throttle limit.

The goal is probably to drop this patch and do the throttling in patch
0002.  I just need to come up with a decent heuristic.  Jeff's suggestion
was to use rsize/wsize, which are set to 64M by default IIRC.  Somehow I
feel that it should instead be related to the number of OSDs in the
cluster, but I'm not sure how.  And testing this sort of heuristic would
require different clusters, which aren't particularly easy to get.  Anyway,
comments are welcome!
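
Just to illustrate, one possible shape for the wsize-based heuristic (a
strawman, not something implemented in this series) would be to derive the
limit from how many objects fit in wsize:

  /* strawman: cap in-flight copy-from requests by wsize / object size */
  static unsigned int copy_from_throttle(u64 wsize, u64 object_size)
  {
  	u64 limit = div64_u64(wsize, object_size);

  	/* always allow at least one request in flight */
  	return limit ? (unsigned int)limit : 1;
  }

With a 64M wsize that would give 8 in-flight requests for 8M objects and 16
for 4M objects, but around a thousand for 65k objects -- which is probably
where a cap based on the cluster (e.g. the number of OSDs) would come in.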

Cheers,
--
Luis

Luis Henriques (3):
  libceph: add non-blocking version of ceph_osdc_copy_from()
  ceph: parallelize all copy-from requests in copy_file_range
  ceph: add module param to throttle 'copy-from2' operations

 fs/ceph/file.c                  | 48 ++++++++++++++++++++++++++--
 fs/ceph/super.c                 |  4 +++
 fs/ceph/super.h                 |  2 ++
 include/linux/ceph/osd_client.h | 14 +++++++++
 net/ceph/osd_client.c           | 55 +++++++++++++++++++++++++--------
 5 files changed, 108 insertions(+), 15 deletions(-)


Thread overview: 12+ messages (newest: 2020-01-28 17:16 UTC)
2020-01-27 16:43 [RFC PATCH 0/3] parallel 'copy-from' Ops in copy_file_range Luis Henriques
2020-01-27 16:43 ` [RFC PATCH 1/3] libceph: add non-blocking version of ceph_osdc_copy_from() Luis Henriques
2020-01-27 17:47   ` Ilya Dryomov
2020-01-27 18:39     ` Luis Henriques
2020-01-27 16:43 ` [RFC PATCH 2/3] ceph: parallelize all copy-from requests in copy_file_range Luis Henriques
2020-01-27 17:58   ` Ilya Dryomov
2020-01-27 18:44     ` Luis Henriques
2020-01-28  8:39       ` Ilya Dryomov
2020-01-27 16:43 ` [RFC PATCH 3/3] ceph: add module param to throttle 'copy-from2' operations Luis Henriques
2020-01-27 18:16 ` [RFC PATCH 0/3] parallel 'copy-from' Ops in copy_file_range Ilya Dryomov
2020-01-27 18:52   ` Luis Henriques
2020-01-28 17:15     ` Gregory Farnum
