linux-nfs.vger.kernel.org archive mirror
* Re: [nfsv4] Inter server-side copy performance
       [not found] <DFE33002-FE1C-4E83-B6E3-50BFD304C7F6@netapp.com>
@ 2017-04-13 17:45 ` J. Bruce Fields
  2017-04-14 20:09   ` Mora, Jorge
  0 siblings, 1 reply; 10+ messages in thread
From: J. Bruce Fields @ 2017-04-13 17:45 UTC (permalink / raw)
  To: Mora, Jorge; +Cc: nfsv4@ietf.org, linux-nfs

On Wed, Apr 12, 2017 at 02:47:30PM +0000, Mora, Jorge wrote:
> The following shows a comparison between inter server-side copy and a traditional copy.

Thanks for doing this!  Adding the linux-nfs list, I hope that's OK.

> Setup:
>     Client: 16 CPUs, 32GB
>     SRC server: 4 CPUs, 8GB
>     DST server: 4 CPUs, 8GB

Could you also tell us about the network?  How much bandwidth is
available between the two servers, and between each server and the
client?  Disk bandwidth might be useful to know too.

>     All machines are running 4.10.0-ssc-02242017.1345,
>       4.10.0 kernel on top of RHEL 7.3 with Olga’s SSC patches 02242017.tgz (asynchronous SSC)
> Test:
>     The test runs 10 times for each copy size and the average is compared.
>     The servers are mounted and un-mounted for each copy (using default mount options)
>     Testing copy size: 1KB – 4GB. NOTE: the last copy is 4GB-1 bytes, the maximum copy_file_range() accepts.

If the client unmounts the servers but the servers keep their exports
mounted, then the performance may depend on whether data is already
cached on the source server, since all these copies fit in RAM.

Are you timing just the copy_file_range() call, or do you include a
following sync?
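
(For illustration, a minimal sketch of the timed region in question;
hypothetical harness code, not the actual nfstest implementation, with
error handling mostly elided:)

    #define _GNU_SOURCE     /* for copy_file_range(); on glibc without the
                               wrapper, use syscall(__NR_copy_file_range, ...) */
    #include <sys/types.h>
    #include <time.h>
    #include <unistd.h>

    double timed_copy(int src_fd, int dst_fd, size_t len)
    {
        struct timespec t0, t1;
        loff_t off_in = 0, off_out = 0;

        clock_gettime(CLOCK_MONOTONIC, &t0);
        while (len > 0) {
            ssize_t n = copy_file_range(src_fd, &off_in,
                                        dst_fd, &off_out, len, 0);
            if (n <= 0)
                break;                  /* error handling elided */
            len -= (size_t)n;
        }
        fsync(dst_fd);                  /* the "following sync" in question */
        clock_gettime(CLOCK_MONOTONIC, &t1);
        return (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    }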

> Results:
>     For a copy size below 16MB, traditional copy runs faster than server-side copy
>     For a copy size of 32MB and above, server-side copy is at least 30% faster.
>     For a copy size of 128MB and above, server-side copy is about 50% faster.
>     For the 1GB and 2GB copy sizes, the performance improvement is only about 30-40%; we are investigating why this is happening.

Might also be interesting to look at performance when copying a larger
file with multiple copy_file_range() calls.

--b.

> *** Inter-SSC performance test
>     TEST: Running test 'perf02'
> 
> >> Performance degradation:
>     INFO: 00:08:50.035165 - perf02 copy with size 1KB
>     INFO: 00:09:01.302004 - Server-side COPY: 0.104049992561 seconds
>     INFO: 00:09:01.302206 - Traditional COPY: 0.0319610118866 seconds
>     FAIL: SSC should outperform traditional copy, performance degradation for a 1KB file: 225%
>     INFO: 00:09:01.302378 - perf02 copy with size 2KB
>     INFO: 00:09:12.332144 - Server-side COPY: 0.0923588514328 seconds
>     INFO: 00:09:12.332335 - Traditional COPY: 0.033596777916 seconds
>     FAIL: SSC should outperform traditional copy, performance degradation for a 2KB file: 174%
>     INFO: 00:09:12.332510 - perf02 copy with size 4KB
>     INFO: 00:09:23.320714 - Server-side COPY: 0.100715565681 seconds
>     INFO: 00:09:23.320915 - Traditional COPY: 0.0336240530014 seconds
>     FAIL: SSC should outperform traditional copy, performance degradation for a 4KB file: 199%
>     INFO: 00:09:23.321086 - perf02 copy with size 8KB
>     INFO: 00:09:34.542515 - Server-side COPY: 0.0881641149521 seconds
>     INFO: 00:09:34.542742 - Traditional COPY: 0.0336608886719 seconds
>     FAIL: SSC should outperform traditional copy, performance degradation for a 8KB file: 161%
>     INFO: 00:09:34.542924 - perf02 copy with size 16KB
>     INFO: 00:09:45.439424 - Server-side COPY: 0.0939898014069 seconds
>     INFO: 00:09:45.439630 - Traditional COPY: 0.0277063846588 seconds
>     FAIL: SSC should outperform traditional copy, performance degradation for a 16KB file: 239%
>     INFO: 00:09:45.439795 - perf02 copy with size 32KB
>     INFO: 00:09:56.103480 - Server-side COPY: 0.0798309087753 seconds
>     INFO: 00:09:56.103691 - Traditional COPY: 0.0335722208023 seconds
>     FAIL: SSC should outperform traditional copy, performance degradation for a 32KB file: 137%
>     INFO: 00:09:56.103848 - perf02 copy with size 64KB
>     INFO: 00:10:07.217026 - Server-side COPY: 0.106491327286 seconds
>     INFO: 00:10:07.217228 - Traditional COPY: 0.034387588501 seconds
>     FAIL: SSC should outperform traditional copy, performance degradation for a 64KB file: 209%
>     INFO: 00:10:07.217400 - perf02 copy with size 128KB
>     INFO: 00:10:18.022320 - Server-side COPY: 0.105627346039 seconds
>     INFO: 00:10:18.022516 - Traditional COPY: 0.0511318922043 seconds
>     FAIL: SSC should outperform traditional copy, performance degradation for a 128KB file: 106%
>     INFO: 00:10:18.022686 - perf02 copy with size 256KB
>     INFO: 00:10:29.452353 - Server-side COPY: 0.124787020683 seconds
>     INFO: 00:10:29.452538 - Traditional COPY: 0.0552408218384 seconds
>     FAIL: SSC should outperform traditional copy, performance degradation for a 256KB file: 125%
>     INFO: 00:10:29.452713 - perf02 copy with size 512KB
>     INFO: 00:10:41.357315 - Server-side COPY: 0.110602092743 seconds
>     INFO: 00:10:41.357489 - Traditional COPY: 0.0643961668015 seconds
>     FAIL: SSC should outperform traditional copy, performance degradation for a 512KB file: 71%
>     INFO: 00:10:41.357656 - perf02 copy with size 1MB
>     INFO: 00:10:52.704159 - Server-side COPY: 0.118950200081 seconds
>     INFO: 00:10:52.704341 - Traditional COPY: 0.0703304767609 seconds
>     FAIL: SSC should outperform traditional copy, performance degradation for a 1MB file: 69%
>     INFO: 00:10:52.704499 - perf02 copy with size 2MB
>     INFO: 00:11:05.367265 - Server-side COPY: 0.13633646965 seconds
>     INFO: 00:11:05.367451 - Traditional COPY: 0.103643512726 seconds
>     FAIL: SSC should outperform traditional copy, performance degradation for a 2MB file: 31%
>     INFO: 00:11:05.367614 - perf02 copy with size 4MB
>     INFO: 00:11:18.872004 - Server-side COPY: 0.181165075302 seconds
>     INFO: 00:11:18.872204 - Traditional COPY: 0.164586114883 seconds
>     FAIL: SSC should outperform traditional copy, performance degradation for a 4MB file: 10%
>     INFO: 00:11:18.872368 - perf02 copy with size 8MB
>     INFO: 00:11:35.093071 - Server-side COPY: 0.29020409584 seconds
>     INFO: 00:11:35.093297 - Traditional COPY: 0.283315181732 seconds
>     FAIL: SSC should outperform traditional copy, performance degradation for a 8MB file: 2%
> 
> >> Performance gain:
>     INFO: 00:11:35.093446 - perf02 copy with size 16MB
>     INFO: 00:11:54.779844 - Server-side COPY: 0.455569577217 seconds
>     INFO: 00:11:54.780038 - Traditional COPY: 0.506252598763 seconds
>     PASS: SSC should outperform traditional copy, performance improvement for a 16MB file: 11%
>     INFO: 00:11:54.780185 - perf02 copy with size 32MB
>     INFO: 00:12:22.415131 - Server-side COPY: 0.71369125843 seconds
>     INFO: 00:12:22.415319 - Traditional COPY: 0.93473637104 seconds
>     PASS: SSC should outperform traditional copy, performance improvement for a 32MB file: 30%
>     INFO: 00:12:22.415466 - perf02 copy with size 64MB
>     INFO: 00:13:03.498260 - Server-side COPY: 1.23845338821 seconds
>     INFO: 00:13:03.498456 - Traditional COPY: 1.73098192215 seconds
>     PASS: SSC should outperform traditional copy, performance improvement for a 64MB file: 39%
>     INFO: 00:13:03.498637 - perf02 copy with size 128MB
>     INFO: 00:14:11.502475 - Server-side COPY: 2.22639911175 seconds
>     INFO: 00:14:11.502896 - Traditional COPY: 3.34778087139 seconds
>     PASS: SSC should outperform traditional copy, performance improvement for a 128MB file: 50%
>     INFO: 00:14:11.503102 - perf02 copy with size 256MB
>     INFO: 00:16:12.323350 - Server-side COPY: 4.29401872158 seconds
>     INFO: 00:16:12.323537 - Traditional COPY: 6.54622249603 seconds
>     PASS: SSC should outperform traditional copy, performance improvement for a 256MB file: 52%
>     INFO: 00:16:12.323703 - perf02 copy with size 512MB
>     INFO: 00:19:58.311598 - Server-side COPY: 8.3841770649 seconds
>     INFO: 00:19:58.311793 - Traditional COPY: 12.9200412273 seconds
>     PASS: SSC should outperform traditional copy, performance improvement for a 512MB file: 54%
>     INFO: 00:19:58.311935 - perf02 copy with size 1GB
>     INFO: 00:26:01.275809 - Server-side COPY: 14.5967838049 seconds
>     INFO: 00:26:01.276020 - Traditional COPY: 20.1957453728 seconds
>     PASS: SSC should outperform traditional copy, performance improvement for a 1GB file: 38%
>     INFO: 00:26:01.276175 - perf02 copy with size 2GB
>     INFO: 00:35:29.990220 - Server-side COPY: 23.606795764 seconds
>     INFO: 00:35:29.990432 - Traditional COPY: 31.4261539459 seconds
>     PASS: SSC should outperform traditional copy, performance improvement for a 2GB file: 33%
>     INFO: 00:35:29.990580 - perf02 copy with size 4GB
>     INFO: 00:53:17.198827 - Server-side COPY: 41.2261408091 seconds
>     INFO: 00:53:17.199033 - Traditional COPY: 62.9210350513 seconds
>     PASS: SSC should outperform traditional copy, performance improvement for a 4GB file: 52%
>     TIME: 44m27.164286s
> 
> Logfile: /home/mora/logs/nfstest_ssc_20170412000849.log
> 
> 23 tests (9 passed, 14 failed)
> 
> Total time: 44m28.334177s
> 
> 
> 
> --Jorge


* Re: [nfsv4] Inter server-side copy performance
  2017-04-13 17:45 ` [nfsv4] Inter server-side copy performance J. Bruce Fields
@ 2017-04-14 20:09   ` Mora, Jorge
  2017-04-14 21:22     ` Olga Kornievskaia
  0 siblings, 1 reply; 10+ messages in thread
From: Mora, Jorge @ 2017-04-14 20:09 UTC (permalink / raw)
  To: J. Bruce Fields; +Cc: nfsv4@ietf.org, linux-nfs@vger.kernel.org

On 4/13/17, 11:45 AM, "J. Bruce Fields" <bfields@fieldses.org> wrote:

> On Wed, Apr 12, 2017 at 02:47:30PM +0000, Mora, Jorge wrote:
> > The following shows a comparison between inter server-side copy and a traditional copy.
> 
> Thanks for doing this!  Adding the linux-nfs list, I hope that's OK.
> 
> > Setup:
> >     Client: 16 CPUs, 32GB
> >     SRC server: 4 CPUs, 8GB
> >     DST server: 4 CPUs, 8GB
> 
> Could you also tell us about the network?  How much bandwidth is
> available between the two servers, and between each server and the
> client?  Disk bandwidth might be useful to know too.

Client:     pair    (1000baseT/Full)
SRC Server: ricoh   (1000baseT/Full)
DST Server: haddock (1000baseT/Full)

Transfer from source server to client:
[mora@pair test]$ scp ricoh:/home/exports/nfstest_ssc_source_file /dev/null
nfstest_ssc_source_file                       100% 8192MB 110.7MB/s   01:14

Transfer from destination server to client:
[mora@pair test]$ scp haddock:/home/exports/nfstest_ssc_source_file /dev/null
nfstest_ssc_source_file                       100% 8192MB 112.2MB/s   01:13

Transfer from source to destination server:
[mora@haddock ~]$ scp ricoh:/home/exports/nfstest_ssc_source_file /dev/null
nfstest_ssc_source_file                       100% 8192MB 110.7MB/s   01:14

Disk I/O on source server:
[mora@ricoh ~]$ sudo sh -c "echo 3 > /proc/sys/vm/drop_caches"
[mora@ricoh ~]$ dd if=/home/exports/nfstest_ssc_source_file of=/dev/null bs=8k count=1024k
1048576+0 records in
1048576+0 records out
8589934592 bytes (8.6 GB) copied, 56.8953 s, 151 MB/s
[mora@ricoh ~]$ sudo sh -c "echo 3 > /proc/sys/vm/drop_caches"
[mora@ricoh ~]$ dd if=/dev/zero of=/home/exports/test bs=8k count=1024k
1048576+0 records in
1048576+0 records out
8589934592 bytes (8.6 GB) copied, 46.9064 s, 183 MB/s

Disk I/O on destination server:
[mora@haddock ~]$ sudo sh -c "echo 3 > /proc/sys/vm/drop_caches"
[mora@haddock ~]$ dd if=/home/exports/nfstest_ssc_source_file of=/dev/null bs=8k count=1024k
1048576+0 records in
1048576+0 records out
8589934592 bytes (8.6 GB) copied, 57.3356 s, 150 MB/s
[mora@haddock ~]$ sudo sh -c "echo 3 > /proc/sys/vm/drop_caches"
[mora@haddock ~]$ dd if=/dev/zero of=/home/exports/test bs=8k count=1024k
1048576+0 records in
1048576+0 records out
8589934592 bytes (8.6 GB) copied, 47.3809 s, 181 MB/s

> >     All machines are running 4.10.0-ssc-02242017.1345,
> >       4.10.0 kernel on top of RHEL 7.3 with Olga’s SSC patches 02242017.tgz (asynchronous SSC)
> > Test:
> >     The test runs 10 times for each copy size and the average is compared.
> >     The servers are mounted and un-mounted for each copy (using default mount options)
> >     Testing copy size: 1KB – 4GB. NOTE: the last copy is 4GB-1 bytes, the maximum copy_file_range() accepts.
> 
> If the client unmounts the servers but the servers keep their exports
> mounted, then the performance may depend on whether data is already
> cached on the source server, since all these copies fit in RAM.

Good point, I will clear the VM caches using "echo 3 > /proc/sys/vm/drop_caches"
on both servers before each copy.

> Are you timing just the copy_file_range() call, or do you include a
> following sync?

I am timing from right before calling copy_file_range() up to the fsync() and close() of the destination file.
The traditional copy is timed the same way: from right before the first read on the source file up to the
fsync() and close() of the destination file.

> > Results:
> >     For a copy size below 16MB, traditional copy runs faster than server-side copy
> >     For a copy size of 32MB and above, server-side copy is at least 30% faster.
> >     For a copy size of 128MB and above, server-side copy is about 50% faster.
> >     For the 1GB and 2GB copy sizes, the performance improvement is only about 30-40%; we are investigating why this is happening.
> 
> Might also be interesting to look at performance when copying a larger
> file with multiple copy_file_range() calls.

I will do this as well.


--Jorge

> --b.


* Re: [nfsv4] Inter server-side copy performance
  2017-04-14 20:09   ` Mora, Jorge
@ 2017-04-14 21:22     ` Olga Kornievskaia
  2017-04-17 13:36       ` J. Bruce Fields
  0 siblings, 1 reply; 10+ messages in thread
From: Olga Kornievskaia @ 2017-04-14 21:22 UTC (permalink / raw)
  To: Mora, Jorge; +Cc: J. Bruce Fields, nfsv4@ietf.org, linux-nfs@vger.kernel.org

On Fri, Apr 14, 2017 at 4:09 PM, Mora, Jorge <Jorge.Mora@netapp.com> wrote:
> On 4/13/17, 11:45 AM, "J. Bruce Fields" <bfields@fieldses.org> wrote:
>
>> On Wed, Apr 12, 2017 at 02:47:30PM +0000, Mora, Jorge wrote:
>> > The following shows a comparison between inter server-side copy and a traditional copy.
>>
>> Thanks for doing this!  Adding the linux-nfs list, I hope that's OK.
>>
>> > Setup:
>> >     Client: 16 CPUs, 32GB
>> >     SRC server: 4 CPUs, 8GB
>> >     DST server: 4 CPUs, 8GB
>>
>> Could you also tell us about the network?  How much bandwidth is
>> available between the two servers, and between each server and the
>> client?  Disk bandwidth might be useful to know too.
>
> Client:     pair    (1000baseT/Full)
> SRC Server: ricoh   (1000baseT/Full)
> DST Server: haddock (1000baseT/Full)
>
> Transfer from source server to client:
> [mora@pair test]$ scp ricoh:/home/exports/nfstest_ssc_source_file /dev/null
> nfstest_ssc_source_file                       100% 8192MB 110.7MB/s   01:14
>
> Transfer from destination server to client:
> [mora@pair test]$ scp haddock:/home/exports/nfstest_ssc_source_file /dev/null
> nfstest_ssc_source_file                       100% 8192MB 112.2MB/s   01:13
>
> Transfer from source to destination server:
> [mora@haddock ~]$ scp ricoh:/home/exports/nfstest_ssc_source_file /dev/null
> nfstest_ssc_source_file                       100% 8192MB 110.7MB/s   01:14
>
> Disk I/O on source server:
> [mora@ricoh ~]$ sudo sh -c "echo 3 > /proc/sys/vm/drop_caches"
> [mora@ricoh ~]$ dd if=/home/exports/nfstest_ssc_source_file of=/dev/null bs=8k count=1024k
> 1048576+0 records in
> 1048576+0 records out
> 8589934592 bytes (8.6 GB) copied, 56.8953 s, 151 MB/s
> [mora@ricoh ~]$ sudo sh -c "echo 3 > /proc/sys/vm/drop_caches"
> [mora@ricoh ~]$ dd if=/dev/zero of=/home/exports/test bs=8k count=1024k
> 1048576+0 records in
> 1048576+0 records out
> 8589934592 bytes (8.6 GB) copied, 46.9064 s, 183 MB/s
>
> Disk I/O on destination server:
> [mora@haddock ~]$ sudo sh -c "echo 3 > /proc/sys/vm/drop_caches"
> [mora@haddock ~]$ dd if=/home/exports/nfstest_ssc_source_file of=/dev/null bs=8k count=1024k
> 1048576+0 records in
> 1048576+0 records out
> 8589934592 bytes (8.6 GB) copied, 57.3356 s, 150 MB/s
> [mora@haddock ~]$ sudo sh -c "echo 3 > /proc/sys/vm/drop_caches"
> [mora@haddock ~]$ dd if=/dev/zero of=/home/exports/test bs=8k count=1024k
> 1048576+0 records in
> 1048576+0 records out
> 8589934592 bytes (8.6 GB) copied, 47.3809 s, 181 MB/s
>
>> >     All machines are running 4.10.0-ssc-02242017.1345,
>> >       4.10.0 kernel on top of RHEL 7.3 with Olga’s SSC patches 02242017.tgz (asynchronous SSC)
>> > Test:
>> >     The test runs 10 times for each copy size and the average is compared.
>> >     The servers are mounted and un-mounted for each copy (using default mount options)
>> >     Testing copy size: 1KB – 4GB. NOTE: the last copy is 4GB-1 bytes, the maximum copy_file_range() accepts.
>>
>> If the client unmounts the servers but the servers keep their exports
>> mounted, then the performance may depend on whether data is already
>> cached on the source server, since all these copies fit in RAM.
>
> Good point, I will clear the VM caches using "echo 3 > /proc/sys/vm/drop_caches"
> on both servers before each copy.
>
>> Are you timing just the copy_file_range() call, or do you include a
>> following sync?
>
> I am timing from right before calling copy_file_range() up to the fsync() and close() of the destination file.
> The traditional copy is timed the same way: from right before the first read on the source file up to the
> fsync() and close() of the destination file.

Why do we need a sync after copy_file_range()?  The kernel's
copy_file_range() will send commits for any unstable copies it
received.
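
(In NFSv4.2 terms, the exchange being described is roughly the
following; a simplified sketch, not an actual trace:)

    CLIENT -> DST SERVER:  COPY (source, destination, offset, count)
    DST -> CLIENT:         wr_count, wr_committed = UNSTABLE4,
                           wr_writeverf
    CLIENT -> DST SERVER:  COMMIT (offset, count)    <- the commit above
    DST -> CLIENT:         writeverf (must match wr_writeverf, else
                           the range has to be copied again)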

>
>> > Results:
>> >     For a copy size below 16MB, traditional copy runs faster than server-side copy
>> >     For a copy size of 32MB and above, server-side copy is at least 30% faster.
>> >     For a copy size of 128MB and above, server-side copy is about 50% faster.
>> >     For the 1GB and 2GB copy sizes, the performance improvement is only about 30-40%; we are investigating why this is happening.
>>
>> Might also be interesting to look at performance when copying a larger
>> file with multiple copy_file_range() calls.
>
> I will do this as well.
>
>
> --Jorge
>
>> --b.


* Re: [nfsv4] Inter server-side copy performance
  2017-04-14 21:22     ` Olga Kornievskaia
@ 2017-04-17 13:36       ` J. Bruce Fields
  2017-04-17 15:30         ` Olga Kornievskaia
  0 siblings, 1 reply; 10+ messages in thread
From: J. Bruce Fields @ 2017-04-17 13:36 UTC (permalink / raw)
  To: Olga Kornievskaia; +Cc: Mora, Jorge, nfsv4@ietf.org, linux-nfs@vger.kernel.org

On Fri, Apr 14, 2017 at 05:22:13PM -0400, Olga Kornievskaia wrote:
> On Fri, Apr 14, 2017 at 4:09 PM, Mora, Jorge <Jorge.Mora@netapp.com> wrote:
> > On 4/13/17, 11:45 AM, "J. Bruce Fields" <bfields@fieldses.org> wrote:
> >> Are you timing just the copy_file_range() call, or do you include a
> >> following sync?
> >
> > I am timing from right before calling copy_file_range() up to the fsync() and close() of the destination file.
> > The traditional copy is timed the same way: from right before the first read on the source file up to the
> > fsync() and close() of the destination file.
> 
> Why do we need a sync after copy_file_range()?  The kernel's
> copy_file_range() will send commits for any unstable copies it
> received.

Why does it do that?  As far as I can tell it's not required by
documentation for copy_file_range() or COPY.  COPY has a write verifier
and a stable_how argument in the reply.  Skipping the commits would
allow better performance in case a copy requires multiple COPY calls.
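
(For reference, the COPY reply fields in question, from RFC 7862:

    struct write_response4 {
            stateid4        wr_callback_id<1>;
            length4         wr_count;
            stable_how4     wr_committed;
            verifier4       wr_writeverf;
    };

A wr_committed of UNSTABLE4 plus the wr_writeverf verifier is what
would let a client defer the COMMIT across multiple COPY calls.)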

But, in any case, if copy_file_range() already committed then it
probably doesn't make a significant difference to the timing whether you
include a following sync and/or close.

--b.


* Re: [nfsv4] Inter server-side copy performance
  2017-04-17 13:36       ` J. Bruce Fields
@ 2017-04-17 15:30         ` Olga Kornievskaia
  2017-04-17 15:57           ` Anna Schumaker
       [not found]           ` <CAFX2JfkiraKm2Rmqhkrh3CSWBoYfW0QU=uXw=sSx-8Wt8JD7wg@mail.gmail.com>
  0 siblings, 2 replies; 10+ messages in thread
From: Olga Kornievskaia @ 2017-04-17 15:30 UTC (permalink / raw)
  To: J. Bruce Fields; +Cc: linux-nfs@vger.kernel.org, nfsv4@ietf.org

On Mon, Apr 17, 2017 at 9:36 AM, J. Bruce Fields <bfields@fieldses.org> wrote:
> On Fri, Apr 14, 2017 at 05:22:13PM -0400, Olga Kornievskaia wrote:
>> On Fri, Apr 14, 2017 at 4:09 PM, Mora, Jorge <Jorge.Mora@netapp.com> wrote:
>> > On 4/13/17, 11:45 AM, "J. Bruce Fields" <bfields@fieldses.org> wrote:
>> >> Are you timing just the copy_file_range() call, or do you include a
>> >> following sync?
>> >
>> > I am timing from right before calling copy_file_range() up to the fsync() and close() of the destination file.
>> > The traditional copy is timed the same way: from right before the first read on the source file up to the
>> > fsync() and close() of the destination file.
>>
>> Why do we need a sync after copy_file_range()?  The kernel's
>> copy_file_range() will send commits for any unstable copies it
>> received.
>
> Why does it do that?  As far as I can tell it's not required by
> documentation for copy_file_range() or COPY.  COPY has a write verifier
> and a stable_how argument in the reply.  Skipping the commits would
> allow better performance in case a copy requires multiple COPY calls.
>
> But, in any case, if copy_file_range() already committed then it
> probably doesn't make a significant difference to the timing whether you
> include a following sync and/or close.

Hm. It does make sense. Anna wrote the original code which included
the COMMIT after the copy, which I hadn't thought about.

Anna, any comments?


* Re: [nfsv4] Inter server-side copy performance
  2017-04-17 15:30         ` Olga Kornievskaia
@ 2017-04-17 15:57           ` Anna Schumaker
       [not found]           ` <CAFX2JfkiraKm2Rmqhkrh3CSWBoYfW0QU=uXw=sSx-8Wt8JD7wg@mail.gmail.com>
  1 sibling, 0 replies; 10+ messages in thread
From: Anna Schumaker @ 2017-04-17 15:57 UTC (permalink / raw)
  To: Olga Kornievskaia, J. Bruce Fields
  Cc: linux-nfs@vger.kernel.org, nfsv4@ietf.org

(Resending because Google Inbox didn't switch to plain text)

On 04/17/2017 11:30 AM, Olga Kornievskaia wrote:
> On Mon, Apr 17, 2017 at 9:36 AM, J. Bruce Fields <bfields@fieldses.org> wrote:
>> On Fri, Apr 14, 2017 at 05:22:13PM -0400, Olga Kornievskaia wrote:
>>> On Fri, Apr 14, 2017 at 4:09 PM, Mora, Jorge <Jorge.Mora@netapp.com> wrote:
>>>> On 4/13/17, 11:45 AM, "J. Bruce Fields" <bfields@fieldses.org> wrote:
>>>>> Are you timing just the copy_file_range() call, or do you include a
>>>>> following sync?
>>>>
>>>> I am timing from right before calling copy_file_range() up to the fsync() and close() of the destination file.
>>>> The traditional copy is timed the same way: from right before the first read on the source file up to the
>>>> fsync() and close() of the destination file.
>>>
>>> Why do we need a sync after copy_file_range()?  The kernel's
>>> copy_file_range() will send commits for any unstable copies it
>>> received.
>>
>> Why does it do that?  As far as I can tell it's not required by
>> documentation for copy_file_range() or COPY.  COPY has a write verifier
>> and a stable_how argument in the reply.  Skipping the commits would
>> allow better performance in case a copy requires multiple COPY calls.
>>
>> But, in any case, if copy_file_range() already committed then it
>> probably doesn't make a significant difference to the timing whether you
>> include a following sync and/or close.
> 
> Hm. It does make sense. Anna wrote the original code which included
> the COMMIT after the copy, which I hadn't thought about.
> 
> Anna, any comments?

I think the commit just seemed like a good idea at the time.  I'm okay with changing it if it doesn't make sense.

Anna



* Re: [nfsv4] Inter server-side copy performance
       [not found]           ` <CAFX2JfkiraKm2Rmqhkrh3CSWBoYfW0QU=uXw=sSx-8Wt8JD7wg@mail.gmail.com>
@ 2017-04-18 17:28             ` Olga Kornievskaia
  2017-04-18 18:33               ` J. Bruce Fields
  0 siblings, 1 reply; 10+ messages in thread
From: Olga Kornievskaia @ 2017-04-18 17:28 UTC (permalink / raw)
  To: Anna Schumaker; +Cc: J. Bruce Fields, linux-nfs@vger.kernel.org, nfsv4@ietf.org

On Mon, Apr 17, 2017 at 11:37 AM, Anna Schumaker
<schumakeranna@gmail.com> wrote:
>
>
> On Mon, Apr 17, 2017 at 11:30 AM Olga Kornievskaia <aglo@umich.edu> wrote:
>>
>> On Mon, Apr 17, 2017 at 9:36 AM, J. Bruce Fields <bfields@fieldses.org>
>> wrote:
>> > On Fri, Apr 14, 2017 at 05:22:13PM -0400, Olga Kornievskaia wrote:
>> >> On Fri, Apr 14, 2017 at 4:09 PM, Mora, Jorge <Jorge.Mora@netapp.com>
>> >> wrote:
>> >> > On 4/13/17, 11:45 AM, "J. Bruce Fields" <bfields@fieldses.org> wrote:
>> >> >> Are you timing just the copy_file_range() call, or do you include a
>> >> >> following sync?
>> >> >
>> >> > I am timing from right before calling copy_file_range() up to the
>> >> > fsync() and close() of the destination file.
>> >> > The traditional copy is timed the same way: from right before the
>> >> > first read on the source file up to the
>> >> > fsync() and close() of the destination file.
>> >>
>> >> Why do we need a sync after copy_file_range()?  The kernel's
>> >> copy_file_range() will send commits for any unstable copies it
>> >> received.
>> >
>> > Why does it do that?  As far as I can tell it's not required by
>> > documentation for copy_file_range() or COPY.  COPY has a write verifier
>> > and a stable_how argument in the reply.  Skipping the commits would
>> > allow better performance in case a copy requires multiple COPY calls.
>> >
>> > But, in any case, if copy_file_range() already committed then it
>> > probably doesn't make a significant difference to the timing whether you
>> > include a following sync and/or close.
>>
>> Hm. It does make sense. Anna wrote the original code which included
>> the COMMIT after the copy, which I hadn't thought about.
>>
>> Anna, any comments?
>
>
> I think the commit just seemed like a good idea at the time.  I'm okay with
> changing it if it doesn't make sense.

Given how the code is written now, it looks like it's not possible to
save up commits....

Here's what I can see happening:

nfs42_proc_clone() as well as nfs42_proc_copy() will call
nfs_sync_inode(dst) "to make sure server(s) have the latest data"
prior to initiating the clone/copy. So even if we just queue up (not
send) the commit after executing nfs42_proc_copy(), the next call
into vfs_copy_file_range() will send out that queued-up commit.

Is it ok to relax that requirement? I'm not sure...
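
(A simplified sketch of the ordering described above; paraphrased from
this discussion, not actual kernel source:)

    vfs_copy_file_range()
      -> nfs42_proc_copy()          (nfs42_proc_clone() is similar)
           nfs_sync_inode(dst)      flush dst, "latest data" on server
           ... COPY sent to the server ...
           ... COMMIT sent for any unstable copy (current behavior) ...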


* Re: [nfsv4] Inter server-side copy performance
  2017-04-18 17:28             ` Olga Kornievskaia
@ 2017-04-18 18:33               ` J. Bruce Fields
  2017-06-15 19:29                 ` Mora, Jorge
  0 siblings, 1 reply; 10+ messages in thread
From: J. Bruce Fields @ 2017-04-18 18:33 UTC (permalink / raw)
  To: Olga Kornievskaia
  Cc: Anna Schumaker, linux-nfs@vger.kernel.org, nfsv4@ietf.org

On Tue, Apr 18, 2017 at 01:28:39PM -0400, Olga Kornievskaia wrote:
> Given how the code is written now, it looks like it's not possible to
> save up commits....
> 
> Here's what I can see happening:
> 
> nfs42_proc_clone() as well as nfs42_proc_copy() will call
> nfs_sync_inode(dst) "to make sure server(s) have the latest data"
> prior to initiating the clone/copy. So even if we just queue up (not
> send) the commit after executing nfs42_proc_copy(), the next call
> into vfs_copy_file_range() will send out that queued-up commit.
> 
> Is it ok to relax that requirement? I'm not sure...

Well, if the typical case of copy_file_range is just opening a file,
doing a single big copy_file_range(), then closing the file, this
doesn't matter.

The Linux server is currently limiting COPY to 4MB at a time, which will
make the commits more annoying.

Even there the typical case will probably still be an open, followed by
a series of non-overlapping copies, then close, and that shouldn't
require the commits.
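
(For scale: with a 4MB clamp, the 4GB copy in the earlier results would
be issued as roughly 1024 COPY calls, so a COMMIT after every COPY
would add on the order of a thousand extra round trips compared with a
single COMMIT at the end of the series.)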

--b.


* Re: [nfsv4] Inter server-side copy performance
  2017-04-18 18:33               ` J. Bruce Fields
@ 2017-06-15 19:29                 ` Mora, Jorge
  2017-06-15 20:37                   ` J. Bruce Fields
  0 siblings, 1 reply; 10+ messages in thread
From: Mora, Jorge @ 2017-06-15 19:29 UTC (permalink / raw)
  To: J. Bruce Fields, Olga Kornievskaia
  Cc: Anna Schumaker, linux-nfs@vger.kernel.org, nfsv4@ietf.org

Here are the new numbers using the latest SSC code for an 8GB copy.
The code has a delayed unmount on the destination server which allows for a single
mount when multiple COPY calls are made back to back.
Also, there is a third option which is using ioctl with a 64-bit copy length in order to
issue a single call for copy lengths >= 4GB.

Setup:
     Client: 16 CPUs, 32GB
     SRC server: 4 CPUs, 8GB
     DST server: 4 CPUs, 8GB

Traditional copy:
    DBG2: 20:31:43.683595 - Traditional COPY returns 8589934590 (96.8432810307 seconds)
SSC (2 copy_file_range calls back to back):
    DBG2: 20:30:00.268203 - Server-side COPY returns 8589934590 (83.0517759323 seconds)
    PASS: SSC should outperform traditional copy, performance improvement for a 8GB file: 16%
SSC (2 copy_file_range calls in parallel):
    DBG2: 20:34:49.686573 - Server-side COPY returns 8589934590 (79.3080010414 seconds)
    PASS: SSC should outperform traditional copy, performance improvement for a 8GB file: 20%
SSC (1 ioctl call):
    DBG2: 20:38:41.323774 - Server-side COPY returns 8589934590 (74.7774350643 seconds)
    PASS: SSC should outperform traditional copy, performance improvement for a 8GB file: 28%

Since I don’t have three similar systems to test with, having the best machine (more CPUs and more memory)
as the client gives better performance for the traditional copy. The following results are done using
the best machine as the destination server instead.

Setup (using the best machine as the destination server instead):
     Client: 4 CPUs, 8GB
     SRC server: 4 CPUs, 8GB
     DST server: 16 CPUs, 32GB

Traditional copy:
    DBG2: 21:52:15.039625 - Traditional COPY returns 8589934590 (178.686635971 seconds)
SSC (2 copy_file_range calls back to back):
    DBG2: 21:49:08.961384 - Server-side COPY returns 8589934590 (173.071172953 seconds)
    PASS: SSC should outperform traditional copy, performance improvement for a 8GB file: 3%
SSC (2 copy_file_range calls in parallel):
    DBG2: 21:35:59.822467 - Server-side COPY returns 8589934590 (159.743849993 seconds)
    PASS: SSC should outperform traditional copy, performance improvement for a 8GB file: 18%
SSC (1 ioctl call):
    DBG2: 21:28:33.461528 - Server-side COPY returns 8589934590 (83.9983980656 seconds)
    PASS: SSC should outperform traditional copy, performance improvement for a 8GB file: 119%

As you can see, a single 8GB copy (ioctl with 64-bit copy length) performs the same as before (about 80 seconds)
but in this case the traditional copy takes a lot longer.


--Jorge
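
(Note on the numbers above: 8589934590 bytes = 2 x (2^32 - 1), so the
8GB copy is exactly two maximum-length copy_file_range() calls in the
first two cases, while the ioctl variant with a 64-bit length covers
it in a single call.)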


* Re: [nfsv4] Inter server-side copy performance
  2017-06-15 19:29                 ` Mora, Jorge
@ 2017-06-15 20:37                   ` J. Bruce Fields
  0 siblings, 0 replies; 10+ messages in thread
From: J. Bruce Fields @ 2017-06-15 20:37 UTC (permalink / raw)
  To: Mora, Jorge
  Cc: Olga Kornievskaia, Anna Schumaker, linux-nfs@vger.kernel.org,
	nfsv4@ietf.org

Thanks.

My main question is how close we get to what you'd expect given hardware
specs.  As long as it's in that neighborhood, people will know what
to expect.

For example, I'd expect server-to-server copy bandwidth to be roughly
the smallest of:

	- source server disk read bandwidth
	- destination server disk write bandwidth
	- network bandwidth

Which is actually the same as I'd expect for a traditional copy, except
that the network bandwidth might be different.

But in your case, I'm guessing it's gigabit all around (and drive
bandwidth high enough not to matter).  And if my arithmetic is right,
traditional copy is getting around 700Mb/s and server-to-server copy
between 800 and 900 Mb/s depending on exactly how we do it?  Kinda
curious why traditional copy isn't doing better; I'd've thought we'd
have that pretty well optimized by now.
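
(Working the numbers from the quoted 16-CPU-client run: 8589934590
bytes / 96.84 s is about 88.7 MB/s, roughly 710 Mbit/s, for the
traditional copy; 83.05 s, 79.31 s, and 74.78 s give about 103, 108,
and 115 MB/s, roughly 830-920 Mbit/s, for the server-side variants.)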

--b.

On Thu, Jun 15, 2017 at 07:29:24PM +0000, Mora, Jorge wrote:
> Here are the new numbers using the latest SSC code for an 8GB copy.
> The code has a delayed unmount on the destination server which allows for a single
> mount when multiple COPY calls are made back to back.
> Also, there is a third option which is using ioctl with a 64-bit copy length in order to
> issue a single call for copy lengths >= 4GB.
> 
> Setup:
>      Client: 16 CPUs, 32GB
>      SRC server: 4 CPUs, 8GB
>      DST server: 4 CPUs, 8GB
> 
> Traditional copy:
>     DBG2: 20:31:43.683595 - Traditional COPY returns 8589934590 (96.8432810307 seconds)
> SSC (2 copy_file_range calls back to back):
>     DBG2: 20:30:00.268203 - Server-side COPY returns 8589934590 (83.0517759323 seconds)
>     PASS: SSC should outperform traditional copy, performance improvement for a 8GB file: 16%
> SSC (2 copy_file_range calls in parallel):
>     DBG2: 20:34:49.686573 - Server-side COPY returns 8589934590 (79.3080010414 seconds)
>     PASS: SSC should outperform traditional copy, performance improvement for a 8GB file: 20%
> SSC (1 ioctl call):
>     DBG2: 20:38:41.323774 - Server-side COPY returns 8589934590 (74.7774350643 seconds)
>     PASS: SSC should outperform traditional copy, performance improvement for a 8GB file: 28%
> 
> Since I don’t have three similar systems to test with, having the best machine (more CPUs and more memory)
> as the client gives better performance for the traditional copy. The following results are done using
> the best machine as the destination server instead.
> 
> Setup (using the best machine as the destination server instead):
>      Client: 4 CPUs, 8GB
>      SRC server: 4 CPUs, 8GB
>      DST server: 16 CPUs, 32GB
> 
> Traditional copy:
>     DBG2: 21:52:15.039625 - Traditional COPY returns 8589934590 (178.686635971 seconds)
> SSC (2 copy_file_range calls back to back):
>     DBG2: 21:49:08.961384 - Server-side COPY returns 8589934590 (173.071172953 seconds)
>     PASS: SSC should outperform traditional copy, performance improvement for a 8GB file: 3%
> SSC (2 copy_file_range calls in parallel):
>     DBG2: 21:35:59.822467 - Server-side COPY returns 8589934590 (159.743849993 seconds)
>     PASS: SSC should outperform traditional copy, performance improvement for a 8GB file: 18%
> SSC (1 ioctl call):
>     DBG2: 21:28:33.461528 - Server-side COPY returns 8589934590 (83.9983980656 seconds)
>     PASS: SSC should outperform traditional copy, performance improvement for a 8GB file: 119%
> 
> As you can see, a single 8GB copy (ioctl with 64-bit copy length) performs the same as before (about 80 seconds)
> but in this case the traditional copy takes a lot longer.
> 
> 
> --Jorge

