* Fwd: [PATCH] cifs: add .splice_write
  From: Steve French @ 2018-01-26 21:25 UTC
  To: Jens Axboe, linux-fsdevel, Andrés Souto
  Cc: CIFS, samba-technical

The patch did look like it improves the performance of file copy from
Nautilus when I tried it, but I would prefer a testing tool with which
I could measure the improvement more precisely. Is there a good
command line tool that generates splice_write indirectly? As expected,
cp and dd don't seem to be affected by this patch.

In looking back at the commit history, I noticed Jens's much older
commit from 2007 (5ffc4ef45b3b0a57872f631b4e4ceb8ace0d7496), which
added the callouts to splice_read, and more recently Al's commit
8d0207652cbe27d1f962050737848e5ad4671958, which introduced
iter_file_splice_write a couple of years ago (he updated cifs.ko via
commit 3dae8750c368f8ac11c3c8c2a28f56dcee865c01 to add write_iter
support). Interesting that splice write wasn't added to cifs.ko then,
but it does seem to help in the case you describe (Nautilus or other
apps that use splice).

To localhost (Samba server, with cifs.ko on the client using default
mount options, current kernel rc9) on my laptop, my quick tests showed
>500MB/sec with dd (1M block size), so I will be curious what the
splice syscall results in.

---------- Forwarded message ----------
From: Andrés Souto <kai670@gmail.com>
Date: Fri, Jan 26, 2018 at 12:13 PM
Subject: Re: [PATCH] cifs: add .splice_write
To: Steve French <sfrench@samba.org>
Cc: linux-cifs@vger.kernel.org, linux-kernel@vger.kernel.org

I want to explain a bit about the problem this patch solves. Without
it, I always obtain a transfer rate below 9 MB/s when writing to a
cifs share using the splice syscall, independently of the buffer size
used. However, when reading from the cifs share, I obtain around
105 MB/s.

After applying the patch, the transfer rate I obtain writing to the
cifs share, depending on the buffer size, is:

  blocksize (kB) | transfer rate (MB/s)
  ---------------+---------------------
             64  |  51.52
            128  |  65.41
            256  |  76.64
            512  |  85.72
           1024  |  90.21

--
Thanks,

Steve
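To make the measurement concrete: below is a minimal sketch in C of a
splice(2)-based copy loop of the kind described above. splice(2)
requires a pipe on one side of each call, hence the intermediary pipe;
the 64 kB chunk size, flags, and error handling are illustrative
assumptions, not taken from Andrés's actual test program.

  /* sketch: copy src to dst via splice(2), e.g. onto a cifs mount */
  #define _GNU_SOURCE
  #include <fcntl.h>
  #include <stdio.h>
  #include <unistd.h>

  int main(int argc, char **argv)
  {
      if (argc != 3) {
          fprintf(stderr, "usage: %s <src> <dst>\n", argv[0]);
          return 1;
      }
      int in = open(argv[1], O_RDONLY);
      int out = open(argv[2], O_WRONLY | O_CREAT | O_TRUNC, 0644);
      int p[2];
      if (in < 0 || out < 0 || pipe(p) < 0) {
          perror("setup");
          return 1;
      }
      for (;;) {
          /* file -> pipe: splice(2) needs a pipe on one side */
          ssize_t n = splice(in, NULL, p[1], NULL, 64 * 1024,
                             SPLICE_F_MOVE | SPLICE_F_MORE);
          if (n <= 0) {
              if (n < 0) perror("splice in");
              break;          /* 0 means EOF */
          }
          /* pipe -> file: this is the path that hits ->splice_write */
          while (n > 0) {
              ssize_t m = splice(p[0], NULL, out, NULL, n,
                                 SPLICE_F_MOVE | SPLICE_F_MORE);
              if (m <= 0) {
                  if (m < 0) perror("splice out");
                  return 1;
              }
              n -= m;
          }
      }
      return 0;
  }

Note the default pipe capacity is 64 kB, so reproducing the larger
block sizes in the table above would also need fcntl(F_SETPIPE_SZ) on
the pipe.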
* Re: Fwd: [PATCH] cifs: add .splice_write
  From: Jens Axboe @ 2018-01-27 14:31 UTC
  To: Steve French, linux-fsdevel, Andrés Souto
  Cc: CIFS, samba-technical

On 1/26/18 2:25 PM, Steve French wrote:
> The patch did look like it improves the performance of file copy from
> Nautilus when I tried it, but I would prefer a testing tool with which
> I could measure the improvement more precisely. Is there a good
> command line tool that generates splice_write indirectly? As expected,
> cp and dd don't seem to be affected by this patch.

You can use fio with ioengine=splice. Or if you have any sendfile()
based tools, that should also do the trick, as sendfile() is just a
wrapper around splice.

--
Jens Axboe
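For instance, a fio job along these lines should exercise the cifs
splice write path. This is a sketch assuming a cifs share mounted at
/mnt/cifs (a placeholder path) and a fio build that includes the
splice engine; vary bs to reproduce the block-size sweep from the
first message:

  [splice-write]
  ioengine=splice
  rw=write
  bs=64k
  size=1g
  directory=/mnt/cifs

Save it as a job file and run it as "fio splice-write.fio" (the file
name is likewise a placeholder).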
* Re: Fwd: [PATCH] cifs: add .splice_write
  From: Andrés Souto @ 2018-01-29 10:47 UTC
  To: Steve French
  Cc: Jens Axboe, linux-fsdevel, CIFS, samba-technical

I have been trying to reproduce it with a localhost Samba server, but
I wasn't able to. I also tried adding delays and limiting bandwidth
using tc, with the idea of replicating the network delays of a gigabit
ethernet network, but I didn't see a significant difference either.

So I investigated what differs between writing to the NAS (where I
discovered the problem) and writing to the Samba server at localhost,
and I saw completely different behaviour in the amount of data carried
by each SMB message:

  writing to NAS          data per SMB message
  --------------------------------------------
  read/write 64kB         64kB
  read/write 1MB          1024kB
  splice 64kB             4096B
  splice 64kB (PATCH)     64kB
  splice 1MB (PATCH)      1024kB

  writing to localhost samba
  --------------------------------------------
  read/write 64kB         1MB
  splice 64kB             1MB
  splice 64kB (PATCH)     1MB

In every case, the server seems to send one SMB ack message for each
block of data, and the client doesn't send new data until the ack is
received. I suppose this explains why the throughput is ridiculously
low in the situations where an SMB message carries so little data.
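A rough stop-and-wait estimate is consistent with this: if the client
sends nothing until each ack arrives, throughput is bounded by the
payload per SMB message divided by the round-trip time. Assuming an
RTT on the order of 0.5 ms — a guessed figure for a gigabit LAN, not
one measured in this thread:

  throughput ≈ payload per message / RTT

  4096 B  / 0.5 ms ≈   8 MB/s  (close to the <9 MB/s seen unpatched)
  64 kB   / 0.5 ms ≈ 131 MB/s  (at which point the gigabit link
                                itself, ~110 MB/s, becomes the limit)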