* AIO and vectored I/O support for cifs
@ 2005-03-25 21:26 Steve French
From: Steve French @ 2005-03-25 21:26 UTC
  To: hch; +Cc: linux-fsdevel

Christoph,
I had time to add the generic vectored i/o and async i/o calls to cifs 
that you had suggested last month.  They are within the ifdef for the 
CIFS_EXPERIMENTAL config option for the time being.   I would like to do 
more testing of these though - are there any tests (even primitive ones) 
for readv/writev and async i/o?
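
Something along these lines would presumably be enough to exercise the
readv/writev entry points - a rough userspace sketch only, nothing
cifs-specific about it (the file name and buffer sizes are arbitrary):

/* crude readv/writev smoke test: write two buffers with writev(),
 * read them back with readv(), and compare */
#include <stdio.h>
#include <string.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/uio.h>

int main(int argc, char **argv)
{
	const char *path = argc > 1 ? argv[1] : "vectest.dat";
	char a[8192], b[8192], c[8192], d[8192];
	struct iovec wv[2], rv[2];
	int fd;

	memset(a, 'A', sizeof(a));
	memset(b, 'B', sizeof(b));

	fd = open(path, O_RDWR | O_CREAT | O_TRUNC, 0644);
	if (fd < 0) {
		perror("open");
		return 1;
	}

	wv[0].iov_base = a; wv[0].iov_len = sizeof(a);
	wv[1].iov_base = b; wv[1].iov_len = sizeof(b);
	if (writev(fd, wv, 2) != (ssize_t)(sizeof(a) + sizeof(b))) {
		perror("writev");
		return 1;
	}

	if (lseek(fd, 0, SEEK_SET) < 0) {
		perror("lseek");
		return 1;
	}

	rv[0].iov_base = c; rv[0].iov_len = sizeof(c);
	rv[1].iov_base = d; rv[1].iov_len = sizeof(d);
	if (readv(fd, rv, 2) != (ssize_t)(sizeof(c) + sizeof(d))) {
		perror("readv");
		return 1;
	}

	if (memcmp(a, c, sizeof(a)) || memcmp(b, d, sizeof(b))) {
		fprintf(stderr, "data miscompare\n");
		return 1;
	}
	printf("readv/writev ok\n");
	close(fd);
	return 0;
}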

Is there an easy way of measuring the performance benefit of these (vs. 
the fallback routines in fs/read_write.c)?  Presumably async and 
vectored i/o never kick in on a standard copy command such as cp or dd, 
and require a modified application that is vectored i/o aware or async 
i/o aware.
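
So measuring the async path presumably needs something libaio based, 
along the lines of the sketch below (assuming the usual 
io_setup/io_submit/io_getevents interface, link with -laio; file name 
and sizes are arbitrary) - the point being that several reads go out 
before we wait on any of them:

/* rough libaio sketch: submit several reads against one fd at once,
 * then reap all the completions */
#include <stdio.h>
#include <stdlib.h>
#include <fcntl.h>
#include <unistd.h>
#include <libaio.h>

#define NREQ	4
#define BUFSZ	(64 * 1024)

int main(int argc, char **argv)
{
	const char *path = argc > 1 ? argv[1] : "aiotest.dat";
	io_context_t ctx = 0;
	struct iocb cbs[NREQ], *cbp[NREQ];
	struct io_event events[NREQ];
	char *bufs[NREQ];
	int fd, i, ret;

	fd = open(path, O_RDONLY);
	if (fd < 0) {
		perror("open");
		return 1;
	}
	ret = io_setup(NREQ, &ctx);
	if (ret < 0) {
		fprintf(stderr, "io_setup: %d\n", ret);
		return 1;
	}

	for (i = 0; i < NREQ; i++) {
		bufs[i] = malloc(BUFSZ);
		if (!bufs[i])
			return 1;
		io_prep_pread(&cbs[i], fd, bufs[i], BUFSZ,
			      (long long)i * BUFSZ);
		cbp[i] = &cbs[i];
	}

	/* all NREQ reads are submitted before we wait on any of them */
	ret = io_submit(ctx, NREQ, cbp);
	if (ret != NREQ) {
		fprintf(stderr, "io_submit returned %d\n", ret);
		return 1;
	}

	ret = io_getevents(ctx, NREQ, NREQ, events, NULL);
	for (i = 0; i < ret; i++)
		printf("request %d: res %ld\n", i, (long)events[i].res);

	io_destroy(ctx);
	close(fd);
	return 0;
}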

You had mentioned do_sync_read - is there a reason to change the current 
call to generic_file_read in the cifs read entry point to do_sync_read?  
Some filesystems which export aio routines still call generic_file_read, 
while others call do_sync_read, and it was not obvious to me what that 
would change.
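
My reading of fs/read_write.c is that do_sync_read just wraps the 
request in a synchronous kiocb and calls the filesystem's own aio_read, 
so I am guessing the wiring would end up looking roughly like this - a 
sketch only, not what is in the cifs tree, and cifs_aio_read/ 
cifs_aio_write here are hypothetical:

/* sketch only - cifs_aio_read/cifs_aio_write are hypothetical, the
 * rest is the stock 2.6 plumbing as I read it */
struct file_operations cifs_file_ops_sketch = {
	.read		= do_sync_read,		/* read(2): build kiocb, call ->aio_read, wait */
	.write		= do_sync_write,
	.aio_read	= cifs_aio_read,	/* where the real work would live */
	.aio_write	= cifs_aio_write,
	.readv		= generic_file_readv,	/* generic vectored helpers */
	.writev		= generic_file_writev,
};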

This is partly to better limit reading from the page cache when the read 
oplock is lost (i.e. when we do not have the network caching token 
allowing readahead from the server), but primarily because I would like 
to see if this could help with getting more parallelism in the single 
client to single server large file sequential copy case.  Currently 
CIFS can do large operations (as large as 128K for read or write in 
some cases), which is much more efficient for network transfer, but 
without mounting with forcedirectio I had limited my cifs_readpages to 
16K (typically 4 pages), and because I do the SMBread synchronously I 
am severely limiting parallelism in the case of a single-threaded app.  
Where I would like to get to is having multiple SMB reads for the same 
inode on the wire at one time during readahead, with each SMB read 
larger than a page (between 4 and 30 pages), and I was hoping that the 
aio and readv/writev support would make that easier.
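
In other words, instead of the current send-one-SMBread-and-wait loop, 
something shaped roughly like the following - pure pseudocode, and 
cifs_send_async_read(), cifs_wait_for_read() and the request struct are 
all hypothetical:

/* rough shape of what I am after, not the current cifs_readpages */
#define CIFS_MAX_READS_IN_FLIGHT 4	/* e.g. 4 SMB reads on the wire at once */

static int cifs_readpages_sketch(struct file *file,
				 struct address_space *mapping,
				 struct list_head *page_list,
				 unsigned num_pages)
{
	struct cifs_read_req *reqs[CIFS_MAX_READS_IN_FLIGHT];
	int i, nreqs = 0, rc = 0;

	/* carve the readahead window into chunks of up to 30 pages and
	 * send them all before waiting; cifs_send_async_read() is assumed
	 * to consume that many pages off page_list */
	while (num_pages && nreqs < CIFS_MAX_READS_IN_FLIGHT) {
		unsigned chunk = min(num_pages, 30u);	/* pages per SMB read */

		reqs[nreqs] = cifs_send_async_read(file, page_list, chunk);
		if (IS_ERR(reqs[nreqs])) {
			rc = PTR_ERR(reqs[nreqs]);
			break;
		}
		num_pages -= chunk;
		nreqs++;
	}

	/* only now block, reaping each SMB read response */
	for (i = 0; i < nreqs; i++)
		cifs_wait_for_read(reqs[i]);

	return rc;
}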

I probably need to look more at the NFS direct i/o example to see if 
there are easy changes I can make to enable it on a per-inode basis 
(rather than only as a mount option), and to double check what other 
filesystems do about returning errors on mmap and sendfile for inodes 
that are marked for direct i/o.
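
For the mmap half, the obvious-looking thing would be a check along 
these lines - a sketch only, and the per-inode flag (and the flags 
field holding it) is hypothetical, since today direct i/o is only the 
mount-wide forcedirectio switch:

/* sketch: refuse mmap on an inode marked for direct i/o;
 * CIFS_INO_FORCE_DIRECTIO and the flags field are hypothetical */
int cifs_file_mmap(struct file *file, struct vm_area_struct *vma)
{
	struct cifsInodeInfo *cifs_inode = CIFS_I(file->f_dentry->d_inode);

	if (cifs_inode->flags & CIFS_INO_FORCE_DIRECTIO)
		return -ENODEV;	/* or whatever other filesystems return here */

	return generic_file_mmap(file, vma);
}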

