From: Steve French
To: hch@lst.de
Cc: linux-fsdevel@vger.kernel.org
Subject: AIO and vectored I/O support for cifs
Date: Fri, 25 Mar 2005 15:26:23 -0600
Message-ID: <424481FF.5000006@austin.rr.com>

Christoph,

I had time to add the generic vectored I/O and async I/O calls to cifs
that you had suggested last month. They are within the ifdef for the
CIFS_EXPERIMENTAL config option for the time being (the wiring is
sketched at the end of this mail). I would like to do more testing of
these, though - are there any tests (even primitive ones) for
readv/writev and async I/O? Is there an easy way of measuring the
performance benefit of these (vs. the fallback routines in
fs/read_write.c)? Presumably async and vectored I/O never kick in on a
standard copy command such as cp or dd, and require a modified
application that is vectored-I/O or async-I/O aware (a trivial readv
check is also pasted below).

You had mentioned do_sync_read - is there a reason to change the
current call to generic_file_read in the cifs read entry point to
do_sync_read? Some filesystems which export aio routines still call
generic_file_read, others call do_sync_read, and it was not obvious to
me what that would change (my reading of the fallback is at the end of
this mail). This is partly to better limit reading from the page cache
when the read oplock is lost (i.e. when we do not have the network
caching token allowing readahead from the server), but primarily
because I would like to see if this could help get more parallelism in
the single-client to single-server large file sequential copy case.

Currently cifs can do large operations (as large as 128K for read or
write in some cases), which is much more efficient for network
transfer, but without mounting with forcedirectio I had limited my
cifs_readpages to 16K (typically 4 pages), and because I do the SMBread
synchronously I am severely limiting parallelism in the case of a
single-threaded app. Where I would like to get to is having, during
readahead, multiple SMB reads for the same inode on the wire at one
time - each read larger than a page (between 4 and 30 pages) - and I
was hoping that the aio and readv/writev support would make that
easier.

I probably need to look more at the NFS direct I/O example to see if
there are easy changes I can make to enable it on a per-inode basis
(rather than only as a mount option), and to double-check what other
filesystems do for returning errors on mmap and sendfile on inodes that
are marked for direct I/O.
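
For reference, the experimental wiring amounts to pointing the new
file_operations members at the generic helpers, roughly as below
(trimmed; the exact member set in what I checked in may differ, and the
non-experimental members are unchanged):

/* in fs/cifs/cifsfs.c, inside the cifs file_operations table */
#ifdef CONFIG_CIFS_EXPERIMENTAL
	.readv     = generic_file_readv,
	.writev    = generic_file_writev,
	.aio_read  = generic_file_aio_read,
	.aio_write = generic_file_aio_write,
#endif /* CONFIG_CIFS_EXPERIMENTAL */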
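
Lacking anything better, a crude user-space check along these lines at
least forces a scatter read through the readv entry point (the
/mnt/cifs/testfile path is just a placeholder for wherever the share is
mounted):

#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/uio.h>

int main(void)
{
	char buf[4][4096];		/* four 4K segments */
	struct iovec iov[4];
	ssize_t n;
	int i, fd;

	fd = open("/mnt/cifs/testfile", O_RDONLY);
	if (fd < 0) {
		perror("open");
		return 1;
	}
	for (i = 0; i < 4; i++) {
		iov[i].iov_base = buf[i];
		iov[i].iov_len = sizeof(buf[i]);
	}
	n = readv(fd, iov, 4);		/* should hit the fs readv path */
	if (n < 0)
		perror("readv");
	else
		printf("readv returned %zd bytes\n", n);
	close(fd);
	return 0;
}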
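
On the do_sync_read question - my possibly naive reading of the
fallback in fs/read_write.c is that do_sync_read just funnels the read
through the filesystem's aio_read method with a synchronous kiocb,
roughly:

ssize_t do_sync_read(struct file *filp, char __user *buf, size_t len,
		     loff_t *ppos)
{
	struct kiocb kiocb;
	ssize_t ret;

	init_sync_kiocb(&kiocb, filp);
	kiocb.ki_pos = *ppos;
	ret = filp->f_op->aio_read(&kiocb, buf, len, kiocb.ki_pos);
	if (ret == -EIOCBQUEUED)
		ret = wait_on_sync_kiocb(&kiocb);
	*ppos = kiocb.ki_pos;
	return ret;
}

If that reading is right, switching the cifs read entry point over
would mostly mean the sync and aio cases share the one aio_read path -
is that the main motivation?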