From mboxrd@z Thu Jan 1 00:00:00 1970
From: Suparna Bhattacharya
Subject: Re: AIO and vectored I/O support for cifs
Date: Mon, 4 Apr 2005 10:50:11 +0530
Message-ID: <20050404052011.GA4114@in.ibm.com>
References: <424481FF.5000006@austin.rr.com>
Reply-To: suparna@in.ibm.com
Mime-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Cc: hch@lst.de, linux-fsdevel@vger.kernel.org, linux-aio@kvack.org
Return-path: 
Received: from e1.ny.us.ibm.com ([32.97.182.141]:21740 "EHLO e1.ny.us.ibm.com") by vger.kernel.org with ESMTP id S261365AbVDDFKk (ORCPT ); Mon, 4 Apr 2005 01:10:40 -0400
Received: from d01relay04.pok.ibm.com (d01relay04.pok.ibm.com [9.56.227.236]) by e1.ny.us.ibm.com (8.12.11/8.12.11) with ESMTP id j345AbH3002308 for ; Mon, 4 Apr 2005 01:10:37 -0400
Received: from d01av02.pok.ibm.com (d01av02.pok.ibm.com [9.56.224.216]) by d01relay04.pok.ibm.com (8.12.10/NCO/VER6.6) with ESMTP id j345AblJ202310 for ; Mon, 4 Apr 2005 01:10:37 -0400
Received: from d01av02.pok.ibm.com (loopback [127.0.0.1]) by d01av02.pok.ibm.com (8.12.11/8.12.11) with ESMTP id j345Aa1s023945 for ; Mon, 4 Apr 2005 00:10:36 -0500
To: Steve French
Content-Disposition: inline
In-Reply-To: <424481FF.5000006@austin.rr.com>
Sender: linux-fsdevel-owner@vger.kernel.org
List-Id: linux-fsdevel.vger.kernel.org

cc'ing linux-aio, for the AIO part of the discussion. You might be able
to find some of your answers in the archives.

On Fri, Mar 25, 2005 at 03:26:23PM -0600, Steve French wrote:
> Christoph,
> I had time to add the generic vectored i/o and async i/o calls to cifs
> that you had suggested last month. They are within the ifdef for the
> CIFS_EXPERIMENTAL config option for the time being. I would like to do
> more testing of these, though - are there any tests (even primitive
> ones) for readv/writev and async i/o?
>
> Is there an easy way of measuring the performance benefit of these
> (vs. the fallback routines in fs/read_write.c)? Presumably async and
> vectored i/o never kick in on a standard copy command such as cp or dd,
> and require a modified application that is vectored-i/o-aware or
> async-i/o-aware.

There are several tests for AIO - I tend to use Chris Mason's
aio-stress, which can be used to compare streaming read/write throughput
across different combinations of options. (The following page isn't
exactly up-to-date, but should still give you some pointers:
lse.sf.net/io/aio.html)

> You had mentioned do_sync_read - is there a reason to change the
> current call to generic_file_read in the cifs read entry point to
> do_sync_read? Some filesystems which export aio routines still call
> generic_file_read, others call do_sync_read, and it was not obvious to
> me what that would change.

I think you could keep it the way it is - generic_file_read will take
care of things. But maybe I should comment only after I see your patch.
Are you planning to post it some time?

Regards
Suparna

> This is partly to better limit reading from the page cache when the
> read oplock is lost (i.e. when we do not have the network caching token
> allowing readahead from the server), but primarily because I would like
> to see if this could help with getting more parallelism in the single
> client to single server large file sequential copy case. Currently
> CIFS can do large operations (as large as 128K for read or write in
> some cases) - and this is much more efficient for network transfer -
> but without mounting with forcedirectio I had limited my cifs_readpages
> to 16K (4 pages typically) - and because I do the SMBread synchronously
> I am severely limiting parallelism in the case of a single-threaded
> app.
> So where I would like to get to is during readahead having multiple
> SMB reads for the same inode on the wire at one time - with the SMB
> reads each larger than a page (between 4 and 30 pages) - and I was
> hoping that the aio and readv/writev support would make that easier.
>
> I probably need to look more at the NFS direct i/o example to see if
> there are easy changes I can make to enable it on a per-inode basis
> (rather than as only a mount option), and to double-check what other
> filesystems do for returning errors on mmap and sendfile on inodes
> that are marked direct i/o.
> -
> To unsubscribe from this list: send the line "unsubscribe linux-fsdevel" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at http://vger.kernel.org/majordomo-info.html

-- 
Suparna Bhattacharya (suparna@in.ibm.com)
Linux Technology Center
IBM Software Lab, India