* AIO and vectored I/O support for cifs
From: Steve French @ 2005-03-25 21:26 UTC
To: hch; +Cc: linux-fsdevel
Christoph,
I had time to add the generic vectored i/o and async i/o calls to cifs
that you had suggested last month. They are within the ifdef for the
CIFS_EXPERIMENTAL config option for the time being. I would like to do
more testing of these though - are there any tests (even primitive ones)
for readv/writev and async i/o?
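(For reference, exercising both paths from userspace only takes a throwaway
program along the lines of the sketch below - purely illustrative, with the
file name, buffer sizes and error handling simplified; build with -laio.)

/* Illustrative sketch only: a trivial userspace exerciser for the
 * readv() and Linux-native AIO (io_submit) code paths. */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/uio.h>
#include <unistd.h>
#include <libaio.h>

int main(void)
{
	static char a[4096], b[4096], c[65536];
	struct iovec iov[2] = { { a, sizeof(a) }, { b, sizeof(b) } };
	io_context_t ctx = 0;
	struct iocb cb, *cbs[1] = { &cb };
	struct io_event ev;
	int rc, fd = open("/mnt/cifs/testfile", O_RDONLY);

	if (fd < 0) {
		perror("open");
		return 1;
	}

	/* vectored read - exercises the filesystem ->readv entry point */
	if (readv(fd, iov, 2) < 0)
		perror("readv");

	/* async read - queued through ->aio_read via io_submit() */
	rc = io_setup(1, &ctx);
	if (rc < 0) {
		fprintf(stderr, "io_setup: %s\n", strerror(-rc));
		return 1;
	}
	io_prep_pread(&cb, fd, c, sizeof(c), 0);
	rc = io_submit(ctx, 1, cbs);
	if (rc != 1)
		fprintf(stderr, "io_submit failed (%d)\n", rc);
	else if (io_getevents(ctx, 1, 1, &ev, NULL) == 1)
		printf("aio read returned %ld bytes\n", (long)ev.res);

	io_destroy(ctx);
	close(fd);
	return 0;
}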
Is there an easy way of measuring the performance benefit of these (vs. the
fallback routines in fs/read_write.c)? Presumably async and vectored i/o
never kick in on a standard copy command such as cp or dd, and require a
modified application that is vectored i/o aware or async i/o aware.
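(To illustrate that last point, a vectored-i/o aware application issues the
calls itself - e.g. a copy loop built on readv/writev rather than the plain
read/write loop that cp and dd use. The sketch below is only illustrative;
the buffer count and chunk size are arbitrary and short writes are treated
as errors for brevity.)

/* Sketch of a "vectored i/o aware" copy loop: one readv/writev pair
 * moves up to 64K (4 x 16K buffers) per system call. */
#include <sys/uio.h>
#include <unistd.h>

#define NVEC  4
#define CHUNK 16384

static ssize_t vcopy(int infd, int outfd)
{
	static char buf[NVEC][CHUNK];
	struct iovec iov[NVEC];
	ssize_t n, left;
	int i;

	for (;;) {
		/* (re)build the full vector before each read */
		for (i = 0; i < NVEC; i++) {
			iov[i].iov_base = buf[i];
			iov[i].iov_len = CHUNK;
		}
		n = readv(infd, iov, NVEC);
		if (n <= 0)
			return n;	/* 0 on EOF, -1 on error */

		/* trim the vector to what was actually read, then write it out */
		for (left = n, i = 0; i < NVEC; i++) {
			iov[i].iov_len = left > CHUNK ? (size_t)CHUNK : (size_t)left;
			left -= iov[i].iov_len;
		}
		if (writev(outfd, iov, NVEC) != n)
			return -1;	/* short write treated as an error */
	}
}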
You had mentioned do_sync_read - is there a reason to change the current
call to generic_file_read in the cifs read entry point to do_sync_read?
Some filesystems which export aio routines still call generic_file_read,
others call do_sync_read, and it was not obvious to me what that would
change.
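(In other words the question is just which of the two wirings below to use -
a rough sketch based on the 2.6 file_operations layout; the structure shown
here is illustrative rather than the actual cifs_file_ops definition.)

#include <linux/fs.h>

struct file_operations example_cifs_file_ops = {
	.read      = generic_file_read,		/* current wiring */
	/* .read   = do_sync_read, */		/* suggested alternative: a thin
						 * wrapper that builds a sync
						 * kiocb and calls ->aio_read */
	.write     = generic_file_write,
	.aio_read  = generic_file_aio_read,
	.aio_write = generic_file_aio_write,
	.readv     = generic_file_readv,
	.writev    = generic_file_writev,
	/* ... open, release, mmap, etc. unchanged ... */
};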
This is partly to better limit reading from the pagecache when the read
oplock is lost (i.e. when we do not have the network caching token
allowing readahead from the server), but primarily because I would like
to see if this could help with getting more parallelism in the single
client to single server large file sequential copy case. Currently
CIFS can do large operations (as large as 128K for read or write in some
cases), which is much more efficient for network transfer, but without
mounting with forcedirectio I had limited my cifs_readpages to 16K
(typically 4 pages), and because I do the SMBread synchronously I am
severely limiting parallelism in the case of a single threaded app.
Where I would like to get to is having multiple SMB reads on the wire at
one time for the same inode during readahead, each larger than a page
(between 4 and 30 pages), and I was hoping that the aio and readv/writev
support would make that easier.
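The shape I have in mind is roughly the sketch below; note that
cifs_send_async_read() and cifs_wait_for_read() are hypothetical helper
names used only for illustration, not existing cifs functions.

/* Sketch only: keep several large SMB reads in flight during readahead
 * instead of issuing one synchronous SMBread at a time.  The request
 * structure and both helpers are hypothetical placeholders. */
#include <linux/fs.h>
#include <linux/err.h>
#include <linux/kernel.h>

#define CIFS_MAX_READS_IN_FLIGHT 4

struct cifs_read_req;		/* hypothetical per-request tracking struct */
struct cifs_read_req *cifs_send_async_read(struct inode *inode,
					   loff_t offset, size_t len);
int cifs_wait_for_read(struct cifs_read_req *req);

static int cifs_readahead_pipelined(struct inode *inode, loff_t offset,
				    size_t bytes, size_t smb_read_size)
{
	struct cifs_read_req *req[CIFS_MAX_READS_IN_FLIGHT];
	int i, nr = 0, rc = 0;

	/* fire off up to four large reads before waiting for any reply */
	while (bytes && nr < CIFS_MAX_READS_IN_FLIGHT) {
		size_t len = min(bytes, smb_read_size);

		req[nr] = cifs_send_async_read(inode, offset, len);
		if (IS_ERR(req[nr])) {
			if (!nr)
				return PTR_ERR(req[nr]);
			break;
		}
		offset += len;
		bytes -= len;
		nr++;
	}

	/* then collect the responses and fill the pagecache pages */
	for (i = 0; i < nr; i++)
		rc = cifs_wait_for_read(req[i]);

	return rc;
}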
I probably need to look more at the NFS direct i/o example to see if
there are easy changes I can make to enable it on a per-inode basis
(rather than only as a mount option), and to double-check what other
filesystems do for returning errors on mmap and sendfile on inodes that
are marked for direct i/o.
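(The per-inode check itself would presumably end up looking something like
the sketch below at the mmap entry point - cifs_inode_is_directio() is a
hypothetical helper and -ENODEV is only a placeholder until I see what
errors other filesystems actually return here.)

/* Placeholder sketch: refuse mmap on an inode that has been switched to
 * direct i/o.  The helper and the errno value are hypothetical. */
#include <linux/fs.h>
#include <linux/mm.h>

int cifs_inode_is_directio(struct inode *inode);	/* hypothetical */

static int cifs_file_mmap_directio(struct file *file,
				   struct vm_area_struct *vma)
{
	struct inode *inode = file->f_dentry->d_inode;

	if (cifs_inode_is_directio(inode))
		return -ENODEV;		/* placeholder error code */

	return generic_file_mmap(file, vma);
}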
* Re: AIO and vectored I/O support for cifs
From: Suparna Bhattacharya @ 2005-04-04 5:20 UTC
To: Steve French; +Cc: hch, linux-fsdevel, linux-aio
cc'ing linux-aio, for the AIO part of the discussion. You might be able
to find some of your answers in the archives.
On Fri, Mar 25, 2005 at 03:26:23PM -0600, Steve French wrote:
> Christoph,
> I had time to add the generic vectored i/o and async i/o calls to cifs
> that you had suggested last month. They are within the ifdef for the
> CIFS_EXPERIMENTAL config option for the time being. I would like to do
> more testing of these though - are there any tests (even primitive ones)
> for readv/writev and async i/o?
>
> Is there an easy way of measuring the performance benefit of these (vs. the
> fallback routines in fs/read_write.c)? Presumably async and vectored i/o
> never kick in on a standard copy command such as cp or dd, and require a
> modified application that is vectored i/o aware or async i/o aware.
There are several tests for AIO - I tend to use Chris Mason's aio-stress,
which can be used to compare throughput for streaming reads/writes under
different combinations of options.
(The following page isn't exactly up-to-date, but should still give
you some pointers: lse.sf.net/io/aio.html)
>
> You had mentioned do_sync_read - is there a reason to change the current
> call to generic_file_read in the cifs read entry point to do_sync_read?
> Some filesystems which export aio routines still call generic_file_read,
> others call do_sync_read, and it was not obvious to me what that would
> change.
I think you could keep it the way it is - generic_file_read will take care
of things. But maybe I should comment only after I see your patch. Are
you planning to post it sometime?
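For what it's worth, do_sync_read() in fs/read_write.c is only a thin
synchronous wrapper around ->aio_read, roughly as in the paraphrased sketch
below (not a verbatim copy of the source), so with .aio_read pointing at
generic_file_aio_read either wiring ends up in the same generic read path.

#include <linux/fs.h>
#include <linux/aio.h>

/* Paraphrased sketch of do_sync_read(): package the request into a
 * synchronous kiocb, call the aio method, and wait if it was queued. */
static ssize_t do_sync_read_sketch(struct file *filp, char __user *buf,
				   size_t len, loff_t *ppos)
{
	struct kiocb kiocb;
	ssize_t ret;

	init_sync_kiocb(&kiocb, filp);
	kiocb.ki_pos = *ppos;
	ret = filp->f_op->aio_read(&kiocb, buf, len, kiocb.ki_pos);
	if (ret == -EIOCBQUEUED)
		ret = wait_on_sync_kiocb(&kiocb);
	*ppos = kiocb.ki_pos;
	return ret;
}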
Regards
Suparna
>
> This is partly to better limit reading from the pagecache when the read
> oplock is lost (i.e. when we do not have the network caching token
> allowing readahead from the server), but primarily because I would like
> to see if this could help with getting more parallelism in the single
> client to single server large file sequential copy case. Currently
> CIFS can do large operations (as large as 128K for read or write in some
> cases), which is much more efficient for network transfer, but without
> mounting with forcedirectio I had limited my cifs_readpages to 16K
> (typically 4 pages), and because I do the SMBread synchronously I am
> severely limiting parallelism in the case of a single threaded app.
> Where I would like to get to is having multiple SMB reads on the wire at
> one time for the same inode during readahead, each larger than a page
> (between 4 and 30 pages), and I was hoping that the aio and readv/writev
> support would make that easier.
>
> I probably need to look more at the NFS direct i/o example to see if
> there are easy changes I can make to enable it on a per-inode basis
> (rather than only as a mount option), and to double-check what other
> filesystems do for returning errors on mmap and sendfile on inodes that
> are marked for direct i/o.
--
Suparna Bhattacharya (suparna@in.ibm.com)
Linux Technology Center
IBM Software Lab, India
* Re: AIO and vectored I/O support for cifs
From: Steve French @ 2005-04-04 5:43 UTC
To: suparna; +Cc: hch, linux-fsdevel, linux-aio
Suparna Bhattacharya wrote:
>cc'ing linux-aio, for the AIO part of the discussion. You might be able
>to find some of your answers in the archives.
>
>There are several tests for AIO - I tend to use Chris Mason's aio-stress,
>which can be used to compare throughput for streaming reads/writes under
>different combinations of options.
>
>(The following page isn't exactly up-to-date, but should still give
>you some pointers: lse.sf.net/io/aio.html)
>
>
Thanks - those were lists that I was not aware of.
>>You had mentioned do_sync_read - is there a reason to change the current
>>call to generic_file_read in the cifs read entry point to do_sync_read?
>>Some filesystems which export aio routines still call generic_file_read,
>>others call do_sync_read, and it was not obvious to me what that would
>>change.
>>
>>
>
>I think you could keep it the way it is - generic_file_read will take care
>of things. But maybe I should comment only after I see your patch. Are
>you planning to post it sometime?
>
>Regards
>Suparna
>
>
This went in with the patch
http://cifs.bkbits.net:8080/linux-2.5cifs/gnupatch@424470a3SsdVpix9tJE4NDebxqyRSg
and was merged into mainline about six days ago (note that it is disabled
by default unless CONFIG_CIFS_EXPERIMENTAL is selected).