From: Steve French <smfrench@austin.rr.com>
To: hch@lst.de
Cc: linux-fsdevel@vger.kernel.org
Subject: AIO and vectored I/O support for cifs
Date: Fri, 25 Mar 2005 15:26:23 -0600
Message-ID: <424481FF.5000006@austin.rr.com>
Christoph,
I had time to add the generic vectored i/o and async i/o calls to cifs
that you had suggested last month. They are within the ifdef for the
CIFS_EXPERIMENTAL config option for the time being. I would like to do
more testing of these though - are there any tests (even primitive ones)
for readv/writev and async i/o?
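
A primitive check might be no more than something like the following (just a sketch; /mnt/cifs/testfile is a placeholder path on the cifs mount):

	/* Sketch of a primitive readv/writev round-trip check.
	 * /mnt/cifs/testfile is a placeholder path on the cifs mount. */
	#include <sys/uio.h>
	#include <fcntl.h>
	#include <string.h>
	#include <stdio.h>
	#include <stdlib.h>
	#include <unistd.h>

	int main(void)
	{
		const char *path = "/mnt/cifs/testfile";
		char a[8192], b[8192], ra[8192], rb[8192];
		struct iovec wv[2], rv[2];
		int fd;

		memset(a, 'A', sizeof(a));
		memset(b, 'B', sizeof(b));

		fd = open(path, O_CREAT | O_RDWR | O_TRUNC, 0644);
		if (fd < 0) { perror("open"); exit(1); }

		/* write two buffers in one vectored call */
		wv[0].iov_base = a; wv[0].iov_len = sizeof(a);
		wv[1].iov_base = b; wv[1].iov_len = sizeof(b);
		if (writev(fd, wv, 2) != (ssize_t)(sizeof(a) + sizeof(b))) {
			perror("writev"); exit(1);
		}

		if (lseek(fd, 0, SEEK_SET) < 0) { perror("lseek"); exit(1); }

		/* read them back in one vectored call and compare */
		rv[0].iov_base = ra; rv[0].iov_len = sizeof(ra);
		rv[1].iov_base = rb; rv[1].iov_len = sizeof(rb);
		if (readv(fd, rv, 2) != (ssize_t)(sizeof(ra) + sizeof(rb))) {
			perror("readv"); exit(1);
		}

		if (memcmp(a, ra, sizeof(a)) || memcmp(b, rb, sizeof(b))) {
			fprintf(stderr, "data mismatch\n");
			exit(1);
		}
		printf("readv/writev round trip ok\n");
		return 0;
	}
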
Is there an easy way to measure the performance benefit of these (vs. the
fallback routines in fs/read_write.c)? Presumably async and vectored i/o
never kick in on a standard copy command such as cp or dd, and require a
modified application that is vectored-i/o or async-i/o aware.
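
For a rough number, one could simply time a vectored write against the equivalent loop of plain write() calls (roughly what the fallback path would do) on the same mount - again just a sketch, with placeholder path and sizes:

	/* Rough timing sketch: one writev of IOVCNT chunks vs. the
	 * equivalent loop of plain write() calls.  Path and sizes are
	 * placeholders; short writes are ignored for simplicity. */
	#include <sys/uio.h>
	#include <sys/time.h>
	#include <fcntl.h>
	#include <stdio.h>
	#include <stdlib.h>
	#include <string.h>
	#include <unistd.h>

	#define IOVCNT 8
	#define CHUNK  (16 * 1024)
	#define LOOPS  256

	static double now(void)
	{
		struct timeval tv;
		gettimeofday(&tv, NULL);
		return tv.tv_sec + tv.tv_usec / 1e6;
	}

	int main(void)
	{
		static char buf[IOVCNT][CHUNK];
		struct iovec iov[IOVCNT];
		double t0, t1, t2;
		int fd, i, n;

		fd = open("/mnt/cifs/timing.tmp", O_CREAT | O_WRONLY | O_TRUNC, 0644);
		if (fd < 0) { perror("open"); exit(1); }

		for (i = 0; i < IOVCNT; i++) {
			memset(buf[i], 'x', CHUNK);
			iov[i].iov_base = buf[i];
			iov[i].iov_len = CHUNK;
		}

		t0 = now();
		for (n = 0; n < LOOPS; n++)
			if (writev(fd, iov, IOVCNT) < 0) { perror("writev"); exit(1); }
		t1 = now();
		for (n = 0; n < LOOPS; n++)
			for (i = 0; i < IOVCNT; i++)
				if (write(fd, buf[i], CHUNK) < 0) { perror("write"); exit(1); }
		t2 = now();

		printf("writev: %.3fs  write loop: %.3fs\n", t1 - t0, t2 - t1);
		close(fd);
		return 0;
	}
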
You had mentioned do_sync_read - is there a reason to change the current
call to generic_file_read in the cifs read entry point to do_sync_read?
Some filesystems that export aio routines still call generic_file_read,
others call do_sync_read, and it was not obvious to me what that change
would affect.
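
On the aio side, a primitive userspace check with libaio (io_submit, which as I understand it ends up in the filesystem's aio_read method) might look like this - again only a sketch, with a placeholder path; build with -laio:

	/* Sketch of a primitive async i/o read check using libaio.
	 * Placeholder path; link with -laio. */
	#include <libaio.h>
	#include <fcntl.h>
	#include <stdio.h>
	#include <stdlib.h>
	#include <unistd.h>

	int main(void)
	{
		io_context_t ctx = 0;
		struct iocb cb, *cbs[1] = { &cb };
		struct io_event ev;
		static char buf[65536];
		int fd, ret;

		fd = open("/mnt/cifs/testfile", O_RDONLY);
		if (fd < 0) { perror("open"); exit(1); }

		ret = io_setup(1, &ctx);
		if (ret < 0) { fprintf(stderr, "io_setup: %d\n", ret); exit(1); }

		/* queue one 64K read at offset 0 and wait for it */
		io_prep_pread(&cb, fd, buf, sizeof(buf), 0);
		ret = io_submit(ctx, 1, cbs);
		if (ret != 1) { fprintf(stderr, "io_submit: %d\n", ret); exit(1); }

		ret = io_getevents(ctx, 1, 1, &ev, NULL);
		if (ret != 1) { fprintf(stderr, "io_getevents: %d\n", ret); exit(1); }

		printf("aio read returned %lld bytes\n", (long long)ev.res);
		io_destroy(ctx);
		close(fd);
		return 0;
	}
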
My interest here is partly in better limiting reading from the pagecache
when the read oplock is lost (ie when we do not have the network caching
token allowing readahead from the server), but primarily in seeing
whether this could help get more parallelism in the single-client to
single-server large-file sequential-copy case. Currently CIFS can do
large operations (as large as 128K for read or write in some cases),
which is much more efficient for network transfer, but without mounting
with forcedirectio I had limited cifs_readpages to 16K (typically 4
pages), and because I do the SMBread synchronously I am severely
limiting parallelism in the case of a single-threaded app.
So where I would like to get to is having, during readahead, multiple SMB
reads for the same inode on the wire at one time - each SMB read larger
than a page (between 4 and 30 pages) - and I was hoping that the aio and
readv/writev support would make that easier.
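
A crude way to see how much of that parallelism actually materializes might be to fire off several large reads against one file at once with libaio and watch the traffic with a network trace - once more just a sketch, with placeholder path and sizes:

	/* Sketch: submit several large reads against one file at once
	 * with libaio, then count outstanding SMB reads in a network
	 * trace.  Placeholder path/sizes; link with -laio. */
	#include <libaio.h>
	#include <fcntl.h>
	#include <stdio.h>
	#include <stdlib.h>
	#include <unistd.h>

	#define NREQ 4
	#define RSZ  (64 * 1024)	/* 16 pages per request */

	int main(void)
	{
		io_context_t ctx = 0;
		struct iocb cb[NREQ], *cbs[NREQ];
		struct io_event ev[NREQ];
		static char buf[NREQ][RSZ];
		int fd, i, ret;

		fd = open("/mnt/cifs/bigfile", O_RDONLY);
		if (fd < 0) { perror("open"); exit(1); }

		ret = io_setup(NREQ, &ctx);
		if (ret < 0) { fprintf(stderr, "io_setup: %d\n", ret); exit(1); }

		/* queue NREQ adjacent 64K reads in one submission */
		for (i = 0; i < NREQ; i++) {
			io_prep_pread(&cb[i], fd, buf[i], RSZ, (long long)i * RSZ);
			cbs[i] = &cb[i];
		}
		ret = io_submit(ctx, NREQ, cbs);
		if (ret != NREQ) { fprintf(stderr, "io_submit: %d\n", ret); exit(1); }

		ret = io_getevents(ctx, NREQ, NREQ, ev, NULL);
		for (i = 0; i < ret; i++)
			printf("req %d: %lld bytes\n", i, (long long)ev[i].res);

		io_destroy(ctx);
		close(fd);
		return 0;
	}
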
I probably need to look more at the NFS direct i/o example to see if
there are easy changes I can make to enable it on a per-inode basis
(rather than only as a mount option), and to double check what other
filesystems do for returning errors on mmap and sendfile on inodes that
are marked for direct i/o.
Thread overview: 3+ messages
2005-03-25 21:26 Steve French [this message]
2005-04-04 5:20 ` AIO and vectored I/O support for cifs Suparna Bhattacharya
2005-04-04 5:43 ` Steve French