From: Suparna Bhattacharya <suparna@in.ibm.com>
To: Steve French <smfrench@austin.rr.com>
Cc: hch@lst.de, linux-fsdevel@vger.kernel.org, linux-aio@kvack.org
Subject: Re: AIO and vectored I/O support for cifs
Date: Mon, 4 Apr 2005 10:50:11 +0530
Message-ID: <20050404052011.GA4114@in.ibm.com>
In-Reply-To: <424481FF.5000006@austin.rr.com>


Cc'ing linux-aio for the AIO part of the discussion. You might be able
to find some of your answers in the list archives.

On Fri, Mar 25, 2005 at 03:26:23PM -0600, Steve French wrote:
> Christoph,
> I had time to add the generic vectored i/o and async i/o calls to cifs 
> that you had suggested last month.  They are within the ifdef for the 
> CIFS_EXPERIMENTAL config option for the time being.   I would like to do 
> more testing of these though - are there any tests (even primitive ones) 
> for readv/writev and async i/o?
> 
> Is there an easy way of measuring the performance benefit of these
> (vs. the fallback routines in fs/read_write.c)?  Presumably async and
> vectored i/o never kick in on a standard copy command such as cp or
> dd, and require a modified application that is vectored-i/o or async
> i/o aware.
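
By the way, a vector-aware test is only a few lines of userspace code.
A minimal readv sketch - the file name and buffer sizes below are just
made up for illustration:

/* gather one read into three separate buffers with a single syscall */
#include <sys/uio.h>
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(int argc, char **argv)
{
	const char *path = argc > 1 ? argv[1] : "/mnt/cifs/testfile";
	char a[4096], b[4096], c[4096];
	struct iovec iov[3] = {
		{ .iov_base = a, .iov_len = sizeof(a) },
		{ .iov_base = b, .iov_len = sizeof(b) },
		{ .iov_base = c, .iov_len = sizeof(c) },
	};
	ssize_t n;
	int fd = open(path, O_RDONLY);

	if (fd < 0) {
		perror("open");
		return 1;
	}
	/* hits the filesystem's readv entry point if it exports one,
	 * otherwise the fallback in fs/read_write.c */
	n = readv(fd, iov, 3);
	if (n < 0) {
		perror("readv");
		return 1;
	}
	printf("readv returned %zd bytes\n", n);
	close(fd);
	return 0;
}

writev works the same way in the other direction.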

There are several tests for AIO - I tend to use Chris Mason's aio-stress,
which compares throughput for streaming reads/writes across different
combinations of options.

(the following page isn't exactly up-to-date, but should still give
you some pointers: lse.sf.net/io/aio.html)
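
To get a feel for what such a test exercises, here is a minimal libaio
sketch roughly along the lines of what aio-stress measures: several
reads kept in flight on one fd from a single thread.  The file name,
I/O size and queue depth are only illustrative; build with -laio:

/* gcc -O2 -o aio-seq-read aio-seq-read.c -laio */
#include <libaio.h>
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

#define NR_REQS 8		/* reads kept in flight at once */
#define IO_SIZE (64 * 1024)	/* bytes per read */

int main(int argc, char **argv)
{
	const char *path = argc > 1 ? argv[1] : "/mnt/cifs/testfile";
	struct iocb iocbs[NR_REQS], *submit[NR_REQS];
	struct io_event events[NR_REQS];
	void *bufs[NR_REQS];
	io_context_t ctx = 0;
	int fd, i, done;

	fd = open(path, O_RDONLY);	/* add O_DIRECT for uncached I/O */
	if (fd < 0) {
		perror("open");
		return 1;
	}
	if (io_setup(NR_REQS, &ctx) < 0) {
		fprintf(stderr, "io_setup failed\n");
		return 1;
	}

	for (i = 0; i < NR_REQS; i++) {
		if (posix_memalign(&bufs[i], 4096, IO_SIZE))
			return 1;
		/* NR_REQS sequential chunks, all queued before any completes */
		io_prep_pread(&iocbs[i], fd, bufs[i], IO_SIZE,
			      (long long)i * IO_SIZE);
		submit[i] = &iocbs[i];
	}

	if (io_submit(ctx, NR_REQS, submit) != NR_REQS) {
		fprintf(stderr, "io_submit failed\n");
		return 1;
	}
	done = io_getevents(ctx, NR_REQS, NR_REQS, events, NULL);
	printf("%d reads of %d bytes completed\n", done, IO_SIZE);

	io_destroy(ctx);
	close(fd);
	return 0;
}

Roughly speaking, aio-stress does this in a loop with timing,
resubmission and many more knobs.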

> 
> You had mentioned do_sync_read - is there a reason to change the
> current call to generic_file_read in the cifs read entry point to
> do_sync_read?  Some filesystems which export aio routines still call
> generic_file_read and others call do_sync_read, and it was not
> obvious to me what that would change.

I think you could keep it the way it is - generic_file_read will take care
of things. But maybe I should comment only after I see your patch. Are
you planning to post it sometime?
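
For reference, the difference is only in how the file_operations get
wired up.  A hypothetical 2.6-era example - not the actual cifs code,
the helper assignments here are just illustrative:

#include <linux/fs.h>

/*
 * Pointing .read at do_sync_read funnels synchronous reads through the
 * .aio_read method (do_sync_read builds a kiocb and waits for it),
 * whereas calling generic_file_read keeps the sync read path
 * independent of .aio_read.
 */
static struct file_operations example_file_ops = {
	.read		= do_sync_read,
	.aio_read	= generic_file_aio_read,
	.write		= do_sync_write,
	.aio_write	= generic_file_aio_write,
	.mmap		= generic_file_mmap,
};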

Regards
Suparna

> 
> This is partly to better limit reading from the pagecache when the
> read oplock is lost (i.e. when we do not have the network caching
> token allowing readahead from the server), but primarily because I
> would like to see if this could help with getting more parallelism in
> the single-client to single-server large file sequential copy case.
> Currently CIFS can do large operations (as large as 128K for read or
> write in some cases) - and this is much more efficient for network
> transfer - but without mounting with forcedirectio I had limited my
> cifs_readpages to 16K (4 pages typically), and because I do the
> SMBread synchronously I am severely limiting parallelism in the case
> of a single-threaded app.  So where I would like to get to is having,
> during readahead, multiple SMB reads for the same inode on the wire
> at one time - with the SMB reads each larger than a page (between 4
> and 30 pages) - and I was hoping that the aio and readv/writev
> support would make that easier.
> 
> I probably need to look more at the NFS direct i/o example to see if
> there are easy changes I can make to enable it on a per-inode basis
> (rather than only as a mount option), and to double-check what other
> filesystems do for returning errors on mmap and sendfile on inodes
> that are marked direct i/o.

-- 
Suparna Bhattacharya (suparna@in.ibm.com)
Linux Technology Center
IBM Software Lab, India

