linux-fsdevel.vger.kernel.org archive mirror
From: Anton Altaparmakov <aia21@cam.ac.uk>
To: Andrew Morton <akpm@osdl.org>
Cc: nathans@sgi.com, viro@parcelfarce.linux.theplanet.co.uk,
	linux-kernel@vger.kernel.org, linux-fsdevel@vger.kernel.org
Subject: Re: RFC: [PATCH-2.6] Add helper function to lock multiple page cache pages.
Date: Sun, 6 Feb 2005 19:42:17 +0000 (GMT)	[thread overview]
Message-ID: <Pine.LNX.4.60.0502061932270.21938@hermes-1.csi.cam.ac.uk> (raw)
In-Reply-To: <20050203024755.1792b6c0.akpm@osdl.org>

On Thu, 3 Feb 2005, Andrew Morton wrote:
> I did a patch which switched loop to use the file_operations.read/write
> about a year ago.  Forget what happened to it.  It always seemed the right
> thing to do..

How did you implement the write?  At the moment the loop driver gets hold 
of both source and destination pages (the latter via grab_cache_page() and 
aops->prepare_write()) and copies/transforms directly from the source to 
the destination page (and then calls commit_write() on the destination 
page).  Did you allocate a buffer for each request, copy/transform into the 
buffer, and then submit it via file_operations->write?  That would clearly 
not be very efficient, but given that fops->write() is not atomic, I don't 
see how it could be optimised further...

Perhaps the loop driver should work as is when 
aops->{prepare,commit}_write() are not NULL and should fall back to 
a buffered fops->write() otherwise?

Or have I missed some way in which the fops->write() case can be 
optimised?

Best regards,

	Anton
-- 
Anton Altaparmakov <aia21 at cam.ac.uk> (replace at with @)
Unix Support, Computing Service, University of Cambridge, CB2 3QH, UK
Linux NTFS maintainer / IRC: #ntfs on irc.freenode.net
WWW: http://linux-ntfs.sf.net/ & http://www-stu.christs.cam.ac.uk/~aia21/


Thread overview: 15+ messages
2005-02-02 15:12 RFC: [PATCH-2.6] Add helper function to lock multiple page cache pages Anton Altaparmakov
2005-02-02 15:43 ` Matthew Wilcox
2005-02-02 15:56   ` Anton Altaparmakov
2005-02-02 22:34 ` Andrew Morton
2005-02-03 10:37   ` Anton Altaparmakov
2005-02-03 10:47     ` Andrew Morton
2005-02-03 11:23       ` Anton Altaparmakov
2005-02-03 19:23         ` RFC: [PATCH-2.6] Add helper function to lock multiple page cache pages - nopage alternative Bryan Henderson
2005-02-04 15:36           ` Anton Altaparmakov
2005-02-04 17:17             ` Hugh Dickins
2005-02-04 23:09             ` Bryan Henderson
2005-02-03 19:03       ` RFC: [PATCH-2.6] Add helper function to lock multiple page cache pages - loop device Bryan Henderson
2005-02-06 19:42       ` Anton Altaparmakov [this message]
2005-02-06 20:42         ` RFC: [PATCH-2.6] Add helper function to lock multiple page cache pages Andrew Morton
2005-02-16 21:56           ` Anton Altaparmakov
