linux-fsdevel.vger.kernel.org archive mirror
From: Jens Axboe <jens.axboe@oracle.com>
To: Andrew Morton <akpm@linux-foundation.org>
Cc: Dmitri Monakhov <dmonakhov@openvz.org>,
	linux-kernel@vger.kernel.org, linux-fsdevel@vger.kernel.org
Subject: Re: [PATCH] Add block device specific splice write method
Date: Thu, 23 Oct 2008 08:29:23 +0200	[thread overview]
Message-ID: <20081023062921.GQ22217@kernel.dk> (raw)
In-Reply-To: <20081022223928.a6ce476f.akpm@linux-foundation.org>

On Wed, Oct 22 2008, Andrew Morton wrote:
> On Mon, 20 Oct 2008 20:11:56 +0200 Jens Axboe <jens.axboe@oracle.com> wrote:
> 
> > +ssize_t generic_file_splice_write_file_nolock(struct pipe_inode_info *pipe,
> > +					      struct file *out, loff_t *ppos,
> > +					      size_t len, unsigned int flags)
> > +{
> > +	struct address_space *mapping = out->f_mapping;
> > +	struct inode *inode = mapping->host;
> > +	struct splice_desc sd = {
> > +		.total_len = len,
> > +		.flags = flags,
> > +		.pos = *ppos,
> > +		.u.file = out,
> > +	};
> > +	ssize_t ret;
> > +
> > +	mutex_lock(&pipe->inode->i_mutex);
> > +	ret = __splice_from_pipe(pipe, &sd, pipe_to_file);
> > +	mutex_unlock(&pipe->inode->i_mutex);
> > +
> > +	if (ret > 0) {
> > +		unsigned long nr_pages;
> > +
> > +		*ppos += ret;
> > +		nr_pages = (ret + PAGE_CACHE_SIZE - 1) >> PAGE_CACHE_SHIFT;
> > +
> > +		if (unlikely((out->f_flags & O_SYNC) || IS_SYNC(inode))) {
> > +			int er;
> > +
> > +			er = sync_page_range_nolock(inode, mapping, *ppos, ret);
> > +			if (er)
> > +				ret = er;
> > +		}
> > +		balance_dirty_pages_ratelimited_nr(mapping, nr_pages);
> > +	}
> > +
> > +	return ret;
> > +}
> > +EXPORT_SYMBOL(generic_file_splice_write_file_nolock);
> 
> I don't think the balance_dirty_pages() is needed if we just did the
> sync_page_range().

Good point, I think we can get rid of that.
> 
> 
> But really it'd be better if the throttling happened down in
> pipe_to_file(), on a per-page basis.  As it stands we can dirty an
> arbitrary number of pagecache pages without throttling.  I think?

That's exactly why it isn't done in the actor: to avoid doing it
per-page. As it's going to be PIPE_BUFFERS (16) pages max, I think this
is better.

Back in the splice early days, the balance_dirty_pages() actually showed
up in profiles when it was done on a per-page basis. So I'm reluctant to
change it :-)

-- 
Jens Axboe



Thread overview: 13+ messages
2008-10-19 14:00 [PATCH] Add block device specific splice write method Dmitri Monakhov
2008-10-20 17:49 ` Jens Axboe
2008-10-20 18:11   ` Jens Axboe
2008-10-20 18:42     ` Dmitri Monakhov
2008-10-23  5:39     ` Andrew Morton
2008-10-23  6:29       ` Jens Axboe [this message]
2008-10-23  6:41         ` Andrew Morton
2008-10-23  6:51           ` Jens Axboe
2008-10-23  7:03             ` Andrew Morton
2008-10-23  7:16               ` Jens Axboe
2008-10-23  8:41       ` Dmitri Monakhov
2008-10-20 18:29   ` Dmitri Monakhov
2008-10-20 18:33     ` Jens Axboe
