From: Theodore Ts'o <tytso@mit.edu>
To: Andy Lutomirski <luto@amacapital.net>
Cc: Dave Hansen <dave.hansen@intel.com>,
Dave Hansen <dave.hansen@linux.intel.com>,
linux-fsdevel@vger.kernel.org, xfs@oss.sgi.com,
linux-ext4@vger.kernel.org, Jan Kara <jack@suse.cz>,
LKML <linux-kernel@vger.kernel.org>,
david@fromorbit.com, Tim Chen <tim.c.chen@linux.intel.com>,
Andi Kleen <ak@linux.intel.com>
Subject: Re: page fault scalability (ext3, ext4, xfs)
Date: Wed, 14 Aug 2013 21:11:01 -0400 [thread overview]
Message-ID: <20130815011101.GA3572@thunk.org> (raw)
In-Reply-To: <CALCETrVaRQ3WQ5++Uu_0JTaVnjUugAaAhqQK__7r5YWvLxpAhw@mail.gmail.com>
On Wed, Aug 14, 2013 at 04:38:12PM -0700, Andy Lutomirski wrote:
> > It would be better to write zeros to it, so we aren't measuring the
> > cost of the unwritten->written conversion.
>
> At the risk of beating a dead horse, how hard would it be to defer
> this part until writeback?
Part of the work has to be done at write time because we need to
update allocation statistics (i.e., so that we don't have ENOSPC
problems). The unwritten->written conversion does happen at writeback
(as does the actual block allocation if we are doing delayed
allocation).
The point is that if the goal is to measure page fault scalability, we
shouldn't have this other stuff happening at the same time as the page
fault workload.
- Ted
Thread overview: 40+ messages
2013-08-14 17:10 page fault scalability (ext3, ext4, xfs) Dave Hansen
2013-08-14 19:43 ` Theodore Ts'o
2013-08-14 20:50 ` Dave Hansen
2013-08-14 23:06 ` Theodore Ts'o
2013-08-14 23:38 ` Andy Lutomirski
2013-08-15 1:11 ` Theodore Ts'o [this message]
2013-08-15 2:10 ` Dave Chinner
2013-08-15 4:32 ` Andy Lutomirski
2013-08-15 6:01 ` Dave Chinner
2013-08-15 6:14 ` Andy Lutomirski
2013-08-15 6:18 ` David Lang
2013-08-15 6:28 ` Andy Lutomirski
2013-08-15 7:11 ` Dave Chinner
2013-08-15 7:45 ` Jan Kara
2013-08-15 21:28 ` Dave Chinner
2013-08-15 21:31 ` Andy Lutomirski
2013-08-15 21:39 ` Dave Chinner
2013-08-19 23:23 ` David Lang
2013-08-19 23:31 ` Andy Lutomirski
2013-08-15 15:17 ` Andy Lutomirski
2013-08-15 21:37 ` Dave Chinner
2013-08-15 21:43 ` Andy Lutomirski
2013-08-15 22:18 ` Dave Chinner
2013-08-15 22:26 ` Andy Lutomirski
2013-08-16 0:14 ` Dave Chinner
2013-08-16 0:21 ` Andy Lutomirski
2013-08-16 22:02 ` J. Bruce Fields
2013-08-16 23:18 ` Andy Lutomirski
2013-08-18 20:17 ` J. Bruce Fields
2013-08-19 22:17 ` J. Bruce Fields
2013-08-19 22:29 ` Andy Lutomirski
2013-08-15 15:14 ` Dave Hansen
2013-08-15 0:24 ` Dave Chinner
2013-08-15 2:24 ` Andi Kleen
2013-08-15 4:29 ` Dave Chinner
2013-08-15 15:36 ` Dave Hansen
2013-08-15 15:09 ` Dave Hansen
2013-08-15 15:05 ` Theodore Ts'o
2013-08-15 17:45 ` Dave Hansen
2013-08-15 19:31 ` Theodore Ts'o