public inbox for linux-xfs@vger.kernel.org
From: Brian Foster <bfoster@redhat.com>
To: Dave Chinner <david@fromorbit.com>
Cc: xfs@oss.sgi.com
Subject: Re: [PATCH] xfs: flush the range before zero range conversion
Date: Thu, 25 Sep 2014 11:21:45 -0400	[thread overview]
Message-ID: <20140925152145.GD47304@bfoster.bfoster> (raw)
In-Reply-To: <20140925120155.GF4945@dastard>

On Thu, Sep 25, 2014 at 10:01:55PM +1000, Dave Chinner wrote:
> On Wed, Sep 24, 2014 at 03:06:31PM -0400, Brian Foster wrote:
> > XFS currently discards delalloc blocks within the target range of a zero
> > range request. Unaligned start and end offsets are zeroed through the
> > page cache and the internal, aligned blocks are converted to unwritten
> > extents.
> > 
> > If EOF is page aligned and covered by a delayed allocation extent, the
> > inode size is not updated until I/O completion. If a zero range request
> > discards a delalloc range that covers such a page-aligned EOF, the
> > inode size update therefore never occurs. For example:
> > 
> > $ rm -f /mnt/file
> > $ xfs_io -fc "pwrite 0 64k" -c "zero 60k 4k" /mnt/file
> > $ stat -c "%s" /mnt/file
> > 65536
> > $ umount /mnt
> > $ mount <dev> /mnt
> > $ stat -c "%s" /mnt/file
> > 61440
> > 
> > Update xfs_zero_file_space() to flush the range rather than discard
> > delalloc blocks to ensure that inode size updates occur appropriately.
> > 
> > Signed-off-by: Brian Foster <bfoster@redhat.com>
> > ---
> > 
> > I suppose we could be more clever here and only flush the range in this
> > particular scenario, but I'm not sure if there's a major benefit there.
> 
> Punching the delalloc range rather than flushing the file
> was done intentionally - this was added primarily for speeding up
> the zeroing of large VM image files. i.e. it's an extent
> manipulation operation rather than a data I/O operation. Flushing the
> file defeats the primary reason for the operation existing.
> 
> We can easily detect this situation and just zero the last block in
> the file directly after punching out all the delalloc state. This
> should happen anyway when the region to be zeroed is not page
> aligned....
> 

Hmm, good point. xfs_iozero() goes through page cache. It seems like
that should work and it's something we already have to handle in this
path. I'll give it a shot.

Brian

> > FWIW, this implicitly addresses the indlen==0 assert failures described
> > in the xfs_bmap_del_extent() rfc, but doesn't necessarily mean we
> > shouldn't fix that code IMO.
> 
> We punch delalloc extents elsewhere, so that still needs fixing.
> 
> Cheers,
> 
> Dave.
> -- 
> Dave Chinner
> david@fromorbit.com
> 
> _______________________________________________
> xfs mailing list
> xfs@oss.sgi.com
> http://oss.sgi.com/mailman/listinfo/xfs


Thread overview: 3+ messages
2014-09-24 19:06 [PATCH] xfs: flush the range before zero range conversion Brian Foster
2014-09-25 12:01 ` Dave Chinner
2014-09-25 15:21   ` Brian Foster [this message]
