From: Dave Chinner <david@fromorbit.com>
To: Michael Weissenbacher <mw@dermichi.com>
Cc: xfs@oss.sgi.com
Subject: Re: xfs_bmap Cannot allocate memory
Date: Tue, 5 Jul 2011 21:39:29 +1000 [thread overview]
Message-ID: <20110705113929.GA21663@dastard> (raw)
In-Reply-To: <4E12EB13.50302@dermichi.com>
On Tue, Jul 05, 2011 at 12:44:35PM +0200, Michael Weissenbacher wrote:
> Hi Dave!
> >
> > Sounds like your memory is fragmented. IIRC, bmap tries to map all
> > the extents in a single buffer, and that might cause problems for
> > files with large numbers of extents. ENOMEM can occur if an
> > internal buffer cannot be allocated to hold all the extents to be
> > mapped in one call.
> >
> > Try using the "-n <num_extents>" option to reduce the number of
> > extents gathered per ioctl call and see if that makes the
> > issue go away.
> >
> Thanks, i've tried that:
> # xfs_bmap -n 3 /backup/tmp/cannot_allocate_memory.vhd
> /backup/tmp/cannot_allocate_memory.vhd:
> 0: [0..134279]: 444610560..444744839
> 1: [134280..134399]: hole
> 2: [134400..206495]: 433472688..433544783
> # xfs_bmap -n 70000 /backup/tmp/cannot_allocate_memory.vhd | tail -n1
> 69999: [244690864..244690871]: 1173913592..1173913599
> # xfs_bmap -n 75000 /backup/tmp/cannot_allocate_memory.vhd | tail -n1
> 74999: [253425664..253425671]: 1284986768..1284986775
> # xfs_bmap -n 80000 /backup/tmp/cannot_allocate_memory.vhd | tail -n1
> 79999: [262287488..262289015]: hole
> # xfs_bmap -n 85000 /backup/tmp/cannot_allocate_memory.vhd | tail -n1
> 84999: [272607184..272613335]: 1497107288..1497113439
> # xfs_bmap -n 90000 /backup/tmp/cannot_allocate_memory.vhd | tail -n1
> xfs_bmap: xfsctl(XFS_IOC_GETBMAPX) iflags=0x0
> ["/backup/tmp/cannot_allocate_memory.vhd"]: Cannot allocate memory
>
> - Seems that xfs_bmap reads at maximum the number of extents that i
> specified with -n
> - Seems that the file has even more than 85000 extents
Ah, there's a mismatch between the man page and the implementation,
then. The man page implies that "-n <num>" means query <num> extents
at a time until the entire file is mapped. It's actually implemented
as "map only the first <num> extents", though.
You could try this:
# xfs_io -f -c "fiemap -v" <file>
Because the fiemap command loops, fetching a small batch of extents
per call until the whole file is mapped, so it never needs to
allocate a single buffer large enough to hold every extent at once...
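The batching pattern fiemap uses can be sketched roughly like this. This is a minimal illustration in Python, not the xfs_io source; get_extents() is a hypothetical stand-in for the FIEMAP ioctl, and the batch size is arbitrary.

```python
# Sketch of fiemap-style batched extent mapping: instead of asking the
# kernel for every extent in one call (which needs one large buffer and
# can fail with ENOMEM on a badly fragmented file), ask for a small
# batch at a time and resume from the end of the last extent returned.
# All names here are illustrative, not xfs_io's actual API.

BATCH = 32  # extents fetched per simulated "ioctl" call

def get_extents(extents, start, count):
    """Stand-in for the FIEMAP ioctl: return up to `count` extents
    whose logical start offset is >= `start`."""
    return [e for e in extents if e[0] >= start][:count]

def map_file(extents):
    """Walk the whole extent list in batches of BATCH extents."""
    result, start = [], 0
    while True:
        batch = get_extents(extents, start, BATCH)
        if not batch:
            break
        result.extend(batch)
        # resume just past the last extent returned (offset + length)
        start = batch[-1][0] + batch[-1][1]
    return result

# 100 fake (offset, length) extents of 8 blocks each, so the loop
# needs several batches to cover them all
fake = [(i * 8, 8) for i in range(100)]
assert map_file(fake) == fake
```

The key point is that memory usage is bounded by the batch size rather than by the file's total extent count, which is why the looping approach survives files like the one above.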
Cheers,
Dave.
--
Dave Chinner
david@fromorbit.com
_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs
Thread overview: 9+ messages
2011-07-05 5:02 xfs_bmap Cannot allocate memory Michael Weissenbacher
2011-07-05 10:32 ` Dave Chinner
2011-07-05 10:44 ` Michael Weissenbacher
2011-07-05 11:39 ` Dave Chinner [this message]
2011-07-05 13:18 ` Michael Weissenbacher
2011-07-05 10:35 ` Stan Hoeppner
2011-07-05 11:08 ` Michael Weissenbacher
2011-07-05 10:49 ` Christoph Hellwig
2011-07-05 11:01 ` Michael Weissenbacher