public inbox for linux-xfs@vger.kernel.org
From: Thomas Klaube <thomas@klaube.net>
To: xfs@oss.sgi.com
Subject: xlog_write: reservation ran out. Need to up reservation
Date: Tue, 19 Aug 2014 17:34:30 +0200 (CEST)
Message-ID: <362338960.3862279.1408462470243.JavaMail.zimbra@klaube.net>
In-Reply-To: <159192779.3859815.1408461799560.JavaMail.zimbra@klaube.net>

Hi all,

I am currently testing/benchmarking XFS on top of bcache. When I run a heavy
I/O workload (fio with 64 threads, mixed read/write) against the device for
~30-45 minutes, I get:

[ 9092.978268] XFS (bcache1): xlog_write: reservation summary:
[ 9092.978268]   trans type  = (null) (42)
[ 9092.978268]   unit res    = 18730384 bytes
[ 9092.978268]   current res = -1640 bytes
[ 9092.978268]   total reg   = 512 bytes (o/flow = 1163749592 bytes)
[ 9092.978268]   ophdrs      = 655304 (ophdr space = 7863648 bytes)
[ 9092.978268]   ophdr + reg = 1171613752 bytes
[ 9092.978268]   num regions = 2
[ 9092.978268] 
[ 9092.978272] XFS (bcache1): region[0]: LR header - 512 bytes
[ 9092.978273] XFS (bcache1): region[1]: commit - 0 bytes
[ 9092.978274] XFS (bcache1): xlog_write: reservation ran out. Need to up reservation
[ 9092.978303] XFS (bcache1): xfs_do_force_shutdown(0x2) called from line 2036 of file fs/xfs/xfs_log.c.  Return address = 0xffffffffa04433c8
[ 9092.979189] XFS (bcache1): Log I/O Error Detected.  Shutting down filesystem
[ 9092.979210] XFS (bcache1): Please umount the filesystem and rectify the problem(s)
[ 9092.979238] XFS (bcache1): xfs_do_force_shutdown(0x2) called from line 1497 of file fs/xfs/xfs_log.c.  Return address = 0xffffffffa0443b57
[ 9093.183869] XFS (bcache1): xfs_log_force: error 5 returned.
[ 9093.489944] XFS (bcache1): xfs_log_force: error 5 returned.
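
For what it's worth, the numbers in the summary look internally consistent
if each log op header is 12 bytes (which I assume is sizeof(struct
xlog_op_header)): the transaction apparently needed ~1.1GB of log space
against an ~18MB reservation. A quick standalone check of the arithmetic
(the 12-byte header size is my assumption; everything else is taken from
the summary above):

  #include <stdio.h>
  #include <stdint.h>

  int main(void)
  {
          /* values from the xlog_write reservation summary above */
          const uint64_t ophdrs     = 655304;     /* "ophdrs" */
          const uint64_t ophdr_size = 12;         /* assumed sizeof(struct xlog_op_header) */
          const uint64_t total_reg  = 512;        /* "total reg" */
          const uint64_t overflow   = 1163749592; /* "o/flow" */
          const uint64_t unit_res   = 18730384;   /* "unit res" */

          uint64_t ophdr_space = ophdrs * ophdr_size;
          uint64_t total       = total_reg + overflow + ophdr_space;

          printf("ophdr space = %llu bytes\n", (unsigned long long)ophdr_space);
          printf("ophdr + reg = %llu bytes\n", (unsigned long long)total);
          printf("unit res    = %llu bytes (~%llux too small)\n",
                 (unsigned long long)unit_res,
                 (unsigned long long)(total / unit_res));
          return 0;
  }

This prints 7863648 and 1171613752, matching the "ophdr space" and
"ophdr + reg" lines above, so the summary itself does not look corrupted;
the reservation was simply overrun by a factor of ~62.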

The kernel is 3.16.1, but this also happens with Ubuntu's 3.13.0-34. With
bcache, fio drives ~30k IOPS against the filesystem.

xfs_info:
meta-data=/dev/bcache1           isize=256    agcount=8, agsize=268435455 blks
         =                       sectsz=512   attr=2
data     =                       bsize=4096   blocks=1949957886, imaxpct=5
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0
log      =internal               bsize=4096   blocks=521728, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
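
If I am reading the log geometry right, this is already the maximum
internal log size, so simply making the log bigger would not be an option:

  log size = 521728 blocks * 4096 bytes/block
           = 2136997888 bytes (~2038 MiB)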

An umount/mount cycle recovers the filesystem, and afterwards it appears to be OK.

I can reproduce this behavior. Is there anything I could try to debug
this?

Regards
Thomas
