cluster-devel.redhat.com archive mirror
From: Ross Lagerwall <ross.lagerwall@citrix.com>
To: cluster-devel.redhat.com
Subject: [Cluster-devel] Assertion failure: sdp->sd_log_blks_free <= sdp->sd_jdesc->jd_blocks
Date: Tue, 19 Mar 2019 18:17:15 +0000	[thread overview]
Message-ID: <fe00b6ce-7fb9-8769-0bc5-e830cebb1e75@citrix.com> (raw)

Hi,

Occasionally during testing, we see the following assertion failure in 
log_pull_tail():

[ 1104.061245] gfs2: fsid=xapi-clusterd:2d2cc24c-c48a-ca.0: fatal: assertion "atomic_read(&sdp->sd_log_blks_free) <= sdp->sd_jdesc->jd_blocks" failed
[ 1104.061245]    function = log_pull_tail, file = fs/gfs2/log.c, line = 510

It always seems to happen shortly after journal recovery. I added some 
debug logging at the point of the assertion failure and got the following:

[  862.335390] gfs2: fsid=xapi-clusterd:9f0b3587-c0f3-c8.1: jid=0: Trying to acquire journal lock...
[  862.364578] gfs2: fsid=xapi-clusterd:9f0b3587-c0f3-c8.1: jid=0: Looking at journal...
[  862.475108] gfs2: fsid=xapi-clusterd:9f0b3587-c0f3-c8.1: jid=0: Acquiring the transaction lock...
[  862.475110] gfs2: fsid=xapi-clusterd:9f0b3587-c0f3-c8.1: jid=0: Replaying journal...
[  862.475538] gfs2: fsid=xapi-clusterd:9f0b3587-c0f3-c8.1: jid=0: Replayed 1 of 4 blocks
[  862.475539] gfs2: fsid=xapi-clusterd:9f0b3587-c0f3-c8.1: jid=0: Found 3 revoke tags
[  862.484394] gfs2: fsid=xapi-clusterd:9f0b3587-c0f3-c8.1: jid=0: Journal replayed in 149ms [jlck:29ms, jhead:110ms, tlck:0ms, replay:9ms]
[  862.484431] gfs2: fsid=xapi-clusterd:9f0b3587-c0f3-c8.1: recover jid 0 result success
[  862.484434] gfs2: fsid=xapi-clusterd:9f0b3587-c0f3-c8.1: jid=0: Done
[  862.484545] gfs2: fsid=xapi-clusterd:9f0b3587-c0f3-c8.1: recover generation 4 done
[  863.362577] DBG: log_blks_free = 14838 jd_blocks = 8192
[  863.362578] DBG: dist = 6663 new_tail = 1025 sd_log_tail = 2554
[  863.362579] DBG: sd_log_head = 1025 sd_log_head_flush = 1027
[  863.362579] DBG: sd_log_idle = 0
[  863.362580] DBG: sd_ail1_list empty? = yes

It looks to me like something modifies sd_log_tail and sd_log_head/sd_log_head_flush during recovery, which causes a subsequent call to gfs2_log_flush() to hit the assertion.

Any ideas about this?

Thanks,
-- 
Ross Lagerwall



Thread overview: 6+ messages
2019-03-19 18:17 Ross Lagerwall [this message]
2019-03-19 18:48 ` [Cluster-devel] Assertion failure: sdp->sd_log_blks_free <= sdp->sd_jdesc->jd_blocks Bob Peterson
2019-03-22 18:15   ` Ross Lagerwall
2019-03-25 15:29     ` Bob Peterson
2019-03-27 14:40       ` Ross Lagerwall
2019-03-27 17:04         ` Bob Peterson
