linux-fsdevel.vger.kernel.org archive mirror
From: Dave Chinner <david@fromorbit.com>
To: Daniel Wagner <wagi@monom.org>
Cc: linux-fsdevel@vger.kernel.org,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	xfs@oss.sgi.com
Subject: Re: Internal error xfs_trans_cancel
Date: Thu, 2 Jun 2016 16:35:39 +1000	[thread overview]
Message-ID: <20160602063539.GM12670@dastard> (raw)
In-Reply-To: <af997005-e144-5509-885c-a688562ea3ec@monom.org>

On Thu, Jun 02, 2016 at 07:23:24AM +0200, Daniel Wagner wrote:
> > posix03 and posix04 just emit error messages:
> > 
> > posix04 -n 40 -l 100
> > posix04: invalid option -- 'l'
> > posix04: Usage: posix04 [-i iterations] [-n nr_children] [-s] <filename>
> > .....
> 
> I screwed this up. I have patched my version of lockperf so that all
> tests use the same option names, but I forgot to send those patches.
> Will do now.
> 
> In this case you can use '-i' instead of '-l'.
> 
> > So I changed them to run "-i $l" instead, and that has a somewhat
> > undesired effect:
> > 
> > static void
> > kill_children()
> > {
> >         siginfo_t       infop;
> > 
> >         signal(SIGINT, SIG_IGN);
> >>>>>>   kill(0, SIGINT);
> >         while (waitid(P_ALL, 0, &infop, WEXITED) != -1);
> > }
> > 
> > Yeah, it sends SIGINT to every process in its process group, which
> > kills the parent shell:
> 
> Ah, that rings a bell. I tuned the parameters so that I did not run
> into this problem. I'll do a patch for this one. It's pretty annoying.
> 
> > $ ./run-lockperf-tests.sh /mnt/scratch/
> > pid 9597's current affinity list: 0-15
> > pid 9597's new affinity list: 0,4,8,12
> > sh: 1: cannot create /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor: Directory nonexistent
> > posix01 -n 8 -l 100
> > posix02 -n 8 -l 100
> > posix03 -n 8 -i 100
> > 
> > $
> > 
> > So, I've just removed those tests from your script. I'll see if I
> > have any luck with reproducing the problem now.
> 
> I was able to reproduce it again with the same steps.

Hmmm, Ok. I've been running the lockperf test and kernel builds all
day on a filesystem that is identical in shape and size to yours
(i.e. xfs_info output is the same) but I haven't reproduced it yet.
Is it possible to get a metadump image of your filesystem to see if
I can reproduce it on that?
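Something along these lines should capture it (the device name here is
just an example; use whatever the scratch filesystem lives on, and note
that xfs_metadump obfuscates filenames by default, so no file data or
names leak):

```shell
# Unmount first - metadump reads the block device directly.
umount /mnt/scratch
# -g shows progress; add -o only if you're OK with unobfuscated names.
xfs_metadump -g /dev/sdb1 /tmp/scratch.metadump
# Compress before sending; metadumps squash down well.
xz /tmp/scratch.metadump
```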

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com


Thread overview: 12+ messages
2016-06-01  5:52 Internal error xfs_trans_cancel Daniel Wagner
2016-06-01  7:10 ` Dave Chinner
2016-06-01 13:50   ` Daniel Wagner
2016-06-01 14:13     ` Daniel Wagner
2016-06-01 14:19       ` Daniel Wagner
2016-06-02  0:26       ` Dave Chinner
2016-06-02  5:23         ` Daniel Wagner
2016-06-02  6:35           ` Dave Chinner [this message]
2016-06-02 13:29             ` Daniel Wagner
2016-06-26 12:16               ` Thorsten Leemhuis
2016-06-26 15:13                 ` Daniel Wagner
2016-06-14  4:29 ` Josh Poimboeuf
