public inbox for linux-xfs@vger.kernel.org
From: David Chinner <dgc@sgi.com>
To: Rabeeh Khoury <rabeeh@marvell.com>
Cc: nscott@aconex.com, linux-fsdevel@vger.kernel.org,
	xfs@oss.sgi.com, Lennert Buytenhek <buytenh@marvell.com>
Subject: Re: NFSD on XFS with RT subvolume
Date: Fri, 8 Feb 2008 14:27:30 +1100	[thread overview]
Message-ID: <20080208032730.GL155407@sgi.com> (raw)
In-Reply-To: <B9FFC3F97441D04093A504CEA31B7C4102220ACE@MSILEXCH01.marvell.com>

On Wed, Feb 06, 2008 at 04:08:58PM +0200, Rabeeh Khoury wrote:
> > >
> > > Exporting an XFS volume with kernel NFSD when real-time subvolume is
> > > enabled hangs the kernel.
> > >
> > > I'm using vanilla LK 2.6.22.7; first I create the XFS volume with
> > > two partitions of 20GB each with extent size of 1MB; then I create a
> > > subdirectory in the volume and mark it (using the xfs_io util) as
> > > belonging to the rt subvolume with the inheritance flag.
> > >
> > > After mounting that volume through NFSv3 / UDP and trying a 'dd
> > > if=/dev/zero of=/mnt/rt/test bs=1M count=1000', the machine running
> > > NFSD hangs infinitely.
> > 
> > Did you manage to get a stack trace, OOC?  No reason why it shouldn't
> > work AFAIK.
> 
> I didn't mention that I'm using ARM EABI machine for that; but the same
> scenario happened on Ubuntu Gutsy 7.10.
> The serial console stops responding, but by triggering SysRq with the
> showPc function I've got some stack traces (look for #stack-trace
> below).

Nothing indicating a hang in the stack traces, just lots of
truncates in progress. If you run the same test on the local machine,
does the system hang? Or does it only hang through NFS?

BTW, having multiple truncates in flight doesn't match up with your
supposed test case above.  If all you are doing is a dd, then there
should only be one truncate occurring (on open). Try running with
conv=notrunc and see if that hangs in a similar manner...
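[The conv=notrunc experiment suggested above, as a one-liner. A sketch only: /mnt/rt/test is the reporter's path on the NFS-mounted, rt-inheriting directory.]

```shell
# Same write load as the original report, but conv=notrunc stops dd
# from opening the output file with O_TRUNC, so no truncate is issued.
if [ -d /mnt/rt ]; then
    dd if=/dev/zero of=/mnt/rt/test bs=1M count=1000 conv=notrunc
fi
```

If the hang disappears with conv=notrunc, that points at the truncate path rather than the write path.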

> I'm running Fedora-8 on the ARM machine using xfsprogs-2.9.4-4.f8 RPM.
> The output of formatting /dev/sda5 and /dev/sda6 as the rt-subvolume is
> the following, but this time /dev/sda5 is 2GByte and /dev/sda6 is
> 20GByte (look for #mkfs.xfs).
> 
> Another note is that sometimes I'm getting an error message that XFS is
> trying to access LBA beyond the volume.

Does xfs_check or xfs_repair -n indicate any corruption on disk?
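[An offline check along those lines might look like this; the device names are the ones from the report, and the filesystem must be unmounted first.]

```shell
# Report-only consistency check: -n means xfs_repair makes no
# modifications, and -r names the realtime device.
CHECK_DEV=${CHECK_DEV:-/dev/sda5}
RT_DEV=${RT_DEV:-/dev/sda6}
if [ -b "$CHECK_DEV" ]; then
    umount "$CHECK_DEV" 2>/dev/null
    xfs_repair -n -r "$RT_DEV" "$CHECK_DEV"
fi
```

If XFS is really issuing I/O beyond the end of the volume, a check like this should show damaged or out-of-range structures without touching the disk.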

> Maybe you can suggest few tests that I can perform to figure out what's
> the root cause?

If you don't use a rt device, does the same test hang?

FWIW, if you run the same test on x86 or x86_64, does it hang?
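[The two isolation tests suggested above can be sketched as follows. The file name local-test is hypothetical; /mnt/rt is the reporter's rt-inheriting directory.]

```shell
# (a) Run the same dd load locally, taking NFSD out of the picture.
TESTFILE=/mnt/rt/local-test
if [ -d /mnt/rt ]; then
    dd if=/dev/zero of="$TESTFILE" bs=1M count=1000
fi
# (b) Rebuild the filesystem without a realtime device and repeat the
#     NFS test, e.g.:
#         mkfs.xfs -f /dev/sda5      (no -r rtdev=... option)
```

Between them, (a) and (b) separate an NFSD interaction from a realtime-allocator problem.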

Cheers,

Dave.
-- 
Dave Chinner
Principal Engineer
SGI Australian Software Group

Thread overview: 3+ messages
     [not found] <B9FFC3F97441D04093A504CEA31B7C41021A627E@MSILEXCH01.marvell.com>
2008-02-03 22:05 ` NFSD on XFS with RT subvolume Nathan Scott
2008-02-06 14:08   ` Rabeeh Khoury
2008-02-08  3:27     ` David Chinner [this message]
