From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Tue, 19 Jun 2012 11:27:00 +1000
From: Dave Chinner
Subject: Re: XFS status update for May 2012
Message-ID: <20120619012700.GH25389@dastard>
References: <20120618120853.GA15480@infradead.org> <4FDF9998.6020205@sandeen.net>
In-Reply-To: <4FDF9998.6020205@sandeen.net>
List-Id: XFS Filesystem from SGI
To: Eric Sandeen
Cc: Andreas Dilger, Christoph Hellwig, "linux-fsdevel@vger.kernel.org Devel", xfs@oss.sgi.com

On Mon, Jun 18, 2012 at 04:11:52PM -0500, Eric Sandeen wrote:
> On 6/18/12 1:25 PM, Andreas Dilger wrote:
> > On 2012-06-18, at 6:08 AM, Christoph Hellwig wrote:
> >> May saw the release of Linux 3.4, including a decent sized XFS update.
> >> Remarkable XFS features in Linux 3.4 include moving over all metadata
> >> updates to use transactions, the addition of a work queue for the
> >> low-level allocator code to avoid stack overflows due to extreme stack
> >> use in the Linux VM/VFS call chain,
> >
> > This is essentially a workaround for too-small stacks in the kernel,
> > which we've had to do at times as well, by doing work in a separate
> > thread (with a new stack) and waiting for the results. This is a
> > generic problem that any reasonably complex filesystem will have when
> > running under memory pressure on a complex storage stack (e.g.
> > LVM + iSCSI), but it causes unnecessary context switching.
> >
> > Any thoughts on a better way to handle this, or will there continue
> > to be a 4kB stack limit and hacks around it with repeated kmalloc?
>
> Well, it's 8k on x86_64 (not 4k), right? But still...
>
> Maybe it's still a partial hack, but it's more generic - should we
> have IRQ stacks like x86 has? (I think I'm right that those only
> exist on 32-bit x86.) Is there any downside to that?

We already have IRQ stacks on x86-64 - the stack unwinder knows about
them, so when you get a stack trace from the interrupt stack it walks
back across to the thread stack at the appropriate point...

See dump_trace() in arch/x86/kernel/dumpstack_64.c

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs