From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path:
Received: from cuda.sgi.com (cuda3.sgi.com [192.48.176.15]) by oss.sgi.com (8.14.3/8.14.3/SuSE Linux 0.8) with ESMTP id q0SG7tgd155973 for ; Sat, 28 Jan 2012 10:07:55 -0600
Received: from mail.sandeen.net (sandeen.net [63.231.237.45]) by cuda.sgi.com with ESMTP id ySvFpgkxVDEA9VvZ for ; Sat, 28 Jan 2012 08:07:54 -0800 (PST)
Message-ID: <4F241D5A.4080906@sandeen.net>
Date: Sat, 28 Jan 2012 10:07:54 -0600
From: Eric Sandeen
MIME-Version: 1.0
Subject: Re: Insane file system overhead on large volume
References: <4F22EB3C.6020106@sandeen.net> (sfid-20120127_225414_630806_3564B30F) <201201281555.22179.Martin@lichtvoll.de> <4F2415B2.3080605@sandeen.net>
In-Reply-To: <4F2415B2.3080605@sandeen.net>
List-Id: XFS Filesystem from SGI
List-Unsubscribe: ,
List-Archive:
List-Post:
List-Help:
List-Subscribe: ,
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xfs-bounces@oss.sgi.com
Errors-To: xfs-bounces@oss.sgi.com
To: Martin Steigerwald
Cc: Manny , xfs@oss.sgi.com

On 1/28/12 9:35 AM, Eric Sandeen wrote:
> On 1/28/12 8:55 AM, Martin Steigerwald wrote:
>> On Friday, 27 January 2012, Eric Sandeen wrote:

...

>>> So Christoph's question was a good one; where are you getting
>>> your sizes?
>
> To solve your original problem, can you answer the above question?
> Adding your actual raid config output (/proc/mdstat maybe) would help
> too.

Sorry, never mind. I missed the earlier reply about the problem being
solved and confused the responders. Argh.

-Eric

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs