From mboxrd@z Thu Jan  1 00:00:00 1970
From: David Miller
Subject: Re: SLUB performance regression vs SLAB
Date: Thu, 04 Oct 2007 14:11:13 -0700 (PDT)
Message-ID: <20071004.141113.08322956.davem@davemloft.net>
References: <20071004192824.GA9852@linux.intel.com>
	<20071004.135537.39158051.davem@davemloft.net>
	<470554D9.2050505@redhat.com>
Mime-Version: 1.0
Content-Type: Text/Plain; charset=us-ascii
Content-Transfer-Encoding: 7bit
Cc: willy@linux.intel.com, clameter@sgi.com, nickpiggin@yahoo.com.au,
	hch@lst.de, mel@skynet.ie, linux-fsdevel@vger.kernel.org,
	linux-kernel@vger.kernel.org, dgc@sgi.com, jens.axboe@oracle.com,
	suresh.b.siddha@intel.com
To: cebbert@redhat.com
Return-path:
In-Reply-To: <470554D9.2050505@redhat.com>
Sender: linux-kernel-owner@vger.kernel.org
List-Id: linux-fsdevel.vger.kernel.org

From: Chuck Ebbert
Date: Thu, 04 Oct 2007 17:02:17 -0400

> How do you simulate reading 100TB of data spread across 3000 disks,
> selecting 10% of it using some criterion, then sorting and
> summarizing the result?

You repeatedly read zeros from a smaller disk into the same amount of
memory, and sort that as if it were real data instead.

You're not thinking outside of the box, and you need to do that to
write good test cases and fix kernel bugs effectively.
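
A minimal sketch of the kind of stand-in workload being described:
repeatedly fill a fixed buffer from a zero-producing source and sort
it as if it held real records. Untested; the source path, buffer size,
and pass count are illustrative placeholders, not values from the
thread.

	#include <stdio.h>
	#include <stdlib.h>

	#define BUF_BYTES (64UL * 1024 * 1024) /* stand-in for "the same amount of memory" */
	#define PASSES    16                   /* repeat to approximate the larger data set */

	static int cmp_u32(const void *a, const void *b)
	{
		unsigned int x = *(const unsigned int *)a;
		unsigned int y = *(const unsigned int *)b;

		return (x > y) - (x < y);
	}

	int main(void)
	{
		unsigned int *buf = malloc(BUF_BYTES);
		/* /dev/zero here; a zero-filled file on the smaller disk
		 * would exercise the real I/O path instead. */
		FILE *f = fopen("/dev/zero", "rb");
		size_t n = BUF_BYTES / sizeof(*buf);
		unsigned long pass;

		if (!buf || !f) {
			perror("setup");
			return 1;
		}

		for (pass = 0; pass < PASSES; pass++) {
			if (fread(buf, 1, BUF_BYTES, f) != BUF_BYTES) {
				perror("read");
				return 1;
			}
			/* Sort the freshly read block as if it were real data. */
			qsort(buf, n, sizeof(*buf), cmp_u32);
		}

		fclose(f);
		free(buf);
		return 0;
	}

Reading from an actual file on disk rather than /dev/zero keeps the
block layer and page cache in the loop, which is the part of the
simulation that matters for an allocator regression.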