From: Christian Ehrhardt <ehrhardt@linux.vnet.ibm.com>
To: Mel Gorman <mel@csn.ul.ie>
Cc: Andrew Morton <akpm@linux-foundation.org>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	epasch@de.ibm.com, SCHILLIG@de.ibm.com,
	Martin Schwidefsky <schwidefsky@de.ibm.com>,
	Heiko Carstens <heiko.carstens@de.ibm.com>,
	christof.schmitt@de.ibm.com, thoss@de.ibm.com, hare@suse.de,
	npiggin@suse.de
Subject: Re: Performance regression in scsi sequential throughput (iozone) due to "e084b - page-allocator: preserve PFN ordering when __GFP_COLD is set"
Date: Tue, 09 Feb 2010 07:23:34 +0100
Message-ID: <4B70FF66.7020602@linux.vnet.ibm.com>
In-Reply-To: <20100208152131.GC23680@csn.ul.ie>



Mel Gorman wrote:
> On Mon, Feb 08, 2010 at 03:01:16PM +0100, Christian Ehrhardt wrote:
>>
>> Mel Gorman wrote:
>>> On Fri, Feb 05, 2010 at 04:51:10PM +0100, Christian Ehrhardt wrote:
>>>   
>>>> I'll keep the old thread below as a reference.
>>>>
>>>> After a round of ensuring reproducibility and a pile of new
>>>> measurements, I can now come back with several new insights.
>>>>
>>>> FYI - I'm now running iozone triplets (4, then 8, then 16 parallel
>>>> threads) with sequential read load, and all of that 4 times to find
>>>> potential noise. Since I changed to that load instead of random read
>>>> with one thread, and ensure that most of the memory is cleared first
>>>> (sync + echo 3 > /proc/sys/vm/drop_caches + a few sleeps), the noise
>>>> is now down to <2%. For detailed questions about the setup feel free
>>>> to ask me directly, as I don't want to flood this thread with such
>>>> details.
>>>>
>>>>     
>>> Is there any chance you have a driver script for the test that you could send
>>> me? I'll then try reproducing based on that script and see what happens. I'm
>>> not optimistic I'll be able to reproduce the problem because I think
>>> it's specific to your setup but you never know.
>>>   
>> I don't have one as it runs in a bigger automated test environment, but  
>> it is easy enough to write down something comparable.
> 
> I'd appreciate it, thanks.
> 

Testing of your two patches starts in a few minutes, thanks in advance.

Here is the info on how to execute the core of the test - fingers crossed that someone else can reproduce it this way :-)

I use it in a huge automation framework which takes care of setting up the system and disks, gathering statistics and so on, but it essentially comes down to something as simple as this:

#!/bin/bash
# Prerequisites:
# - reboot your system with 256m of memory (e.g. via the mem=256M kernel parameter)
# - attach 16 disks (we usually go up to 64, but 16 are enough to show the issue)
# - mount your disks at /mnt/subw0, /mnt/subw1, ... /mnt/subw15
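#
# A minimal sketch of that mount step, with hypothetical device names
# (/dev/sdb through /dev/sdq for 16 disks) - adjust to your system:
#
#   i=0
#   for dev in /dev/sd{b..q}; do
#       mkdir -p /mnt/subw$i
#       mount "$dev" /mnt/subw$i
#       i=$((i+1))
#   done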
# run the 4/8/16-thread triplet 4 times to judge the noise across repetitions
for i in 4 8 16 4 8 16 4 8 16 4 8 16
do
        # flush dirty pages and drop the page cache so every run starts clean
        sync; sleep 10s; echo 3 > /proc/sys/vm/drop_caches; sleep 2s;
        # sequential write (-i 0) to (re)create the files
        iozone -s 2000m -r 64k -t $i -e -w -R -C -i 0 -F /mnt/subw0 /mnt/subw1 /mnt/subw2 /mnt/subw3 /mnt/subw4 /mnt/subw5 /mnt/subw6 /mnt/subw7 /mnt/subw8 /mnt/subw9 /mnt/subw10 /mnt/subw11 /mnt/subw12 /mnt/subw13 /mnt/subw14 /mnt/subw15
        sync; sleep 10s; echo 3 > /proc/sys/vm/drop_caches; sleep 2s;
        # sequential read (-i 1) - this is the part that shows the regression
        iozone -s 2000m -r 64k -t $i -e -w -R -C -i 1 -F /mnt/subw0 /mnt/subw1 /mnt/subw2 /mnt/subw3 /mnt/subw4 /mnt/subw5 /mnt/subw6 /mnt/subw7 /mnt/subw8 /mnt/subw9 /mnt/subw10 /mnt/subw11 /mnt/subw12 /mnt/subw13 /mnt/subw14 /mnt/subw15
done
# While the writes could be reduced to a single 16-thread write up front,
# I keep them per iteration as that is more similar to our original load
# (it makes no difference for the results anyway).
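
To compare runs I just pull the aggregate numbers out of the captured output. A minimal sketch, assuming each iozone call above was piped through tee into a per-kernel log file (the log name is hypothetical; the "Children see throughput" summary lines are iozone's usual output, but double-check against your version):

#!/bin/bash
# Hypothetical log, e.g. produced by appending '| tee -a iozone-$(uname -r).log'
# to the iozone invocations above.
LOG=iozone-$(uname -r).log

# iozone prints one "Children see throughput for N readers = X kB/sec" line per
# read run (plus a "re-readers" line, which this pattern matches as well);
# compute average, min, max and relative spread to judge result and noise.
grep "Children see throughput" "$LOG" | grep "readers" | \
awk '{ v = $(NF-1)
       if (min == "" || v < min) min = v
       if (v > max) max = v
       sum += v; n++ }
     END { if (n) printf "runs=%d avg=%.0f min=%.0f max=%.0f spread=%.1f%%\n",
                         n, sum/n, min, max, (max - min) * 100 / min }'

One such log per kernel (e.g. with and without e084b) and the avg/spread columns make the regression, as well as the <2% noise floor, easy to eyeball.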

[...]
-- 

Grüsse / regards, Christian Ehrhardt
IBM Linux Technology Center, Open Virtualization 

Thread overview: 41+ messages
2009-12-07 14:39 Performance regression in scsi sequential throughput (iozone) due to "e084b - page-allocator: preserve PFN ordering when __GFP_COLD is set" Christian Ehrhardt
2009-12-07 15:09 ` Mel Gorman
2009-12-08 17:59   ` Christian Ehrhardt
2009-12-10 14:36     ` Christian Ehrhardt
2009-12-11 11:20       ` Mel Gorman
2009-12-11 14:47         ` Christian Ehrhardt
2009-12-18 13:38           ` Christian Ehrhardt
2009-12-18 17:42             ` Mel Gorman
2010-01-14 12:30               ` Christian Ehrhardt
2010-01-19 11:33                 ` Mel Gorman
2010-02-05 15:51                   ` Christian Ehrhardt
2010-02-05 17:49                     ` Mel Gorman
2010-02-08 14:01                       ` Christian Ehrhardt
2010-02-08 15:21                         ` Mel Gorman
2010-02-08 16:55                           ` Mel Gorman
2010-02-09  6:23                           ` Christian Ehrhardt [this message]
2010-02-09 15:52                           ` Christian Ehrhardt
2010-02-09 17:57                             ` Mel Gorman
2010-02-11 16:11                               ` Christian Ehrhardt
2010-02-12 10:05                                 ` Nick Piggin
2010-02-15  6:59                                   ` Nick Piggin
2010-02-15 15:46                                   ` Christian Ehrhardt
2010-02-16 11:25                                     ` Mel Gorman
2010-02-16 16:47                                       ` Christian Ehrhardt
2010-02-17  9:55                                         ` Christian Ehrhardt
2010-02-17 10:03                                           ` Christian Ehrhardt
2010-02-18 11:43                                           ` Mel Gorman
2010-02-18 16:09                                             ` Christian Ehrhardt
2010-02-19 11:19                                               ` Christian Ehrhardt
2010-02-19 15:19                                                 ` Mel Gorman
2010-02-22 15:42                                                   ` Christian Ehrhardt
2010-02-25 15:13                                                     ` Christian Ehrhardt
2010-02-26 11:18                                                       ` Nick Piggin
2010-03-02  6:52                                                   ` Nick Piggin
2010-03-02 10:04                                                     ` Mel Gorman
2010-03-02 10:36                                                       ` Nick Piggin
2010-03-02 11:01                                                         ` Mel Gorman
2010-03-02 11:18                                                           ` Nick Piggin
2010-03-02 11:24                                                             ` Mel Gorman
2010-03-03  6:51                                                               ` Christian Ehrhardt
2010-02-08 15:02                       ` Christian Ehrhardt
