From: Laurent Vivier <Laurent.Vivier@bull.net>
To: Mingming Cao <cmm@us.ibm.com>
Cc: Arjan van de Ven <arjan@infradead.org>,
	Ravikiran G Thirumalai <kiran@scalex86.org>,
	Andrew Morton <akpm@osdl.org>, Takashi Sato <sho@tnes.nec.co.jp>,
	linux-kernel@vger.kernel.org,
	ext2-devel <ext2-devel@lists.sourceforge.net>,
	linux-fsdevel@vger.kernel.org
Subject: Re: [Ext2-devel] Re: [RFC][PATCH 0/2]Extend ext3 filesystem limit from 8TB to 16TB
Date: Thu, 20 Apr 2006 13:28:33 +0200
Message-ID: <1145532513.5872.6.camel@openx2.frec.bull.fr>
In-Reply-To: <1145394113.3771.11.camel@dyn9047017067.beaverton.ibm.com>


[-- Attachment #1.1: Type: text/plain, Size: 2305 bytes --]

On Tue 18/04/2006 at 23:01, Mingming Cao wrote:
> On Tue, 2006-04-18 at 09:30 +0200, Arjan van de Ven wrote:
> > On Tue, 2006-04-18 at 09:14 +0200, Laurent Vivier wrote:
> > > On Mon 17/04/2006 at 23:32, Ravikiran G Thirumalai wrote:
> > > > On Mon, Apr 17, 2006 at 11:09:36PM +0200, Arjan van de Ven wrote:
> > > > > On Mon, 2006-04-17 at 14:07 -0700, Ravikiran G Thirumalai wrote:
> > > > > > 
> > > > > > 
> > > > > > I ran the same tests on a 16-core EM64T box very similar to the
> > > > > > one you ran dbench on :). Dbench results on ext3 vary quite a
> > > > > > bit; I couldn't get to a statistically significant conclusion.
> > > > > > For example,
> > > > > 
> > > > > 
> > > > > dbench is not a good performance benchmark. At all. Don't use it for
> > > > > that ;)
> > > > 
> > > > Agreed. (I did not mean to use it in the first place :). I was just
> > > > trying to verify the benchmark results posted earlier.)
> > > > 
> > > > Thanks,
> > > > Kiran
> > > 
> > > What is a good performance benchmark for deciding whether we should
> > > use atomic_t instead of percpu_counter?
> > 
> > you probably want something like postal/postmark instead (although
> > that's not ideal either); at least that's reproducible
> > 
> postmark is a single-threaded benchmark.
> 
> The ext3 free blocks counter is mostly updated in the block allocation
> and free paths. So a test with many threads doing block
> allocation/deallocation simultaneously will stress the free blocks
> counter accounting better than a single-threaded fs benchmark. After
> all, the main reason we chose a percpu counter for the free blocks
> counter in the first place, I believe, was to support parallel block
> allocation.
> 
> I would suggest running tiobench with many threads (>256), or, even
> better, running tiobench with many dd tests in the background.

You can find my tiobench results attached (256 threads, again on the
x440 with 8 hyperthreaded CPUs = 16 logical CPUs).

But as the results vary quite a bit from run to run, I don't think we
can really conclude anything... in fact, I think the choice between
atomic_t and percpu_counter has no measurable impact: for example, the
mean sequential read rates (~389 MB/s for atomic_long_t vs ~397 MB/s
for percpu_counter) differ by about 2%, well within the run-to-run
spread.
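
For reference, here is a minimal sketch contrasting the two approaches.
This is not the actual ext3 patch: the container struct and function
names below are hypothetical stand-ins for the free blocks counter in
struct ext3_sb_info, using the 2.6.16-era counter APIs.

#include <linux/percpu_counter.h>
#include <asm/atomic.h>

/* Hypothetical container standing in for struct ext3_sb_info. */
struct fb_counters {
	struct percpu_counter pcc;  /* percpu variant */
	atomic_long_t         al;   /* atomic variant */
};

/*
 * percpu variant: each CPU batches deltas in a local counter and takes
 * the shared spinlock only when its local delta exceeds a batch
 * threshold, so parallel allocators rarely contend; reads are
 * approximate between batch flushes.
 */
static void blocks_freed_percpu(struct fb_counters *c, long count)
{
	percpu_counter_mod(&c->pcc, count);     /* negative count = allocation */
}

static long free_blocks_percpu(struct fb_counters *c)
{
	return percpu_counter_read_positive(&c->pcc);
}

/*
 * atomic variant: every update is a locked read-modify-write on one
 * shared cacheline; always exact, but the cacheline bounces between
 * CPUs under parallel allocation.
 */
static void blocks_freed_atomic(struct fb_counters *c, long count)
{
	atomic_long_add(count, &c->al);         /* negative count = allocation */
}

static long free_blocks_atomic(struct fb_counters *c)
{
	return atomic_long_read(&c->al);
}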

Regards,
Laurent
-- 
Laurent Vivier
Bull, Architect of an Open World (TM)
http://www.bullopensource.org/ext4

[-- Attachment #1.2: tiobench.txt --]
[-- Type: text/plain, Size: 9165 bytes --]

tiobench.pl --size 16 --numruns 10 --threads 256

Unit information
================
File size = megabytes
Blk Size  = bytes
Rate      = megabytes per second
CPU%      = percentage of CPU used during the test
Latency   = milliseconds
Lat%      = percent of requests that took longer than the threshold (the 2s and 10s columns)
CPU Eff   = Rate divided by CPU% - throughput per cpu load
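
(Worked example: in the first Sequential Reads run below, a rate of
394.38 MB/s at 1540% CPU, i.e. 15.4 CPUs' worth of load, gives a
CPU Eff of 394.38 / 15.4 ≈ 26.)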

Sequential Reads
Identifier                    Size  BlkSz  Thr   Rate   CPU%   AvgLat    MaxLat    Lat%>2s  Lat%>10s  Eff
atomic_long_t                  16   4096  256  394.38 1540.%     2.822     5960.36   0.00000  0.00000    26
atomic_long_t                  16   4096  256  384.70 1547.%     2.087     6187.27   0.00000  0.00000    25
atomic_long_t                  16   4096  256  395.69 1548.%     2.001     5988.64   0.00000  0.00000    26
atomic_long_t                  16   4096  256  396.10 1544.%     1.987     5836.74   0.00000  0.00000    26
atomic_long_t                  16   4096  256  377.66 1551.%     2.435     6038.05   0.00000  0.00000    24
atomic_long_t                  16   4096  256  378.35 1550.%     2.145     6369.26   0.00000  0.00000    24
atomic_long_t                  16   4096  256  375.38 1545.%     2.236     6318.84   0.00000  0.00000    24
atomic_long_t                  16   4096  256  399.97 1546.%     2.021     5911.10   0.00000  0.00000    26
atomic_long_t                  16   4096  256  386.59 1542.%     2.045     5968.13   0.00000  0.00000    25
atomic_long_t                  16   4096  256  401.45 1547.%     2.000     5918.17   0.00000  0.00000    26
percpu_counter                 16   4096  256  396.22 1539.%     1.965     5984.66   0.00000  0.00000    26
percpu_counter                 16   4096  256  403.50 1547.%     2.373     5503.98   0.00000  0.00000    26
percpu_counter                 16   4096  256  388.54 1547.%     2.047     6100.05   0.00000  0.00000    25
percpu_counter                 16   4096  256  397.43 1540.%     2.096     6010.45   0.00000  0.00000    26
percpu_counter                 16   4096  256  398.81 1543.%     2.134     5677.78   0.00000  0.00000    26
percpu_counter                 16   4096  256  399.85 1548.%     1.980     5805.21   0.00000  0.00000    26
percpu_counter                 16   4096  256  394.70 1551.%     2.021     5960.13   0.00000  0.00000    25
percpu_counter                 16   4096  256  396.64 1543.%     2.132     5901.40   0.00000  0.00000    26
percpu_counter                 16   4096  256  390.70 1550.%     1.906     5972.05   0.00000  0.00000    25
percpu_counter                 16   4096  256  401.98 1547.%     2.049     5839.44   0.00000  0.00000    26


Random Reads
Identifier                    Size  BlkSz  Thr   Rate   CPU%   AvgLat    MaxLat    Lat%>2s  Lat%>10s  Eff
atomic_long_t                  16   4096  256  100.79 1230.%     0.351        3.82   0.00000  0.00000     8
atomic_long_t                  16   4096  256  112.61 1342.%     0.367       13.42   0.00000  0.00000     8
atomic_long_t                  16   4096  256  111.16 1354.%     0.503      450.22   0.00000  0.00000     8
atomic_long_t                  16   4096  256  112.79 1333.%     0.366        4.73   0.00000  0.00000     8
atomic_long_t                  16   4096  256  114.61 1322.%     0.375       12.74   0.00000  0.00000     9
atomic_long_t                  16   4096  256  111.18 1350.%     0.368       19.89   0.00000  0.00000     8
atomic_long_t                  16   4096  256  112.14 1344.%     0.395      143.76   0.00000  0.00000     8
atomic_long_t                  16   4096  256  112.41 1346.%     0.374       31.40   0.00000  0.00000     8
atomic_long_t                  16   4096  256  112.89 1353.%     0.372       12.64   0.00000  0.00000     8
atomic_long_t                  16   4096  256  112.40 1354.%     0.365       10.88   0.00000  0.00000     8
percpu_counter                 16   4096  256  112.68 1341.%     0.439      263.93   0.00000  0.00000     8
percpu_counter                 16   4096  256  109.77 1356.%     0.372       35.58   0.00000  0.00000     8
percpu_counter                 16   4096  256  114.18 1339.%     0.371       29.47   0.00000  0.00000     9
percpu_counter                 16   4096  256  112.61 1339.%     0.430      241.78   0.00000  0.00000     8
percpu_counter                 16   4096  256  111.09 1343.%     0.372       46.84   0.00000  0.00000     8
percpu_counter                 16   4096  256  111.98 1355.%     0.358       16.41   0.00000  0.00000     8
percpu_counter                 16   4096  256  112.55 1348.%     0.393      121.30   0.00000  0.00000     8
percpu_counter                 16   4096  256  114.84 1324.%     0.398      112.03   0.00000  0.00000     9
percpu_counter                 16   4096  256  111.45 1353.%     0.368       22.09   0.00000  0.00000     8
percpu_counter                 16   4096  256  112.21 1352.%     0.405      150.62   0.00000  0.00000     8

Sequential Writes
Identifier                    Size  BlkSz  Thr   Rate   CPU%   AvgLat    MaxLat    Lat%>2s  Lat%>10s  Eff
atomic_long_t                  16   4096  256   28.72 350.0%   103.526     3087.83   0.00000  0.00000     8
atomic_long_t                  16   4096  256   28.47 350.1%   107.563     4127.54   0.00000  0.00000     8
atomic_long_t                  16   4096  256   28.10 346.6%   108.709     2767.21   0.00000  0.00000     8
atomic_long_t                  16   4096  256   28.24 345.1%   106.619     3025.58   0.00000  0.00000     8
atomic_long_t                  16   4096  256   28.35 358.4%   110.779     3844.84   0.00000  0.00000     8
atomic_long_t                  16   4096  256   28.14 349.9%   109.956     3580.34   0.00000  0.00000     8
atomic_long_t                  16   4096  256   28.23 349.6%   110.011     2770.21   0.00000  0.00000     8
atomic_long_t                  16   4096  256   28.46 355.4%   108.701     2694.87   0.00000  0.00000     8
atomic_long_t                  16   4096  256   28.14 346.3%   108.114     3461.73   0.00000  0.00000     8
atomic_long_t                  16   4096  256   28.49 348.8%   109.625     3160.93   0.00000  0.00000     8
percpu_counter                 16   4096  256   27.92 344.3%   109.420     3497.14   0.00000  0.00000     8
percpu_counter                 16   4096  256   28.23 343.2%   110.146     3279.80   0.00000  0.00000     8
percpu_counter                 16   4096  256   28.35 345.6%   110.027     3527.68   0.00000  0.00000     8
percpu_counter                 16   4096  256   28.17 348.9%   107.143     3735.24   0.00000  0.00000     8
percpu_counter                 16   4096  256   28.07 333.8%   107.581     2442.35   0.00000  0.00000     8
percpu_counter                 16   4096  256   28.36 343.2%   106.625     2740.56   0.00000  0.00000     8
percpu_counter                 16   4096  256   28.33 343.6%   107.201     3029.49   0.00000  0.00000     8
percpu_counter                 16   4096  256   28.28 339.9%   107.849     3100.73   0.00000  0.00000     8
percpu_counter                 16   4096  256   28.27 344.8%   108.008     3753.06   0.00000  0.00000     8
percpu_counter                 16   4096  256   28.54 354.1%   108.254     2831.67   0.00000  0.00000     8

Random Writes
Identifier                    Size  BlkSz  Thr   Rate   CPU%   AvgLat    MaxLat    Lat%>2s  Lat%>10s  Eff
atomic_long_t                  16   4096  256    2.47 64.55%     5.690     1295.08   0.00000  0.00000     4
atomic_long_t                  16   4096  256    2.45 63.52%     6.021     1386.22   0.00000  0.00000     4
atomic_long_t                  16   4096  256    2.47 63.87%     5.621      912.91   0.00000  0.00000     4
atomic_long_t                  16   4096  256    2.45 64.34%     6.361     1444.26   0.00000  0.00000     4
atomic_long_t                  16   4096  256    2.47 64.04%     5.793     1307.02   0.00000  0.00000     4
atomic_long_t                  16   4096  256    2.47 64.14%     5.979     1690.00   0.00000  0.00000     4
atomic_long_t                  16   4096  256    2.49 64.63%     5.993     1820.59   0.00000  0.00000     4
atomic_long_t                  16   4096  256    2.48 64.98%     6.400     1829.93   0.00000  0.00000     4
atomic_long_t                  16   4096  256    2.44 64.05%     5.763     1631.14   0.00000  0.00000     4
atomic_long_t                  16   4096  256    2.53 66.19%     5.728      919.34   0.00000  0.00000     4
percpu_counter                 16   4096  256    2.43 63.69%     5.927      851.10   0.00000  0.00000     4
percpu_counter                 16   4096  256    2.42 62.73%     5.997     1371.13   0.00000  0.00000     4
percpu_counter                 16   4096  256    2.46 64.17%     6.223     1808.24   0.00000  0.00000     4
percpu_counter                 16   4096  256    2.46 64.04%     5.897     1410.98   0.00000  0.00000     4
percpu_counter                 16   4096  256    2.47 63.98%     5.829     1197.30   0.00000  0.00000     4
percpu_counter                 16   4096  256    2.48 64.15%     5.660     1079.85   0.00000  0.00000     4
percpu_counter                 16   4096  256    2.47 63.81%     6.041     1401.02   0.00000  0.00000     4
percpu_counter                 16   4096  256    2.48 64.88%     6.078     1218.58   0.00000  0.00000     4
percpu_counter                 16   4096  256    2.49 64.90%     6.148     1468.47   0.00000  0.00000     4
percpu_counter                 16   4096  256    2.47 65.64%     5.724     1148.28   0.00000  0.00000     4


[-- Attachment #2: This is a digitally signed message part. --]
[-- Type: application/pgp-signature, Size: 189 bytes --]

Thread overview: 36+ messages
     [not found] <20060325223358sho@rifu.tnes.nec.co.jp>
     [not found] ` <1143485147.3970.23.camel@dyn9047017067.beaverton.ibm.com>
     [not found]   ` <20060327131049.2c6a5413.akpm@osdl.org>
     [not found]     ` <20060327225847.GC3756@localhost.localdomain>
     [not found]       ` <1143530126.11560.6.camel@openx2.frec.bull.fr>
     [not found]         ` <1143568905.3935.13.camel@dyn9047017067.beaverton.ibm.com>
     [not found]           ` <1143623605.5046.11.camel@openx2.frec.bull.fr>
2006-03-30  1:38             ` [RFC][PATCH 0/2]Extend ext3 filesystem limit from 8TB to 16TB Mingming Cao
2006-03-30  1:54               ` Andrew Morton
2006-03-31 22:42                 ` Mingming Cao
2006-04-02 20:13                   ` Mingming Cao
2006-04-10  9:11                 ` [Ext2-devel] " Laurent Vivier
2006-04-10  8:24                   ` Andrew Morton
2006-04-13 15:26                     ` Laurent Vivier
2006-04-17 21:07                       ` Ravikiran G Thirumalai
2006-04-17 21:09                         ` Arjan van de Ven
2006-04-17 21:32                           ` Ravikiran G Thirumalai
2006-04-18  7:14                             ` Laurent Vivier
2006-04-18  7:30                               ` [Ext2-devel] " Arjan van de Ven
2006-04-18 10:57                                 ` Laurent Vivier
2006-04-18 19:08                                   ` Ravikiran G Thirumalai
2006-04-18 14:09                                 ` Laurent Vivier
2006-04-18 21:01                                 ` [Ext2-devel] " Mingming Cao
2006-04-20 11:28                                   ` Laurent Vivier [this message]
2006-04-20 14:39                                   ` Laurent Vivier
2006-04-21 11:17                                     ` [Ext2-devel] " Laurent Vivier
2006-04-10 16:57                   ` Mingming Cao
2006-04-10 19:06                     ` Mingming Cao
2006-04-11  7:07                       ` Laurent Vivier
2006-04-14 17:23                         ` [Ext2-devel] " Ravikiran G Thirumalai
2006-03-30 17:36               ` Andreas Dilger
2006-03-30 19:01                 ` Mingming Cao
2006-03-30 17:40               ` Andreas Dilger
2006-03-30 19:16                 ` Mingming Cao
2006-03-30 19:22                   ` Mingming Cao
2006-03-31  6:42                     ` Andreas Dilger
2006-03-31 13:33                   ` Andi Kleen
2006-04-01  6:50                     ` Nathan Scott
2006-05-26  5:00               ` [PATCH 0/2]Define ext3 in-kernel filesystem block types and extend " Mingming Cao
2006-05-26 18:08                 ` Andrew Morton
2006-05-30 17:55                   ` Mingming Cao
2006-03-30  1:39             ` [RFC][PATCH 1/2]ext3 block allocation/reservation fixes to support 2**32 block numbers Mingming Cao
2006-03-30  1:39             ` [RFC][PATCH 2/2]Other ext3 in-kernel block number type fix " Mingming Cao
