From: Christian Stroetmann <stroetmann@ontolinux.com>
To: Dave Chinner <david@fromorbit.com>
Cc: Linux FS Devel <linux-fsdevel@vger.kernel.org>,
Linux Kernel <linux-kernel@vger.kernel.org>
Subject: Re: Tux3 Report: Faster than tmpfs, what?
Date: Fri, 10 May 2013 07:06:38 +0200 [thread overview]
Message-ID: <518C805E.7010603@ontolinux.com> (raw)
In-Reply-To: <20130510045049.GU24635@dastard>
Aloha hardcore coders,
Thank you very much for working out the facts, Dave.
You have confirmed why, all these years, I had such a suspicious feeling
when reading between the lines of the Tux3 e-mails sent to the mailing
list. That does not mean I dislike the work around the Tux3 file system
in general; on the contrary, it is highly interesting to watch whether it
can push the whole field further. But this kind of marketing seen in the
past is truly not constructive.
Have fun in the sun
Christian Stroetmann
> On Tue, May 07, 2013 at 04:24:05PM -0700, Daniel Phillips wrote:
>> When something sounds too good to be true, it usually is. But not always. Today
>> Hirofumi posted some nigh on unbelievable dbench results that show Tux3
>> beating tmpfs. To put this in perspective, we normally regard tmpfs as
>> unbeatable because it is just a thin shim between the standard VFS mechanisms
>> that every filesystem must use, and the swap device. Our usual definition of
>> successful optimization is that we end up somewhere between Ext4 and Tmpfs,
>> or in other words, faster than Ext4. This time we got an excellent surprise.
>>
>> The benchmark:
>>
>> dbench -t 30 -c client2.txt 1& (while true; do sync; sleep 4; done)
> I'm deeply suspicious of what is in that client2.txt file. dbench on
> ext4 on a 4 SSD RAID0 array with a single process gets 130MB/s
> (kernel is 3.9.0). Your workload gives you over 1GB/s on ext4.....
>
>> tux3:
>> Operation                Count    AvgLat    MaxLat
>> ----------------------------------------
>> NTCreateX              1477980     0.003    12.944
> ....
>> ReadX                  2316653     0.002     0.499
>> LockX                     4812     0.002     0.207
>> UnlockX                   4812     0.001     0.221
>> Throughput 1546.81 MB/sec  1 clients  1 procs  max_latency=12.950 ms
> Hmmm... No "Flush" operations. Gotcha - you've removed the data
> integrity operations from the benchmark.
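
The absence of Flush operations can be checked mechanically: stock dbench
loadfiles interleave Flush with the read/write operations, so a loadfile
with them stripped out shows zero on a simple grep. The snippet below is
a hypothetical illustration only; the real client2.txt was never posted,
so a tiny fabricated loadfile stands in for it.

```shell
# Fabricated four-op loadfile standing in for the unpublished client2.txt.
cat > /tmp/sample_loadfile.txt <<'EOF'
NTCreateX "\clients\client1\file1.txt" 0x1 SUCCESS
WriteX 10016 0 1024 1024 SUCCESS
Flush 10016 SUCCESS
Close 10016 SUCCESS
EOF

# Count the data-integrity operations; a loadfile with these lines
# stripped (what is inferred to have happened here) would report zero.
flushes=$(grep -c '^Flush' /tmp/sample_loadfile.txt)
echo "Flush operations: $flushes"    # prints: Flush operations: 1
rm -f /tmp/sample_loadfile.txt
```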
>
> Ah, I get it now - you've done that so the front end of tux3 won't
> encounter any blocking operations and so can offload 100% of
> operations. It also explains the sync call every 4 seconds to keep
> tux3 back end writing out to disk so that a) all the offloaded work
> is done by the sync process and not measured by the benchmark, and
> b) so the front end doesn't overrun queues and throttle or run out
> of memory.
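
The pattern described here, a timed foreground loop that never blocks
while a background sync loop pays the flush cost outside the measurement
window, can be sketched in plain shell. The scratch file path and
iteration count are invented for illustration; this is a sketch of the
measurement bias, not of the actual benchmark run.

```shell
#!/bin/sh
# Sketch of the biased measurement: buffered writes are timed, while the
# flush work happens in an unmeasured background loop, mirroring the
# `while true; do sync; sleep 4; done` from the benchmark command line.

TESTFILE=/tmp/bench_scratch.$$    # hypothetical scratch file

# Background integrity loop: its I/O cost is billed to the sync process,
# not to the timed region below.
( while true; do sync; sleep 4; done ) &
SYNC_PID=$!

start=$(date +%s)
i=0
while [ "$i" -lt 20 ]; do
    # Buffered write only: no fsync, so nothing here waits on the device.
    dd if=/dev/zero of="$TESTFILE" bs=4k count=4 conv=notrunc 2>/dev/null
    i=$((i + 1))
done
end=$(date +%s)

kill "$SYNC_PID" 2>/dev/null
rm -f "$TESTFILE"
echo "timed region: $((end - start))s (flush cost excluded)"
```

Calling fsync after each write, or stopping the clock only after a final
sync, would fold the flush cost back into the number being reported.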
>
> Oh, so nicely contrived. But terribly obvious now that I've found
> it. You've carefully crafted the benchmark to demonstrate a best
> case workload for the tux3 architecture, then carefully not
> measured the overhead of the work tux3 has offloaded, and then not
> disclosed any of this in the hope that all people will look at is
> the headline.
>
> This would make a great case study for a "BenchMarketing For
> Dummies" book.
>
> Shame for you that you sent it to a list where people see the dbench
> numbers for ext4 and immediately think "that's not right" and then
> look deeper. Phoronix might swallow your sensationalist headline
> grab without analysis, but I don't think I'm alone in my suspicion
> that there was something stinky about your numbers.
>
> Perhaps in future you'll disclose such information with your
> results, otherwise nobody is ever going to trust anything you say
> about tux3....
>
> Cheers,
>
> Dave.
Thread overview: 14+ messages
2013-05-07 23:24 Tux3 Report: Faster than tmpfs, what? Daniel Phillips
2013-05-10 4:50 ` Dave Chinner
2013-05-10 5:06 ` Christian Stroetmann [this message]
2013-05-10 5:47 ` OGAWA Hirofumi
2013-05-14 6:34 ` Dave Chinner
2013-05-14 7:59 ` OGAWA Hirofumi
2013-05-11 6:12 ` Daniel Phillips
2013-05-11 18:35 ` james northrup
2013-05-12 4:39 ` Daniel Phillips
2013-05-11 21:26 ` Theodore Ts'o
2013-05-12 4:28 ` Daniel Phillips
2013-05-13 23:22 ` Daniel Phillips
[not found] ` <35557711-A88D-4226-B3C6-3787573F5403@dilger.ca>
2013-05-14 6:25 ` Daniel Phillips
2013-05-15 17:10 ` Andreas Dilger