* Random write regression
@ 2009-07-20 13:11 Steven Pratt
2009-07-20 13:34 ` Chris Mason
2009-07-21 7:04 ` Yan Zheng
0 siblings, 2 replies; 5+ messages in thread
From: Steven Pratt @ 2009-07-20 13:11 UTC (permalink / raw)
To: linux-btrfs
Finally got around to going through latest data. Seems like we lost all
the random write performance gains. Creates are better, but total
regression on the random workload. Sequential reads seem to have
dropped as well.
Results are uploading now.
http://btrfs.boxacle.net/repository/raid/history/History.html
These are for RAID only, as the single disk system is still having issues
completing btrfs runs. Also, oprofile data is missing due to oprofile
causing an NMI and killing the system.
Chris, this was built on 7/6, but I see no new changes since 7/2.
Steve
* Re: Random write regression
2009-07-20 13:11 Random write regression Steven Pratt
@ 2009-07-20 13:34 ` Chris Mason
2009-07-20 14:47 ` Steven Pratt
2009-07-21 7:04 ` Yan Zheng
1 sibling, 1 reply; 5+ messages in thread
From: Chris Mason @ 2009-07-20 13:34 UTC (permalink / raw)
To: Steven Pratt; +Cc: linux-btrfs
On Mon, Jul 20, 2009 at 08:11:42AM -0500, Steven Pratt wrote:
> Finally got around to going through latest data. Seems like we lost all
> the random write performance gains. Creates are better, but total
> regression on the random workload. Sequential reads seem to have
> dropped as well.
Interesting, was this filesystem freshly created?
-chris
* Re: Random write regression
2009-07-20 13:34 ` Chris Mason
@ 2009-07-20 14:47 ` Steven Pratt
0 siblings, 0 replies; 5+ messages in thread
From: Steven Pratt @ 2009-07-20 14:47 UTC (permalink / raw)
To: Chris Mason, Steven Pratt, linux-btrfs
Chris Mason wrote:
> On Mon, Jul 20, 2009 at 08:11:42AM -0500, Steven Pratt wrote:
>
>> Finally got around to going through latest data. Seems like we lost all
>> the random write performance gains. Creates are better, but total
>> regression on the random workload. Sequential reads seem to have
>> dropped as well.
>>
>
> Interesting, was this filesystem freshly created?
>
Yes, we always mkfs before the runs.
Steve
> -chris
>
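To clarify what "always mkfs" means in practice: the target filesystem is recreated from scratch before each benchmark run. A minimal sketch of that prep, using placeholder device names and job-file path rather than the actual boxacle.net harness:

    umount /mnt/ffsb1 2>/dev/null      # tear down the previous run, if any
    mkfs.btrfs /dev/sdb /dev/sdc       # recreate the (multi-device) filesystem
    mount /dev/sdb /mnt/ffsb1          # mount it fresh
    ffsb random_write.ffsb             # then kick off the ffsb job file

The device names and job-file name above are illustrative only, not the ones used for these results.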
* Re: Random write regression
2009-07-20 13:11 Random write regression Steven Pratt
2009-07-20 13:34 ` Chris Mason
@ 2009-07-21 7:04 ` Yan Zheng
2009-07-21 14:48 ` Steven Pratt
1 sibling, 1 reply; 5+ messages in thread
From: Yan Zheng @ 2009-07-21 7:04 UTC (permalink / raw)
To: Steven Pratt; +Cc: linux-btrfs
2009/7/20 Steven Pratt <slpratt@austin.ibm.com>:
> Finally got around to going through latest data.  Seems like we lost all the
> random write performance gains.  Creates are better, but total regression on
> the random workload.  Sequential reads seem to have dropped as well.
>
> Results are uploading now.
>  http://btrfs.boxacle.net/repository/raid/history/History.html
>
> These are for RAID only, as the single disk system is still having issues
> completing btrfs runs.  Also, oprofile data is missing due to oprofile
> causing an NMI and killing the system.
>
> Chris, this was built on 7/6, but I see no new changes since 7/2.
> Steve
>
>
The output of ffsb in the latest 128 threads random odirect write benchmark was
....
checking existing fs: /mnt/ffsb1
fs setup took 6 secs
Syncing()...2 sec
....
The corresponding output on 30 June was
....
creating new fileset /mnt/ffsb1
fs setup took 847 secs
Syncing()...1 sec
....
It seems the filesystem used in the latest benchmark wasn't freshly created.
Yan, Zheng
* Re: Random write regression
2009-07-21 7:04 ` Yan Zheng
@ 2009-07-21 14:48 ` Steven Pratt
0 siblings, 0 replies; 5+ messages in thread
From: Steven Pratt @ 2009-07-21 14:48 UTC (permalink / raw)
To: Yan Zheng; +Cc: linux-btrfs
Yan Zheng wrote:
> 2009/7/20 Steven Pratt <slpratt@austin.ibm.com>:
>
>> Finally got around to going through latest data. Seems like we lost all the
>> random write performance gains. Creates are better, but total regression on
>> the random workload. Sequential reads seem to have dropped as well.
>>
>> Results are uploading now.
>> http://btrfs.boxacle.net/repository/raid/history/History.html
>>
>> These are for RAID only, as the single disk system is still having issues
>> completing btrfs runs. Also, oprofile data is missing due to oprofile
>> causing an NMI and killing the system.
>>
>> Chris, this was built on 7/6, but I see no new changes since 7/2.
>> Steve
>>
>>
>>
>
> The output of ffsb in the latest 128 threads random odirect write benchmark was
> ....
> checking existing fs: /mnt/ffsb1
> fs setup took 6 secs
> Syncing()...2 sec
> ....
>
> The corresponding output on 30 June was
> ....
> creating new fileset /mnt/ffsb1
> fs setup took 847 secs
> Syncing()...1 sec
> ....
>
> It seems the filesystem used in the latest benchmark wasn't freshly created.
>
Yes, the older run (with the better random write performance) did indeed
recreate the files before the test. Thanks for catching this. I had two
job files, one for just btrfs and one for all file systems, and the reuse
flag differs between them. Please ignore this regression. I will re-run
without the reuse flag and expect the results to be similar. This does
indicate that btrfs degrades quite rapidly under random write, but that
is a separate topic.
Steve
> Yan, Zheng
>
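For reference, the behavior Yan spotted is controlled by the reuse setting in the ffsb job file's filesystem section. A rough, from-memory sketch of the relevant fragment; only the reuse directive and the mount point are grounded in the thread, and the surrounding section syntax is approximate:

    [filesystem0]
            location=/mnt/ffsb1
            reuse=1
    [end0]

With reuse=1 ffsb reports "checking existing fs" and runs against the already-aged fileset; with reuse left off (or set to 0) it reports "creating new fileset" and rebuilds the files before the run, as in the 30 June results.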