linux-raid.vger.kernel.org archive mirror
* Unbelievably _good_ write performance: RHEL5.4 mirror
@ 2009-11-02 18:28 Chris Worley
  2009-11-03 12:37 ` Sujit K M
  0 siblings, 1 reply; 11+ messages in thread
From: Chris Worley @ 2009-11-02 18:28 UTC (permalink / raw)
  To: linuxraid

I expect RAID1 write performance to be, at best, the performance of
the slowest drive.

I'm seeing twice the performance, as if it were a RAID0.  The read
performance is 2x also, which is what I would expect.

I'm using the incantation:

mdadm --create /dev/md0 --chunk=256 --level=1 --assume-clean
--raid-devices=2 /dev/sd[bc]

I use "assume clean" on the fresh create, as there is no reason to
sync the new drives.

My "fio" test uses O_DIRECT with 64 threads, each with a queue depth
of 64, running for 10 minutes.  All caching is disabled, and the NOOP
scheduler is being used.  I run this test all the time, and can't
imagine why it's getting such repeatably good write performance.
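
Roughly, the fio invocation looks like the following sketch; treat the
block size, I/O pattern, and ioengine as placeholders rather than the
exact job file:

fio --name=md0-write --filename=/dev/md0 \
    --ioengine=libaio --direct=1 --rw=randwrite --bs=4k \
    --numjobs=64 --iodepth=64 \
    --runtime=600 --time_based --group_reporting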

Any ideas?

Thanks,

Chris

^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: Unbelievably _good_ write performance: RHEL5.4 mirror
  2009-11-02 18:28 Unbelievably _good_ write performance: RHEL5.4 mirror Chris Worley
@ 2009-11-03 12:37 ` Sujit K M
  2009-11-03 16:31   ` Michael Evans
  0 siblings, 1 reply; 11+ messages in thread
From: Sujit K M @ 2009-11-03 12:37 UTC (permalink / raw)
  To: Chris Worley; +Cc: linuxraid

This, I think, is a reported bug.

On Mon, Nov 2, 2009 at 11:58 PM, Chris Worley <worleys@gmail.com> wrote:
> I expect RAID1 write performance to be, at best, the performance of
> the slowest drive.
>
> I'm seeing twice the performance, as if it were a RAID0.  The read
> performance is 2x also, which is what I would expect.
>
> I'm using the incantation:
>
> mdadm --create /dev/md0 --chunk=256 --level=1 --assume-clean
> --raid-devices=2 /dev/sd[bc]
>
> I use "assume clean" on the fresh create, as there is no reason to
> sync the new drives.
>
> My "fio" test uses O_DIRECT with 64 threads, each with a queue depth
> of 64, running for 10 minutes.  All caching is disabled, and the NOOP
> scheduler is being used.  I run this test all the time, and can't
> imagine why it's getting such repeatably good write performance.
>
> Any ideas?
>
> Thanks,
>
> Chris



-- 
-- Sujit K M

blog(http://kmsujit.blogspot.com/)

^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: Unbelievably _good_ write performance: RHEL5.4 mirror
  2009-11-03 12:37 ` Sujit K M
@ 2009-11-03 16:31   ` Michael Evans
  2009-11-03 19:29     ` Chris Worley
  0 siblings, 1 reply; 11+ messages in thread
From: Michael Evans @ 2009-11-03 16:31 UTC (permalink / raw)
  To: Sujit K M; +Cc: Chris Worley, linuxraid

Is it possible the data set you are testing with is too small?  Modern
drives have caches of around 32MB; your test might only be executing
within the drives' caches.
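
If you want to confirm what the drives report for their on-board
cache, something like this (assuming ATA/SATA disks that hdparm can
query) should show it:

hdparm -I /dev/sdb | grep -i cache    # prints the cache/buffer size line, if reported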

On Tue, Nov 3, 2009 at 4:37 AM, Sujit K M <sjt.kar@gmail.com> wrote:
> This I think is an reported bug.
>
> On Mon, Nov 2, 2009 at 11:58 PM, Chris Worley <worleys@gmail.com> wrote:
>> I expect RAID1 write performance to be, at best, the performance of
>> the slowest drive.
>>
>> I'm seeing twice the performance, as if it were a RAID0.  The read
>> performance is 2x also, which is what I would expect.
>>
>> I'm using the incantation:
>>
>> mdadm --create /dev/md0 --chunk=256 --level=1 --assume-clean
>> --raid-devices=2 /dev/sd[bc]
>>
>> I use "assume clean" on the fresh create, as there is no reason to
>> sync the new drives.
>>
>> My "fio" test uses O_DIRECT with 64 threads, each with a queue depth
>> of 64, running for 10 minutes.  All caching is disabled, and the NOOP
>> scheduler is being used.  I run this test all the time, and can't
>> imagine why it's getting such repeatably good write performance.
>>
>> Any ideas?
>>
>> Thanks,
>>
>> Chris
>
>
>
> --
> -- Sujit K M
>
> blog(http://kmsujit.blogspot.com/)

^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: Unbelievably _good_ write performance: RHEL5.4 mirror
  2009-11-03 16:31   ` Michael Evans
@ 2009-11-03 19:29     ` Chris Worley
  2009-11-04  0:11       ` Michael Evans
  0 siblings, 1 reply; 11+ messages in thread
From: Chris Worley @ 2009-11-03 19:29 UTC (permalink / raw)
  To: Michael Evans; +Cc: Sujit K M, linuxraid

On Tue, Nov 3, 2009 at 9:31 AM, Michael Evans <mjevans1983@gmail.com> wrote:
> Is it possible the data-set you are testing with is too small?  Modern
> drives have caches of around 32MB, your test might only be executing
> within the drive's caches.

Cache is disabled on the drives, and disabled on the system by using
O_DIRECT on the benchmark.  I am seeing proper behavior using
synchronous writes, but that slows down performance too much.
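
For the drive side, that is the usual hdparm write-cache toggle; as a
sketch (the exact commands aren't reproduced in this mail):

hdparm -W0 /dev/sdb /dev/sdc    # switch off the on-drive write cache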

Chris
>
> On Tue, Nov 3, 2009 at 4:37 AM, Sujit K M <sjt.kar@gmail.com> wrote:
>> This I think is an reported bug.
>>
>> On Mon, Nov 2, 2009 at 11:58 PM, Chris Worley <worleys@gmail.com> wrote:
>>> I expect RAID1 write performance to be, at best, the performance of
>>> the slowest drive.
>>>
>>> I'm seeing twice the performance, as if it were a RAID0.  The read
>>> performance is 2x also, which is what I would expect.
>>>
>>> I'm using the incantation:
>>>
>>> mdadm --create /dev/md0 --chunk=256 --level=1 --assume-clean
>>> --raid-devices=2 /dev/sd[bc]
>>>
>>> I use "assume clean" on the fresh create, as there is no reason to
>>> sync the new drives.
>>>
>>> My "fio" test uses O_DIRECT with 64 threads, each with a queue depth
>>> of 64, running for 10 minutes.  All caching is disabled, and the NOOP
>>> scheduler is being used.  I run this test all the time, and can't
>>> imagine why it's getting such repeatably good write performance.
>>>
>>> Any ideas?
>>>
>>> Thanks,
>>>
>>> Chris
>>
>>
>>
>> --
>> -- Sujit K M
>>
>> blog(http://kmsujit.blogspot.com/)

^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: Unbelievably _good_ write performance: RHEL5.4 mirror
  2009-11-03 19:29     ` Chris Worley
@ 2009-11-04  0:11       ` Michael Evans
  2009-11-04  0:21         ` Chris Worley
  0 siblings, 1 reply; 11+ messages in thread
From: Michael Evans @ 2009-11-04  0:11 UTC (permalink / raw)
  To: Chris Worley; +Cc: Sujit K M, linuxraid

On Tue, Nov 3, 2009 at 11:29 AM, Chris Worley <worleys@gmail.com> wrote:
> On Tue, Nov 3, 2009 at 9:31 AM, Michael Evans <mjevans1983@gmail.com> wrote:
>> Is it possible the data-set you are testing with is too small?  Modern
>> drives have caches of around 32MB, your test might only be executing
>> within the drive's caches.
>
> Cache is disabled on the drives, and disabled on the system by using
> O_DIRECT on the benchmark.  I am seeing proper behavior using
> synchronous writes, but that slows down performance too much.
>

I'm not familiar with O_DIRECT so I looked it up:
http://www.kernel.org/doc/man-pages/online/pages/man2/open.2.html

It -appears- that O_DIRECT merely attempts to map transfers
efficiently as full blocks/allocation units.  I see no mention, within
the context of O_DIRECT, of buffering or not (and presume it -is-
buffered by default).

This page provides additional detail:
http://www.ukuug.org/events/linux2001/papers/html/AArcangeli-o_direct.html

It seems that O_DIRECT disables any kernel-side caches for operations,
but does not modify any caching that might be performed by the
underlying block devices.  O_DIRECT might not even be applicable with
an MD/DM device.


O_DIRECT (Since Linux 2.4.10)
              Try to minimize cache effects of the I/O to and from this
              file.  In general this will degrade performance, but it is
              useful in special situations, such as when applications do
              their own caching.  File I/O is done directly to/from user
              space buffers.  The O_DIRECT flag on its own makes an
              effort to transfer data synchronously, but does not give
              the guarantees of the O_SYNC flag that data and necessary
              metadata are transferred.  To guarantee synchronous I/O,
              O_SYNC must be used in addition to O_DIRECT.  See NOTES
              below for further discussion.

              A semantically similar (but deprecated) interface for
              block devices is described in raw(8).

As that quote from the manual states, to get the effect you desire you
must use -both- O_DIRECT and O_SYNC.
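
In fio terms that would presumably mean setting both flags on the job,
e.g. (a sketch; fio's sync=1 maps to O_SYNC for most engines):

fio --name=md0-sync --filename=/dev/md0 --direct=1 --sync=1 \
    --rw=randwrite --bs=4k --numjobs=64 --iodepth=64 \
    --runtime=600 --time_based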

^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: Unbelievably _good_ write performance: RHEL5.4 mirror
  2009-11-04  0:11       ` Michael Evans
@ 2009-11-04  0:21         ` Chris Worley
  2009-11-04  0:29           ` Michael Evans
  0 siblings, 1 reply; 11+ messages in thread
From: Chris Worley @ 2009-11-04  0:21 UTC (permalink / raw)
  To: Michael Evans; +Cc: Sujit K M, linuxraid

On Tue, Nov 3, 2009 at 5:11 PM, Michael Evans <mjevans1983@gmail.com> wrote:
> On Tue, Nov 3, 2009 at 11:29 AM, Chris Worley <worleys@gmail.com> wrote:
>> On Tue, Nov 3, 2009 at 9:31 AM, Michael Evans <mjevans1983@gmail.com> wrote:
>>> Is it possible the data-set you are testing with is too small?  Modern
>>> drives have caches of around 32MB, your test might only be executing
>>> within the drive's caches.
>>
>> Cache is disabled on the drives, and disabled on the system by using
>> O_DIRECT on the benchmark.  I am seeing proper behavior using
>> synchronous writes, but that slows down performance too much.
>>
>
> I'm not familiar with O_DIRECT so I looked it up:
> http://www.kernel.org/doc/man-pages/online/pages/man2/open.2.html
>
> It -appears- that O_DIRECT merely attempts to efficiently map
> transfers as full blocks/allocation units.  I see no mention, within
> the context of O_DIRECT of buffering or not (and presume it -is-
> buffered by default).
>
> This page provides additional detail:
> http://www.ukuug.org/events/linux2001/papers/html/AArcangeli-o_direct.html
>
> It seems that O_DIRECT disables any kernel-side caches for operations,
> but does not modify any caching that might be performed by the
> underlying block devices.  O_DIRECT might not even be applicable with
> an MD/DM device.
>
>
> O_DIRECT (Since Linux 2.4.10)
>              Try to minimize cache effects of the I/O to and from this
>              file.  In general this will degrade performance, but it is
>              useful in special situations, such as when applications do
>              their own caching.  File I/O is done directly to/from user
>              space buffers.  The O_DIRECT flag on its own makes an
>              effort to transfer data synchronously, but does not give
>              the guarantees of the O_SYNC flag that data and necessary
>              metadata are transferred.  To guarantee synchronous I/O,
>              O_SYNC must be used in addition to O_DIRECT.  See NOTES
>              below for further discussion.
>
>              A semantically similar (but deprecated) interface for
>              block devices is described in raw(8).
>
> As that quote from the manual states, to get the effect you desire you
> must use -both- O_DIRECT and O_SYNC.
>

So, you're saying that the MD layer is doing its own buffering?  Are
you sure?  With the system cache disabled and the drive (and block
device driver) cache disabled, there should be no reason to require
synchronous I/O, unless, as you suggest, the MD layer is broken.

You're saying that both O_DIRECT and O_SYNC must be used to disable
cache effects.  Why then are there two separate flags and not just one?
Synchronous I/O is a very different behavior; it is not necessary for
this test and puts additional requirements on it that are not needed.

Chris

^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: Unbelievably _good_ write performance: RHEL5.4 mirror
  2009-11-04  0:21         ` Chris Worley
@ 2009-11-04  0:29           ` Michael Evans
  2009-11-04  0:37             ` Chris Worley
  2009-11-04 12:40             ` Michael Tokarev
  0 siblings, 2 replies; 11+ messages in thread
From: Michael Evans @ 2009-11-04  0:29 UTC (permalink / raw)
  To: Chris Worley; +Cc: Sujit K M, linuxraid

>
> So, you're saying that the MD layer is doing it's own buffering?  Are
> you sure?  With the system cache disabled and the drive (and block
> device driver) cache disabled, there should be no reason to require
> synchronous I/O, unless, as you suggest, the MD layer is broken.
>
> You're saying that both O_DIRECT and O_SYNC must be used to disable
> cache effects. Why then are there two separate flags and not just one?
>  Synchronous is a very different behavior that is not necessary for
> this test and put additional requirements that are not needed for this
> test.
>
> Chris
>

Reading the manual page, it seems O_DIRECT explicitly minimizes any
extra copying; it does not explicitly disable buffers, it merely
avoids adding more.  In another mail thread this tweakable was
discussed:

echo 0 > /sys/block/md*/md/stripe_cache_size

which should (I think) disable any cache involved with the md layer.
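
A quick way to see whether that knob even exists on a given array (I
believe it only shows up for some RAID levels) would be:

ls /sys/block/md0/md/ | grep stripe                 # empty if the level has no stripe cache
cat /sys/block/md0/md/stripe_cache_size 2>/dev/null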

^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: Unbelievably _good_ write performance: RHEL5.4 mirror
  2009-11-04  0:29           ` Michael Evans
@ 2009-11-04  0:37             ` Chris Worley
  2009-11-04  0:41               ` Michael Evans
  2009-11-04 12:40             ` Michael Tokarev
  1 sibling, 1 reply; 11+ messages in thread
From: Chris Worley @ 2009-11-04  0:37 UTC (permalink / raw)
  To: Michael Evans; +Cc: Sujit K M, linuxraid

On Tue, Nov 3, 2009 at 5:29 PM, Michael Evans <mjevans1983@gmail.com> wrote:
>>
>> So, you're saying that the MD layer is doing it's own buffering?  Are
>> you sure?  With the system cache disabled and the drive (and block
>> device driver) cache disabled, there should be no reason to require
>> synchronous I/O, unless, as you suggest, the MD layer is broken.
>>
>> You're saying that both O_DIRECT and O_SYNC must be used to disable
>> cache effects. Why then are there two separate flags and not just one?
>>  Synchronous is a very different behavior that is not necessary for
>> this test and put additional requirements that are not needed for this
>> test.
>>
>> Chris
>>
>
> Reading the manual page it seems O_DIRECT explicitly minimizes any
> attempts at extra copying; not explicitly disabling buffers, merely
> not adding more.  In another mail thread this tweak-able was
> discussed:
>
> echo 0 > /sys/block/md*/md/stripe_cache_size
>
> Which would should (I think) disable any cache involved with the md layer.

Stripe_cache_size doesn't seem to exist in the /sys/block/md*/md/
directory on the RHEL5.3 2.6.18-92 kernel I'm using.

# ls /sys/block/md*/md/s*
/sys/block/md0/md/safe_mode_delay  /sys/block/md0/md/suspend_hi
/sys/block/md0/md/suspend_lo  /sys/block/md0/md/sync_action
/sys/block/md0/md/sync_completed  /sys/block/md0/md/sync_speed
/sys/block/md0/md/sync_speed_max  /sys/block/md0/md/sync_speed_min

Thanks,

Chris
>

^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: Unbelievably _good_ write performance: RHEL5.4 mirror
  2009-11-04  0:37             ` Chris Worley
@ 2009-11-04  0:41               ` Michael Evans
  2009-11-04  3:45                 ` NeilBrown
  0 siblings, 1 reply; 11+ messages in thread
From: Michael Evans @ 2009-11-04  0:41 UTC (permalink / raw)
  To: Chris Worley; +Cc: Sujit K M, linuxraid

On Tue, Nov 3, 2009 at 4:37 PM, Chris Worley <worleys@gmail.com> wrote:
> On Tue, Nov 3, 2009 at 5:29 PM, Michael Evans <mjevans1983@gmail.com> wrote:
>>>
>>> So, you're saying that the MD layer is doing it's own buffering?  Are
>>> you sure?  With the system cache disabled and the drive (and block
>>> device driver) cache disabled, there should be no reason to require
>>> synchronous I/O, unless, as you suggest, the MD layer is broken.
>>>
>>> You're saying that both O_DIRECT and O_SYNC must be used to disable
>>> cache effects. Why then are there two separate flags and not just one?
>>>  Synchronous is a very different behavior that is not necessary for
>>> this test and put additional requirements that are not needed for this
>>> test.
>>>
>>> Chris
>>>
>>
>> Reading the manual page it seems O_DIRECT explicitly minimizes any
>> attempts at extra copying; not explicitly disabling buffers, merely
>> not adding more.  In another mail thread this tweak-able was
>> discussed:
>>
>> echo 0 > /sys/block/md*/md/stripe_cache_size
>>
>> Which would should (I think) disable any cache involved with the md layer.
>
> Stripe_cache_size doesn't seem to exist in the /sys/block/md*/md/
> directory on the RHEL5.3 2.6.18-92 kernel I'm using.
>
> # ls /sys/block/md*/md/s*
> /sys/block/md0/md/safe_mode_delay  /sys/block/md0/md/suspend_hi
> /sys/block/md0/md/suspend_lo  /sys/block/md0/md/sync_action
> /sys/block/md0/md/sync_completed  /sys/block/md0/md/sync_speed
> /sys/block/md0/md/sync_speed_max  /sys/block/md0/md/sync_speed_min
>
> Thanks,
>
> Chris
>>
>

I'm not sure when it was added, and I have =no= idea how many bugs
may have been fixed since then.  You should look at the other files in
that virtual directory to see if there's another parameter that it
replaced.

^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: Unbelievably _good_ write performance: RHEL5.4 mirror
  2009-11-04  0:41               ` Michael Evans
@ 2009-11-04  3:45                 ` NeilBrown
  0 siblings, 0 replies; 11+ messages in thread
From: NeilBrown @ 2009-11-04  3:45 UTC (permalink / raw)
  To: Michael Evans; +Cc: Chris Worley, Sujit K M, linuxraid

On Wed, November 4, 2009 11:41 am, Michael Evans wrote:
> On Tue, Nov 3, 2009 at 4:37 PM, Chris Worley <worleys@gmail.com> wrote:
>> On Tue, Nov 3, 2009 at 5:29 PM, Michael Evans <mjevans1983@gmail.com>
>> wrote:
>>>>
>>>> So, you're saying that the MD layer is doing it's own buffering?  Are
>>>> you sure?  With the system cache disabled and the drive (and block
>>>> device driver) cache disabled, there should be no reason to require
>>>> synchronous I/O, unless, as you suggest, the MD layer is broken.

MD only does buffering for RAID4/5/6, and then normally only for writes.
It has to buffer writes so that it can create the XOR reliably.

>>>>
>>>> You're saying that both O_DIRECT and O_SYNC must be used to disable
>>>> cache effects. Why then are there two separate flags and not just one?
>>>>  Synchronous is a very different behavior that is not necessary for
>>>> this test and put additional requirements that are not needed for this

Without O_SYNC, the filesystem metadata which records where the blocks
in the file are may not be updated synchronously.  This is probably
what you want when testing device throughput, so just using O_DIRECT
is correct.
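
For a raw block device with no filesystem in the picture, a plain
O_DIRECT write is also easy to reproduce with dd, e.g. (a sketch, and
destructive to whatever is on the array):

dd if=/dev/zero of=/dev/md0 bs=4k count=100000 oflag=direct   # overwrites array contents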


>>>> test.
>>>>
>>>> Chris
>>>>
>>>
>>> Reading the manual page it seems O_DIRECT explicitly minimizes any
>>> attempts at extra copying; not explicitly disabling buffers, merely
>>> not adding more.  In another mail thread this tweak-able was
>>> discussed:
>>>
>>> echo 0 > /sys/block/md*/md/stripe_cache_size
>>>
>>> Which would should (I think) disable any cache involved with the md
>>> layer.

And given that the cache - where available - is essential, this would
effectively stop your md array.... except that numbers less than 16 are
rejected.


>>
>> Stripe_cache_size doesn't seem to exist in the /sys/block/md*/md/
>> directory on the RHEL5.3 2.6.18-92 kernel I'm using.
>>
>> # ls /sys/block/md*/md/s*
>> /sys/block/md0/md/safe_mode_delay  /sys/block/md0/md/suspend_hi
>> /sys/block/md0/md/suspend_lo  /sys/block/md0/md/sync_action
>> /sys/block/md0/md/sync_completed  /sys/block/md0/md/sync_speed
>> /sys/block/md0/md/sync_speed_max  /sys/block/md0/md/sync_speed_min

RAID1 does not have a stripe cache, hence no 'stripe_cache_size'.

As to why RAID1 appears faster than the plain drive, I cannot imagine.
I would need specific details of exactly the tests that were run,
exactly the results obtained, and exactly the versions of the various
software used.
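
Something along these lines would capture most of what is needed (a
sketch of the sort of detail that helps):

uname -r                    # kernel version
mdadm --version
mdadm --detail /dev/md0     # array layout and member devices
cat /proc/mdstat
# plus the exact fio command line (or job file) and the numbers it reported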

NeilBrown


^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: Unbelievably _good_ write performance: RHEL5.4 mirror
  2009-11-04  0:29           ` Michael Evans
  2009-11-04  0:37             ` Chris Worley
@ 2009-11-04 12:40             ` Michael Tokarev
  1 sibling, 0 replies; 11+ messages in thread
From: Michael Tokarev @ 2009-11-04 12:40 UTC (permalink / raw)
  To: Michael Evans; +Cc: Chris Worley, Sujit K M, linuxraid

Michael Evans wrote:
>> So, you're saying that the MD layer is doing it's own buffering?  Are
>> you sure?  With the system cache disabled and the drive (and block
>> device driver) cache disabled, there should be no reason to require
>> synchronous I/O, unless, as you suggest, the MD layer is broken.
>>
>> You're saying that both O_DIRECT and O_SYNC must be used to disable
>> cache effects. Why then are there two separate flags and not just one?
>>  Synchronous is a very different behavior that is not necessary for
>> this test and put additional requirements that are not needed for this
>> test.
>>
>> Chris
>>
> 
> Reading the manual page it seems O_DIRECT explicitly minimizes any
> attempts at extra copying; not explicitly disabling buffers, merely
> not adding more.  In another mail thread this tweak-able was
> discussed:
> 
> echo 0 > /sys/block/md*/md/stripe_cache_size

stripe_cache_size only exists for raid4, raid5 and raid6.
It never existed for other raid levels because it's pointless.

/mjt

^ permalink raw reply	[flat|nested] 11+ messages in thread

end of thread

Thread overview: 11+ messages
2009-11-02 18:28 Unbelievably _good_ write performance: RHEL5.4 mirror Chris Worley
2009-11-03 12:37 ` Sujit K M
2009-11-03 16:31   ` Michael Evans
2009-11-03 19:29     ` Chris Worley
2009-11-04  0:11       ` Michael Evans
2009-11-04  0:21         ` Chris Worley
2009-11-04  0:29           ` Michael Evans
2009-11-04  0:37             ` Chris Worley
2009-11-04  0:41               ` Michael Evans
2009-11-04  3:45                 ` NeilBrown
2009-11-04 12:40             ` Michael Tokarev
