public inbox for linux-xfs@vger.kernel.org
From: Gim Leong Chin <chingimleong@yahoo.com.sg>
To: "Carlos E. R." <robin.listas@telefonica.net>,
	XFS mail list <xfs@oss.sgi.com>
Subject: Re: Advice needed with file system corruption
Date: Tue, 9 Aug 2016 15:43:03 +0000 (UTC)	[thread overview]
Message-ID: <291675510.10842010.1470757383533.JavaMail.yahoo@mail.yahoo.com> (raw)
In-Reply-To: <7faf941c-2188-589d-b624-9fedb22dee40@telefonica.net>



On 2016-08-09 06:02, Gim Leong Chin wrote:



>> Drives connected to RAID controllers with battery backed cache should
>> have their caches "disabled" (they are really set to write through mode
>> instead).  By the way, I found out in lab testing that 7200 RPM SATA
>> drives suffer a big performance loss when doing sequential writes in
>> cache write through mode.<
> If you disable the disk internal cache, as a consequence you also
> disable the disk internal write optimizations. It has to be much slower
> at writing. It seems to me obvious.

> -- 
> Cheers / Saludos,

>       Carlos E. R.
>       (from 13.1 x86_64 "Bottle" at Telcontar)

The drop in sequential write data rate for 3.5" 7200 RPM SATA drives was around 50%; I cannot remember the exact numbers. A loss of that magnitude is not obvious to me.

As a reminder, the drive cache is really set to write-through mode; it is not possible to disable the cache outright, as an application engineer from HGST told me. The drive's internal write optimisations are therefore still in effect, it is just that an IO command is reported complete only once the data has been written to the drive platter.

10k and 15k RPM SAS drives connected to LSI internal RAID (IR) controllers have their drive cache "disabled" automatically. I wonder how large the data rate drop is compared with drive cache "enabled", considering that LSI IR controllers have no cache of their own.
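For anyone wanting to check what mode their drives are actually in, here is a sketch of how the write cache state discussed above can be queried and toggled on Linux, using hdparm for SATA drives and sdparm (WCE bit) for SAS drives. The device names are placeholders for illustration; substitute your own, run as root, and note that a RAID controller sitting in front of the drive may silently reapply its own cache policy afterwards.

```shell
# Query the volatile write cache state of a SATA drive (placeholder device name).
# "write-caching = 1 (on)" means the drive acknowledges writes from its cache;
# 0 means write-through behaviour, as described above.
hdparm -W /dev/sda

# Put the drive into write-through mode ("disable" the cache), or re-enable it.
hdparm -W0 /dev/sda   # write-through
hdparm -W1 /dev/sda   # write-back

# For SAS drives, the equivalent is the WCE (Writeback Cache Enable) bit
# in the SCSI Caching mode page:
sdparm --get=WCE /dev/sdb
sdparm --clear=WCE /dev/sdb   # write-through
sdparm --set=WCE /dev/sdb     # write-back
```

Re-checking with the query form after setting is worthwhile, since controllers and drive firmware do not always honour the request.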

GL
  


_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs


Thread overview: 13+ messages
2016-07-14 12:27 Advice needed with file system corruption Steve Brooks
2016-07-14 13:05 ` Carlos Maiolino
2016-07-14 13:57   ` Steve Brooks
2016-07-14 14:17     ` Carlos Maiolino
2016-07-14 23:33       ` Dave Chinner
2016-08-08 14:11 ` Emmanuel Florac
2016-08-08 15:38   ` Roger Willcocks
2016-08-08 15:44     ` Emmanuel Florac
2016-08-09  4:02       ` Gim Leong Chin
2016-08-09 12:40         ` Carlos E. R.
2016-08-09 15:43           ` Gim Leong Chin [this message]
2016-08-09 21:26           ` Dave Chinner
2016-08-08 16:16   ` Steve Brooks
