From mboxrd@z Thu Jan 1 00:00:00 1970
From: Christoph Hellwig
Subject: Re: Terrible performance of sequential O_DIRECT 4k writes in SAN
 environment. ~3 times slower than Solaris 10 with the same HBA/Storage.
Date: Wed, 8 Jan 2014 07:26:10 -0800
Message-ID: <20140108152610.GA5863@infradead.org>
References: <20140106201032.GA13491@quack.suse.cz>
 <20140107155830.GA28395@infradead.org>
 <20140108140307.GA588@infradead.org>
Mime-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Return-path: 
Content-Disposition: inline
In-Reply-To: 
Sender: linux-kernel-owner@vger.kernel.org
To: Sergey Meirovich
Cc: Christoph Hellwig, Jan Kara, linux-scsi,
 Linux Kernel Mailing List, Gluk
List-Id: linux-scsi@vger.kernel.org

On Wed, Jan 08, 2014 at 04:43:07PM +0200, Sergey Meirovich wrote:
> Results are almost the same:
> 14.68Mb/sec 3758.02 Requests/sec

On my laptop SSD I get the following results (sometimes up to 200MB/s,
sometimes down to 100MB/s, always in the 40k to 50k IOPS range):

  time elapsed (sec.):  5
  bandwidth (MiB/s):    160.00
  IOps:                 40960.00

That is more IOPS than the hardware is physically capable of, but since
you didn't specify O_SYNC this seems sensible, given that we never have
to flush the disk cache.

Could it be that your array has WCE=0?  In Linux we'll never enable the
write cache automatically, but Solaris does, at least when using ZFS.
Try running:

  sdparm --set=WCE /dev/sdX

and try again.
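[Editor's note: the effect of O_SYNC on per-write cost, which the reply above
leans on, can be sketched with a short Python script. This is a rough
illustration only: it uses plain buffered files, skips the O_DIRECT buffer
alignment handling, and the IOPS it prints depend entirely on the machine and
filesystem it runs on.]

```python
import os
import tempfile
import time

def timed_4k_writes(extra_flags, count=256):
    """Write `count` 4 KiB blocks with the given extra open(2) flags
    and return the achieved IOPS.  O_DIRECT and buffer alignment are
    deliberately omitted to keep the sketch portable."""
    fd, path = tempfile.mkstemp()
    os.close(fd)
    fd = os.open(path, os.O_WRONLY | extra_flags)
    buf = b"\x00" * 4096
    start = time.perf_counter()
    for _ in range(count):
        os.write(fd, buf)
    elapsed = time.perf_counter() - start
    os.close(fd)
    os.unlink(path)
    return count / elapsed

# Without O_SYNC the write returns once the data is in the cache;
# with O_SYNC every write must reach stable storage first.
buffered = timed_4k_writes(0)
synced = timed_4k_writes(os.O_SYNC)
print(f"buffered: {buffered:.0f} IOPS, O_SYNC: {synced:.0f} IOPS")
```

On a LUN whose write cache is disabled (WCE=0), every write behaves more like
the O_SYNC case, which is consistent with the low numbers reported above.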