From mboxrd@z Thu Jan  1 00:00:00 1970
Date: Mon, 21 Oct 2013 07:11:47 -0700
From: Christoph Hellwig
Message-ID: <20131021141147.GA30189@infradead.org>
References: <20131017151828.GB28859@redhat.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To:
Subject: Re: [linux-lvm] poor read performance on rbd+LVM, LVM overload
Reply-To: LVM general discussion and development
List-Id: LVM general discussion and development
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
To: Sage Weil
Cc: elder@inktank.com, Mike Snitzer, Ugis, linux-lvm@redhat.com,
	"ceph-devel@vger.kernel.org", "ceph-users@ceph.com"

On Sun, Oct 20, 2013 at 08:58:58PM -0700, Sage Weil wrote:
> It looks like without LVM we're getting 128KB requests (which IIRC is
> typical), but with LVM it's only 4KB. Unfortunately my memory is a bit
> fuzzy here, but I seem to recall a property on the request_queue or
> device that affected this. RBD is currently doing

Unfortunately most device mapper modules still split all I/O into 4k
chunks before handling it. They rely on the elevator to merge the
chunks back together further down the line, which isn't overly
efficient but should at least produce larger segments for the common
cases.
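For illustration only (this is not the kernel code), a toy sketch of the
merging the elevator does: given a stream of contiguous fixed-size chunks,
adjacent ones coalesce back into one larger request. The function name and
(offset, length) representation are made up for the example:

```python
def merge_adjacent(requests):
    """Coalesce sorted (offset, length) requests whose byte ranges
    are contiguous, mimicking how an I/O scheduler (elevator) merges
    adjacent small requests into larger ones."""
    merged = []
    for off, length in sorted(requests):
        if merged and merged[-1][0] + merged[-1][1] == off:
            # Back-merge: this chunk starts exactly where the
            # previous request ends, so extend that request.
            merged[-1] = (merged[-1][0], merged[-1][1] + length)
        else:
            merged.append((off, length))
    return merged

# A 128KB read that DM split into 32 contiguous 4KB chunks...
chunks = [(i * 4096, 4096) for i in range(32)]
# ...merges back into a single 128KB request.
print(merge_adjacent(chunks))  # [(0, 131072)]
```

The real scheduler does this on in-flight requests under queue-depth and
segment-count limits, so the merge back to 128KB is best-effort, not
guaranteed.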