From mboxrd@z Thu Jan  1 00:00:00 1970
Date: Mon, 21 Oct 2013 11:06:16 -0700
From: Christoph Hellwig
Message-ID: <20131021180616.GA7196@infradead.org>
References: <20131017151828.GB28859@redhat.com>
 <20131021141147.GA30189@infradead.org>
 <20131021150129.GA28099@redhat.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <20131021150129.GA28099@redhat.com>
Subject: Re: [linux-lvm] poor read performance on rbd+LVM, LVM overload
Reply-To: LVM general discussion and development
List-Id: LVM general discussion and development
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
To: Mike Snitzer
Cc: elder@inktank.com, Sage Weil, Christoph Hellwig, Ugis,
 linux-lvm@redhat.com, "ceph-devel@vger.kernel.org", "ceph-users@ceph.com"

On Mon, Oct 21, 2013 at 11:01:29AM -0400, Mike Snitzer wrote:
> It isn't DM that splits the IO into 4K chunks; it is the VM subsystem
> no?

Well, it's the block layer, based on what DM tells it.  Take a look at
dm_merge_bvec.

From dm_merge_bvec:

	/*
	 * If the target doesn't support merge method and some of the devices
	 * provided their merge_bvec method (we know this by looking at
	 * queue_max_hw_sectors), then we can't allow bios with multiple vector
	 * entries.  So always set max_size to 0, and the code below allows
	 * just one page.
	 */

Although it's not the general case, just if the driver has a merge_bvec
method.  But this happens if you're using DM on top of MD, where I saw it
as well as on rbd, which is why it's correct in this context, too.  Sorry
for over-generalizing a bit.
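
To make the mechanism concrete, here is a stripped-down userspace model of
that decision.  This is not the kernel source; the function and parameter
names (model_dm_merge_bvec, target_has_merge_fn, and so on) are made up for
illustration, and the constants mirror what I remember of the 3.x code.  It
only shows how max_size collapses to a single page once the target has no
merge method but queue_max_hw_sectors was capped at one page because a
lower device (rbd, MD) registered a merge_bvec function:

#include <stdio.h>

#define PAGE_SIZE    4096
#define SECTOR_SHIFT 9

/*
 * Rough model of the branch quoted above: if the target itself has no
 * merge method but queue_max_hw_sectors was clamped to a single page
 * (which DM does when an underlying device provides merge_bvec), the
 * allowed size is forced to zero, and the caller then falls back to
 * exactly one page.
 */
static int model_dm_merge_bvec(int target_has_merge_fn,
                               unsigned int queue_max_hw_sectors,
                               int precomputed_max_size,
                               int first_vec_len)
{
        int max_size = precomputed_max_size;

        if (max_size && target_has_merge_fn) {
                /* Real code would ask the target how much it accepts. */
        } else if (queue_max_hw_sectors <= (PAGE_SIZE >> SECTOR_SHIFT)) {
                /*
                 * No merge method on the target, but a lower device has
                 * one: refuse bios with more than one vector entry.
                 */
                max_size = 0;
        }

        /* The block layer always gets at least the first page. */
        if (max_size <= first_vec_len)
                max_size = first_vec_len;

        return max_size;
}

int main(void)
{
        /*
         * LVM (dm-linear) on rbd: no target merge fn, hw sectors capped
         * at one page's worth (8 x 512-byte sectors).
         */
        printf("allowed bytes: %d\n",
               model_dm_merge_bvec(0, PAGE_SIZE >> SECTOR_SHIFT,
                                   1 << 20, PAGE_SIZE));
        return 0;
}

Run it and it prints 4096: the merge callback only ever admits one page at
a time, which is the 4k splitting seen above LVM on rbd.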