From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Mon, 9 Oct 2017 12:28:37 +0800
From: Ming Lei <ming.lei@redhat.com>
To: Christoph Hellwig
Cc: Jens Axboe, linux-block@vger.kernel.org, Mike Snitzer, dm-devel@redhat.com, Bart Van Assche, Laurence Oberman, Paolo Valente, Oleksandr Natalenko, Tom Nguyen, linux-kernel@vger.kernel.org, Omar Sandoval
Subject: Re: [PATCH V5 8/8] blk-mq: improve bio merge from blk-mq sw queue
Message-ID: <20171009042835.GA19029@ming.t460p>
References: <20170930112655.31451-1-ming.lei@redhat.com> <20170930112655.31451-9-ming.lei@redhat.com> <20171003092143.GF2771@infradead.org>
In-Reply-To: <20171003092143.GF2771@infradead.org>

On Tue, Oct 03, 2017 at 02:21:43AM -0700, Christoph Hellwig wrote:
> This looks generally good to me, but I really worry about the impact
> on very high iops devices. Did you try this e.g. for random reads
> from unallocated blocks on an enterprise NVMe SSD?
Looks like there is no such impact; please see the following data from
the fio test (libaio, direct, bs=4k, 64 jobs, randread, none scheduler):

[root@storageqe-62 results]# ../parse_fio 4.14.0-rc2.no_blk_mq_perf+-nvme-64jobs-mq-none.log 4.14.0-rc2.BLK_MQ_PERF_V5+-nvme-64jobs-mq-none.log

---------------------------------------
 IOPS(K)  |    NONE    |    NONE
---------------------------------------
 randread |   650.98   |   653.15
---------------------------------------

(The two columns correspond to the two log files above, in order: the
first without this patchset, the second with it.)

OR: if you worry about this impact, can we simply disable merging on
NVMe when the none scheduler is used? It is basically impossible to
merge NVMe requests/bios when none is in use, but merging can be doable
with the kyber scheduler.

--
Ming
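For reference, the fio run cited above can be approximated with a job
file along these lines. This is a minimal sketch based only on the
parameters named in the mail (libaio, direct, bs=4k, 64 jobs, randread);
the device path, iodepth, and runtime below are assumptions, not values
stated in the thread:

```ini
# randread.fio -- sketch of the cited test; adjust to your system
[global]
ioengine=libaio
direct=1
bs=4k
rw=randread
numjobs=64
# iodepth and runtime are assumed, not given in the mail
iodepth=32
runtime=60
time_based=1
group_reporting=1

[nvme-randread]
# assumed target device
filename=/dev/nvme0n1
```

The none scheduler can be selected before the run by writing "none" to
/sys/block/<dev>/queue/scheduler.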