From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path:
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1751841AbdB1Sm0 (ORCPT ); Tue, 28 Feb 2017 13:42:26 -0500
Received: from mx1.redhat.com ([209.132.183.28]:47510 "EHLO mx1.redhat.com"
	rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP
	id S1751675AbdB1SmY (ORCPT ); Tue, 28 Feb 2017 13:42:24 -0500
Subject: Re: [PATCH] target/user: Add daynmic growing data area featuresupport
To: Xiubo Li , Andy Grover , nab@linux-iscsi.org, shli@kernel.org
References: <1487323472-20481-1-git-send-email-lixiubo@cmss.chinamobile.com>
 <09891673-0d95-8b66-ddce-0ace7aea43d1@redhat.com>
 <58B4BCA5.2060002@redhat.com>
 <983dc030-0352-05d8-9fc7-a6cdf2c59f8d@cmss.chinamobile.com>
Cc: hch@lst.de, sheng@yasker.org, namei.unix@gmail.com,
 bart.vanassche@sandisk.com, linux-scsi@vger.kernel.org,
 target-devel@vger.kernel.org, linux-kernel@vger.kernel.org,
 Jianfei Hu , Venky Shankar
From: Mike Christie
Message-ID: <58B5BDC2.3040300@redhat.com>
Date: Tue, 28 Feb 2017 12:13:22 -0600
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:38.0) Gecko/20100101
 Thunderbird/38.6.0
MIME-Version: 1.0
In-Reply-To: <983dc030-0352-05d8-9fc7-a6cdf2c59f8d@cmss.chinamobile.com>
Content-Type: text/plain; charset=windows-1252
Content-Transfer-Encoding: 7bit
X-Greylist: Sender IP whitelisted, not delayed by milter-greylist-4.5.16
	(mx1.redhat.com [10.5.110.31]); Tue, 28 Feb 2017 18:13:26 +0000 (UTC)
Sender: linux-kernel-owner@vger.kernel.org
List-ID:
X-Mailing-List: linux-kernel@vger.kernel.org

On 02/27/2017 07:22 PM, Xiubo Li wrote:
> Hi Mike,
>
> Thanks very much for your work and test cases.
>
>
>>>>> From: Xiubo Li
>>>>>
>>>>> Currently for the TCMU, the ring buffer size is fixed to 64K cmd
>>>>> area + 1M data area, and this will be bottlenecks for high iops.
>>> Hi Xiubo, thanks for your work.
>>>
>>> daynmic -> dynamic
>>>
>>> Have you benchmarked this patch and determined what kind of iops
>>> improvement it allows? Do you see the data area reaching its
>>> fully-allocated size?
>>>
>> I tested this patch with Venky's tcmu-runner rbd aio patches, with one
>> 10 gig iscsi session, and for pretty basic fio direct io (64-256K
>> read/writes with a queue depth of 64 and numjobs between 1 and 4),
>> read throughput goes from about 80 to 500 MB/s.
> Looks nice.
>
>> Write throughput is pretty
>> low at around 150 MB/s.
> What's the original write throughput without this patch? Is it also
> around 80 MB/s?

It is around 20-30 MB/s. Same fio args except using --rw=write.
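For reference, the fio workload described above could be reproduced with an invocation roughly like the one below. This is a hypothetical reconstruction from the parameters quoted in the thread (direct I/O, 256K blocks, iodepth 64, numjobs up to 4); the device path, job name, ioengine, and runtime are illustrative assumptions, not taken from the original messages.

```shell
# Hypothetical reconstruction of the read test described above.
# /dev/sdX, the job name, ioengine, and runtime are assumptions;
# the write test mentioned in the reply would swap in --rw=write.
fio --name=tcmu-read-test \
    --filename=/dev/sdX \
    --direct=1 \
    --rw=read \
    --bs=256k \
    --iodepth=64 \
    --numjobs=4 \
    --ioengine=libaio \
    --runtime=60 --time_based \
    --group_reporting
```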