Subject: Re: [PATCH] target/user: Add daynmic growing data area feature support
From: Mike Christie
To: Andy Grover, lixiubo@cmss.chinamobile.com, nab@linux-iscsi.org, shli@kernel.org
Cc: hch@lst.de, sheng@yasker.org, namei.unix@gmail.com, bart.vanassche@sandisk.com, linux-scsi@vger.kernel.org, target-devel@vger.kernel.org, linux-kernel@vger.kernel.org, Jianfei Hu, Venky Shankar
Date: Mon, 27 Feb 2017 17:56:21 -0600
Message-ID: <58B4BCA5.2060002@redhat.com>
In-Reply-To: <09891673-0d95-8b66-ddce-0ace7aea43d1@redhat.com>

On 02/22/2017 02:32 PM, Andy Grover wrote:
> On 02/17/2017 01:24 AM, lixiubo@cmss.chinamobile.com wrote:
>> From: Xiubo Li
>>
>> Currently for the TCMU, the ring buffer size is fixed to 64K cmd
>> area + 1M data area, and this will be bottlenecks for high iops.
>
> Hi Xiubo, thanks for your work.
>
> daynmic -> dynamic
>
> Have you benchmarked this patch and determined what kind of iops
> improvement it allows? Do you see the data area reaching its
> fully-allocated size?

I tested this patch with Venky's tcmu-runner rbd aio patches, using one
10 gig iscsi session, and for pretty basic fio direct IO tests (64-256K
reads/writes with a queue depth of 64 and numjobs between 1 and 4),
read throughput goes from about 80 to 500 MB/s. Write throughput is
still pretty low, at around 150 MB/s.

I did not hit the fully allocated size, though I also did not drive a
lot of IO.
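For reference, a fio invocation along the lines of the sketch below would
generate that kind of workload. The /dev/sdX device path, the libaio
ioengine, and the 60 second runtime are assumptions on my part, not details
from the runs above; the block size, queue depth, and numjobs follow the
numbers given:

  # direct IO read test against the exported LUN (device path is a placeholder)
  fio --name=tcmu-test --filename=/dev/sdX --ioengine=libaio --direct=1 \
      --rw=read --bs=256k --iodepth=64 --numjobs=4 \
      --runtime=60 --time_based --group_reporting

Swapping --rw=read for --rw=write and varying --bs between 64k and 256k
covers the read and write cases mentioned above.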