From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path:
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1755340AbbI2GSn (ORCPT ); Tue, 29 Sep 2015 02:18:43 -0400
Received: from aserp1040.oracle.com ([141.146.126.69]:19424 "EHLO aserp1040.oracle.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1753088AbbI2GSe (ORCPT ); Tue, 29 Sep 2015 02:18:34 -0400
Message-ID: <560A2D2F.2040609@oracle.com>
Date: Mon, 28 Sep 2015 23:18:23 -0700
From: Srinivas Eeda
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:31.0) Gecko/20100101 Thunderbird/31.8.0
MIME-Version: 1.0
To: Miklos Szeredi, Ashish Samant
CC: Linux-Fsdevel, Kernel Mailing List, fuse-devel
Subject: Re: fuse scalability part 1
References: <20150518151336.GA9960@tucsk> <56044C66.1090207@oracle.com>
In-Reply-To:
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 7bit
X-Source-IP: aserv0022.oracle.com [141.146.126.234]
Sender: linux-kernel-owner@vger.kernel.org
List-ID:
X-Mailing-List: linux-kernel@vger.kernel.org

Hi Miklos,

On 09/25/2015 05:11 AM, Miklos Szeredi wrote:
> On Thu, Sep 24, 2015 at 9:17 PM, Ashish Samant wrote:
>
>> We did some performance testing without these patches and with these
>> patches (with the -o clone_fd option specified). We did two types of
>> tests:
>>
>> 1. Throughput test: We ran parallel dd tests to read/write to a FUSE
>> based database fs on a system with 8 NUMA nodes and 288 cpus. The
>> performance here is almost equal to that of the per-NUMA patches we
>> submitted a while back. Please find the results attached.
> Interesting. This means that serving the request on a different NUMA
> node than the one where the request originated doesn't appear to make
> the performance much worse.
With the new change, contention on the spinlock is significantly
reduced, so the latency caused by NUMA is no longer visible.
Even in the earlier case, scalability was not a big problem if we bound
all processes (the fuse workers and the user dd threads) to a single
NUMA node. The problem was only seen when the threads spread out across
NUMA nodes and contended for the spinlock.

>
> Thanks,
> Miklos