From: Miklos Szeredi
Date: Wed, 7 Jun 2023 16:20:59 +0200
Subject: Re: [RFC PATCH 00/13] fuse uring communication
To: Amir Goldstein
Cc: Bernd Schubert, Bernd Schubert, linux-fsdevel@vger.kernel.org, Dharmendra Singh

On Thu, 23 Mar 2023 at 12:55, Amir Goldstein wrote:
>
> On Thu, Mar 23, 2023 at 1:18 PM Bernd Schubert wrote:
> >
> > there were several zufs threads, but I don't remember discussions about
> > cache line - maybe I had missed it. I can try to read through the old
> > threads, in case you don't have it.
>
> Miklos talked about it somewhere...
It was a private exchange between Amir and me:

On Tue, 25 Feb 2020 at 20:33, Miklos Szeredi wrote:
> On Tue, Feb 25, 2020 at 6:49 PM Amir Goldstein wrote:

[...]

> > BTW, out of curiosity, what was the purpose of the example of
> > "use shared memory instead of threads"?
>
> In the threaded case there's a shared piece of memory in the kernel
> (mm->cpu_bitmap) that is updated on each context switch (i.e. each
> time a request is processed by one of the server threads). If this is
> a big NUMA system then cacheline pingpong on this bitmap can be a real
> performance hit.
>
> Using shared memory means that the address space is not shared, hence
> each server "thread" will have a separate "mm" structure, hence no
> cacheline pingpong.
>
> It would be nice if the underlying problem with shared address space
> could be solved in a scalable way instead of having to resort to this
> hack, but it's not a trivial thing to do. If you look at the
> scheduler code, there's already a workaround for this issue in the
> kernel threads case, but that doesn't work for user threads.

Thanks,
Miklos
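For reference, here is a minimal userspace sketch of the "shared memory
instead of threads" trick described in that exchange: worker processes
created with fork() communicate through a MAP_SHARED region, so each
worker has its own mm and context switches never touch a common
mm->cpu_bitmap. This is an illustration only (the worker count, the
shared structure and the trivial "request" loop are made up), not code
from the thread or the FUSE patches:

/*
 * Sketch: fork()ed workers sharing state through an explicitly
 * shared mapping, instead of pthreads sharing one address space.
 * Each child keeps a private mm, avoiding the shared-mm cacheline
 * pingpong on big NUMA systems that Miklos describes above.
 */
#define _DEFAULT_SOURCE         /* for MAP_ANONYMOUS on glibc */
#include <stdatomic.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/wait.h>
#include <unistd.h>

#define NWORKERS 4              /* illustrative value */

struct shared_state {
	/* lock-free atomic in shared memory works across processes */
	atomic_long requests_done;
};

int main(void)
{
	/*
	 * MAP_SHARED | MAP_ANONYMOUS memory stays shared across fork(),
	 * while each child still gets its own address space (own mm).
	 */
	struct shared_state *st = mmap(NULL, sizeof(*st),
				       PROT_READ | PROT_WRITE,
				       MAP_SHARED | MAP_ANONYMOUS, -1, 0);
	if (st == MAP_FAILED) {
		perror("mmap");
		return 1;
	}
	atomic_init(&st->requests_done, 0);

	for (int i = 0; i < NWORKERS; i++) {
		if (fork() == 0) {
			/* Worker process: stand-in for request handling */
			for (int r = 0; r < 1000; r++)
				atomic_fetch_add(&st->requests_done, 1);
			_exit(0);
		}
	}

	for (int i = 0; i < NWORKERS; i++)
		wait(NULL);

	printf("requests handled: %ld\n",
	       (long)atomic_load(&st->requests_done));
	return 0;
}

The cost of the trick is visible in the sketch too: nothing is shared
implicitly, so every piece of state the workers exchange has to live in
an explicitly mapped region rather than in ordinary heap memory.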