Date: Fri, 8 Sep 2023 18:01:26 -0400
From: Peter Xu
To: Axel Rasmussen
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org, Anish Moorthy,
	Alexander Viro, Mike Kravetz, Peter Zijlstra, Andrew Morton,
	Mike Rapoport, Christian Brauner, linux-fsdevel@vger.kernel.org,
	Andrea Arcangeli, Ingo Molnar, James Houghton, Nadav Amit
Subject: Re: [PATCH 0/7] mm/userfaultfd/poll: Scale userfaultfd wakeups
References: <20230905214235.320571-1-peterx@redhat.com>

On Thu, Sep 07, 2023 at 12:18:29PM -0700, Axel Rasmussen wrote:
> On Tue, Sep 5, 2023 at 2:42 PM Peter Xu wrote:
> >
> > Userfaultfd is the type of file that doesn't need wake-all semantics:
> > if there is a message enqueued (for either a fault address, or an
> > event), we only need to wake up one service thread to handle it.
> > Waking up more normally means a waste of cpu cycles.  Besides that,
> > and more importantly, that just doesn't scale.
>
> Hi Peter,

Hi, Axel,

Sorry for the late response.

> I took a quick look over the series and didn't see anything
> objectionable. I was planning to actually test the series out and then
> send out R-b's, but it will take some additional time (next week).

Thanks.  The 2nd patch definitely needs a fixup for some functions (I
overlooked them due to not having enough CONFIG_* options enabled; I'm
surprised I even had vhost compiled out when testing..); hope that won't
bring you too much trouble.  I'll send a fixup soon on top of patch 2.

> In the meantime, I was curious about the use case. A design I've seen
> for VM live migration is to have 1 thread reading events off the uffd,
> and then have many threads actually resolving the fault events that
> come in (e.g. fetching pages over the network, issuing UFFDIO_COPY or
> UFFDIO_CONTINUE, or whatever). In that design, since we only have a
> single reader anyway, I think this series doesn't help.

Yes.  If the workload only uses 1 reader thread, this series shouldn't
make much difference.

> But, I'm curious if you have data indicating that > 1 reader is more
> performant overall? I suspect it might be the case that, with "enough"
> vCPUs, it makes sense to do so, but I don't have benchmark data to
> tell me what that tipping point is yet.
>
> OTOH, if one reader is plenty in ~all cases, optimizing this path is
> less important.

I don't yet have an application myself that can leverage this, because
QEMU so far only uses 1 reader thread.

IIRC Anish was proposing exactly some KVM-specific solutions to make a
single uffd scale, and this series might suit any use case like that,
where we want to keep a single uffd and make it scale with threads.

Using 1 reader + N workers is also a solution, but an app that uses N
readers (which also do the work) will hit this problem; see the rough
sketch below.  I am also aware that some apps use more than 1 reader
thread (umap), but I don't know much more than that.

The point is that we shouldn't pay that overhead just because an app uses
more than 1 reader; meanwhile it also doesn't make much sense to wake up
all readers for a single userfault event.  So this should always be a
good thing to have.
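For reference, here is a rough, untested userspace sketch of the "N
readers which also do the work" pattern I mean.  The reader count, region
size, and names like reader_worker are made up purely for illustration,
and error handling is mostly dropped; the point is only that every reader
sleeps in poll()/read() on the same uffd, so without wake-one semantics a
single queued fault wakes all of them even though only one can service
it:

/*
 * Sketch only: N threads all poll() and read() one shared uffd and
 * resolve whatever fault they win with UFFDIO_COPY.  NR_READERS, the
 * region size and the helper name are arbitrary; error handling is
 * omitted for brevity.
 */
#define _GNU_SOURCE
#include <fcntl.h>
#include <linux/userfaultfd.h>
#include <poll.h>
#include <pthread.h>
#include <stdint.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <sys/syscall.h>
#include <unistd.h>

#define NR_READERS 8

static int uffd;                /* one userfaultfd shared by all readers */
static long page_size;

static void *reader_worker(void *arg)
{
	struct pollfd pfd = { .fd = uffd, .events = POLLIN };
	struct uffd_msg msg;

	for (;;) {
		/* All readers sleep here on the same uffd. */
		if (poll(&pfd, 1, -1) <= 0)
			continue;
		/* Non-blocking read: readers that lose the race see EAGAIN. */
		if (read(uffd, &msg, sizeof(msg)) != sizeof(msg))
			continue;
		if (msg.event != UFFD_EVENT_PAGEFAULT)
			continue;

		/* Resolve the fault in place: install a zeroed page and wake. */
		void *src = mmap(NULL, page_size, PROT_READ | PROT_WRITE,
				 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
		struct uffdio_copy copy = {
			.dst = msg.arg.pagefault.address & ~(page_size - 1),
			.src = (uintptr_t)src,
			.len = page_size,
		};
		ioctl(uffd, UFFDIO_COPY, &copy);
		munmap(src, page_size);
	}
	return NULL;
}

int main(void)
{
	pthread_t readers[NR_READERS];
	struct uffdio_api api = { .api = UFFD_API };

	page_size = sysconf(_SC_PAGESIZE);
	size_t len = 64 * page_size;

	uffd = syscall(__NR_userfaultfd, O_CLOEXEC | O_NONBLOCK);
	ioctl(uffd, UFFDIO_API, &api);

	/* A missing-mode region whose faults the readers will service. */
	char *area = mmap(NULL, len, PROT_READ | PROT_WRITE,
			  MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	struct uffdio_register reg = {
		.range = { .start = (uintptr_t)area, .len = len },
		.mode = UFFDIO_REGISTER_MODE_MISSING,
	};
	ioctl(uffd, UFFDIO_REGISTER, &reg);

	for (int i = 0; i < NR_READERS; i++)
		pthread_create(&readers[i], NULL, reader_worker, NULL);

	/* Touch every page to generate userfaults for the readers. */
	for (size_t off = 0; off < len; off += page_size)
		(void)*(volatile char *)(area + off);

	return 0;
}

The intent, as in the cover letter, is that only one of those sleeping
readers needs to be woken per queued message.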
Thanks,

-- 
Peter Xu