Date: Fri, 17 Jun 2022 15:01:52 +0000
From: Sean Christopherson
To: David Matlack
Cc: Paolo Bonzini, Marc Zyngier, Huacai Chen, Aleksandar Markovic,
 Anup Patel, Paul Walmsley, Palmer Dabbelt, Albert Ou, Andrew Jones,
 Ben Gardon, Peter Xu, maciej.szmigiero@oracle.com,
 "moderated list:KERNEL VIRTUAL MACHINE FOR ARM64 (KVM/arm64)",
 "open list:KERNEL VIRTUAL MACHINE FOR MIPS (KVM/mips)",
 "open list:KERNEL VIRTUAL MACHINE FOR RISC-V (KVM/riscv)",
 Peter Feiner, Lai Jiangshan
Subject: Re: [PATCH v6 10/22] KVM: x86/mmu: Pass memory caches to allocate
 SPs separately
References: <20220516232138.1783324-1-dmatlack@google.com>
 <20220516232138.1783324-11-dmatlack@google.com>
In-Reply-To: <20220516232138.1783324-11-dmatlack@google.com>

On Mon, May 16, 2022, David Matlack wrote:
> Refactor kvm_mmu_alloc_shadow_page() to receive the caches from which it
> will allocate the various pieces of memory for shadow pages as a
> parameter, rather than deriving them from the vcpu pointer. This will be
> useful in a future commit where shadow pages are allocated during VM
> ioctls for eager page splitting, and thus will use a different set of
> caches.
>
> Preemptively pull the caches out all the way to
> kvm_mmu_get_shadow_page() since eager page splitting will not be calling

Uber nit, "eager hugepage splitting" to provide a mental cue/reminder for
why those pages are direct.

> kvm_mmu_alloc_shadow_page() directly.
>
> No functional change intended.
>
> Signed-off-by: David Matlack
> ---

Reviewed-by: Sean Christopherson
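[Editor's note: for readers following along without the patch in hand, below
is a minimal sketch of the shape of the refactor the commit message
describes, in kernel-style C. The struct name shadow_page_caches, its field
names, and the __kvm_mmu_get_shadow_page() helper are illustrative
assumptions inferred from the commit message, not necessarily the exact
identifiers used in the patch.]

/*
 * Illustrative sketch only: struct/field names and the helper split are
 * assumptions, not verbatim from the patch. The idea is to bundle the
 * memory caches a shadow-page allocation draws from, so that callers
 * other than a vCPU (e.g. the eager hugepage splitting path running in a
 * VM ioctl) can supply their own caches.
 */
struct shadow_page_caches {
	struct kvm_mmu_memory_cache *page_header_cache;
	struct kvm_mmu_memory_cache *shadow_page_cache;
	struct kvm_mmu_memory_cache *gfn_array_cache;
};

static struct kvm_mmu_page *kvm_mmu_alloc_shadow_page(struct kvm *kvm,
						      struct shadow_page_caches *caches,
						      gfn_t gfn,
						      struct hlist_head *sp_list,
						      union kvm_mmu_page_role role)
{
	struct kvm_mmu_page *sp;

	/* Allocate each piece from the caller-provided caches. */
	sp = kvm_mmu_memory_cache_alloc(caches->page_header_cache);
	sp->spt = kvm_mmu_memory_cache_alloc(caches->shadow_page_cache);
	if (!role.direct)
		sp->gfns = kvm_mmu_memory_cache_alloc(caches->gfn_array_cache);

	/* ... remaining initialization unchanged ... */
	return sp;
}

/*
 * vCPU callers wrap their per-vCPU caches and pass them through; the
 * eager hugepage splitting path can later pass a different set without
 * touching the allocation logic itself.
 */
static struct kvm_mmu_page *kvm_mmu_get_shadow_page(struct kvm_vcpu *vcpu,
						    gfn_t gfn,
						    union kvm_mmu_page_role role)
{
	struct shadow_page_caches caches = {
		.page_header_cache = &vcpu->arch.mmu_page_header_cache,
		.shadow_page_cache = &vcpu->arch.mmu_shadow_page_cache,
		.gfn_array_cache = &vcpu->arch.mmu_gfn_array_cache,
	};

	return __kvm_mmu_get_shadow_page(vcpu->kvm, vcpu, &caches, gfn, role);
}

Bundling the caches in a struct is what lets the future eager-splitting
path, which has no vCPU in hand, reuse the same allocation code with its
own set of caches.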