From: Sean Christopherson
Date: Tue, 30 Apr 2024 12:31:53 -0700
Message-ID: <20240430193157.419425-1-seanjc@google.com>
Subject: [PATCH 0/4] KVM: Fold kvm_arch_sched_in() into kvm_arch_vcpu_load()
To: Marc Zyngier, Oliver Upton, Tianrui Zhao, Bibo Mao, Huacai Chen,
    Michael Ellerman, Anup Patel, Paul Walmsley, Palmer Dabbelt, Albert Ou,
    Christian Borntraeger, Janosch Frank, Claudio Imbrenda,
    Sean Christopherson, Paolo Bonzini
Cc: linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
    kvm@vger.kernel.org, loongarch@lists.linux.dev,
    linux-mips@vger.kernel.org, linuxppc-dev@lists.ozlabs.org,
    kvm-riscv@lists.infradead.org, linux-riscv@lists.infradead.org,
    linux-kernel@vger.kernel.org
Reply-To: Sean Christopherson

Drop kvm_arch_sched_in() and instead pass a @sched_in boolean to
kvm_arch_vcpu_load().

While fiddling with an idea for optimizing state management on AMD CPUs,
I wanted to skip re-saving certain host state when a vCPU is scheduled
back in, as the state (theoretically) shouldn't change for the task while
it's scheduled out.  Actually doing that was annoying and unnecessarily
brittle due to having a separate API for the kvm_sched_in() case (the
state save needed to be in kvm_arch_vcpu_load() for the common path).

E.g. I could have set a "temporary"-ish flag somewhere in kvm_vcpu, but
(a) that's gross and (b) it would rely on the arbitrary ordering between
sched_in() and vcpu_load() staying the same.
The only real downside I see is that arm64 and riscv end up having to
pass "false" for their direct usage of kvm_arch_vcpu_load(), and passing
boolean literals isn't ideal.  But that can be solved by adding an inner
helper that omits the @sched_in param (I almost added a patch to do that,
but I couldn't convince myself it was necessary).

The other motivation for this is to avoid yet another arch hook, and more
arbitrary ordering, if there's a future need to hook kvm_sched_out()
(we've come close on the x86 side several times).

Sean Christopherson (4):
  KVM: Plumb in a @sched_in flag to kvm_arch_vcpu_load()
  KVM: VMX: Move PLE grow/shrink helpers above vmx_vcpu_load()
  KVM: x86: Fold kvm_arch_sched_in() into kvm_arch_vcpu_load()
  KVM: Delete the now unused kvm_arch_sched_in()

 arch/arm64/include/asm/kvm_host.h     |  1 -
 arch/arm64/kvm/arm.c                  |  2 +-
 arch/arm64/kvm/emulate-nested.c       |  4 +-
 arch/arm64/kvm/reset.c                |  2 +-
 arch/loongarch/include/asm/kvm_host.h |  1 -
 arch/loongarch/kvm/vcpu.c             |  2 +-
 arch/mips/include/asm/kvm_host.h      |  1 -
 arch/mips/kvm/mmu.c                   |  2 +-
 arch/powerpc/include/asm/kvm_host.h   |  1 -
 arch/powerpc/kvm/powerpc.c            |  2 +-
 arch/riscv/include/asm/kvm_host.h     |  1 -
 arch/riscv/kvm/vcpu.c                 |  4 +-
 arch/s390/include/asm/kvm_host.h      |  1 -
 arch/s390/kvm/kvm-s390.c              |  2 +-
 arch/x86/include/asm/kvm-x86-ops.h    |  1 -
 arch/x86/include/asm/kvm_host.h       |  4 +-
 arch/x86/kvm/pmu.c                    |  6 +--
 arch/x86/kvm/svm/svm.c                | 13 ++---
 arch/x86/kvm/vmx/main.c               |  2 -
 arch/x86/kvm/vmx/vmx.c                | 75 +++++++++++++--------------
 arch/x86/kvm/vmx/x86_ops.h            |  3 +-
 arch/x86/kvm/x86.c                    | 26 +++++-----
 include/linux/kvm_host.h              |  4 +-
 virt/kvm/kvm_main.c                   |  5 +-
 24 files changed, 70 insertions(+), 95 deletions(-)


base-commit: a96cb3bf390eebfead5fc7a2092f8452a7997d1b
-- 
2.45.0.rc0.197.gbae5840b3b-goog