Date: Wed, 1 May 2024 07:28:21 -0700
Subject: Re: [PATCH 0/4] KVM: Fold kvm_arch_sched_in() into kvm_arch_vcpu_load()
From: Sean Christopherson
To: Oliver Upton
Cc: Marc Zyngier, Tianrui Zhao, Bibo Mao, Huacai Chen, Michael Ellerman,
	Anup Patel, Paul Walmsley, Palmer Dabbelt, Albert Ou,
	Christian Borntraeger, Janosch Frank, Claudio Imbrenda,
	Paolo Bonzini, linux-arm-kernel@lists.infradead.org,
	kvmarm@lists.linux.dev, kvm@vger.kernel.org,
	loongarch@lists.linux.dev, linux-mips@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org, kvm-riscv@lists.infradead.org,
	linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org
References: <20240430193157.419425-1-seanjc@google.com>

On Wed, May 01, 2024, Oliver Upton wrote:
> On Tue, Apr 30, 2024 at 12:31:53PM -0700, Sean Christopherson wrote:
> > Drop kvm_arch_sched_in() and instead pass a @sched_in boolean to
> > kvm_arch_vcpu_load().
> >
> > While fiddling with an idea for optimizing state management on AMD CPUs,
> > I wanted to skip re-saving certain host state when a vCPU is scheduled
> > back in, as the state (theoretically) shouldn't change for the task
> > while it's scheduled out. Actually doing that was annoying and
> > unnecessarily brittle due to having a separate API for the
> > kvm_sched_in() case (the state save needed to be in kvm_arch_vcpu_load()
> > for the common path).
> >
> > E.g. I could have set a "temporary"-ish flag somewhere in kvm_vcpu, but
> > (a) that's gross and (b) it would rely on the arbitrary ordering between
> > sched_in() and vcpu_load() staying the same.
>
> Another option would be to change the rules around kvm_arch_sched_in()
> where the callee is expected to load the vCPU context.
>
> The default implementation could just call kvm_arch_vcpu_load() directly
> and the x86 implementation can order things the way it wants before
> kvm_arch_vcpu_load().
>
> I say this because ...
>
> > The only real downside I see is that arm64 and riscv end up having to
> > pass "false" for their direct usage of kvm_arch_vcpu_load(), and passing
> > boolean literals isn't ideal. But that can be solved by adding an inner
> > helper that omits the @sched_in param (I almost added a patch to do
> > that, but I couldn't convince myself it was necessary).
>
> Needing to pass @sched_in for other usage of kvm_arch_vcpu_load() hurts
> readability, especially when no other architecture besides x86 cares
> about it.

Yeah, that bothers me too.
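For concreteness, a minimal sketch of what the cover letter's approach plus
that inner helper might look like; the helper name is invented here for
illustration, this isn't from an actual patch:

/* Arch hook grows a @sched_in parameter. */
void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu, bool sched_in);

/*
 * Hypothetical inner helper so that arm64/riscv callers (and vcpu_load())
 * don't have to pass a bare "false" literal; kvm_sched_in() would be the
 * only caller passing true.
 */
static inline void kvm_arch_vcpu_load_nosched(struct kvm_vcpu *vcpu, int cpu)
{
	kvm_arch_vcpu_load(vcpu, cpu, false);
}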
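And a rough sketch of the alternative described above, assuming the default
would be wired up as a __weak symbol (that wiring is a guess, and the
l1tf_flush_l1d line stands in for whatever sched_in-specific work x86 wants
to order before the load):

/* virt/kvm/kvm_main.c: default implementation just loads vCPU context. */
void __weak kvm_arch_sched_in(struct kvm_vcpu *vcpu, int cpu)
{
	kvm_arch_vcpu_load(vcpu, cpu);
}

/* arch/x86/kvm/x86.c: x86 orders its own work ahead of the load. */
void kvm_arch_sched_in(struct kvm_vcpu *vcpu, int cpu)
{
	vcpu->arch.l1tf_flush_l1d = true;
	kvm_arch_vcpu_load(vcpu, cpu);
}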
I tried your suggestion of having x86's kvm_arch_sched_in() do
kvm_arch_vcpu_load(), and even with an added kvm_arch_sched_out() to provide
symmetry, the x86 code is kludgy, and even the common code is a bit confusing
as it's not super obvious that kvm_sched_{in,out}() is really just
kvm_arch_vcpu_{load,put}().

Staring a bit more at the vCPU flags we have, adding a "bool scheduled_out"
isn't terribly gross if it's done in common code and persists across load()
and put(), i.e. isn't so blatantly a temporary field. And because it's easy,
it could be set with WRITE_ONCE() so that it can be read cross-task if
there's ever a reason to do so.

The x86 code ends up being less ugly, and adding future arch/vendor code for
sched_in() *or* sched_out() requires minimal churn, e.g. arch code doesn't
need to override kvm_arch_sched_in().

The only weird part is that vcpu->preempted and vcpu->ready have slightly
different behavior, as they are cleared before kvm_arch_vcpu_load(). But the
weirdness is really with those flags not having symmetry, not with
scheduled_out itself.

Thoughts?

static void kvm_sched_in(struct preempt_notifier *pn, int cpu)
{
	struct kvm_vcpu *vcpu = preempt_notifier_to_vcpu(pn);

	WRITE_ONCE(vcpu->preempted, false);
	WRITE_ONCE(vcpu->ready, false);

	__this_cpu_write(kvm_running_vcpu, vcpu);
	kvm_arch_vcpu_load(vcpu, cpu);

	WRITE_ONCE(vcpu->scheduled_out, false);
}

static void kvm_sched_out(struct preempt_notifier *pn,
			  struct task_struct *next)
{
	struct kvm_vcpu *vcpu = preempt_notifier_to_vcpu(pn);

	WRITE_ONCE(vcpu->scheduled_out, true);

	if (current->on_rq) {
		WRITE_ONCE(vcpu->preempted, true);
		WRITE_ONCE(vcpu->ready, true);
	}

	kvm_arch_vcpu_put(vcpu);
	__this_cpu_write(kvm_running_vcpu, NULL);
}
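Tying this back to the original motivation: because scheduled_out is cleared
only after kvm_arch_vcpu_load() in the sched_in path above, arch code can
check the flag during load to skip the host-state re-save. A hypothetical
x86 sketch, where kvm_save_host_state() is a made-up stand-in for the actual
state save:

void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
{
	/*
	 * Host state can't have changed while the task was scheduled
	 * out, so only (re)save it on a "real" load, not on sched_in.
	 */
	if (!READ_ONCE(vcpu->scheduled_out))
		kvm_save_host_state(vcpu);	/* hypothetical helper */

	/* ... existing load logic ... */
}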