From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Wed, 20 Jan 2021 16:19:01 -0800
From: Sean Christopherson
To: Ben Gardon
Cc: linux-kernel@vger.kernel.org, kvm@vger.kernel.org, Paolo Bonzini,
        Peter Xu, Peter Shier, Peter Feiner, Junaid Shahid, Jim Mattson,
        Yulei Zhang, Wanpeng Li, Vitaly Kuznetsov, Xiao Guangrong
Subject: Re: [PATCH 15/24] kvm: mmu: Wrap mmu_lock cond_resched and needbreak
Message-ID:
References: <20210112181041.356734-1-bgardon@google.com>
 <20210112181041.356734-16-bgardon@google.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20210112181041.356734-16-bgardon@google.com>
X-Mailing-List: linux-kernel@vger.kernel.org

On Tue, Jan 12, 2021, Ben Gardon wrote:
> Wrap the MMU lock cond_resched and needbreak operations in a function.
> This will support a refactoring to move the lock into the struct
> kvm_arch(s) so that x86 can change the spinlock to a rwlock without
> affecting the performance of other archs.

IMO, moving the lock to arch-specific code is bad for KVM.  The
architectures' MMUs already diverge pretty horribly, and once things
diverge it's really hard to go the other direction.  And this change,
along with all of the wrappers, thrashes a lot of code and adds a fair
amount of indirection without any real benefit to the other
architectures.

What if we simply make the common mmu_lock a union?  The rwlock_t is
probably a bit bigger, but that's a few bytes for an entire VM.  And
maybe this would entice/inspire other architectures to move to a
similar MMU model.

diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index f3b1013fb22c..bbc8efd4af62 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -451,7 +451,10 @@ struct kvm_memslots {
 };
 
 struct kvm {
-	spinlock_t mmu_lock;
+	union {
+		rwlock_t mmu_rwlock;
+		spinlock_t mmu_lock;
+	};
 	struct mutex slots_lock;
 	struct mm_struct *mm; /* userspace tied to this vm */
 	struct kvm_memslots __rcu *memslots[KVM_ADDRESS_SPACE_NUM];