Date: Tue, 08 Mar 2022 07:42:23 +0000
Message-ID: <87bkyggaao.wl-maz@kernel.org>
From: Marc Zyngier <maz@kernel.org>
To: David Matlack <dmatlack@google.com>
Cc: Paolo Bonzini, Huacai Chen, Aleksandar Markovic, Sean Christopherson,
    Vitaly Kuznetsov, Peter Xu, Wanpeng Li, Jim Mattson, Joerg Roedel,
    Peter Feiner, Andrew Jones, "Maciej S. Szmigiero", kvm list
Subject: Re: [PATCH 19/23] KVM: Allow for different capacities in kvm_mmu_memory_cache structs
References: <20220203010051.2813563-1-dmatlack@google.com>
    <20220203010051.2813563-20-dmatlack@google.com>
    <8735k84i6f.wl-maz@kernel.org> <878rtotk3h.wl-maz@kernel.org>

On Mon, 07 Mar 2022 23:49:06 +0000,
David Matlack <dmatlack@google.com> wrote:
> 
> On Sat, Mar 5, 2022 at 8:55 AM Marc Zyngier <maz@kernel.org> wrote:
> >
> > On Fri, 04 Mar 2022 21:59:12 +0000,
> > David Matlack <dmatlack@google.com> wrote:
> > >
> > > On Thu, Feb 24, 2022 at 11:20 AM David Matlack <dmatlack@google.com> wrote:
> > > >
> > > > On Thu, Feb 24, 2022 at 3:29 AM Marc Zyngier <maz@kernel.org> wrote:
> > > > >
> > > > > On Thu, 03 Feb 2022 01:00:47 +0000,
> > > > > David Matlack <dmatlack@google.com> wrote:
> > > > > >
> > > > > > [...]
> > > > > >
> > > > > >  	/* Cache some mmu pages needed inside spinlock regions */
> > > > > > -	struct kvm_mmu_memory_cache mmu_page_cache;
> > > > > > +	DEFINE_KVM_MMU_MEMORY_CACHE(mmu_page_cache);
> > > > >
> > > > > I must say I'm really not a fan of the anonymous structure trick. I
> > > > > can see why you are doing it that way, but it feels pretty brittle.
> > > >
> > > > Yeah, I don't love it. It's really optimizing for minimizing the patch diff.
> > > >
> > > > The alternative I considered was to dynamically allocate the
> > > > kvm_mmu_memory_cache structs. This would get rid of the anonymous
> > > > struct and the objects array, and also eliminate the rather gross
> > > > capacity hack in kvm_mmu_topup_memory_cache().
> > > >
> > > > The downside of this approach is more code and more failure paths if
> > > > the allocation fails.
> > >
> > > I tried changing all kvm_mmu_memory_cache structs to be dynamically
> > > allocated, but it added a lot of complexity to the setup/teardown
> > > code paths in x86, arm64, mips, and riscv (the arches that use the
> > > caches). I don't think this route is worth it, especially since these
> > > structs don't *need* to be dynamically allocated.
> > >
> > > When you said the anonymous struct feels brittle, what did you have in
> > > mind specifically?
> >
> > I can perfectly well see someone using a kvm_mmu_memory_cache and then
> > searching for a while why they end up with memory corruption. Yes, this
> > would be a rookie mistake, but there are expectations all over the
> > kernel that DEFINE_* and the corresponding structure are the same
> > object.
>
> That is a good point. And that risk is very real given that
> kvm_mmu_topup_memory_cache() assumes the capacity is
> KVM_ARCH_NR_OBJS_PER_MEMORY_CACHE if the capacity field is 0.

Exactly. I like being surprised as much as the next one, but I'm not
sure about this particular instance ;-).
> One way to mitigate this would be to get rid of the capacity hack in
> kvm_mmu_topup_memory_cache() and require that the capacity field be
> explicitly initialized. That would make it harder to trip over this
> and/or easier to debug, because kvm_mmu_topup_memory_cache() can issue
> a WARN() if the capacity is 0. Once you see that warning and go to
> initialize the capacity field, you'll realize why it needs to be set
> in the first place. The diff will just be slightly larger, since the
> capacity has to be set for each cache.

That'd be fine. I don't mind the extra diff if this can be made more or
less foolproof.

Thanks,

	M.

-- 
Without deviation from the norm, progress is not possible.