From mboxrd@z Thu Jan 1 00:00:00 1970
From: Juergen Gross
To: linux-kernel@vger.kernel.org, x86@kernel.org, linux-hyperv@vger.kernel.org, kvm@vger.kernel.org
Cc: xin@zytor.com, Juergen Gross, "K. Y. Srinivasan", Haiyang Zhang, Wei Liu, Dexuan Cui, Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen, "H. Peter Anvin", Paolo Bonzini, Vitaly Kuznetsov, Sean Christopherson, Boris Ostrovsky, xen-devel@lists.xenproject.org
Subject: [PATCH 3/6] x86/msr: minimize usage of native_*() msr access functions
Date: Tue, 6 May 2025 11:20:12 +0200
Message-ID: <20250506092015.1849-4-jgross@suse.com>
X-Mailer: git-send-email 2.43.0
In-Reply-To: <20250506092015.1849-1-jgross@suse.com>
References: <20250506092015.1849-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

In order to prepare for some MSR access function reorg work, switch most
users of native_{read|write}_msr[_safe]() to the more generic
rdmsr*()/wrmsr*() variants.
For now this will have a small intermediate performance impact when
paravirtualization is configured and the kernel is running on bare metal,
but it is a prerequisite for the planned direct inlining of the
RDMSR/WRMSR instructions in this configuration.

The main reason for the switch is the planned move of the MSR trace
function invocation from the native_*() functions to the generic
rdmsr*()/wrmsr*() variants. Without this switch the users of the
native_*() functions would lose the related tracing entries.

Note that the Xen-related MSR access functions are not switched here, as
they will be handled after the move of the trace hooks.

Signed-off-by: Juergen Gross
---
 arch/x86/hyperv/ivm.c      |  2 +-
 arch/x86/kernel/kvmclock.c |  2 +-
 arch/x86/kvm/svm/svm.c     | 16 ++++++++--------
 arch/x86/xen/pmu.c         |  4 ++--
 4 files changed, 12 insertions(+), 12 deletions(-)

diff --git a/arch/x86/hyperv/ivm.c b/arch/x86/hyperv/ivm.c
index 09a165a3c41e..fe177a6be581 100644
--- a/arch/x86/hyperv/ivm.c
+++ b/arch/x86/hyperv/ivm.c
@@ -319,7 +319,7 @@ int hv_snp_boot_ap(u32 cpu, unsigned long start_ip)
 	asm volatile("movl %%ds, %%eax;" : "=a" (vmsa->ds.selector));
 	hv_populate_vmcb_seg(vmsa->ds, vmsa->gdtr.base);
 
-	vmsa->efer = native_read_msr(MSR_EFER);
+	rdmsrq(MSR_EFER, vmsa->efer);
 
 	vmsa->cr4 = native_read_cr4();
 	vmsa->cr3 = __native_read_cr3();
diff --git a/arch/x86/kernel/kvmclock.c b/arch/x86/kernel/kvmclock.c
index ca0a49eeac4a..b6cd45cce5fe 100644
--- a/arch/x86/kernel/kvmclock.c
+++ b/arch/x86/kernel/kvmclock.c
@@ -196,7 +196,7 @@ static void kvm_setup_secondary_clock(void)
 void kvmclock_disable(void)
 {
 	if (msr_kvm_system_time)
-		native_write_msr(msr_kvm_system_time, 0);
+		wrmsrq(msr_kvm_system_time, 0);
 }
 
 static void __init kvmclock_init_mem(void)
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 4c2a843780bf..3f0eed84f82a 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -482,12 +482,12 @@ static void svm_init_erratum_383(void)
 		return;
 
 	/* Use _safe variants to not break nested virtualization */
-	if (native_read_msr_safe(MSR_AMD64_DC_CFG, &val))
+	if (rdmsrq_safe(MSR_AMD64_DC_CFG, &val))
 		return;
 
 	val |= (1ULL << 47);
 
-	native_write_msr_safe(MSR_AMD64_DC_CFG, val);
+	wrmsrq_safe(MSR_AMD64_DC_CFG, val);
 
 	erratum_383_found = true;
 }
@@ -650,9 +650,9 @@ static int svm_enable_virtualization_cpu(void)
 		u64 len, status = 0;
 		int err;
 
-		err = native_read_msr_safe(MSR_AMD64_OSVW_ID_LENGTH, &len);
+		err = rdmsrq_safe(MSR_AMD64_OSVW_ID_LENGTH, &len);
 		if (!err)
-			err = native_read_msr_safe(MSR_AMD64_OSVW_STATUS, &status);
+			err = rdmsrq_safe(MSR_AMD64_OSVW_STATUS, &status);
 
 		if (err)
 			osvw_status = osvw_len = 0;
@@ -2149,7 +2149,7 @@ static bool is_erratum_383(void)
 	if (!erratum_383_found)
 		return false;
 
-	if (native_read_msr_safe(MSR_IA32_MC0_STATUS, &value))
+	if (rdmsrq_safe(MSR_IA32_MC0_STATUS, &value))
 		return false;
 
 	/* Bit 62 may or may not be set for this mce */
@@ -2160,11 +2160,11 @@ static bool is_erratum_383(void)
 
 	/* Clear MCi_STATUS registers */
 	for (i = 0; i < 6; ++i)
-		native_write_msr_safe(MSR_IA32_MCx_STATUS(i), 0);
+		wrmsrq_safe(MSR_IA32_MCx_STATUS(i), 0);
 
-	if (!native_read_msr_safe(MSR_IA32_MCG_STATUS, &value)) {
+	if (!rdmsrq_safe(MSR_IA32_MCG_STATUS, &value)) {
 		value &= ~(1ULL << 2);
-		native_write_msr_safe(MSR_IA32_MCG_STATUS, value);
+		wrmsrq_safe(MSR_IA32_MCG_STATUS, value);
 	}
 
 	/* Flush tlb to evict multi-match entries */
diff --git a/arch/x86/xen/pmu.c b/arch/x86/xen/pmu.c
index 8f89ce0b67e3..d49a3bdc448b 100644
--- a/arch/x86/xen/pmu.c
+++ b/arch/x86/xen/pmu.c
@@ -323,7 +323,7 @@ static u64 xen_amd_read_pmc(int counter)
 		u64 val;
 
 		msr = amd_counters_base + (counter * amd_msr_step);
-		native_read_msr_safe(msr, &val);
+		rdmsrq_safe(msr, &val);
 		return val;
 	}
 
@@ -349,7 +349,7 @@ static u64 xen_intel_read_pmc(int counter)
 	else
 		msr = MSR_IA32_PERFCTR0 + counter;
 
-	native_read_msr_safe(msr, &val);
+	rdmsrq_safe(msr, &val);
 	return val;
 }
 
-- 
2.43.0